Test Report: KVM_Linux_crio 18779

c20b56ce109690ce92fd9e26e987f9b16f237ff0:2024-05-01:34278

Tests failed (32/311)

Order  Failed test  Duration (s)
30 TestAddons/parallel/Ingress 153.97
32 TestAddons/parallel/MetricsServer 335.44
38 TestAddons/parallel/LocalPath 16.61
44 TestAddons/StoppedEnableDisable 154.39
134 TestFunctional/parallel/ImageCommands/ImageRemove 2.74
163 TestMultiControlPlane/serial/StopSecondaryNode 142.13
165 TestMultiControlPlane/serial/RestartSecondaryNode 59.68
167 TestMultiControlPlane/serial/RestartClusterKeepsNodes 421.9
170 TestMultiControlPlane/serial/StopCluster 142
230 TestMultiNode/serial/RestartKeepsNodes 310.29
232 TestMultiNode/serial/StopMultiNode 141.48
239 TestPreload 266.08
247 TestKubernetesUpgrade 474.18
282 TestPause/serial/SecondStartNoReconfiguration 54.74
284 TestStartStop/group/old-k8s-version/serial/FirstStart 278.58
294 TestStartStop/group/no-preload/serial/Stop 139.16
296 TestStartStop/group/embed-certs/serial/Stop 139.15
299 TestStartStop/group/default-k8s-diff-port/serial/Stop 138.99
300 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.39
301 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
303 TestStartStop/group/old-k8s-version/serial/DeployApp 0.51
304 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 105.04
306 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
310 TestStartStop/group/old-k8s-version/serial/SecondStart 726.86
311 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 544.4
312 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 544.41
313 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 544.37
314 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.63
315 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 400.96
316 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 451.57
317 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 277.32
318 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 144.19
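Any single failure in this list can be replayed locally with Go's standard -run filter against the integration package. The invocation below is only a sketch: it assumes a checkout of the minikube repository plus the out/minikube-linux-amd64 binary from this run, and the suite's own command-line flags for driver, container runtime, and start args still need to be supplied to match the KVM/cri-o configuration exercised here.

	# Re-run one failed test by name; -run takes a regexp over the test path.
	go test -v -timeout 30m -run 'TestAddons/parallel/Ingress' ./test/integration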
TestAddons/parallel/Ingress (153.97s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-286595 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-286595 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-286595 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [e8f25648-6f7c-4d88-9b95-89988ad85a6b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [e8f25648-6f7c-4d88-9b95-89988ad85a6b] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.005569369s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-286595 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-286595 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.232750997s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
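For local triage, the timed-out check can be replayed by hand with the same commands the test drives (the Run lines above). This is a minimal sketch, assuming the addons-286595 profile is still running and the testdata/ manifests from the minikube repository are available; exit status 28 from the ssh'd command is most likely curl's own timeout code (CURLE_OPERATION_TIMEDOUT), i.e. the ingress controller never answered.

	# Redeploy the test ingress plus its backing pod/service and wait for the pod
	kubectl --context addons-286595 replace --force -f testdata/nginx-ingress-v1.yaml
	kubectl --context addons-286595 replace --force -f testdata/nginx-pod-svc.yaml
	kubectl --context addons-286595 wait --for=condition=ready pod -l run=nginx --timeout=8m0s

	# The assertion that failed: curl the ingress from inside the VM
	out/minikube-linux-amd64 -p addons-286595 ssh \
	    "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"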
addons_test.go:286: (dbg) Run:  kubectl --context addons-286595 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-286595 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.173
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p addons-286595 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p addons-286595 addons disable ingress-dns --alsologtostderr -v=1: (1.437480475s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p addons-286595 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p addons-286595 addons disable ingress --alsologtostderr -v=1: (7.950916535s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-286595 -n addons-286595
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-286595 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-286595 logs -n 25: (1.486419175s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-686563 | jenkins | v1.33.0 | 01 May 24 02:07 UTC |                     |
	|         | -p download-only-686563                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.33.0 | 01 May 24 02:07 UTC | 01 May 24 02:07 UTC |
	| delete  | -p download-only-686563                                                                     | download-only-686563 | jenkins | v1.33.0 | 01 May 24 02:07 UTC | 01 May 24 02:07 UTC |
	| delete  | -p download-only-099811                                                                     | download-only-099811 | jenkins | v1.33.0 | 01 May 24 02:07 UTC | 01 May 24 02:07 UTC |
	| delete  | -p download-only-686563                                                                     | download-only-686563 | jenkins | v1.33.0 | 01 May 24 02:07 UTC | 01 May 24 02:07 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-940490 | jenkins | v1.33.0 | 01 May 24 02:07 UTC |                     |
	|         | binary-mirror-940490                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:33553                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-940490                                                                     | binary-mirror-940490 | jenkins | v1.33.0 | 01 May 24 02:07 UTC | 01 May 24 02:07 UTC |
	| addons  | disable dashboard -p                                                                        | addons-286595        | jenkins | v1.33.0 | 01 May 24 02:07 UTC |                     |
	|         | addons-286595                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-286595        | jenkins | v1.33.0 | 01 May 24 02:07 UTC |                     |
	|         | addons-286595                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-286595 --wait=true                                                                | addons-286595        | jenkins | v1.33.0 | 01 May 24 02:07 UTC | 01 May 24 02:11 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --driver=kvm2                                                                 |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                                   |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-286595 ssh cat                                                                       | addons-286595        | jenkins | v1.33.0 | 01 May 24 02:11 UTC | 01 May 24 02:11 UTC |
	|         | /opt/local-path-provisioner/pvc-e2a3e7ab-0856-4130-bea1-c8089bb4ffec_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-286595 addons disable                                                                | addons-286595        | jenkins | v1.33.0 | 01 May 24 02:11 UTC |                     |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-286595 ip                                                                            | addons-286595        | jenkins | v1.33.0 | 01 May 24 02:11 UTC | 01 May 24 02:11 UTC |
	| addons  | addons-286595 addons disable                                                                | addons-286595        | jenkins | v1.33.0 | 01 May 24 02:11 UTC | 01 May 24 02:11 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-286595        | jenkins | v1.33.0 | 01 May 24 02:11 UTC | 01 May 24 02:11 UTC |
	|         | addons-286595                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-286595 ssh curl -s                                                                   | addons-286595        | jenkins | v1.33.0 | 01 May 24 02:11 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-286595        | jenkins | v1.33.0 | 01 May 24 02:11 UTC | 01 May 24 02:11 UTC |
	|         | addons-286595                                                                               |                      |         |         |                     |                     |
	| addons  | addons-286595 addons disable                                                                | addons-286595        | jenkins | v1.33.0 | 01 May 24 02:12 UTC | 01 May 24 02:12 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-286595 addons                                                                        | addons-286595        | jenkins | v1.33.0 | 01 May 24 02:12 UTC | 01 May 24 02:12 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-286595        | jenkins | v1.33.0 | 01 May 24 02:12 UTC | 01 May 24 02:12 UTC |
	|         | -p addons-286595                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-286595 addons                                                                        | addons-286595        | jenkins | v1.33.0 | 01 May 24 02:12 UTC | 01 May 24 02:12 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-286595        | jenkins | v1.33.0 | 01 May 24 02:12 UTC | 01 May 24 02:12 UTC |
	|         | -p addons-286595                                                                            |                      |         |         |                     |                     |
	| ip      | addons-286595 ip                                                                            | addons-286595        | jenkins | v1.33.0 | 01 May 24 02:14 UTC | 01 May 24 02:14 UTC |
	| addons  | addons-286595 addons disable                                                                | addons-286595        | jenkins | v1.33.0 | 01 May 24 02:14 UTC | 01 May 24 02:14 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-286595 addons disable                                                                | addons-286595        | jenkins | v1.33.0 | 01 May 24 02:14 UTC | 01 May 24 02:14 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/01 02:07:55
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0501 02:07:55.315587   21421 out.go:291] Setting OutFile to fd 1 ...
	I0501 02:07:55.315833   21421 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:07:55.315843   21421 out.go:304] Setting ErrFile to fd 2...
	I0501 02:07:55.315848   21421 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:07:55.316039   21421 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18779-13391/.minikube/bin
	I0501 02:07:55.316668   21421 out.go:298] Setting JSON to false
	I0501 02:07:55.317527   21421 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3018,"bootTime":1714526257,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0501 02:07:55.317589   21421 start.go:139] virtualization: kvm guest
	I0501 02:07:55.319630   21421 out.go:177] * [addons-286595] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0501 02:07:55.320952   21421 out.go:177]   - MINIKUBE_LOCATION=18779
	I0501 02:07:55.320988   21421 notify.go:220] Checking for updates...
	I0501 02:07:55.322264   21421 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0501 02:07:55.323862   21421 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18779-13391/kubeconfig
	I0501 02:07:55.325233   21421 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18779-13391/.minikube
	I0501 02:07:55.326613   21421 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0501 02:07:55.327920   21421 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0501 02:07:55.329277   21421 driver.go:392] Setting default libvirt URI to qemu:///system
	I0501 02:07:55.361108   21421 out.go:177] * Using the kvm2 driver based on user configuration
	I0501 02:07:55.362520   21421 start.go:297] selected driver: kvm2
	I0501 02:07:55.362541   21421 start.go:901] validating driver "kvm2" against <nil>
	I0501 02:07:55.362554   21421 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0501 02:07:55.363265   21421 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0501 02:07:55.363341   21421 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18779-13391/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0501 02:07:55.378569   21421 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0501 02:07:55.378651   21421 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0501 02:07:55.378911   21421 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0501 02:07:55.378971   21421 cni.go:84] Creating CNI manager for ""
	I0501 02:07:55.378990   21421 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0501 02:07:55.378998   21421 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0501 02:07:55.379064   21421 start.go:340] cluster config:
	{Name:addons-286595 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-286595 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 02:07:55.379155   21421 iso.go:125] acquiring lock: {Name:mkcd2496aadb29931e179193d707c731bfda98b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0501 02:07:55.381034   21421 out.go:177] * Starting "addons-286595" primary control-plane node in "addons-286595" cluster
	I0501 02:07:55.382342   21421 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0501 02:07:55.382389   21421 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0501 02:07:55.382424   21421 cache.go:56] Caching tarball of preloaded images
	I0501 02:07:55.382538   21421 preload.go:173] Found /home/jenkins/minikube-integration/18779-13391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0501 02:07:55.382551   21421 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0501 02:07:55.382853   21421 profile.go:143] Saving config to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/config.json ...
	I0501 02:07:55.382875   21421 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/config.json: {Name:mk5c1f83b71f5f2c1ef1b19fc5b8782100690a28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:07:55.383027   21421 start.go:360] acquireMachinesLock for addons-286595: {Name:mkc4548e5afe1d4b0a833f6c522103562b0cefff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0501 02:07:55.383077   21421 start.go:364] duration metric: took 35.997µs to acquireMachinesLock for "addons-286595"
	I0501 02:07:55.383098   21421 start.go:93] Provisioning new machine with config: &{Name:addons-286595 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.0 ClusterName:addons-286595 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0501 02:07:55.383157   21421 start.go:125] createHost starting for "" (driver="kvm2")
	I0501 02:07:55.384926   21421 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0501 02:07:55.385071   21421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:07:55.385111   21421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:07:55.399944   21421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38141
	I0501 02:07:55.400401   21421 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:07:55.400917   21421 main.go:141] libmachine: Using API Version  1
	I0501 02:07:55.400945   21421 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:07:55.401337   21421 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:07:55.401519   21421 main.go:141] libmachine: (addons-286595) Calling .GetMachineName
	I0501 02:07:55.401671   21421 main.go:141] libmachine: (addons-286595) Calling .DriverName
	I0501 02:07:55.401878   21421 start.go:159] libmachine.API.Create for "addons-286595" (driver="kvm2")
	I0501 02:07:55.401929   21421 client.go:168] LocalClient.Create starting
	I0501 02:07:55.401972   21421 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem
	I0501 02:07:55.469501   21421 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem
	I0501 02:07:55.584020   21421 main.go:141] libmachine: Running pre-create checks...
	I0501 02:07:55.584044   21421 main.go:141] libmachine: (addons-286595) Calling .PreCreateCheck
	I0501 02:07:55.584546   21421 main.go:141] libmachine: (addons-286595) Calling .GetConfigRaw
	I0501 02:07:55.584947   21421 main.go:141] libmachine: Creating machine...
	I0501 02:07:55.584961   21421 main.go:141] libmachine: (addons-286595) Calling .Create
	I0501 02:07:55.585110   21421 main.go:141] libmachine: (addons-286595) Creating KVM machine...
	I0501 02:07:55.586374   21421 main.go:141] libmachine: (addons-286595) DBG | found existing default KVM network
	I0501 02:07:55.587079   21421 main.go:141] libmachine: (addons-286595) DBG | I0501 02:07:55.586933   21443 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ad0}
	I0501 02:07:55.587105   21421 main.go:141] libmachine: (addons-286595) DBG | created network xml: 
	I0501 02:07:55.587118   21421 main.go:141] libmachine: (addons-286595) DBG | <network>
	I0501 02:07:55.587138   21421 main.go:141] libmachine: (addons-286595) DBG |   <name>mk-addons-286595</name>
	I0501 02:07:55.587176   21421 main.go:141] libmachine: (addons-286595) DBG |   <dns enable='no'/>
	I0501 02:07:55.587203   21421 main.go:141] libmachine: (addons-286595) DBG |   
	I0501 02:07:55.587215   21421 main.go:141] libmachine: (addons-286595) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0501 02:07:55.587226   21421 main.go:141] libmachine: (addons-286595) DBG |     <dhcp>
	I0501 02:07:55.587240   21421 main.go:141] libmachine: (addons-286595) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0501 02:07:55.587251   21421 main.go:141] libmachine: (addons-286595) DBG |     </dhcp>
	I0501 02:07:55.587262   21421 main.go:141] libmachine: (addons-286595) DBG |   </ip>
	I0501 02:07:55.587270   21421 main.go:141] libmachine: (addons-286595) DBG |   
	I0501 02:07:55.587275   21421 main.go:141] libmachine: (addons-286595) DBG | </network>
	I0501 02:07:55.587282   21421 main.go:141] libmachine: (addons-286595) DBG | 
	I0501 02:07:55.592433   21421 main.go:141] libmachine: (addons-286595) DBG | trying to create private KVM network mk-addons-286595 192.168.39.0/24...
	I0501 02:07:55.656406   21421 main.go:141] libmachine: (addons-286595) DBG | private KVM network mk-addons-286595 192.168.39.0/24 created
	I0501 02:07:55.656442   21421 main.go:141] libmachine: (addons-286595) DBG | I0501 02:07:55.656374   21443 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18779-13391/.minikube
	I0501 02:07:55.656456   21421 main.go:141] libmachine: (addons-286595) Setting up store path in /home/jenkins/minikube-integration/18779-13391/.minikube/machines/addons-286595 ...
	I0501 02:07:55.656472   21421 main.go:141] libmachine: (addons-286595) Building disk image from file:///home/jenkins/minikube-integration/18779-13391/.minikube/cache/iso/amd64/minikube-v1.33.0-1714498396-18779-amd64.iso
	I0501 02:07:55.656501   21421 main.go:141] libmachine: (addons-286595) Downloading /home/jenkins/minikube-integration/18779-13391/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18779-13391/.minikube/cache/iso/amd64/minikube-v1.33.0-1714498396-18779-amd64.iso...
	I0501 02:07:55.887225   21421 main.go:141] libmachine: (addons-286595) DBG | I0501 02:07:55.887056   21443 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/addons-286595/id_rsa...
	I0501 02:07:56.006973   21421 main.go:141] libmachine: (addons-286595) DBG | I0501 02:07:56.006861   21443 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/addons-286595/addons-286595.rawdisk...
	I0501 02:07:56.006998   21421 main.go:141] libmachine: (addons-286595) DBG | Writing magic tar header
	I0501 02:07:56.007012   21421 main.go:141] libmachine: (addons-286595) DBG | Writing SSH key tar header
	I0501 02:07:56.007022   21421 main.go:141] libmachine: (addons-286595) DBG | I0501 02:07:56.006979   21443 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18779-13391/.minikube/machines/addons-286595 ...
	I0501 02:07:56.007086   21421 main.go:141] libmachine: (addons-286595) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/addons-286595
	I0501 02:07:56.007185   21421 main.go:141] libmachine: (addons-286595) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18779-13391/.minikube/machines
	I0501 02:07:56.007215   21421 main.go:141] libmachine: (addons-286595) Setting executable bit set on /home/jenkins/minikube-integration/18779-13391/.minikube/machines/addons-286595 (perms=drwx------)
	I0501 02:07:56.007226   21421 main.go:141] libmachine: (addons-286595) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18779-13391/.minikube
	I0501 02:07:56.007237   21421 main.go:141] libmachine: (addons-286595) Setting executable bit set on /home/jenkins/minikube-integration/18779-13391/.minikube/machines (perms=drwxr-xr-x)
	I0501 02:07:56.007250   21421 main.go:141] libmachine: (addons-286595) Setting executable bit set on /home/jenkins/minikube-integration/18779-13391/.minikube (perms=drwxr-xr-x)
	I0501 02:07:56.007261   21421 main.go:141] libmachine: (addons-286595) Setting executable bit set on /home/jenkins/minikube-integration/18779-13391 (perms=drwxrwxr-x)
	I0501 02:07:56.007275   21421 main.go:141] libmachine: (addons-286595) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0501 02:07:56.007290   21421 main.go:141] libmachine: (addons-286595) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0501 02:07:56.007297   21421 main.go:141] libmachine: (addons-286595) Creating domain...
	I0501 02:07:56.007309   21421 main.go:141] libmachine: (addons-286595) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18779-13391
	I0501 02:07:56.007319   21421 main.go:141] libmachine: (addons-286595) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0501 02:07:56.007333   21421 main.go:141] libmachine: (addons-286595) DBG | Checking permissions on dir: /home/jenkins
	I0501 02:07:56.007342   21421 main.go:141] libmachine: (addons-286595) DBG | Checking permissions on dir: /home
	I0501 02:07:56.007350   21421 main.go:141] libmachine: (addons-286595) DBG | Skipping /home - not owner
	I0501 02:07:56.008303   21421 main.go:141] libmachine: (addons-286595) define libvirt domain using xml: 
	I0501 02:07:56.008324   21421 main.go:141] libmachine: (addons-286595) <domain type='kvm'>
	I0501 02:07:56.008335   21421 main.go:141] libmachine: (addons-286595)   <name>addons-286595</name>
	I0501 02:07:56.008344   21421 main.go:141] libmachine: (addons-286595)   <memory unit='MiB'>4000</memory>
	I0501 02:07:56.008350   21421 main.go:141] libmachine: (addons-286595)   <vcpu>2</vcpu>
	I0501 02:07:56.008357   21421 main.go:141] libmachine: (addons-286595)   <features>
	I0501 02:07:56.008362   21421 main.go:141] libmachine: (addons-286595)     <acpi/>
	I0501 02:07:56.008369   21421 main.go:141] libmachine: (addons-286595)     <apic/>
	I0501 02:07:56.008374   21421 main.go:141] libmachine: (addons-286595)     <pae/>
	I0501 02:07:56.008378   21421 main.go:141] libmachine: (addons-286595)     
	I0501 02:07:56.008384   21421 main.go:141] libmachine: (addons-286595)   </features>
	I0501 02:07:56.008391   21421 main.go:141] libmachine: (addons-286595)   <cpu mode='host-passthrough'>
	I0501 02:07:56.008396   21421 main.go:141] libmachine: (addons-286595)   
	I0501 02:07:56.008406   21421 main.go:141] libmachine: (addons-286595)   </cpu>
	I0501 02:07:56.008412   21421 main.go:141] libmachine: (addons-286595)   <os>
	I0501 02:07:56.008419   21421 main.go:141] libmachine: (addons-286595)     <type>hvm</type>
	I0501 02:07:56.008444   21421 main.go:141] libmachine: (addons-286595)     <boot dev='cdrom'/>
	I0501 02:07:56.008467   21421 main.go:141] libmachine: (addons-286595)     <boot dev='hd'/>
	I0501 02:07:56.008500   21421 main.go:141] libmachine: (addons-286595)     <bootmenu enable='no'/>
	I0501 02:07:56.008522   21421 main.go:141] libmachine: (addons-286595)   </os>
	I0501 02:07:56.008534   21421 main.go:141] libmachine: (addons-286595)   <devices>
	I0501 02:07:56.008543   21421 main.go:141] libmachine: (addons-286595)     <disk type='file' device='cdrom'>
	I0501 02:07:56.008553   21421 main.go:141] libmachine: (addons-286595)       <source file='/home/jenkins/minikube-integration/18779-13391/.minikube/machines/addons-286595/boot2docker.iso'/>
	I0501 02:07:56.008560   21421 main.go:141] libmachine: (addons-286595)       <target dev='hdc' bus='scsi'/>
	I0501 02:07:56.008566   21421 main.go:141] libmachine: (addons-286595)       <readonly/>
	I0501 02:07:56.008574   21421 main.go:141] libmachine: (addons-286595)     </disk>
	I0501 02:07:56.008580   21421 main.go:141] libmachine: (addons-286595)     <disk type='file' device='disk'>
	I0501 02:07:56.008588   21421 main.go:141] libmachine: (addons-286595)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0501 02:07:56.008597   21421 main.go:141] libmachine: (addons-286595)       <source file='/home/jenkins/minikube-integration/18779-13391/.minikube/machines/addons-286595/addons-286595.rawdisk'/>
	I0501 02:07:56.008611   21421 main.go:141] libmachine: (addons-286595)       <target dev='hda' bus='virtio'/>
	I0501 02:07:56.008629   21421 main.go:141] libmachine: (addons-286595)     </disk>
	I0501 02:07:56.008642   21421 main.go:141] libmachine: (addons-286595)     <interface type='network'>
	I0501 02:07:56.008654   21421 main.go:141] libmachine: (addons-286595)       <source network='mk-addons-286595'/>
	I0501 02:07:56.008670   21421 main.go:141] libmachine: (addons-286595)       <model type='virtio'/>
	I0501 02:07:56.008684   21421 main.go:141] libmachine: (addons-286595)     </interface>
	I0501 02:07:56.008696   21421 main.go:141] libmachine: (addons-286595)     <interface type='network'>
	I0501 02:07:56.008709   21421 main.go:141] libmachine: (addons-286595)       <source network='default'/>
	I0501 02:07:56.008724   21421 main.go:141] libmachine: (addons-286595)       <model type='virtio'/>
	I0501 02:07:56.008737   21421 main.go:141] libmachine: (addons-286595)     </interface>
	I0501 02:07:56.008749   21421 main.go:141] libmachine: (addons-286595)     <serial type='pty'>
	I0501 02:07:56.008778   21421 main.go:141] libmachine: (addons-286595)       <target port='0'/>
	I0501 02:07:56.008801   21421 main.go:141] libmachine: (addons-286595)     </serial>
	I0501 02:07:56.008814   21421 main.go:141] libmachine: (addons-286595)     <console type='pty'>
	I0501 02:07:56.008824   21421 main.go:141] libmachine: (addons-286595)       <target type='serial' port='0'/>
	I0501 02:07:56.008832   21421 main.go:141] libmachine: (addons-286595)     </console>
	I0501 02:07:56.008840   21421 main.go:141] libmachine: (addons-286595)     <rng model='virtio'>
	I0501 02:07:56.008855   21421 main.go:141] libmachine: (addons-286595)       <backend model='random'>/dev/random</backend>
	I0501 02:07:56.008866   21421 main.go:141] libmachine: (addons-286595)     </rng>
	I0501 02:07:56.008879   21421 main.go:141] libmachine: (addons-286595)     
	I0501 02:07:56.008893   21421 main.go:141] libmachine: (addons-286595)     
	I0501 02:07:56.008908   21421 main.go:141] libmachine: (addons-286595)   </devices>
	I0501 02:07:56.008921   21421 main.go:141] libmachine: (addons-286595) </domain>
	I0501 02:07:56.008938   21421 main.go:141] libmachine: (addons-286595) 
	I0501 02:07:56.014732   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:22:3f:81 in network default
	I0501 02:07:56.015256   21421 main.go:141] libmachine: (addons-286595) Ensuring networks are active...
	I0501 02:07:56.015274   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:07:56.015867   21421 main.go:141] libmachine: (addons-286595) Ensuring network default is active
	I0501 02:07:56.016102   21421 main.go:141] libmachine: (addons-286595) Ensuring network mk-addons-286595 is active
	I0501 02:07:56.016566   21421 main.go:141] libmachine: (addons-286595) Getting domain xml...
	I0501 02:07:56.017210   21421 main.go:141] libmachine: (addons-286595) Creating domain...
	I0501 02:07:57.377993   21421 main.go:141] libmachine: (addons-286595) Waiting to get IP...
	I0501 02:07:57.378764   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:07:57.379157   21421 main.go:141] libmachine: (addons-286595) DBG | unable to find current IP address of domain addons-286595 in network mk-addons-286595
	I0501 02:07:57.379200   21421 main.go:141] libmachine: (addons-286595) DBG | I0501 02:07:57.379149   21443 retry.go:31] will retry after 254.326066ms: waiting for machine to come up
	I0501 02:07:57.634700   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:07:57.635120   21421 main.go:141] libmachine: (addons-286595) DBG | unable to find current IP address of domain addons-286595 in network mk-addons-286595
	I0501 02:07:57.635153   21421 main.go:141] libmachine: (addons-286595) DBG | I0501 02:07:57.635064   21443 retry.go:31] will retry after 249.868559ms: waiting for machine to come up
	I0501 02:07:57.886647   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:07:57.887035   21421 main.go:141] libmachine: (addons-286595) DBG | unable to find current IP address of domain addons-286595 in network mk-addons-286595
	I0501 02:07:57.887069   21421 main.go:141] libmachine: (addons-286595) DBG | I0501 02:07:57.887001   21443 retry.go:31] will retry after 445.355301ms: waiting for machine to come up
	I0501 02:07:58.333589   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:07:58.334022   21421 main.go:141] libmachine: (addons-286595) DBG | unable to find current IP address of domain addons-286595 in network mk-addons-286595
	I0501 02:07:58.334051   21421 main.go:141] libmachine: (addons-286595) DBG | I0501 02:07:58.333975   21443 retry.go:31] will retry after 487.078231ms: waiting for machine to come up
	I0501 02:07:58.822615   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:07:58.823027   21421 main.go:141] libmachine: (addons-286595) DBG | unable to find current IP address of domain addons-286595 in network mk-addons-286595
	I0501 02:07:58.823050   21421 main.go:141] libmachine: (addons-286595) DBG | I0501 02:07:58.822999   21443 retry.go:31] will retry after 637.55693ms: waiting for machine to come up
	I0501 02:07:59.461947   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:07:59.462373   21421 main.go:141] libmachine: (addons-286595) DBG | unable to find current IP address of domain addons-286595 in network mk-addons-286595
	I0501 02:07:59.462422   21421 main.go:141] libmachine: (addons-286595) DBG | I0501 02:07:59.462326   21443 retry.go:31] will retry after 711.50572ms: waiting for machine to come up
	I0501 02:08:00.175263   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:00.175675   21421 main.go:141] libmachine: (addons-286595) DBG | unable to find current IP address of domain addons-286595 in network mk-addons-286595
	I0501 02:08:00.175700   21421 main.go:141] libmachine: (addons-286595) DBG | I0501 02:08:00.175617   21443 retry.go:31] will retry after 1.097804426s: waiting for machine to come up
	I0501 02:08:01.275302   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:01.275754   21421 main.go:141] libmachine: (addons-286595) DBG | unable to find current IP address of domain addons-286595 in network mk-addons-286595
	I0501 02:08:01.276330   21421 main.go:141] libmachine: (addons-286595) DBG | I0501 02:08:01.276098   21443 retry.go:31] will retry after 1.219199563s: waiting for machine to come up
	I0501 02:08:02.496666   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:02.497066   21421 main.go:141] libmachine: (addons-286595) DBG | unable to find current IP address of domain addons-286595 in network mk-addons-286595
	I0501 02:08:02.497094   21421 main.go:141] libmachine: (addons-286595) DBG | I0501 02:08:02.497031   21443 retry.go:31] will retry after 1.494167654s: waiting for machine to come up
	I0501 02:08:03.993680   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:03.994088   21421 main.go:141] libmachine: (addons-286595) DBG | unable to find current IP address of domain addons-286595 in network mk-addons-286595
	I0501 02:08:03.994115   21421 main.go:141] libmachine: (addons-286595) DBG | I0501 02:08:03.994048   21443 retry.go:31] will retry after 2.157364528s: waiting for machine to come up
	I0501 02:08:06.152699   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:06.153083   21421 main.go:141] libmachine: (addons-286595) DBG | unable to find current IP address of domain addons-286595 in network mk-addons-286595
	I0501 02:08:06.153106   21421 main.go:141] libmachine: (addons-286595) DBG | I0501 02:08:06.153060   21443 retry.go:31] will retry after 2.06631124s: waiting for machine to come up
	I0501 02:08:08.222546   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:08.222962   21421 main.go:141] libmachine: (addons-286595) DBG | unable to find current IP address of domain addons-286595 in network mk-addons-286595
	I0501 02:08:08.222985   21421 main.go:141] libmachine: (addons-286595) DBG | I0501 02:08:08.222927   21443 retry.go:31] will retry after 2.959305142s: waiting for machine to come up
	I0501 02:08:11.183544   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:11.183944   21421 main.go:141] libmachine: (addons-286595) DBG | unable to find current IP address of domain addons-286595 in network mk-addons-286595
	I0501 02:08:11.183971   21421 main.go:141] libmachine: (addons-286595) DBG | I0501 02:08:11.183898   21443 retry.go:31] will retry after 4.259579563s: waiting for machine to come up
	I0501 02:08:15.445367   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:15.445760   21421 main.go:141] libmachine: (addons-286595) DBG | unable to find current IP address of domain addons-286595 in network mk-addons-286595
	I0501 02:08:15.445790   21421 main.go:141] libmachine: (addons-286595) DBG | I0501 02:08:15.445719   21443 retry.go:31] will retry after 4.682748792s: waiting for machine to come up
	I0501 02:08:20.133571   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:20.134012   21421 main.go:141] libmachine: (addons-286595) Found IP for machine: 192.168.39.173
	I0501 02:08:20.134042   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has current primary IP address 192.168.39.173 and MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:20.134051   21421 main.go:141] libmachine: (addons-286595) Reserving static IP address...
	I0501 02:08:20.134449   21421 main.go:141] libmachine: (addons-286595) DBG | unable to find host DHCP lease matching {name: "addons-286595", mac: "52:54:00:74:55:7e", ip: "192.168.39.173"} in network mk-addons-286595
	I0501 02:08:20.207290   21421 main.go:141] libmachine: (addons-286595) DBG | Getting to WaitForSSH function...
	I0501 02:08:20.207320   21421 main.go:141] libmachine: (addons-286595) Reserved static IP address: 192.168.39.173
	I0501 02:08:20.207332   21421 main.go:141] libmachine: (addons-286595) Waiting for SSH to be available...
	I0501 02:08:20.209437   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:20.209892   21421 main.go:141] libmachine: (addons-286595) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:55:7e", ip: ""} in network mk-addons-286595: {Iface:virbr1 ExpiryTime:2024-05-01 03:08:11 +0000 UTC Type:0 Mac:52:54:00:74:55:7e Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:minikube Clientid:01:52:54:00:74:55:7e}
	I0501 02:08:20.209937   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined IP address 192.168.39.173 and MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:20.209961   21421 main.go:141] libmachine: (addons-286595) DBG | Using SSH client type: external
	I0501 02:08:20.209991   21421 main.go:141] libmachine: (addons-286595) DBG | Using SSH private key: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/addons-286595/id_rsa (-rw-------)
	I0501 02:08:20.210051   21421 main.go:141] libmachine: (addons-286595) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.173 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18779-13391/.minikube/machines/addons-286595/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0501 02:08:20.210071   21421 main.go:141] libmachine: (addons-286595) DBG | About to run SSH command:
	I0501 02:08:20.210083   21421 main.go:141] libmachine: (addons-286595) DBG | exit 0
	I0501 02:08:20.342966   21421 main.go:141] libmachine: (addons-286595) DBG | SSH cmd err, output: <nil>: 
	I0501 02:08:20.343259   21421 main.go:141] libmachine: (addons-286595) KVM machine creation complete!
	I0501 02:08:20.343571   21421 main.go:141] libmachine: (addons-286595) Calling .GetConfigRaw
	I0501 02:08:20.344111   21421 main.go:141] libmachine: (addons-286595) Calling .DriverName
	I0501 02:08:20.344283   21421 main.go:141] libmachine: (addons-286595) Calling .DriverName
	I0501 02:08:20.344484   21421 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0501 02:08:20.344498   21421 main.go:141] libmachine: (addons-286595) Calling .GetState
	I0501 02:08:20.345605   21421 main.go:141] libmachine: Detecting operating system of created instance...
	I0501 02:08:20.345619   21421 main.go:141] libmachine: Waiting for SSH to be available...
	I0501 02:08:20.345625   21421 main.go:141] libmachine: Getting to WaitForSSH function...
	I0501 02:08:20.345630   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHHostname
	I0501 02:08:20.347918   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:20.348255   21421 main.go:141] libmachine: (addons-286595) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:55:7e", ip: ""} in network mk-addons-286595: {Iface:virbr1 ExpiryTime:2024-05-01 03:08:11 +0000 UTC Type:0 Mac:52:54:00:74:55:7e Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:addons-286595 Clientid:01:52:54:00:74:55:7e}
	I0501 02:08:20.348278   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined IP address 192.168.39.173 and MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:20.348442   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHPort
	I0501 02:08:20.348626   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHKeyPath
	I0501 02:08:20.348805   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHKeyPath
	I0501 02:08:20.348945   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHUsername
	I0501 02:08:20.349105   21421 main.go:141] libmachine: Using SSH client type: native
	I0501 02:08:20.349335   21421 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.173 22 <nil> <nil>}
	I0501 02:08:20.349350   21421 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0501 02:08:20.458068   21421 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0501 02:08:20.458095   21421 main.go:141] libmachine: Detecting the provisioner...
	I0501 02:08:20.458102   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHHostname
	I0501 02:08:20.460903   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:20.461288   21421 main.go:141] libmachine: (addons-286595) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:55:7e", ip: ""} in network mk-addons-286595: {Iface:virbr1 ExpiryTime:2024-05-01 03:08:11 +0000 UTC Type:0 Mac:52:54:00:74:55:7e Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:addons-286595 Clientid:01:52:54:00:74:55:7e}
	I0501 02:08:20.461349   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined IP address 192.168.39.173 and MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:20.461481   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHPort
	I0501 02:08:20.461663   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHKeyPath
	I0501 02:08:20.461803   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHKeyPath
	I0501 02:08:20.461927   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHUsername
	I0501 02:08:20.462056   21421 main.go:141] libmachine: Using SSH client type: native
	I0501 02:08:20.462205   21421 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.173 22 <nil> <nil>}
	I0501 02:08:20.462216   21421 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0501 02:08:20.572508   21421 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0501 02:08:20.572576   21421 main.go:141] libmachine: found compatible host: buildroot
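
Provisioner detection keys off the ID field of the /etc/os-release output fetched just above. A small sketch of that parsing step follows; the KEY=VALUE handling follows the os-release convention, and the matching logic is illustrative rather than minikube's exact implementation.

// osrelease.go: pick a provisioner name from os-release style output.
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseOSRelease turns KEY=VALUE lines (as printed by cat /etc/os-release)
// into a map, stripping surrounding quotes from values.
func parseOSRelease(out string) map[string]string {
	fields := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(out))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || !strings.Contains(line, "=") {
			continue
		}
		kv := strings.SplitN(line, "=", 2)
		fields[kv[0]] = strings.Trim(kv[1], `"`)
	}
	return fields
}

func main() {
	// Sample taken from the os-release output logged above.
	out := "NAME=Buildroot\nVERSION=2023.02.9\nID=buildroot\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
	f := parseOSRelease(out)
	if f["ID"] == "buildroot" {
		fmt.Println("found compatible host:", f["ID"])
	}
}
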
	I0501 02:08:20.572583   21421 main.go:141] libmachine: Provisioning with buildroot...
	I0501 02:08:20.572591   21421 main.go:141] libmachine: (addons-286595) Calling .GetMachineName
	I0501 02:08:20.572852   21421 buildroot.go:166] provisioning hostname "addons-286595"
	I0501 02:08:20.572896   21421 main.go:141] libmachine: (addons-286595) Calling .GetMachineName
	I0501 02:08:20.573062   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHHostname
	I0501 02:08:20.575445   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:20.575772   21421 main.go:141] libmachine: (addons-286595) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:55:7e", ip: ""} in network mk-addons-286595: {Iface:virbr1 ExpiryTime:2024-05-01 03:08:11 +0000 UTC Type:0 Mac:52:54:00:74:55:7e Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:addons-286595 Clientid:01:52:54:00:74:55:7e}
	I0501 02:08:20.575805   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined IP address 192.168.39.173 and MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:20.575903   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHPort
	I0501 02:08:20.576089   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHKeyPath
	I0501 02:08:20.576245   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHKeyPath
	I0501 02:08:20.576387   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHUsername
	I0501 02:08:20.576530   21421 main.go:141] libmachine: Using SSH client type: native
	I0501 02:08:20.576681   21421 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.173 22 <nil> <nil>}
	I0501 02:08:20.576693   21421 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-286595 && echo "addons-286595" | sudo tee /etc/hostname
	I0501 02:08:20.711261   21421 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-286595
	
	I0501 02:08:20.711290   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHHostname
	I0501 02:08:20.713949   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:20.714242   21421 main.go:141] libmachine: (addons-286595) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:55:7e", ip: ""} in network mk-addons-286595: {Iface:virbr1 ExpiryTime:2024-05-01 03:08:11 +0000 UTC Type:0 Mac:52:54:00:74:55:7e Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:addons-286595 Clientid:01:52:54:00:74:55:7e}
	I0501 02:08:20.714279   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined IP address 192.168.39.173 and MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:20.714472   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHPort
	I0501 02:08:20.714682   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHKeyPath
	I0501 02:08:20.714848   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHKeyPath
	I0501 02:08:20.714989   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHUsername
	I0501 02:08:20.715167   21421 main.go:141] libmachine: Using SSH client type: native
	I0501 02:08:20.715314   21421 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.173 22 <nil> <nil>}
	I0501 02:08:20.715330   21421 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-286595' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-286595/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-286595' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0501 02:08:20.839461   21421 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0501 02:08:20.839491   21421 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18779-13391/.minikube CaCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18779-13391/.minikube}
	I0501 02:08:20.839533   21421 buildroot.go:174] setting up certificates
	I0501 02:08:20.839544   21421 provision.go:84] configureAuth start
	I0501 02:08:20.839553   21421 main.go:141] libmachine: (addons-286595) Calling .GetMachineName
	I0501 02:08:20.839811   21421 main.go:141] libmachine: (addons-286595) Calling .GetIP
	I0501 02:08:20.842034   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:20.842436   21421 main.go:141] libmachine: (addons-286595) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:55:7e", ip: ""} in network mk-addons-286595: {Iface:virbr1 ExpiryTime:2024-05-01 03:08:11 +0000 UTC Type:0 Mac:52:54:00:74:55:7e Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:addons-286595 Clientid:01:52:54:00:74:55:7e}
	I0501 02:08:20.842466   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined IP address 192.168.39.173 and MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:20.842579   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHHostname
	I0501 02:08:20.844673   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:20.844962   21421 main.go:141] libmachine: (addons-286595) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:55:7e", ip: ""} in network mk-addons-286595: {Iface:virbr1 ExpiryTime:2024-05-01 03:08:11 +0000 UTC Type:0 Mac:52:54:00:74:55:7e Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:addons-286595 Clientid:01:52:54:00:74:55:7e}
	I0501 02:08:20.844986   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined IP address 192.168.39.173 and MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:20.845168   21421 provision.go:143] copyHostCerts
	I0501 02:08:20.845250   21421 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem (1123 bytes)
	I0501 02:08:20.845405   21421 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem (1675 bytes)
	I0501 02:08:20.845489   21421 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem (1078 bytes)
	I0501 02:08:20.845560   21421 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem org=jenkins.addons-286595 san=[127.0.0.1 192.168.39.173 addons-286595 localhost minikube]
	I0501 02:08:20.925369   21421 provision.go:177] copyRemoteCerts
	I0501 02:08:20.925429   21421 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0501 02:08:20.925456   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHHostname
	I0501 02:08:20.927667   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:20.927959   21421 main.go:141] libmachine: (addons-286595) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:55:7e", ip: ""} in network mk-addons-286595: {Iface:virbr1 ExpiryTime:2024-05-01 03:08:11 +0000 UTC Type:0 Mac:52:54:00:74:55:7e Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:addons-286595 Clientid:01:52:54:00:74:55:7e}
	I0501 02:08:20.927988   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined IP address 192.168.39.173 and MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:20.928146   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHPort
	I0501 02:08:20.928339   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHKeyPath
	I0501 02:08:20.928485   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHUsername
	I0501 02:08:20.928610   21421 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/addons-286595/id_rsa Username:docker}
	I0501 02:08:21.013616   21421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0501 02:08:21.041583   21421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0501 02:08:21.068530   21421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0501 02:08:21.096204   21421 provision.go:87] duration metric: took 256.648307ms to configureAuth
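
configureAuth ends by issuing a server certificate whose subject alternative names cover the VM's IP and hostnames, as logged above, and copying it to the guest. A compact sketch of issuing such a SAN-bearing certificate with Go's crypto/x509 follows, using a throwaway in-memory CA; organization names, lifetimes and SAN values are illustrative, and error handling is elided.

// servercert.go: issue a server certificate with IP and DNS SANs, signed by a throwaway CA.
// Error handling is elided for brevity in this sketch; minikube loads its CA from disk instead.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA, generated in memory for the sketch.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "example-ca"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the kind of SANs seen in the log above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"example-org"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		DNSNames:     []string{"localhost", "minikube", "addons-286595"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.173")},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER})))
}
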
	I0501 02:08:21.096233   21421 buildroot.go:189] setting minikube options for container-runtime
	I0501 02:08:21.096457   21421 config.go:182] Loaded profile config "addons-286595": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 02:08:21.096555   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHHostname
	I0501 02:08:21.099048   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:21.099437   21421 main.go:141] libmachine: (addons-286595) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:55:7e", ip: ""} in network mk-addons-286595: {Iface:virbr1 ExpiryTime:2024-05-01 03:08:11 +0000 UTC Type:0 Mac:52:54:00:74:55:7e Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:addons-286595 Clientid:01:52:54:00:74:55:7e}
	I0501 02:08:21.099468   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined IP address 192.168.39.173 and MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:21.099637   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHPort
	I0501 02:08:21.099862   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHKeyPath
	I0501 02:08:21.100018   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHKeyPath
	I0501 02:08:21.100172   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHUsername
	I0501 02:08:21.100313   21421 main.go:141] libmachine: Using SSH client type: native
	I0501 02:08:21.100533   21421 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.173 22 <nil> <nil>}
	I0501 02:08:21.100557   21421 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0501 02:08:21.383701   21421 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0501 02:08:21.383728   21421 main.go:141] libmachine: Checking connection to Docker...
	I0501 02:08:21.383741   21421 main.go:141] libmachine: (addons-286595) Calling .GetURL
	I0501 02:08:21.384973   21421 main.go:141] libmachine: (addons-286595) DBG | Using libvirt version 6000000
	I0501 02:08:21.387017   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:21.387287   21421 main.go:141] libmachine: (addons-286595) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:55:7e", ip: ""} in network mk-addons-286595: {Iface:virbr1 ExpiryTime:2024-05-01 03:08:11 +0000 UTC Type:0 Mac:52:54:00:74:55:7e Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:addons-286595 Clientid:01:52:54:00:74:55:7e}
	I0501 02:08:21.387334   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined IP address 192.168.39.173 and MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:21.387419   21421 main.go:141] libmachine: Docker is up and running!
	I0501 02:08:21.387443   21421 main.go:141] libmachine: Reticulating splines...
	I0501 02:08:21.387451   21421 client.go:171] duration metric: took 25.98551086s to LocalClient.Create
	I0501 02:08:21.387475   21421 start.go:167] duration metric: took 25.985598472s to libmachine.API.Create "addons-286595"
	I0501 02:08:21.387485   21421 start.go:293] postStartSetup for "addons-286595" (driver="kvm2")
	I0501 02:08:21.387494   21421 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0501 02:08:21.387538   21421 main.go:141] libmachine: (addons-286595) Calling .DriverName
	I0501 02:08:21.387828   21421 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0501 02:08:21.387854   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHHostname
	I0501 02:08:21.389686   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:21.389904   21421 main.go:141] libmachine: (addons-286595) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:55:7e", ip: ""} in network mk-addons-286595: {Iface:virbr1 ExpiryTime:2024-05-01 03:08:11 +0000 UTC Type:0 Mac:52:54:00:74:55:7e Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:addons-286595 Clientid:01:52:54:00:74:55:7e}
	I0501 02:08:21.389928   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined IP address 192.168.39.173 and MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:21.390024   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHPort
	I0501 02:08:21.390178   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHKeyPath
	I0501 02:08:21.390336   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHUsername
	I0501 02:08:21.390475   21421 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/addons-286595/id_rsa Username:docker}
	I0501 02:08:21.473577   21421 ssh_runner.go:195] Run: cat /etc/os-release
	I0501 02:08:21.478614   21421 info.go:137] Remote host: Buildroot 2023.02.9
	I0501 02:08:21.478641   21421 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13391/.minikube/addons for local assets ...
	I0501 02:08:21.478720   21421 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13391/.minikube/files for local assets ...
	I0501 02:08:21.478751   21421 start.go:296] duration metric: took 91.260746ms for postStartSetup
	I0501 02:08:21.478789   21421 main.go:141] libmachine: (addons-286595) Calling .GetConfigRaw
	I0501 02:08:21.479330   21421 main.go:141] libmachine: (addons-286595) Calling .GetIP
	I0501 02:08:21.481594   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:21.481921   21421 main.go:141] libmachine: (addons-286595) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:55:7e", ip: ""} in network mk-addons-286595: {Iface:virbr1 ExpiryTime:2024-05-01 03:08:11 +0000 UTC Type:0 Mac:52:54:00:74:55:7e Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:addons-286595 Clientid:01:52:54:00:74:55:7e}
	I0501 02:08:21.481964   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined IP address 192.168.39.173 and MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:21.482147   21421 profile.go:143] Saving config to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/config.json ...
	I0501 02:08:21.482344   21421 start.go:128] duration metric: took 26.099176468s to createHost
	I0501 02:08:21.482371   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHHostname
	I0501 02:08:21.484256   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:21.484574   21421 main.go:141] libmachine: (addons-286595) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:55:7e", ip: ""} in network mk-addons-286595: {Iface:virbr1 ExpiryTime:2024-05-01 03:08:11 +0000 UTC Type:0 Mac:52:54:00:74:55:7e Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:addons-286595 Clientid:01:52:54:00:74:55:7e}
	I0501 02:08:21.484596   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined IP address 192.168.39.173 and MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:21.484720   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHPort
	I0501 02:08:21.484866   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHKeyPath
	I0501 02:08:21.485016   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHKeyPath
	I0501 02:08:21.485166   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHUsername
	I0501 02:08:21.485304   21421 main.go:141] libmachine: Using SSH client type: native
	I0501 02:08:21.485458   21421 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.173 22 <nil> <nil>}
	I0501 02:08:21.485469   21421 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0501 02:08:21.595702   21421 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714529301.579854174
	
	I0501 02:08:21.595726   21421 fix.go:216] guest clock: 1714529301.579854174
	I0501 02:08:21.595734   21421 fix.go:229] Guest: 2024-05-01 02:08:21.579854174 +0000 UTC Remote: 2024-05-01 02:08:21.482357717 +0000 UTC m=+26.213215784 (delta=97.496457ms)
	I0501 02:08:21.595754   21421 fix.go:200] guest clock delta is within tolerance: 97.496457ms
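
The fix step compares the guest's wall clock against the host's and only intervenes when the delta exceeds a tolerance. A sketch of that comparison follows; the three-second threshold is an assumption for illustration, not a value taken from minikube.

// clockdelta.go: decide whether a guest clock needs fixing, given its epoch timestamp.
package main

import (
	"fmt"
	"time"
)

// clockDelta returns how far the guest clock is from local time.
func clockDelta(guestEpoch float64) time.Duration {
	guest := time.Unix(0, int64(guestEpoch*float64(time.Second)))
	return time.Since(guest)
}

func main() {
	const tolerance = 3 * time.Second // assumed threshold, for illustration only

	// Guest epoch as printed by a date +%s.%N style probe (value from the log above).
	delta := clockDelta(1714529301.579854174)
	if delta < -tolerance || delta > tolerance {
		fmt.Printf("guest clock delta %v outside tolerance, would resync\n", delta)
	} else {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	}
}
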
	I0501 02:08:21.595759   21421 start.go:83] releasing machines lock for "addons-286595", held for 26.212671298s
	I0501 02:08:21.595776   21421 main.go:141] libmachine: (addons-286595) Calling .DriverName
	I0501 02:08:21.596009   21421 main.go:141] libmachine: (addons-286595) Calling .GetIP
	I0501 02:08:21.598718   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:21.599049   21421 main.go:141] libmachine: (addons-286595) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:55:7e", ip: ""} in network mk-addons-286595: {Iface:virbr1 ExpiryTime:2024-05-01 03:08:11 +0000 UTC Type:0 Mac:52:54:00:74:55:7e Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:addons-286595 Clientid:01:52:54:00:74:55:7e}
	I0501 02:08:21.599080   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined IP address 192.168.39.173 and MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:21.599195   21421 main.go:141] libmachine: (addons-286595) Calling .DriverName
	I0501 02:08:21.599965   21421 main.go:141] libmachine: (addons-286595) Calling .DriverName
	I0501 02:08:21.600153   21421 main.go:141] libmachine: (addons-286595) Calling .DriverName
	I0501 02:08:21.600247   21421 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0501 02:08:21.600286   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHHostname
	I0501 02:08:21.600366   21421 ssh_runner.go:195] Run: cat /version.json
	I0501 02:08:21.600385   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHHostname
	I0501 02:08:21.602839   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:21.602862   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:21.603239   21421 main.go:141] libmachine: (addons-286595) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:55:7e", ip: ""} in network mk-addons-286595: {Iface:virbr1 ExpiryTime:2024-05-01 03:08:11 +0000 UTC Type:0 Mac:52:54:00:74:55:7e Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:addons-286595 Clientid:01:52:54:00:74:55:7e}
	I0501 02:08:21.603268   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined IP address 192.168.39.173 and MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:21.603372   21421 main.go:141] libmachine: (addons-286595) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:55:7e", ip: ""} in network mk-addons-286595: {Iface:virbr1 ExpiryTime:2024-05-01 03:08:11 +0000 UTC Type:0 Mac:52:54:00:74:55:7e Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:addons-286595 Clientid:01:52:54:00:74:55:7e}
	I0501 02:08:21.603398   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined IP address 192.168.39.173 and MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:21.603401   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHPort
	I0501 02:08:21.603575   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHKeyPath
	I0501 02:08:21.603590   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHPort
	I0501 02:08:21.603757   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHUsername
	I0501 02:08:21.603757   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHKeyPath
	I0501 02:08:21.603958   21421 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/addons-286595/id_rsa Username:docker}
	I0501 02:08:21.603970   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHUsername
	I0501 02:08:21.604092   21421 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/addons-286595/id_rsa Username:docker}
	I0501 02:08:21.711583   21421 ssh_runner.go:195] Run: systemctl --version
	I0501 02:08:21.717909   21421 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0501 02:08:21.885783   21421 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0501 02:08:21.892692   21421 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0501 02:08:21.892747   21421 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0501 02:08:21.910176   21421 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0501 02:08:21.910190   21421 start.go:494] detecting cgroup driver to use...
	I0501 02:08:21.910245   21421 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0501 02:08:21.928118   21421 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 02:08:21.942966   21421 docker.go:217] disabling cri-docker service (if available) ...
	I0501 02:08:21.943041   21421 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0501 02:08:21.956917   21421 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0501 02:08:21.970716   21421 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0501 02:08:22.086139   21421 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0501 02:08:22.253424   21421 docker.go:233] disabling docker service ...
	I0501 02:08:22.253495   21421 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0501 02:08:22.269905   21421 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0501 02:08:22.283870   21421 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0501 02:08:22.407277   21421 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0501 02:08:22.525776   21421 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0501 02:08:22.541243   21421 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 02:08:22.561820   21421 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0501 02:08:22.561901   21421 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 02:08:22.573493   21421 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0501 02:08:22.573564   21421 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 02:08:22.585200   21421 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 02:08:22.596927   21421 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 02:08:22.608362   21421 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0501 02:08:22.619974   21421 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 02:08:22.631160   21421 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 02:08:22.649639   21421 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
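
The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch the cgroup manager to cgroupfs, and open unprivileged low ports through default_sysctls. A rough Go equivalent of those string rewrites follows, operating on config text in memory; the regular expressions are approximations of the sed expressions, not minikube's code.

// crioconf.go: apply the same kind of edits the sed commands above make to a CRI-O drop-in.
// The patterns are illustrative approximations of the sed expressions in the log.
package main

import (
	"fmt"
	"regexp"
)

func rewriteCrioConf(conf, pauseImage, cgroupManager string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q", cgroupManager))
	// Allow unprivileged low ports, mirroring the default_sysctls edit.
	conf = regexp.MustCompile(`(?m)^default_sysctls *= *\[`).
		ReplaceAllString(conf, "default_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",")
	return conf
}

func main() {
	in := `pause_image = "old"
cgroup_manager = "systemd"
default_sysctls = [
]
`
	fmt.Print(rewriteCrioConf(in, "registry.k8s.io/pause:3.9", "cgroupfs"))
}
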
	I0501 02:08:22.660488   21421 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0501 02:08:22.670506   21421 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0501 02:08:22.670555   21421 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0501 02:08:22.685256   21421 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0501 02:08:22.696083   21421 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:08:22.834876   21421 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0501 02:08:22.982747   21421 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0501 02:08:22.982844   21421 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0501 02:08:22.988641   21421 start.go:562] Will wait 60s for crictl version
	I0501 02:08:22.988721   21421 ssh_runner.go:195] Run: which crictl
	I0501 02:08:22.993315   21421 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0501 02:08:23.034485   21421 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
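
After restarting CRI-O, startup waits (up to 60s each) for the CRI socket to appear and for crictl to answer. A sketch of that polling follows, shelling out the way the ssh_runner lines do; the socket path and timeouts mirror the log, while the poll interval is an arbitrary choice for this sketch.

// criwait.go: wait for a CRI socket, then for crictl to report a version.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func waitFor(desc string, timeout time.Duration, ready func() bool) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if ready() {
			return nil
		}
		time.Sleep(500 * time.Millisecond) // poll interval chosen for the sketch
	}
	return fmt.Errorf("timed out waiting for %s", desc)
}

func main() {
	sock := "/var/run/crio/crio.sock"

	err := waitFor("CRI socket "+sock, 60*time.Second, func() bool {
		_, statErr := os.Stat(sock)
		return statErr == nil
	})
	if err == nil {
		err = waitFor("crictl version", 60*time.Second, func() bool {
			return exec.Command("sudo", "crictl", "version").Run() == nil
		})
	}
	fmt.Println("ready:", err == nil)
}
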
	I0501 02:08:23.034604   21421 ssh_runner.go:195] Run: crio --version
	I0501 02:08:23.065303   21421 ssh_runner.go:195] Run: crio --version
	I0501 02:08:23.097679   21421 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0501 02:08:23.099042   21421 main.go:141] libmachine: (addons-286595) Calling .GetIP
	I0501 02:08:23.101897   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:23.102290   21421 main.go:141] libmachine: (addons-286595) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:55:7e", ip: ""} in network mk-addons-286595: {Iface:virbr1 ExpiryTime:2024-05-01 03:08:11 +0000 UTC Type:0 Mac:52:54:00:74:55:7e Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:addons-286595 Clientid:01:52:54:00:74:55:7e}
	I0501 02:08:23.102318   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined IP address 192.168.39.173 and MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:23.102542   21421 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0501 02:08:23.107669   21421 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0501 02:08:23.122255   21421 kubeadm.go:877] updating cluster {Name:addons-286595 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-286595 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.173 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0501 02:08:23.122380   21421 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0501 02:08:23.122451   21421 ssh_runner.go:195] Run: sudo crictl images --output json
	I0501 02:08:23.157908   21421 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0501 02:08:23.157978   21421 ssh_runner.go:195] Run: which lz4
	I0501 02:08:23.162627   21421 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0501 02:08:23.167516   21421 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0501 02:08:23.167546   21421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0501 02:08:24.648981   21421 crio.go:462] duration metric: took 1.48639053s to copy over tarball
	I0501 02:08:24.649045   21421 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0501 02:08:27.328224   21421 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.679148159s)
	I0501 02:08:27.328275   21421 crio.go:469] duration metric: took 2.679269008s to extract the tarball
	I0501 02:08:27.328287   21421 ssh_runner.go:146] rm: /preloaded.tar.lz4
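
The preload path stats /preloaded.tar.lz4 on the guest, copies the cached tarball over when it is missing, and unpacks it under /var with lz4-aware tar before deleting it. A local sketch of the check-then-extract step follows; the copy-over is only hinted at, and running this for real would need sudo, tar and lz4 on the machine.

// preload.go: extract a preloaded image tarball if it is present.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// extractPreload unpacks an lz4-compressed tarball into destDir, preserving xattrs,
// mirroring the tar flags seen in the log above.
func extractPreload(tarball, destDir string) error {
	if _, err := os.Stat(tarball); err != nil {
		return fmt.Errorf("preload %s not present, would copy it over first: %w", tarball, err)
	}
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", destDir, "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
		fmt.Println(err)
	}
}
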
	I0501 02:08:27.366840   21421 ssh_runner.go:195] Run: sudo crictl images --output json
	I0501 02:08:27.416579   21421 crio.go:514] all images are preloaded for cri-o runtime.
	I0501 02:08:27.416601   21421 cache_images.go:84] Images are preloaded, skipping loading
	I0501 02:08:27.416609   21421 kubeadm.go:928] updating node { 192.168.39.173 8443 v1.30.0 crio true true} ...
	I0501 02:08:27.416721   21421 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-286595 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.173
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:addons-286595 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0501 02:08:27.416782   21421 ssh_runner.go:195] Run: crio config
	I0501 02:08:27.468311   21421 cni.go:84] Creating CNI manager for ""
	I0501 02:08:27.468334   21421 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0501 02:08:27.468345   21421 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0501 02:08:27.468365   21421 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.173 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-286595 NodeName:addons-286595 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.173"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.173 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0501 02:08:27.468496   21421 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.173
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-286595"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.173
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.173"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0501 02:08:27.468554   21421 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0501 02:08:27.479727   21421 binaries.go:44] Found k8s binaries, skipping transfer
	I0501 02:08:27.479796   21421 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0501 02:08:27.490057   21421 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0501 02:08:27.508749   21421 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0501 02:08:27.528607   21421 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0501 02:08:27.547096   21421 ssh_runner.go:195] Run: grep 192.168.39.173	control-plane.minikube.internal$ /etc/hosts
	I0501 02:08:27.551283   21421 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.173	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
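
Both hosts-file updates (host.minikube.internal earlier, control-plane.minikube.internal here) follow the same pattern: drop any existing mapping for the name and append a fresh IP-to-name line. A sketch of that idempotent update follows; it writes to a scratch file so it never touches the real /etc/hosts, and the file layout it produces is a simplification.

// hostsentry.go: ensure an IP<TAB>name mapping exists exactly once in a hosts-style file.
package main

import (
	"fmt"
	"os"
	"strings"
)

func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop any existing mapping for this name
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Writes to a scratch file so the sketch never touches the real /etc/hosts.
	tmp := "hosts.sketch"
	_ = os.WriteFile(tmp, []byte("127.0.0.1\tlocalhost\n"), 0644)
	if err := ensureHostsEntry(tmp, "192.168.39.173", "control-plane.minikube.internal"); err != nil {
		fmt.Println(err)
	}
	out, _ := os.ReadFile(tmp)
	fmt.Print(string(out))
}
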
	I0501 02:08:27.564410   21421 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:08:27.682977   21421 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 02:08:27.701299   21421 certs.go:68] Setting up /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595 for IP: 192.168.39.173
	I0501 02:08:27.701317   21421 certs.go:194] generating shared ca certs ...
	I0501 02:08:27.701342   21421 certs.go:226] acquiring lock for ca certs: {Name:mk85a75d14c00780d4bf09cd9bdaf0f96f1b8fce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:08:27.701485   21421 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key
	I0501 02:08:27.978339   21421 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt ...
	I0501 02:08:27.978369   21421 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt: {Name:mk2aa64ed3ffa43baef26cb76f6975fb66c3c12e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:08:27.978567   21421 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key ...
	I0501 02:08:27.978583   21421 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key: {Name:mk6b15aedf9e8fb8b4e2dafe20ce2c834eb1faff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:08:27.978682   21421 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key
	I0501 02:08:28.112877   21421 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.crt ...
	I0501 02:08:28.112905   21421 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.crt: {Name:mkf177ac27a2dfe775a48543cda735a9e19f5da0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:08:28.113070   21421 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key ...
	I0501 02:08:28.113087   21421 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key: {Name:mk29b0e94925d4b16264e43f4a48d33fd9427cf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:08:28.113193   21421 certs.go:256] generating profile certs ...
	I0501 02:08:28.113246   21421 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/client.key
	I0501 02:08:28.113262   21421 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/client.crt with IP's: []
	I0501 02:08:28.314901   21421 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/client.crt ...
	I0501 02:08:28.314934   21421 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/client.crt: {Name:mk5a02944d247a426dd8a7e06384f15984cfa36e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:08:28.315117   21421 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/client.key ...
	I0501 02:08:28.315131   21421 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/client.key: {Name:mk1afc697d58cae69a9e0addf4c201cb1879cde9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:08:28.315222   21421 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/apiserver.key.3e9978e6
	I0501 02:08:28.315247   21421 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/apiserver.crt.3e9978e6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.173]
	I0501 02:08:28.542717   21421 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/apiserver.crt.3e9978e6 ...
	I0501 02:08:28.542750   21421 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/apiserver.crt.3e9978e6: {Name:mk0b5fe76d0e797a2b8e7d8e7a73a27288ed48cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:08:28.542936   21421 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/apiserver.key.3e9978e6 ...
	I0501 02:08:28.542958   21421 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/apiserver.key.3e9978e6: {Name:mkc457c7f86361301c073c2f383c901b6fd9431d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:08:28.543049   21421 certs.go:381] copying /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/apiserver.crt.3e9978e6 -> /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/apiserver.crt
	I0501 02:08:28.543139   21421 certs.go:385] copying /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/apiserver.key.3e9978e6 -> /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/apiserver.key
	I0501 02:08:28.543199   21421 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/proxy-client.key
	I0501 02:08:28.543222   21421 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/proxy-client.crt with IP's: []
	I0501 02:08:28.682021   21421 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/proxy-client.crt ...
	I0501 02:08:28.682050   21421 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/proxy-client.crt: {Name:mk75acd4b588454a97ed7ee5f8ba7ad77e58f89e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:08:28.682220   21421 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/proxy-client.key ...
	I0501 02:08:28.682236   21421 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/proxy-client.key: {Name:mkecc1c15c34afdbf2add76596b544294bb88da5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:08:28.682581   21421 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem (1679 bytes)
	I0501 02:08:28.682636   21421 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem (1078 bytes)
	I0501 02:08:28.682669   21421 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem (1123 bytes)
	I0501 02:08:28.682693   21421 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem (1675 bytes)
	I0501 02:08:28.683253   21421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0501 02:08:28.715610   21421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0501 02:08:28.746121   21421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0501 02:08:28.777013   21421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0501 02:08:28.805255   21421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0501 02:08:28.833273   21421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0501 02:08:28.862002   21421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0501 02:08:28.891400   21421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0501 02:08:28.919259   21421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0501 02:08:28.946617   21421 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0501 02:08:28.965525   21421 ssh_runner.go:195] Run: openssl version
	I0501 02:08:28.972778   21421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0501 02:08:28.984998   21421 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:08:28.990298   21421 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  1 02:08 /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:08:28.990356   21421 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:08:28.996864   21421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
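
The two shell conditionals above install minikubeCA.pem into the system trust store and symlink it under its OpenSSL subject hash (b5213941.0 in this run). A sketch that derives the hash with the openssl CLI and creates the link follows; the paths are the ones from the log, and error handling is minimal.

// catrust.go: link a CA certificate into /etc/ssl/certs under its OpenSSL subject hash.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkByHash(caPath, certsDir string) error {
	// Ask openssl for the subject hash, e.g. "b5213941", as the log does.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", caPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // replace any stale link
	return os.Symlink(caPath, link)
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println(err)
	}
}
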
	I0501 02:08:29.008760   21421 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0501 02:08:29.013721   21421 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0501 02:08:29.013782   21421 kubeadm.go:391] StartCluster: {Name:addons-286595 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-286595 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.173 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 02:08:29.013879   21421 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0501 02:08:29.013948   21421 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0501 02:08:29.057509   21421 cri.go:89] found id: ""
	I0501 02:08:29.057585   21421 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0501 02:08:29.069506   21421 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0501 02:08:29.081025   21421 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0501 02:08:29.092429   21421 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0501 02:08:29.092449   21421 kubeadm.go:156] found existing configuration files:
	
	I0501 02:08:29.092490   21421 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0501 02:08:29.103039   21421 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0501 02:08:29.103096   21421 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0501 02:08:29.113759   21421 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0501 02:08:29.124976   21421 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0501 02:08:29.125033   21421 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0501 02:08:29.135573   21421 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0501 02:08:29.145326   21421 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0501 02:08:29.145394   21421 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0501 02:08:29.155480   21421 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0501 02:08:29.165170   21421 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0501 02:08:29.165231   21421 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
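
The four checks above are minikube's stale-kubeconfig cleanup: each file under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443 and is otherwise deleted so the following kubeadm init can regenerate it. A minimal shell sketch of the same loop, using the paths and endpoint shown in the log (not minikube's actual Go implementation):

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  # grep exits non-zero when the endpoint is missing (or the file does not exist);
	  # in that case the stale kubeconfig is removed, mirroring the "rm -f" calls above.
	  if ! sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f"; then
	    sudo rm -f "/etc/kubernetes/$f"
	  fi
	done
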
	I0501 02:08:29.176312   21421 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0501 02:08:29.231689   21421 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0501 02:08:29.231827   21421 kubeadm.go:309] [preflight] Running pre-flight checks
	I0501 02:08:29.375556   21421 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0501 02:08:29.375656   21421 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0501 02:08:29.375746   21421 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0501 02:08:29.632538   21421 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0501 02:08:29.831558   21421 out.go:204]   - Generating certificates and keys ...
	I0501 02:08:29.831679   21421 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0501 02:08:29.831744   21421 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0501 02:08:29.831840   21421 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0501 02:08:30.124526   21421 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0501 02:08:30.263886   21421 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0501 02:08:30.440710   21421 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0501 02:08:30.593916   21421 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0501 02:08:30.594214   21421 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-286595 localhost] and IPs [192.168.39.173 127.0.0.1 ::1]
	I0501 02:08:30.816866   21421 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0501 02:08:30.817065   21421 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-286595 localhost] and IPs [192.168.39.173 127.0.0.1 ::1]
	I0501 02:08:30.980470   21421 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0501 02:08:31.061720   21421 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0501 02:08:31.231023   21421 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0501 02:08:31.231276   21421 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0501 02:08:31.382794   21421 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0501 02:08:31.495288   21421 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0501 02:08:31.695041   21421 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0501 02:08:31.771181   21421 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0501 02:08:32.089643   21421 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0501 02:08:32.090114   21421 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0501 02:08:32.092550   21421 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0501 02:08:32.094418   21421 out.go:204]   - Booting up control plane ...
	I0501 02:08:32.094530   21421 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0501 02:08:32.094623   21421 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0501 02:08:32.094727   21421 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0501 02:08:32.110779   21421 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0501 02:08:32.114283   21421 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0501 02:08:32.114325   21421 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0501 02:08:32.265813   21421 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0501 02:08:32.265914   21421 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0501 02:08:32.767536   21421 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.777678ms
	I0501 02:08:32.767646   21421 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0501 02:08:37.768609   21421 kubeadm.go:309] [api-check] The API server is healthy after 5.002253197s
	I0501 02:08:37.784310   21421 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0501 02:08:37.801495   21421 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0501 02:08:37.831816   21421 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0501 02:08:37.832011   21421 kubeadm.go:309] [mark-control-plane] Marking the node addons-286595 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0501 02:08:37.846923   21421 kubeadm.go:309] [bootstrap-token] Using token: 7px8y6.mfs7lhrgb9xogpi0
	I0501 02:08:37.848423   21421 out.go:204]   - Configuring RBAC rules ...
	I0501 02:08:37.848544   21421 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0501 02:08:37.860832   21421 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0501 02:08:37.868527   21421 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0501 02:08:37.872634   21421 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0501 02:08:37.876166   21421 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0501 02:08:37.879538   21421 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0501 02:08:38.177708   21421 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0501 02:08:38.626673   21421 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0501 02:08:39.176479   21421 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0501 02:08:39.176502   21421 kubeadm.go:309] 
	I0501 02:08:39.176578   21421 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0501 02:08:39.176587   21421 kubeadm.go:309] 
	I0501 02:08:39.176662   21421 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0501 02:08:39.176670   21421 kubeadm.go:309] 
	I0501 02:08:39.176711   21421 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0501 02:08:39.176797   21421 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0501 02:08:39.176873   21421 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0501 02:08:39.176886   21421 kubeadm.go:309] 
	I0501 02:08:39.176937   21421 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0501 02:08:39.176946   21421 kubeadm.go:309] 
	I0501 02:08:39.176997   21421 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0501 02:08:39.177004   21421 kubeadm.go:309] 
	I0501 02:08:39.177072   21421 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0501 02:08:39.177168   21421 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0501 02:08:39.177266   21421 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0501 02:08:39.177295   21421 kubeadm.go:309] 
	I0501 02:08:39.177421   21421 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0501 02:08:39.177530   21421 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0501 02:08:39.177540   21421 kubeadm.go:309] 
	I0501 02:08:39.177643   21421 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 7px8y6.mfs7lhrgb9xogpi0 \
	I0501 02:08:39.177775   21421 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:bd94cc6acfb95fe36f70b12c6a1f2980601b8235e7c4dea65cebdf35eb514754 \
	I0501 02:08:39.177815   21421 kubeadm.go:309] 	--control-plane 
	I0501 02:08:39.177824   21421 kubeadm.go:309] 
	I0501 02:08:39.177929   21421 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0501 02:08:39.177942   21421 kubeadm.go:309] 
	I0501 02:08:39.178048   21421 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 7px8y6.mfs7lhrgb9xogpi0 \
	I0501 02:08:39.178191   21421 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:bd94cc6acfb95fe36f70b12c6a1f2980601b8235e7c4dea65cebdf35eb514754 
	I0501 02:08:39.178346   21421 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
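
The --discovery-token-ca-cert-hash value printed in the join commands above can be recomputed from the cluster CA whenever it needs to be verified. A hedged sketch using the certificate directory reported earlier (/var/lib/minikube/certs; the exact ca.crt path there is an assumption) and the standard openssl pipeline from the kubeadm documentation:

	# SHA-256 over the DER-encoded public key of the cluster CA; the result should
	# match the sha256:bd94cc... value shown in the kubeadm join command above.
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'
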
	I0501 02:08:39.178363   21421 cni.go:84] Creating CNI manager for ""
	I0501 02:08:39.178372   21421 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0501 02:08:39.180320   21421 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0501 02:08:39.181537   21421 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0501 02:08:39.194728   21421 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
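
The 496-byte conflist written here is the bridge CNI configuration that the rest of the bring-up relies on; its exact contents are not reproduced in the log. A representative bridge conflist of the same general shape, with the 10.244.0.0/16 pod subnet and all field values assumed, written to a scratch path for side-by-side comparison rather than over the real file:

	sudo cat /etc/cni/net.d/1-k8s.conflist   # inspect what minikube actually wrote
	# Illustrative example only; <<- strips the leading tabs before writing.
	cat > /tmp/bridge-example.conflist <<-'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF
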
	I0501 02:08:39.216034   21421 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0501 02:08:39.216121   21421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:08:39.216127   21421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-286595 minikube.k8s.io/updated_at=2024_05_01T02_08_39_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e minikube.k8s.io/name=addons-286595 minikube.k8s.io/primary=true
	I0501 02:08:39.249433   21421 ops.go:34] apiserver oom_adj: -16
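
At this point three things have happened in quick succession: the apiserver's oom_adj was read back as -16, a cluster-admin binding named minikube-rbac was created for the kube-system:default service account, and the minikube.k8s.io/* labels were applied to the node. The latter two can be spot-checked with the same pinned kubectl and kubeconfig; the follow-up commands below are an assumption and are not part of the test run:

	sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  get clusterrolebinding minikube-rbac -o wide
	sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  get node addons-286595 --show-labels
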
	I0501 02:08:39.366801   21421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:08:39.867705   21421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:08:40.367573   21421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:08:40.867598   21421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:08:41.367658   21421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:08:41.867114   21421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:08:42.367729   21421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:08:42.866964   21421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:08:43.366888   21421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:08:43.867647   21421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:08:44.366934   21421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:08:44.866907   21421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:08:45.367710   21421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:08:45.867637   21421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:08:46.367271   21421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:08:46.867261   21421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:08:47.366939   21421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:08:47.867905   21421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:08:48.367168   21421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:08:48.866957   21421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:08:49.367396   21421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:08:49.867589   21421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:08:50.366996   21421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:08:50.867510   21421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:08:51.366907   21421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:08:51.867645   21421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:08:52.367085   21421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:08:52.950168   21421 kubeadm.go:1107] duration metric: took 13.734115773s to wait for elevateKubeSystemPrivileges
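
The repeated "get sa default" calls above are a readiness poll: minikube retries roughly every 500 ms until the default ServiceAccount exists, and the 13.7 s recorded for elevateKubeSystemPrivileges covers that wait. An equivalent shell sketch using the same pinned binary and kubeconfig as in the log:

	# Poll until the "default" ServiceAccount has been created by the controller manager.
	until sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done
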
	W0501 02:08:52.950209   21421 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0501 02:08:52.950219   21421 kubeadm.go:393] duration metric: took 23.936442112s to StartCluster
	I0501 02:08:52.950248   21421 settings.go:142] acquiring lock: {Name:mkcfe08bcadd45d99c00ed1eaf4e9c226a489fe2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:08:52.950388   21421 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18779-13391/kubeconfig
	I0501 02:08:52.950761   21421 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/kubeconfig: {Name:mk5d1131557f885800877622151c0915337cb23a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:08:52.950959   21421 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0501 02:08:52.950987   21421 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.173 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0501 02:08:52.952716   21421 out.go:177] * Verifying Kubernetes components...
	I0501 02:08:52.951055   21421 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0501 02:08:52.952764   21421 addons.go:69] Setting cloud-spanner=true in profile "addons-286595"
	I0501 02:08:52.954426   21421 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:08:52.954449   21421 addons.go:234] Setting addon cloud-spanner=true in "addons-286595"
	I0501 02:08:52.951208   21421 config.go:182] Loaded profile config "addons-286595": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 02:08:52.952782   21421 addons.go:69] Setting inspektor-gadget=true in profile "addons-286595"
	I0501 02:08:52.954606   21421 addons.go:234] Setting addon inspektor-gadget=true in "addons-286595"
	I0501 02:08:52.954649   21421 host.go:66] Checking if "addons-286595" exists ...
	I0501 02:08:52.952780   21421 addons.go:69] Setting yakd=true in profile "addons-286595"
	I0501 02:08:52.954687   21421 addons.go:234] Setting addon yakd=true in "addons-286595"
	I0501 02:08:52.954711   21421 host.go:66] Checking if "addons-286595" exists ...
	I0501 02:08:52.952791   21421 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-286595"
	I0501 02:08:52.954806   21421 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-286595"
	I0501 02:08:52.954846   21421 host.go:66] Checking if "addons-286595" exists ...
	I0501 02:08:52.952793   21421 addons.go:69] Setting metrics-server=true in profile "addons-286595"
	I0501 02:08:52.952801   21421 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-286595"
	I0501 02:08:52.952801   21421 addons.go:69] Setting default-storageclass=true in profile "addons-286595"
	I0501 02:08:52.952808   21421 addons.go:69] Setting volumesnapshots=true in profile "addons-286595"
	I0501 02:08:52.952810   21421 addons.go:69] Setting gcp-auth=true in profile "addons-286595"
	I0501 02:08:52.952817   21421 addons.go:69] Setting helm-tiller=true in profile "addons-286595"
	I0501 02:08:52.952809   21421 addons.go:69] Setting storage-provisioner=true in profile "addons-286595"
	I0501 02:08:52.952824   21421 addons.go:69] Setting ingress=true in profile "addons-286595"
	I0501 02:08:52.952829   21421 addons.go:69] Setting ingress-dns=true in profile "addons-286595"
	I0501 02:08:52.952848   21421 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-286595"
	I0501 02:08:52.952873   21421 addons.go:69] Setting registry=true in profile "addons-286595"
	I0501 02:08:52.954495   21421 host.go:66] Checking if "addons-286595" exists ...
	I0501 02:08:52.954906   21421 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-286595"
	I0501 02:08:52.954915   21421 addons.go:234] Setting addon volumesnapshots=true in "addons-286595"
	I0501 02:08:52.954940   21421 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-286595"
	I0501 02:08:52.954949   21421 host.go:66] Checking if "addons-286595" exists ...
	I0501 02:08:52.955104   21421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:08:52.955112   21421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:08:52.955128   21421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:08:52.955132   21421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:08:52.955261   21421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:08:52.955291   21421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:08:52.955296   21421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:08:52.955300   21421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:08:52.955313   21421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:08:52.955319   21421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:08:52.955340   21421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:08:52.955363   21421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:08:52.955375   21421 mustload.go:65] Loading cluster: addons-286595
	I0501 02:08:52.955382   21421 addons.go:234] Setting addon metrics-server=true in "addons-286595"
	I0501 02:08:52.955400   21421 addons.go:234] Setting addon helm-tiller=true in "addons-286595"
	I0501 02:08:52.955401   21421 addons.go:234] Setting addon ingress-dns=true in "addons-286595"
	I0501 02:08:52.955418   21421 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-286595"
	I0501 02:08:52.955419   21421 addons.go:234] Setting addon storage-provisioner=true in "addons-286595"
	I0501 02:08:52.955435   21421 addons.go:234] Setting addon registry=true in "addons-286595"
	I0501 02:08:52.955439   21421 addons.go:234] Setting addon ingress=true in "addons-286595"
	I0501 02:08:52.955543   21421 host.go:66] Checking if "addons-286595" exists ...
	I0501 02:08:52.955595   21421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:08:52.955907   21421 host.go:66] Checking if "addons-286595" exists ...
	I0501 02:08:52.955907   21421 host.go:66] Checking if "addons-286595" exists ...
	I0501 02:08:52.955826   21421 config.go:182] Loaded profile config "addons-286595": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 02:08:52.956331   21421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:08:52.956365   21421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:08:52.956393   21421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:08:52.955851   21421 host.go:66] Checking if "addons-286595" exists ...
	I0501 02:08:52.956468   21421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:08:52.956484   21421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:08:52.956514   21421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:08:52.955866   21421 host.go:66] Checking if "addons-286595" exists ...
	I0501 02:08:52.956905   21421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:08:52.956924   21421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:08:52.955875   21421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:08:52.957167   21421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:08:52.955885   21421 host.go:66] Checking if "addons-286595" exists ...
	I0501 02:08:52.959060   21421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:08:52.959085   21421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:08:52.955651   21421 host.go:66] Checking if "addons-286595" exists ...
	I0501 02:08:52.955921   21421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:08:52.976530   21421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46043
	I0501 02:08:52.976624   21421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46045
	I0501 02:08:52.977157   21421 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:08:52.977673   21421 main.go:141] libmachine: Using API Version  1
	I0501 02:08:52.977694   21421 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:08:52.977704   21421 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:08:52.978031   21421 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:08:52.978170   21421 main.go:141] libmachine: Using API Version  1
	I0501 02:08:52.978195   21421 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:08:52.978807   21421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:08:52.978850   21421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:08:52.979041   21421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32787
	I0501 02:08:52.979061   21421 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:08:52.979462   21421 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:08:52.979738   21421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:08:52.979770   21421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:08:52.979948   21421 main.go:141] libmachine: Using API Version  1
	I0501 02:08:52.979971   21421 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:08:52.980275   21421 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:08:52.980456   21421 main.go:141] libmachine: (addons-286595) Calling .GetState
	I0501 02:08:52.982292   21421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38547
	I0501 02:08:52.982687   21421 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:08:52.983189   21421 main.go:141] libmachine: Using API Version  1
	I0501 02:08:52.983210   21421 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:08:52.983586   21421 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:08:52.984139   21421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:08:52.984163   21421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:08:52.984845   21421 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-286595"
	I0501 02:08:52.984892   21421 host.go:66] Checking if "addons-286595" exists ...
	I0501 02:08:52.985244   21421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:08:52.985270   21421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:08:52.986838   21421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:08:52.986878   21421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:08:52.986923   21421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:08:52.986958   21421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:08:52.999537   21421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44279
	I0501 02:08:53.000238   21421 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:08:53.000810   21421 main.go:141] libmachine: Using API Version  1
	I0501 02:08:53.000828   21421 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:08:53.001203   21421 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:08:53.001409   21421 main.go:141] libmachine: (addons-286595) Calling .GetState
	I0501 02:08:53.002043   21421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37591
	I0501 02:08:53.002411   21421 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:08:53.003276   21421 main.go:141] libmachine: Using API Version  1
	I0501 02:08:53.003300   21421 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:08:53.003626   21421 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:08:53.003718   21421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37107
	I0501 02:08:53.004908   21421 addons.go:234] Setting addon default-storageclass=true in "addons-286595"
	I0501 02:08:53.004956   21421 host.go:66] Checking if "addons-286595" exists ...
	I0501 02:08:53.005318   21421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:08:53.005362   21421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:08:53.005553   21421 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:08:53.005783   21421 main.go:141] libmachine: (addons-286595) Calling .GetState
	I0501 02:08:53.006671   21421 main.go:141] libmachine: Using API Version  1
	I0501 02:08:53.006693   21421 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:08:53.007072   21421 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:08:53.007295   21421 main.go:141] libmachine: (addons-286595) Calling .GetState
	I0501 02:08:53.008023   21421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43813
	I0501 02:08:53.008420   21421 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:08:53.008630   21421 main.go:141] libmachine: (addons-286595) Calling .DriverName
	I0501 02:08:53.008914   21421 main.go:141] libmachine: Using API Version  1
	I0501 02:08:53.008936   21421 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:08:53.010849   21421 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0501 02:08:53.009292   21421 main.go:141] libmachine: (addons-286595) Calling .DriverName
	I0501 02:08:53.009461   21421 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:08:53.012148   21421 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0501 02:08:53.012163   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0501 02:08:53.012181   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHHostname
	I0501 02:08:53.012311   21421 main.go:141] libmachine: (addons-286595) Calling .GetState
	I0501 02:08:53.016169   21421 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.27.0
	I0501 02:08:53.014837   21421 host.go:66] Checking if "addons-286595" exists ...
	I0501 02:08:53.015944   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:53.016508   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHPort
	I0501 02:08:53.017514   21421 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0501 02:08:53.017528   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0501 02:08:53.017549   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHHostname
	I0501 02:08:53.017603   21421 main.go:141] libmachine: (addons-286595) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:55:7e", ip: ""} in network mk-addons-286595: {Iface:virbr1 ExpiryTime:2024-05-01 03:08:11 +0000 UTC Type:0 Mac:52:54:00:74:55:7e Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:addons-286595 Clientid:01:52:54:00:74:55:7e}
	I0501 02:08:53.017637   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined IP address 192.168.39.173 and MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:53.017787   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHKeyPath
	I0501 02:08:53.017855   21421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33571
	I0501 02:08:53.017940   21421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:08:53.017976   21421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:08:53.018193   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHUsername
	I0501 02:08:53.018193   21421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44443
	I0501 02:08:53.018443   21421 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/addons-286595/id_rsa Username:docker}
	I0501 02:08:53.018931   21421 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:08:53.019000   21421 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:08:53.019487   21421 main.go:141] libmachine: Using API Version  1
	I0501 02:08:53.019504   21421 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:08:53.019726   21421 main.go:141] libmachine: Using API Version  1
	I0501 02:08:53.019741   21421 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:08:53.020029   21421 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:08:53.020648   21421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:08:53.020673   21421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:08:53.021148   21421 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:08:53.021408   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:53.021852   21421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:08:53.021894   21421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:08:53.022480   21421 main.go:141] libmachine: (addons-286595) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:55:7e", ip: ""} in network mk-addons-286595: {Iface:virbr1 ExpiryTime:2024-05-01 03:08:11 +0000 UTC Type:0 Mac:52:54:00:74:55:7e Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:addons-286595 Clientid:01:52:54:00:74:55:7e}
	I0501 02:08:53.022802   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined IP address 192.168.39.173 and MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:53.022993   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHPort
	I0501 02:08:53.023181   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHKeyPath
	I0501 02:08:53.023372   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHUsername
	I0501 02:08:53.023518   21421 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/addons-286595/id_rsa Username:docker}
	I0501 02:08:53.024019   21421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46597
	I0501 02:08:53.024418   21421 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:08:53.024941   21421 main.go:141] libmachine: Using API Version  1
	I0501 02:08:53.024958   21421 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:08:53.025306   21421 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:08:53.026060   21421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:08:53.026107   21421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:08:53.040585   21421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37537
	I0501 02:08:53.041240   21421 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:08:53.041741   21421 main.go:141] libmachine: Using API Version  1
	I0501 02:08:53.041758   21421 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:08:53.042075   21421 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:08:53.042632   21421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:08:53.042656   21421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:08:53.042855   21421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43447
	I0501 02:08:53.043295   21421 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:08:53.043815   21421 main.go:141] libmachine: Using API Version  1
	I0501 02:08:53.043834   21421 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:08:53.044041   21421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36633
	I0501 02:08:53.044257   21421 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:08:53.044459   21421 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:08:53.044880   21421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:08:53.044912   21421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:08:53.045117   21421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39211
	I0501 02:08:53.045126   21421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34383
	I0501 02:08:53.045649   21421 main.go:141] libmachine: Using API Version  1
	I0501 02:08:53.045665   21421 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:08:53.045725   21421 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:08:53.045792   21421 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:08:53.046133   21421 main.go:141] libmachine: Using API Version  1
	I0501 02:08:53.046149   21421 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:08:53.046265   21421 main.go:141] libmachine: Using API Version  1
	I0501 02:08:53.046274   21421 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:08:53.046624   21421 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:08:53.047109   21421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:08:53.047143   21421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:08:53.047333   21421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33389
	I0501 02:08:53.047346   21421 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:08:53.047607   21421 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:08:53.047835   21421 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:08:53.048072   21421 main.go:141] libmachine: (addons-286595) Calling .GetState
	I0501 02:08:53.048545   21421 main.go:141] libmachine: Using API Version  1
	I0501 02:08:53.048562   21421 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:08:53.048880   21421 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:08:53.049067   21421 main.go:141] libmachine: (addons-286595) Calling .DriverName
	I0501 02:08:53.050573   21421 main.go:141] libmachine: (addons-286595) Calling .DriverName
	I0501 02:08:53.050638   21421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35749
	I0501 02:08:53.050947   21421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:08:53.050970   21421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:08:53.052844   21421 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0501 02:08:53.051743   21421 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:08:53.054009   21421 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0501 02:08:53.054034   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0501 02:08:53.054058   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHHostname
	I0501 02:08:53.054511   21421 main.go:141] libmachine: Using API Version  1
	I0501 02:08:53.054536   21421 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:08:53.054913   21421 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:08:53.055379   21421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:08:53.055414   21421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:08:53.057397   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:53.057793   21421 main.go:141] libmachine: (addons-286595) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:55:7e", ip: ""} in network mk-addons-286595: {Iface:virbr1 ExpiryTime:2024-05-01 03:08:11 +0000 UTC Type:0 Mac:52:54:00:74:55:7e Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:addons-286595 Clientid:01:52:54:00:74:55:7e}
	I0501 02:08:53.057813   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined IP address 192.168.39.173 and MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:53.057967   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHPort
	I0501 02:08:53.058115   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHKeyPath
	I0501 02:08:53.058255   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHUsername
	I0501 02:08:53.058365   21421 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/addons-286595/id_rsa Username:docker}
	I0501 02:08:53.059864   21421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35727
	I0501 02:08:53.060173   21421 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:08:53.060568   21421 main.go:141] libmachine: Using API Version  1
	I0501 02:08:53.060580   21421 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:08:53.060863   21421 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:08:53.061329   21421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:08:53.061358   21421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:08:53.065868   21421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39679
	I0501 02:08:53.066186   21421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43523
	I0501 02:08:53.066410   21421 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:08:53.067271   21421 main.go:141] libmachine: Using API Version  1
	I0501 02:08:53.067289   21421 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:08:53.067687   21421 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:08:53.067886   21421 main.go:141] libmachine: (addons-286595) Calling .GetState
	I0501 02:08:53.068877   21421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38351
	I0501 02:08:53.069175   21421 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:08:53.069308   21421 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:08:53.069726   21421 main.go:141] libmachine: Using API Version  1
	I0501 02:08:53.069748   21421 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:08:53.070136   21421 main.go:141] libmachine: Using API Version  1
	I0501 02:08:53.070157   21421 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:08:53.070853   21421 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:08:53.071021   21421 main.go:141] libmachine: (addons-286595) Calling .DriverName
	I0501 02:08:53.072922   21421 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0501 02:08:53.072950   21421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40605
	I0501 02:08:53.074350   21421 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0501 02:08:53.074365   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0501 02:08:53.074383   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHHostname
	I0501 02:08:53.072848   21421 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:08:53.071561   21421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:08:53.074504   21421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:08:53.074613   21421 main.go:141] libmachine: (addons-286595) Calling .GetState
	I0501 02:08:53.074935   21421 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:08:53.075681   21421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37751
	I0501 02:08:53.076515   21421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42399
	I0501 02:08:53.076531   21421 main.go:141] libmachine: (addons-286595) Calling .DriverName
	I0501 02:08:53.078095   21421 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0501 02:08:53.079211   21421 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0501 02:08:53.079232   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0501 02:08:53.079245   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHHostname
	I0501 02:08:53.077249   21421 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:08:53.079211   21421 main.go:141] libmachine: Using API Version  1
	I0501 02:08:53.079319   21421 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:08:53.077413   21421 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:08:53.078185   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:53.079430   21421 main.go:141] libmachine: (addons-286595) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:55:7e", ip: ""} in network mk-addons-286595: {Iface:virbr1 ExpiryTime:2024-05-01 03:08:11 +0000 UTC Type:0 Mac:52:54:00:74:55:7e Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:addons-286595 Clientid:01:52:54:00:74:55:7e}
	I0501 02:08:53.079456   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined IP address 192.168.39.173 and MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:53.078872   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHPort
	I0501 02:08:53.079878   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHKeyPath
	I0501 02:08:53.079878   21421 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:08:53.080093   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHUsername
	I0501 02:08:53.080298   21421 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/addons-286595/id_rsa Username:docker}
	I0501 02:08:53.080714   21421 main.go:141] libmachine: Using API Version  1
	I0501 02:08:53.080741   21421 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:08:53.080892   21421 main.go:141] libmachine: Using API Version  1
	I0501 02:08:53.080905   21421 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:08:53.081187   21421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:08:53.081217   21421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:08:53.081418   21421 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:08:53.081632   21421 main.go:141] libmachine: (addons-286595) Calling .GetState
	I0501 02:08:53.082390   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:53.083515   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHPort
	I0501 02:08:53.083565   21421 main.go:141] libmachine: (addons-286595) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:55:7e", ip: ""} in network mk-addons-286595: {Iface:virbr1 ExpiryTime:2024-05-01 03:08:11 +0000 UTC Type:0 Mac:52:54:00:74:55:7e Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:addons-286595 Clientid:01:52:54:00:74:55:7e}
	I0501 02:08:53.083584   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined IP address 192.168.39.173 and MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:53.083617   21421 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:08:53.083755   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHKeyPath
	I0501 02:08:53.084035   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHUsername
	I0501 02:08:53.084158   21421 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/addons-286595/id_rsa Username:docker}
	I0501 02:08:53.084440   21421 main.go:141] libmachine: (addons-286595) Calling .DriverName
	I0501 02:08:53.084577   21421 main.go:141] libmachine: (addons-286595) Calling .GetState
	I0501 02:08:53.086064   21421 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.16
	I0501 02:08:53.084795   21421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42675
	I0501 02:08:53.086447   21421 main.go:141] libmachine: (addons-286595) Calling .DriverName
	I0501 02:08:53.087326   21421 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0501 02:08:53.087334   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0501 02:08:53.087345   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHHostname
	I0501 02:08:53.087748   21421 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:08:53.089339   21421 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 02:08:53.090598   21421 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0501 02:08:53.090617   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0501 02:08:53.090634   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHHostname
	I0501 02:08:53.088434   21421 main.go:141] libmachine: Using API Version  1
	I0501 02:08:53.089584   21421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32885
	I0501 02:08:53.090703   21421 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:08:53.090331   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:53.090985   21421 main.go:141] libmachine: (addons-286595) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:55:7e", ip: ""} in network mk-addons-286595: {Iface:virbr1 ExpiryTime:2024-05-01 03:08:11 +0000 UTC Type:0 Mac:52:54:00:74:55:7e Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:addons-286595 Clientid:01:52:54:00:74:55:7e}
	I0501 02:08:53.091019   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined IP address 192.168.39.173 and MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:53.091047   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHPort
	I0501 02:08:53.091254   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHKeyPath
	I0501 02:08:53.091428   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHUsername
	I0501 02:08:53.091804   21421 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/addons-286595/id_rsa Username:docker}
	I0501 02:08:53.092199   21421 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:08:53.092268   21421 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:08:53.092999   21421 main.go:141] libmachine: Using API Version  1
	I0501 02:08:53.093019   21421 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:08:53.093382   21421 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:08:53.093533   21421 main.go:141] libmachine: (addons-286595) Calling .GetState
	I0501 02:08:53.094190   21421 main.go:141] libmachine: (addons-286595) Calling .GetState
	I0501 02:08:53.094258   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:53.095041   21421 main.go:141] libmachine: (addons-286595) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:55:7e", ip: ""} in network mk-addons-286595: {Iface:virbr1 ExpiryTime:2024-05-01 03:08:11 +0000 UTC Type:0 Mac:52:54:00:74:55:7e Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:addons-286595 Clientid:01:52:54:00:74:55:7e}
	I0501 02:08:53.095082   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined IP address 192.168.39.173 and MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:53.095272   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHPort
	I0501 02:08:53.095332   21421 main.go:141] libmachine: (addons-286595) Calling .DriverName
	I0501 02:08:53.095602   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHKeyPath
	I0501 02:08:53.097068   21421 out.go:177]   - Using image docker.io/busybox:stable
	I0501 02:08:53.096505   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHUsername
	I0501 02:08:53.096615   21421 main.go:141] libmachine: (addons-286595) Calling .DriverName
	I0501 02:08:53.098160   21421 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0501 02:08:53.098455   21421 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/addons-286595/id_rsa Username:docker}
	I0501 02:08:53.099682   21421 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0501 02:08:53.099696   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0501 02:08:53.099715   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHHostname
	I0501 02:08:53.100903   21421 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.15.0
	I0501 02:08:53.100366   21421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45035
	I0501 02:08:53.102133   21421 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0501 02:08:53.102145   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0501 02:08:53.102160   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHHostname
	I0501 02:08:53.102754   21421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43689
	I0501 02:08:53.103197   21421 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:08:53.103366   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:53.103419   21421 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:08:53.104385   21421 main.go:141] libmachine: Using API Version  1
	I0501 02:08:53.104401   21421 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:08:53.104523   21421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41789
	I0501 02:08:53.104990   21421 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:08:53.105080   21421 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:08:53.105318   21421 main.go:141] libmachine: (addons-286595) Calling .GetState
	I0501 02:08:53.105727   21421 main.go:141] libmachine: (addons-286595) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:55:7e", ip: ""} in network mk-addons-286595: {Iface:virbr1 ExpiryTime:2024-05-01 03:08:11 +0000 UTC Type:0 Mac:52:54:00:74:55:7e Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:addons-286595 Clientid:01:52:54:00:74:55:7e}
	I0501 02:08:53.105762   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined IP address 192.168.39.173 and MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:53.105912   21421 main.go:141] libmachine: Using API Version  1
	I0501 02:08:53.105934   21421 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:08:53.106604   21421 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:08:53.107098   21421 main.go:141] libmachine: Using API Version  1
	I0501 02:08:53.107117   21421 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:08:53.107198   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHPort
	I0501 02:08:53.107252   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:53.107312   21421 main.go:141] libmachine: (addons-286595) Calling .GetState
	I0501 02:08:53.107355   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHKeyPath
	I0501 02:08:53.107453   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHUsername
	I0501 02:08:53.107484   21421 main.go:141] libmachine: (addons-286595) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:55:7e", ip: ""} in network mk-addons-286595: {Iface:virbr1 ExpiryTime:2024-05-01 03:08:11 +0000 UTC Type:0 Mac:52:54:00:74:55:7e Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:addons-286595 Clientid:01:52:54:00:74:55:7e}
	I0501 02:08:53.107496   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined IP address 192.168.39.173 and MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:53.107565   21421 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/addons-286595/id_rsa Username:docker}
	I0501 02:08:53.107889   21421 main.go:141] libmachine: (addons-286595) Calling .DriverName
	I0501 02:08:53.107981   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHPort
	I0501 02:08:53.108210   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHKeyPath
	I0501 02:08:53.108284   21421 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0501 02:08:53.108293   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0501 02:08:53.108296   21421 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:08:53.108303   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHHostname
	I0501 02:08:53.108423   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHUsername
	I0501 02:08:53.108853   21421 main.go:141] libmachine: (addons-286595) Calling .GetState
	I0501 02:08:53.109109   21421 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/addons-286595/id_rsa Username:docker}
	I0501 02:08:53.109226   21421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46267
	I0501 02:08:53.109338   21421 main.go:141] libmachine: (addons-286595) Calling .DriverName
	I0501 02:08:53.110861   21421 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.1
	I0501 02:08:53.109877   21421 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:08:53.110992   21421 main.go:141] libmachine: (addons-286595) Calling .DriverName
	I0501 02:08:53.111932   21421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40941
	I0501 02:08:53.111953   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:53.113332   21421 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0501 02:08:53.112565   21421 main.go:141] libmachine: (addons-286595) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:55:7e", ip: ""} in network mk-addons-286595: {Iface:virbr1 ExpiryTime:2024-05-01 03:08:11 +0000 UTC Type:0 Mac:52:54:00:74:55:7e Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:addons-286595 Clientid:01:52:54:00:74:55:7e}
	I0501 02:08:53.112614   21421 main.go:141] libmachine: Using API Version  1
	I0501 02:08:53.112632   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHPort
	I0501 02:08:53.112861   21421 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:08:53.114538   21421 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0501 02:08:53.114554   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined IP address 192.168.39.173 and MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:53.115688   21421 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0501 02:08:53.116859   21421 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0501 02:08:53.116874   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0501 02:08:53.116885   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHHostname
	I0501 02:08:53.115705   21421 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:08:53.115882   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHKeyPath
	I0501 02:08:53.116112   21421 main.go:141] libmachine: Using API Version  1
	I0501 02:08:53.118023   21421 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:08:53.117206   21421 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:08:53.118135   21421 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0501 02:08:53.118153   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0501 02:08:53.118173   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHHostname
	I0501 02:08:53.118295   21421 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:08:53.118310   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHUsername
	I0501 02:08:53.118469   21421 main.go:141] libmachine: (addons-286595) Calling .GetState
	I0501 02:08:53.118706   21421 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/addons-286595/id_rsa Username:docker}
	I0501 02:08:53.118912   21421 main.go:141] libmachine: (addons-286595) Calling .GetState
	I0501 02:08:53.119891   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:53.120448   21421 main.go:141] libmachine: (addons-286595) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:55:7e", ip: ""} in network mk-addons-286595: {Iface:virbr1 ExpiryTime:2024-05-01 03:08:11 +0000 UTC Type:0 Mac:52:54:00:74:55:7e Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:addons-286595 Clientid:01:52:54:00:74:55:7e}
	I0501 02:08:53.120483   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined IP address 192.168.39.173 and MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:53.120692   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHPort
	I0501 02:08:53.120848   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHKeyPath
	I0501 02:08:53.121004   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHUsername
	I0501 02:08:53.121155   21421 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/addons-286595/id_rsa Username:docker}
	I0501 02:08:53.121163   21421 main.go:141] libmachine: (addons-286595) Calling .DriverName
	I0501 02:08:53.121424   21421 main.go:141] libmachine: (addons-286595) Calling .DriverName
	I0501 02:08:53.122658   21421 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0501 02:08:53.122366   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:53.122953   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHPort
	I0501 02:08:53.123778   21421 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0501 02:08:53.123819   21421 main.go:141] libmachine: (addons-286595) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:55:7e", ip: ""} in network mk-addons-286595: {Iface:virbr1 ExpiryTime:2024-05-01 03:08:11 +0000 UTC Type:0 Mac:52:54:00:74:55:7e Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:addons-286595 Clientid:01:52:54:00:74:55:7e}
	I0501 02:08:53.123937   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHKeyPath
	I0501 02:08:53.125076   21421 out.go:177]   - Using image docker.io/registry:2.8.3
	I0501 02:08:53.126440   21421 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0501 02:08:53.126458   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0501 02:08:53.126474   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHHostname
	I0501 02:08:53.125146   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined IP address 192.168.39.173 and MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:53.125133   21421 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0501 02:08:53.125279   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHUsername
	I0501 02:08:53.129000   21421 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0501 02:08:53.127965   21421 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/addons-286595/id_rsa Username:docker}
	I0501 02:08:53.128940   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:53.129447   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHPort
	I0501 02:08:53.130124   21421 main.go:141] libmachine: (addons-286595) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:55:7e", ip: ""} in network mk-addons-286595: {Iface:virbr1 ExpiryTime:2024-05-01 03:08:11 +0000 UTC Type:0 Mac:52:54:00:74:55:7e Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:addons-286595 Clientid:01:52:54:00:74:55:7e}
	I0501 02:08:53.130141   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined IP address 192.168.39.173 and MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:53.131239   21421 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0501 02:08:53.132405   21421 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0501 02:08:53.130280   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHKeyPath
	I0501 02:08:53.134568   21421 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0501 02:08:53.133679   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHUsername
	I0501 02:08:53.135673   21421 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0501 02:08:53.136848   21421 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0501 02:08:53.135846   21421 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/addons-286595/id_rsa Username:docker}
	I0501 02:08:53.138036   21421 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0501 02:08:53.138048   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0501 02:08:53.138058   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHHostname
	I0501 02:08:53.140593   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:53.141472   21421 main.go:141] libmachine: (addons-286595) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:55:7e", ip: ""} in network mk-addons-286595: {Iface:virbr1 ExpiryTime:2024-05-01 03:08:11 +0000 UTC Type:0 Mac:52:54:00:74:55:7e Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:addons-286595 Clientid:01:52:54:00:74:55:7e}
	I0501 02:08:53.141496   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined IP address 192.168.39.173 and MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:53.141630   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHPort
	I0501 02:08:53.141803   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHKeyPath
	I0501 02:08:53.141956   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHUsername
	I0501 02:08:53.142067   21421 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/addons-286595/id_rsa Username:docker}
	W0501 02:08:53.142721   21421 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:58574->192.168.39.173:22: read: connection reset by peer
	I0501 02:08:53.142742   21421 retry.go:31] will retry after 198.557266ms: ssh: handshake failed: read tcp 192.168.39.1:58574->192.168.39.173:22: read: connection reset by peer
	I0501 02:08:53.358930   21421 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 02:08:53.358947   21421 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0501 02:08:53.419948   21421 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0501 02:08:53.419969   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0501 02:08:53.427400   21421 node_ready.go:35] waiting up to 6m0s for node "addons-286595" to be "Ready" ...
	I0501 02:08:53.430796   21421 node_ready.go:49] node "addons-286595" has status "Ready":"True"
	I0501 02:08:53.430823   21421 node_ready.go:38] duration metric: took 3.387168ms for node "addons-286595" to be "Ready" ...
	I0501 02:08:53.430834   21421 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 02:08:53.440520   21421 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-rlvmm" in "kube-system" namespace to be "Ready" ...
	I0501 02:08:53.533751   21421 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0501 02:08:53.533761   21421 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0501 02:08:53.533780   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0501 02:08:53.562822   21421 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0501 02:08:53.562844   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0501 02:08:53.562855   21421 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0501 02:08:53.592680   21421 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0501 02:08:53.592702   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0501 02:08:53.614415   21421 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0501 02:08:53.625729   21421 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0501 02:08:53.633610   21421 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0501 02:08:53.657973   21421 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0501 02:08:53.671985   21421 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0501 02:08:53.687836   21421 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0501 02:08:53.687857   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0501 02:08:53.691866   21421 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0501 02:08:53.691883   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0501 02:08:53.713548   21421 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0501 02:08:53.713565   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0501 02:08:53.718699   21421 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0501 02:08:53.718713   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0501 02:08:53.782102   21421 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0501 02:08:53.782125   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0501 02:08:53.872289   21421 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0501 02:08:53.872312   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0501 02:08:53.872899   21421 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0501 02:08:53.872922   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0501 02:08:53.918979   21421 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0501 02:08:53.919011   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0501 02:08:53.934642   21421 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0501 02:08:53.934657   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0501 02:08:53.969760   21421 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0501 02:08:53.969785   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0501 02:08:53.987439   21421 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0501 02:08:53.987461   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0501 02:08:54.018839   21421 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0501 02:08:54.037591   21421 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0501 02:08:54.037617   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0501 02:08:54.103205   21421 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0501 02:08:54.103227   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0501 02:08:54.123025   21421 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0501 02:08:54.123063   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0501 02:08:54.163101   21421 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0501 02:08:54.306026   21421 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0501 02:08:54.306053   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0501 02:08:54.386426   21421 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0501 02:08:54.395755   21421 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0501 02:08:54.395777   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0501 02:08:54.581706   21421 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0501 02:08:54.581726   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0501 02:08:54.621478   21421 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0501 02:08:54.621496   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0501 02:08:54.768813   21421 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0501 02:08:54.791137   21421 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0501 02:08:54.791158   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0501 02:08:54.861158   21421 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0501 02:08:54.861187   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0501 02:08:55.084156   21421 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0501 02:08:55.084181   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0501 02:08:55.140919   21421 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0501 02:08:55.140942   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0501 02:08:55.293710   21421 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0501 02:08:55.293738   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0501 02:08:55.324404   21421 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0501 02:08:55.324425   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0501 02:08:55.362335   21421 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0501 02:08:55.362355   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0501 02:08:55.453799   21421 pod_ready.go:102] pod "coredns-7db6d8ff4d-rlvmm" in "kube-system" namespace has status "Ready":"False"
	I0501 02:08:55.618628   21421 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0501 02:08:55.618649   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0501 02:08:55.634771   21421 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0501 02:08:55.634793   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0501 02:08:55.845672   21421 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0501 02:08:55.984163   21421 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0501 02:08:56.040917   21421 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0501 02:08:56.040956   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0501 02:08:56.145915   21421 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.78693674s)
	I0501 02:08:56.145955   21421 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
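	(Editor's note, not part of the captured log: the sed pipeline completed above injects a hosts block for host.minikube.internal into the CoreDNS Corefile ahead of the forward plugin. A minimal way to confirm the injection, assuming access to the same kubeconfig; the expected excerpt below is reconstructed from the sed expression, not copied from the cluster:)

	  # Dump the CoreDNS Corefile and look for the injected hosts block.
	  kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system \
	    get configmap coredns -o jsonpath='{.data.Corefile}'
	  # Expected excerpt (reconstructed from the sed expression above):
	  #     hosts {
	  #        192.168.39.1 host.minikube.internal
	  #        fallthrough
	  #     }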
	I0501 02:08:56.357841   21421 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0501 02:08:56.357862   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0501 02:08:56.519223   21421 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.985428529s)
	I0501 02:08:56.519243   21421 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.95636659s)
	I0501 02:08:56.519290   21421 main.go:141] libmachine: Making call to close driver server
	I0501 02:08:56.519307   21421 main.go:141] libmachine: (addons-286595) Calling .Close
	I0501 02:08:56.519325   21421 main.go:141] libmachine: Making call to close driver server
	I0501 02:08:56.519341   21421 main.go:141] libmachine: (addons-286595) Calling .Close
	I0501 02:08:56.519599   21421 main.go:141] libmachine: Successfully made call to close driver server
	I0501 02:08:56.519611   21421 main.go:141] libmachine: (addons-286595) DBG | Closing plugin on server side
	I0501 02:08:56.519622   21421 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 02:08:56.519632   21421 main.go:141] libmachine: Making call to close driver server
	I0501 02:08:56.519642   21421 main.go:141] libmachine: Successfully made call to close driver server
	I0501 02:08:56.519658   21421 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 02:08:56.519665   21421 main.go:141] libmachine: Making call to close driver server
	I0501 02:08:56.519676   21421 main.go:141] libmachine: (addons-286595) Calling .Close
	I0501 02:08:56.519646   21421 main.go:141] libmachine: (addons-286595) Calling .Close
	I0501 02:08:56.519633   21421 main.go:141] libmachine: (addons-286595) DBG | Closing plugin on server side
	I0501 02:08:56.519963   21421 main.go:141] libmachine: (addons-286595) DBG | Closing plugin on server side
	I0501 02:08:56.519976   21421 main.go:141] libmachine: Successfully made call to close driver server
	I0501 02:08:56.519989   21421 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 02:08:56.519988   21421 main.go:141] libmachine: (addons-286595) DBG | Closing plugin on server side
	I0501 02:08:56.519995   21421 main.go:141] libmachine: Successfully made call to close driver server
	I0501 02:08:56.520003   21421 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 02:08:56.555755   21421 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0501 02:08:56.654133   21421 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-286595" context rescaled to 1 replicas
	I0501 02:08:57.536717   21421 pod_ready.go:102] pod "coredns-7db6d8ff4d-rlvmm" in "kube-system" namespace has status "Ready":"False"
	I0501 02:08:58.891225   21421 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.276755478s)
	I0501 02:08:58.891238   21421 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.265478039s)
	I0501 02:08:58.891280   21421 main.go:141] libmachine: Making call to close driver server
	I0501 02:08:58.891295   21421 main.go:141] libmachine: (addons-286595) Calling .Close
	I0501 02:08:58.891307   21421 main.go:141] libmachine: Making call to close driver server
	I0501 02:08:58.891322   21421 main.go:141] libmachine: (addons-286595) Calling .Close
	I0501 02:08:58.891265   21421 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.257632377s)
	I0501 02:08:58.891379   21421 main.go:141] libmachine: Making call to close driver server
	I0501 02:08:58.891392   21421 main.go:141] libmachine: (addons-286595) Calling .Close
	I0501 02:08:58.891553   21421 main.go:141] libmachine: Successfully made call to close driver server
	I0501 02:08:58.891570   21421 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 02:08:58.891579   21421 main.go:141] libmachine: Making call to close driver server
	I0501 02:08:58.891588   21421 main.go:141] libmachine: (addons-286595) Calling .Close
	I0501 02:08:58.891781   21421 main.go:141] libmachine: (addons-286595) DBG | Closing plugin on server side
	I0501 02:08:58.891800   21421 main.go:141] libmachine: (addons-286595) DBG | Closing plugin on server side
	I0501 02:08:58.891806   21421 main.go:141] libmachine: Successfully made call to close driver server
	I0501 02:08:58.891823   21421 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 02:08:58.891824   21421 main.go:141] libmachine: Successfully made call to close driver server
	I0501 02:08:58.891829   21421 main.go:141] libmachine: (addons-286595) DBG | Closing plugin on server side
	I0501 02:08:58.891834   21421 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 02:08:58.891843   21421 main.go:141] libmachine: Making call to close driver server
	I0501 02:08:58.891851   21421 main.go:141] libmachine: (addons-286595) Calling .Close
	I0501 02:08:58.891857   21421 main.go:141] libmachine: Successfully made call to close driver server
	I0501 02:08:58.891863   21421 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 02:08:58.891872   21421 main.go:141] libmachine: Making call to close driver server
	I0501 02:08:58.891878   21421 main.go:141] libmachine: (addons-286595) Calling .Close
	I0501 02:08:58.892089   21421 main.go:141] libmachine: Successfully made call to close driver server
	I0501 02:08:58.892102   21421 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 02:08:58.892203   21421 main.go:141] libmachine: Successfully made call to close driver server
	I0501 02:08:58.892215   21421 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 02:08:58.892275   21421 main.go:141] libmachine: (addons-286595) DBG | Closing plugin on server side
	I0501 02:08:59.049359   21421 main.go:141] libmachine: Making call to close driver server
	I0501 02:08:59.049376   21421 main.go:141] libmachine: Making call to close driver server
	I0501 02:08:59.049382   21421 main.go:141] libmachine: (addons-286595) Calling .Close
	I0501 02:08:59.049389   21421 main.go:141] libmachine: (addons-286595) Calling .Close
	I0501 02:08:59.049655   21421 main.go:141] libmachine: Successfully made call to close driver server
	I0501 02:08:59.049675   21421 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 02:08:59.049739   21421 main.go:141] libmachine: Successfully made call to close driver server
	I0501 02:08:59.049784   21421 main.go:141] libmachine: Making call to close connection to plugin binary
	W0501 02:08:59.049894   21421 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
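	(Editor's note, not part of the captured log: the warning above is a standard Kubernetes optimistic-concurrency conflict; the StorageClass object changed between read and update while minikube was marking local-path as the default class. A sketch of how one might verify and, if needed, redo the annotation by hand, assuming the same cluster context; local-path is the class named in the error:)

	  # List storage classes; the default one is flagged "(default)".
	  kubectl get storageclass
	  # Re-apply the default-class annotation to local-path if it is missing.
	  kubectl patch storageclass local-path \
	    -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'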
	I0501 02:08:59.977632   21421 pod_ready.go:92] pod "coredns-7db6d8ff4d-rlvmm" in "kube-system" namespace has status "Ready":"True"
	I0501 02:08:59.977657   21421 pod_ready.go:81] duration metric: took 6.53710628s for pod "coredns-7db6d8ff4d-rlvmm" in "kube-system" namespace to be "Ready" ...
	I0501 02:08:59.977668   21421 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-s2t68" in "kube-system" namespace to be "Ready" ...
	I0501 02:09:00.014149   21421 pod_ready.go:92] pod "coredns-7db6d8ff4d-s2t68" in "kube-system" namespace has status "Ready":"True"
	I0501 02:09:00.014189   21421 pod_ready.go:81] duration metric: took 36.512612ms for pod "coredns-7db6d8ff4d-s2t68" in "kube-system" namespace to be "Ready" ...
	I0501 02:09:00.014203   21421 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-286595" in "kube-system" namespace to be "Ready" ...
	I0501 02:09:00.051925   21421 pod_ready.go:92] pod "etcd-addons-286595" in "kube-system" namespace has status "Ready":"True"
	I0501 02:09:00.051953   21421 pod_ready.go:81] duration metric: took 37.741297ms for pod "etcd-addons-286595" in "kube-system" namespace to be "Ready" ...
	I0501 02:09:00.051966   21421 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-286595" in "kube-system" namespace to be "Ready" ...
	I0501 02:09:00.078344   21421 pod_ready.go:92] pod "kube-apiserver-addons-286595" in "kube-system" namespace has status "Ready":"True"
	I0501 02:09:00.078374   21421 pod_ready.go:81] duration metric: took 26.399132ms for pod "kube-apiserver-addons-286595" in "kube-system" namespace to be "Ready" ...
	I0501 02:09:00.078387   21421 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-286595" in "kube-system" namespace to be "Ready" ...
	I0501 02:09:00.088111   21421 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0501 02:09:00.088145   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHHostname
	I0501 02:09:00.091564   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:09:00.091990   21421 main.go:141] libmachine: (addons-286595) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:55:7e", ip: ""} in network mk-addons-286595: {Iface:virbr1 ExpiryTime:2024-05-01 03:08:11 +0000 UTC Type:0 Mac:52:54:00:74:55:7e Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:addons-286595 Clientid:01:52:54:00:74:55:7e}
	I0501 02:09:00.092021   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined IP address 192.168.39.173 and MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:09:00.092209   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHPort
	I0501 02:09:00.092405   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHKeyPath
	I0501 02:09:00.092575   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHUsername
	I0501 02:09:00.092713   21421 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/addons-286595/id_rsa Username:docker}
	I0501 02:09:00.094243   21421 pod_ready.go:92] pod "kube-controller-manager-addons-286595" in "kube-system" namespace has status "Ready":"True"
	I0501 02:09:00.094258   21421 pod_ready.go:81] duration metric: took 15.863807ms for pod "kube-controller-manager-addons-286595" in "kube-system" namespace to be "Ready" ...
	I0501 02:09:00.094267   21421 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7dw4g" in "kube-system" namespace to be "Ready" ...
	I0501 02:09:00.347230   21421 pod_ready.go:92] pod "kube-proxy-7dw4g" in "kube-system" namespace has status "Ready":"True"
	I0501 02:09:00.347255   21421 pod_ready.go:81] duration metric: took 252.978049ms for pod "kube-proxy-7dw4g" in "kube-system" namespace to be "Ready" ...
	I0501 02:09:00.347267   21421 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-286595" in "kube-system" namespace to be "Ready" ...
	I0501 02:09:00.749600   21421 pod_ready.go:92] pod "kube-scheduler-addons-286595" in "kube-system" namespace has status "Ready":"True"
	I0501 02:09:00.749630   21421 pod_ready.go:81] duration metric: took 402.354526ms for pod "kube-scheduler-addons-286595" in "kube-system" namespace to be "Ready" ...
	I0501 02:09:00.749640   21421 pod_ready.go:38] duration metric: took 7.318788702s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 02:09:00.749658   21421 api_server.go:52] waiting for apiserver process to appear ...
	I0501 02:09:00.749732   21421 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 02:09:01.021122   21421 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0501 02:09:01.218110   21421 addons.go:234] Setting addon gcp-auth=true in "addons-286595"
	I0501 02:09:01.218166   21421 host.go:66] Checking if "addons-286595" exists ...
	I0501 02:09:01.218591   21421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:09:01.218627   21421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:09:01.233848   21421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39917
	I0501 02:09:01.234328   21421 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:09:01.234771   21421 main.go:141] libmachine: Using API Version  1
	I0501 02:09:01.234787   21421 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:09:01.235074   21421 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:09:01.235711   21421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:09:01.235748   21421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:09:01.251180   21421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39717
	I0501 02:09:01.251616   21421 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:09:01.252112   21421 main.go:141] libmachine: Using API Version  1
	I0501 02:09:01.252143   21421 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:09:01.252445   21421 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:09:01.252603   21421 main.go:141] libmachine: (addons-286595) Calling .GetState
	I0501 02:09:01.254079   21421 main.go:141] libmachine: (addons-286595) Calling .DriverName
	I0501 02:09:01.254281   21421 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0501 02:09:01.254302   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHHostname
	I0501 02:09:01.257104   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:09:01.257524   21421 main.go:141] libmachine: (addons-286595) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:55:7e", ip: ""} in network mk-addons-286595: {Iface:virbr1 ExpiryTime:2024-05-01 03:08:11 +0000 UTC Type:0 Mac:52:54:00:74:55:7e Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:addons-286595 Clientid:01:52:54:00:74:55:7e}
	I0501 02:09:01.257554   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined IP address 192.168.39.173 and MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:09:01.257689   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHPort
	I0501 02:09:01.257884   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHKeyPath
	I0501 02:09:01.258085   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHUsername
	I0501 02:09:01.258235   21421 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/addons-286595/id_rsa Username:docker}
	I0501 02:09:02.835709   21421 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.177699111s)
	I0501 02:09:02.835779   21421 main.go:141] libmachine: Making call to close driver server
	I0501 02:09:02.835786   21421 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (9.163771093s)
	I0501 02:09:02.835831   21421 main.go:141] libmachine: Making call to close driver server
	I0501 02:09:02.835833   21421 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.816965849s)
	I0501 02:09:02.835848   21421 main.go:141] libmachine: (addons-286595) Calling .Close
	I0501 02:09:02.835793   21421 main.go:141] libmachine: (addons-286595) Calling .Close
	I0501 02:09:02.835862   21421 main.go:141] libmachine: Making call to close driver server
	I0501 02:09:02.835934   21421 main.go:141] libmachine: (addons-286595) Calling .Close
	I0501 02:09:02.835945   21421 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.672804964s)
	I0501 02:09:02.835973   21421 main.go:141] libmachine: Making call to close driver server
	I0501 02:09:02.835978   21421 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.449516824s)
	I0501 02:09:02.836001   21421 main.go:141] libmachine: Making call to close driver server
	I0501 02:09:02.836011   21421 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (8.067170489s)
	I0501 02:09:02.835984   21421 main.go:141] libmachine: (addons-286595) Calling .Close
	I0501 02:09:02.836030   21421 main.go:141] libmachine: Making call to close driver server
	I0501 02:09:02.836040   21421 main.go:141] libmachine: (addons-286595) Calling .Close
	I0501 02:09:02.836014   21421 main.go:141] libmachine: (addons-286595) Calling .Close
	I0501 02:09:02.836149   21421 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.990437694s)
	I0501 02:09:02.836171   21421 main.go:141] libmachine: Making call to close driver server
	I0501 02:09:02.836182   21421 main.go:141] libmachine: (addons-286595) Calling .Close
	I0501 02:09:02.836316   21421 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.852116618s)
	I0501 02:09:02.836328   21421 main.go:141] libmachine: Successfully made call to close driver server
	I0501 02:09:02.836343   21421 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 02:09:02.836364   21421 main.go:141] libmachine: Making call to close driver server
	I0501 02:09:02.836372   21421 main.go:141] libmachine: (addons-286595) Calling .Close
	W0501 02:09:02.836371   21421 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0501 02:09:02.836423   21421 retry.go:31] will retry after 149.528632ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
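	(Editor's note, not part of the captured log: the "resource mapping not found ... ensure CRDs are installed first" failure is the expected race when a VolumeSnapshotClass is applied in the same kubectl apply batch that creates the snapshot CRDs; the scheduled retry typically succeeds once the CRDs are established. A sketch of how to confirm the CRDs are ready before re-applying the class; the commands are standard kubectl, and the timeout value is an assumption:)

	  # Wait until the snapshot CRDs are established, then re-apply the snapshot class.
	  kubectl wait --for=condition=established --timeout=60s \
	    crd/volumesnapshotclasses.snapshot.storage.k8s.io \
	    crd/volumesnapshotcontents.snapshot.storage.k8s.io \
	    crd/volumesnapshots.snapshot.storage.k8s.io
	  kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml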
	I0501 02:09:02.836502   21421 main.go:141] libmachine: (addons-286595) DBG | Closing plugin on server side
	I0501 02:09:02.836506   21421 main.go:141] libmachine: (addons-286595) DBG | Closing plugin on server side
	I0501 02:09:02.836516   21421 main.go:141] libmachine: Successfully made call to close driver server
	I0501 02:09:02.836527   21421 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 02:09:02.836535   21421 main.go:141] libmachine: Making call to close driver server
	I0501 02:09:02.836536   21421 main.go:141] libmachine: (addons-286595) DBG | Closing plugin on server side
	I0501 02:09:02.836537   21421 main.go:141] libmachine: Successfully made call to close driver server
	I0501 02:09:02.836542   21421 main.go:141] libmachine: (addons-286595) Calling .Close
	I0501 02:09:02.836548   21421 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 02:09:02.836558   21421 main.go:141] libmachine: Making call to close driver server
	I0501 02:09:02.836565   21421 main.go:141] libmachine: Successfully made call to close driver server
	I0501 02:09:02.836574   21421 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 02:09:02.836583   21421 main.go:141] libmachine: Making call to close driver server
	I0501 02:09:02.836588   21421 main.go:141] libmachine: Successfully made call to close driver server
	I0501 02:09:02.836591   21421 main.go:141] libmachine: (addons-286595) Calling .Close
	I0501 02:09:02.836597   21421 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 02:09:02.836607   21421 main.go:141] libmachine: Making call to close driver server
	I0501 02:09:02.836614   21421 main.go:141] libmachine: (addons-286595) Calling .Close
	I0501 02:09:02.836641   21421 main.go:141] libmachine: (addons-286595) DBG | Closing plugin on server side
	I0501 02:09:02.836567   21421 main.go:141] libmachine: (addons-286595) Calling .Close
	I0501 02:09:02.836920   21421 main.go:141] libmachine: (addons-286595) DBG | Closing plugin on server side
	I0501 02:09:02.836953   21421 main.go:141] libmachine: (addons-286595) DBG | Closing plugin on server side
	I0501 02:09:02.836976   21421 main.go:141] libmachine: Successfully made call to close driver server
	I0501 02:09:02.836986   21421 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 02:09:02.837043   21421 main.go:141] libmachine: (addons-286595) DBG | Closing plugin on server side
	I0501 02:09:02.837070   21421 main.go:141] libmachine: Successfully made call to close driver server
	I0501 02:09:02.837090   21421 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 02:09:02.837123   21421 addons.go:470] Verifying addon registry=true in "addons-286595"
	I0501 02:09:02.837135   21421 main.go:141] libmachine: Successfully made call to close driver server
	I0501 02:09:02.837153   21421 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 02:09:02.838883   21421 out.go:177] * Verifying registry addon...
	I0501 02:09:02.838371   21421 main.go:141] libmachine: (addons-286595) DBG | Closing plugin on server side
	I0501 02:09:02.838375   21421 main.go:141] libmachine: Successfully made call to close driver server
	I0501 02:09:02.840470   21421 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 02:09:02.840481   21421 main.go:141] libmachine: Making call to close driver server
	I0501 02:09:02.840493   21421 main.go:141] libmachine: (addons-286595) Calling .Close
	I0501 02:09:02.838392   21421 main.go:141] libmachine: Successfully made call to close driver server
	I0501 02:09:02.840518   21421 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 02:09:02.841920   21421 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-286595 service yakd-dashboard -n yakd-dashboard
	
	I0501 02:09:02.839697   21421 main.go:141] libmachine: (addons-286595) DBG | Closing plugin on server side
	I0501 02:09:02.839708   21421 main.go:141] libmachine: Successfully made call to close driver server
	I0501 02:09:02.839973   21421 main.go:141] libmachine: Successfully made call to close driver server
	I0501 02:09:02.840032   21421 main.go:141] libmachine: (addons-286595) DBG | Closing plugin on server side
	I0501 02:09:02.840800   21421 main.go:141] libmachine: (addons-286595) DBG | Closing plugin on server side
	I0501 02:09:02.840828   21421 main.go:141] libmachine: Successfully made call to close driver server
	I0501 02:09:02.841317   21421 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0501 02:09:02.843023   21421 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 02:09:02.843036   21421 main.go:141] libmachine: Making call to close driver server
	I0501 02:09:02.843048   21421 main.go:141] libmachine: (addons-286595) Calling .Close
	I0501 02:09:02.843050   21421 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 02:09:02.843065   21421 addons.go:470] Verifying addon metrics-server=true in "addons-286595"
	I0501 02:09:02.843094   21421 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 02:09:02.843107   21421 addons.go:470] Verifying addon ingress=true in "addons-286595"
	I0501 02:09:02.844154   21421 out.go:177] * Verifying ingress addon...
	I0501 02:09:02.843361   21421 main.go:141] libmachine: Successfully made call to close driver server
	I0501 02:09:02.845508   21421 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 02:09:02.843381   21421 main.go:141] libmachine: (addons-286595) DBG | Closing plugin on server side
	I0501 02:09:02.846124   21421 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0501 02:09:02.851653   21421 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0501 02:09:02.851669   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:02.852280   21421 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0501 02:09:02.852303   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
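
The kapi.go lines above (and the long run of them that follows) are a poll loop: list pods by label selector and keep waiting while any matching pod is still Pending. A rough stand-alone sketch of the same idea, shelling out to kubectl; the 10-second interval and 6-minute timeout are assumptions, not minikube's actual settings:

// wait_pods.go: poll a label selector until every matching pod reports Running.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func podsRunning(namespace, selector string) (bool, error) {
	out, err := exec.Command("kubectl", "get", "pods",
		"-n", namespace, "-l", selector,
		"-o", "jsonpath={.items[*].status.phase}").Output()
	if err != nil {
		return false, err
	}
	phases := strings.Fields(string(out))
	if len(phases) == 0 {
		return false, nil // nothing scheduled yet
	}
	for _, p := range phases {
		if p != "Running" {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	// Selector and namespace taken from the log above.
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		ok, err := podsRunning("ingress-nginx", "app.kubernetes.io/name=ingress-nginx")
		if err == nil && ok {
			fmt.Println("all pods Running")
			return
		}
		time.Sleep(10 * time.Second)
	}
	fmt.Println("timed out waiting for pods")
}
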
	I0501 02:09:02.987058   21421 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0501 02:09:03.354373   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:03.356921   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:03.520272   21421 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.770514591s)
	I0501 02:09:03.520309   21421 api_server.go:72] duration metric: took 10.569297355s to wait for apiserver process to appear ...
	I0501 02:09:03.520318   21421 api_server.go:88] waiting for apiserver healthz status ...
	I0501 02:09:03.520342   21421 api_server.go:253] Checking apiserver healthz at https://192.168.39.173:8443/healthz ...
	I0501 02:09:03.520383   21421 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.266082714s)
	I0501 02:09:03.521848   21421 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0501 02:09:03.520585   21421 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.964777698s)
	I0501 02:09:03.521892   21421 main.go:141] libmachine: Making call to close driver server
	I0501 02:09:03.521902   21421 main.go:141] libmachine: (addons-286595) Calling .Close
	I0501 02:09:03.523196   21421 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0501 02:09:03.522196   21421 main.go:141] libmachine: Successfully made call to close driver server
	I0501 02:09:03.522222   21421 main.go:141] libmachine: (addons-286595) DBG | Closing plugin on server side
	I0501 02:09:03.524355   21421 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0501 02:09:03.524368   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0501 02:09:03.523235   21421 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 02:09:03.524420   21421 main.go:141] libmachine: Making call to close driver server
	I0501 02:09:03.524434   21421 main.go:141] libmachine: (addons-286595) Calling .Close
	I0501 02:09:03.524682   21421 main.go:141] libmachine: (addons-286595) DBG | Closing plugin on server side
	I0501 02:09:03.524696   21421 main.go:141] libmachine: Successfully made call to close driver server
	I0501 02:09:03.524709   21421 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 02:09:03.524728   21421 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-286595"
	I0501 02:09:03.525940   21421 out.go:177] * Verifying csi-hostpath-driver addon...
	I0501 02:09:03.527831   21421 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0501 02:09:03.536113   21421 api_server.go:279] https://192.168.39.173:8443/healthz returned 200:
	ok
	I0501 02:09:03.537095   21421 api_server.go:141] control plane version: v1.30.0
	I0501 02:09:03.537123   21421 api_server.go:131] duration metric: took 16.797746ms to wait for apiserver health ...
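
The healthz probe is a plain HTTPS GET against the endpoint logged above, repeated until it returns 200 "ok". A stripped-down sketch; it skips TLS verification purely for brevity, whereas minikube dials with the cluster's CA and client certificates:

// healthz.go: poll the apiserver /healthz endpoint until it answers 200 "ok".
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// InsecureSkipVerify keeps the sketch short; the real check verifies the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.39.173:8443/healthz" // endpoint from the log above
	for i := 0; i < 30; i++ {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("apiserver never became healthy")
}
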
	I0501 02:09:03.537134   21421 system_pods.go:43] waiting for kube-system pods to appear ...
	I0501 02:09:03.557169   21421 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0501 02:09:03.557195   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:03.559708   21421 system_pods.go:59] 19 kube-system pods found
	I0501 02:09:03.559750   21421 system_pods.go:61] "coredns-7db6d8ff4d-rlvmm" [b9eb9071-e21b-46fc-8605-055d6915f55e] Running
	I0501 02:09:03.559760   21421 system_pods.go:61] "coredns-7db6d8ff4d-s2t68" [7bc229a2-c453-440a-99d1-ed6eca63a179] Running
	I0501 02:09:03.559768   21421 system_pods.go:61] "csi-hostpath-attacher-0" [1171b8a4-c4ea-44f6-b440-b20e6789c3c1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0501 02:09:03.559775   21421 system_pods.go:61] "csi-hostpath-resizer-0" [0c417652-a924-43e6-ad18-0d1adc827868] Pending
	I0501 02:09:03.559781   21421 system_pods.go:61] "csi-hostpathplugin-h96nk" [406dcf80-86a8-4b1d-8c1a-c3e446a15d47] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0501 02:09:03.559787   21421 system_pods.go:61] "etcd-addons-286595" [95d9e711-65f9-4dce-84e8-2cff4b9c00dd] Running
	I0501 02:09:03.559790   21421 system_pods.go:61] "kube-apiserver-addons-286595" [d7533d69-f88a-4772-8292-76367bc8ef2f] Running
	I0501 02:09:03.559794   21421 system_pods.go:61] "kube-controller-manager-addons-286595" [59918d31-1d66-43ab-bfd8-319ca2366ae1] Running
	I0501 02:09:03.559801   21421 system_pods.go:61] "kube-ingress-dns-minikube" [2c0204aa-5d9f-4c78-a423-4378e147abf4] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0501 02:09:03.559807   21421 system_pods.go:61] "kube-proxy-7dw4g" [7aec44ec-1615-4aa4-9d65-e464831f8518] Running
	I0501 02:09:03.559811   21421 system_pods.go:61] "kube-scheduler-addons-286595" [37f73d9c-b5ac-4946-92b5-b826a3cf9ed1] Running
	I0501 02:09:03.559817   21421 system_pods.go:61] "metrics-server-c59844bb4-gvcdl" [9385fe21-53b5-4105-bb14-3008fcd7dc3a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0501 02:09:03.559826   21421 system_pods.go:61] "nvidia-device-plugin-daemonset-rkmjq" [ed0cb4b4-ad39-4ba6-8e70-771dffc9b32e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0501 02:09:03.559837   21421 system_pods.go:61] "registry-f6tfr" [cf6f5911-c14d-4b26-9767-c66913822a34] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0501 02:09:03.559846   21421 system_pods.go:61] "registry-proxy-6hksn" [f6f624f2-3e51-4453-b84e-7d908b7736fa] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0501 02:09:03.559852   21421 system_pods.go:61] "snapshot-controller-745499f584-blqww" [6c914d0a-4f6b-458b-9601-41d41a96d448] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0501 02:09:03.559860   21421 system_pods.go:61] "snapshot-controller-745499f584-cnc7j" [e70de7d8-e03e-4147-b633-6fec7dbe1e88] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0501 02:09:03.559867   21421 system_pods.go:61] "storage-provisioner" [b23a96b2-9c34-4d4f-9df5-90dc5195248b] Running
	I0501 02:09:03.559872   21421 system_pods.go:61] "tiller-deploy-6677d64bcd-btpph" [f3632fb8-1c95-4630-b3ce-f08c09d4a4ff] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0501 02:09:03.559879   21421 system_pods.go:74] duration metric: took 22.734985ms to wait for pod list to return data ...
	I0501 02:09:03.559889   21421 default_sa.go:34] waiting for default service account to be created ...
	I0501 02:09:03.591890   21421 default_sa.go:45] found service account: "default"
	I0501 02:09:03.591912   21421 default_sa.go:55] duration metric: took 32.017511ms for default service account to be created ...
	I0501 02:09:03.591923   21421 system_pods.go:116] waiting for k8s-apps to be running ...
	I0501 02:09:03.614860   21421 system_pods.go:86] 19 kube-system pods found
	I0501 02:09:03.614886   21421 system_pods.go:89] "coredns-7db6d8ff4d-rlvmm" [b9eb9071-e21b-46fc-8605-055d6915f55e] Running
	I0501 02:09:03.614891   21421 system_pods.go:89] "coredns-7db6d8ff4d-s2t68" [7bc229a2-c453-440a-99d1-ed6eca63a179] Running
	I0501 02:09:03.614899   21421 system_pods.go:89] "csi-hostpath-attacher-0" [1171b8a4-c4ea-44f6-b440-b20e6789c3c1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0501 02:09:03.614904   21421 system_pods.go:89] "csi-hostpath-resizer-0" [0c417652-a924-43e6-ad18-0d1adc827868] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0501 02:09:03.614917   21421 system_pods.go:89] "csi-hostpathplugin-h96nk" [406dcf80-86a8-4b1d-8c1a-c3e446a15d47] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0501 02:09:03.614924   21421 system_pods.go:89] "etcd-addons-286595" [95d9e711-65f9-4dce-84e8-2cff4b9c00dd] Running
	I0501 02:09:03.614931   21421 system_pods.go:89] "kube-apiserver-addons-286595" [d7533d69-f88a-4772-8292-76367bc8ef2f] Running
	I0501 02:09:03.614941   21421 system_pods.go:89] "kube-controller-manager-addons-286595" [59918d31-1d66-43ab-bfd8-319ca2366ae1] Running
	I0501 02:09:03.614955   21421 system_pods.go:89] "kube-ingress-dns-minikube" [2c0204aa-5d9f-4c78-a423-4378e147abf4] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0501 02:09:03.614962   21421 system_pods.go:89] "kube-proxy-7dw4g" [7aec44ec-1615-4aa4-9d65-e464831f8518] Running
	I0501 02:09:03.614971   21421 system_pods.go:89] "kube-scheduler-addons-286595" [37f73d9c-b5ac-4946-92b5-b826a3cf9ed1] Running
	I0501 02:09:03.614980   21421 system_pods.go:89] "metrics-server-c59844bb4-gvcdl" [9385fe21-53b5-4105-bb14-3008fcd7dc3a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0501 02:09:03.614988   21421 system_pods.go:89] "nvidia-device-plugin-daemonset-rkmjq" [ed0cb4b4-ad39-4ba6-8e70-771dffc9b32e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0501 02:09:03.614998   21421 system_pods.go:89] "registry-f6tfr" [cf6f5911-c14d-4b26-9767-c66913822a34] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0501 02:09:03.615007   21421 system_pods.go:89] "registry-proxy-6hksn" [f6f624f2-3e51-4453-b84e-7d908b7736fa] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0501 02:09:03.615013   21421 system_pods.go:89] "snapshot-controller-745499f584-blqww" [6c914d0a-4f6b-458b-9601-41d41a96d448] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0501 02:09:03.615023   21421 system_pods.go:89] "snapshot-controller-745499f584-cnc7j" [e70de7d8-e03e-4147-b633-6fec7dbe1e88] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0501 02:09:03.615032   21421 system_pods.go:89] "storage-provisioner" [b23a96b2-9c34-4d4f-9df5-90dc5195248b] Running
	I0501 02:09:03.615045   21421 system_pods.go:89] "tiller-deploy-6677d64bcd-btpph" [f3632fb8-1c95-4630-b3ce-f08c09d4a4ff] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0501 02:09:03.615060   21421 system_pods.go:126] duration metric: took 23.130136ms to wait for k8s-apps to be running ...
	I0501 02:09:03.615073   21421 system_svc.go:44] waiting for kubelet service to be running ....
	I0501 02:09:03.615115   21421 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 02:09:03.670208   21421 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0501 02:09:03.670228   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0501 02:09:03.850025   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:03.851011   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:03.858310   21421 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0501 02:09:03.858329   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0501 02:09:03.935569   21421 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0501 02:09:04.033534   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:04.347212   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:04.351655   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:04.540386   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:04.849999   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:04.856268   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:05.050362   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:05.347903   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:05.351994   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:05.535886   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:05.837272   21421 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.222127757s)
	I0501 02:09:05.837309   21421 system_svc.go:56] duration metric: took 2.222232995s WaitForService to wait for kubelet
	I0501 02:09:05.837320   21421 kubeadm.go:576] duration metric: took 12.886308147s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0501 02:09:05.837347   21421 node_conditions.go:102] verifying NodePressure condition ...
	I0501 02:09:05.837389   21421 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.850287785s)
	I0501 02:09:05.837433   21421 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.901781058s)
	I0501 02:09:05.837462   21421 main.go:141] libmachine: Making call to close driver server
	I0501 02:09:05.837472   21421 main.go:141] libmachine: Making call to close driver server
	I0501 02:09:05.837484   21421 main.go:141] libmachine: (addons-286595) Calling .Close
	I0501 02:09:05.837489   21421 main.go:141] libmachine: (addons-286595) Calling .Close
	I0501 02:09:05.837756   21421 main.go:141] libmachine: Successfully made call to close driver server
	I0501 02:09:05.837780   21421 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 02:09:05.837789   21421 main.go:141] libmachine: Making call to close driver server
	I0501 02:09:05.837796   21421 main.go:141] libmachine: (addons-286595) Calling .Close
	I0501 02:09:05.839516   21421 main.go:141] libmachine: (addons-286595) DBG | Closing plugin on server side
	I0501 02:09:05.839522   21421 main.go:141] libmachine: (addons-286595) DBG | Closing plugin on server side
	I0501 02:09:05.839530   21421 main.go:141] libmachine: Successfully made call to close driver server
	I0501 02:09:05.839542   21421 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 02:09:05.839532   21421 main.go:141] libmachine: Successfully made call to close driver server
	I0501 02:09:05.839590   21421 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 02:09:05.839609   21421 main.go:141] libmachine: Making call to close driver server
	I0501 02:09:05.839622   21421 main.go:141] libmachine: (addons-286595) Calling .Close
	I0501 02:09:05.839803   21421 main.go:141] libmachine: Successfully made call to close driver server
	I0501 02:09:05.839817   21421 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 02:09:05.841843   21421 addons.go:470] Verifying addon gcp-auth=true in "addons-286595"
	I0501 02:09:05.844672   21421 out.go:177] * Verifying gcp-auth addon...
	I0501 02:09:05.842414   21421 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 02:09:05.846020   21421 node_conditions.go:123] node cpu capacity is 2
	I0501 02:09:05.846060   21421 node_conditions.go:105] duration metric: took 8.705183ms to run NodePressure ...
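
The NodePressure step reads each node's reported capacity (cpu=2 and ephemeral-storage=17734596Ki in the lines above). A small sketch that prints the same figures via kubectl; the output layout is whatever kubectl renders for the capacity map:

// node_capacity.go: print each node's reported capacity map.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("kubectl", "get", "nodes", "-o",
		`jsonpath={range .items[*]}{.metadata.name}{"\t"}{.status.capacity}{"\n"}{end}`,
	).CombinedOutput()
	if err != nil {
		fmt.Printf("kubectl get nodes failed: %v\n%s", err, out)
		return
	}
	fmt.Print(string(out))
}
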
	I0501 02:09:05.846078   21421 start.go:240] waiting for startup goroutines ...
	I0501 02:09:05.846684   21421 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0501 02:09:05.848170   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:05.855974   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:05.856795   21421 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0501 02:09:05.856811   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:06.034156   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:06.348380   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:06.352404   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:06.353196   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:06.538870   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:06.848324   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:06.853902   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:06.854479   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:07.034379   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:07.348756   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:07.355884   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:07.356291   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:07.538562   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:07.848051   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:07.851436   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:07.851873   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:08.034390   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:08.349312   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:08.352692   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:08.353142   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:08.534385   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:08.850117   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:08.853366   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:08.854279   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:09.036789   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:09.346973   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:09.350654   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:09.351578   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:09.772287   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:09.849975   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:09.853462   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:09.854058   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:10.034323   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:10.348146   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:10.350393   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:10.351895   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:10.533987   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:10.847392   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:10.852765   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:10.853317   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:11.033889   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:11.348464   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:11.353081   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:11.353580   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:11.533688   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:11.848467   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:11.862613   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:11.863353   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:12.037430   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:12.348707   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:12.351206   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:12.351588   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:12.533906   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:12.850524   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:12.851394   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:12.852427   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:13.034904   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:13.348493   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:13.350879   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:13.351580   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:13.534481   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:13.848066   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:13.850094   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:13.850957   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:14.033930   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:14.348673   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:14.349986   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:14.351472   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:14.533389   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:14.849263   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:14.850498   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:14.851743   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:15.036301   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:15.350035   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:15.357342   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:15.357864   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:15.536337   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:15.848788   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:15.851336   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:15.852369   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:16.034189   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:16.349607   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:16.350435   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:16.353669   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:16.534389   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:16.850325   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:16.853233   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:16.854624   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:17.033618   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:17.348218   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:17.352125   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:17.352206   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:17.533426   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:17.848544   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:17.853423   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:17.853941   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:18.034354   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:18.351855   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:18.352672   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:18.354657   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:18.898953   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:18.899095   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:18.899631   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:18.901011   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:19.034465   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:19.350005   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:19.350797   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:19.352127   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:19.534191   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:19.848237   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:19.850637   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:19.851864   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:20.034530   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:20.355179   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:20.356063   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:20.359358   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:20.534931   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:20.848879   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:20.850657   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:20.852092   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:21.037008   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:21.350432   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:21.350487   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:21.351417   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:21.548758   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:21.848156   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:21.852489   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:21.853211   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:22.033909   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:22.353887   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:22.355430   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:22.355900   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:22.534273   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:22.848335   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:22.852672   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:22.853059   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:23.033649   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:23.349904   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:23.352068   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:23.352833   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:23.534439   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:23.850408   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:23.852949   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:23.856123   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:24.033560   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:24.348155   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:24.350705   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:24.351200   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:24.534628   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:24.848032   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:24.850120   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:24.851373   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:25.033826   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:25.349232   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:25.351132   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:25.352042   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:25.533445   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:25.848837   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:25.850778   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:25.851234   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:26.033982   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:26.348587   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:26.349831   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:26.351628   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:26.533729   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:26.854068   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:26.854128   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:26.854803   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:27.034655   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:27.349852   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:27.353330   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:27.353784   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:27.547238   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:27.850801   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:27.853955   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:27.855458   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:28.033724   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:28.348220   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:28.355197   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:28.356921   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:28.533711   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:28.851253   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:28.851797   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:28.853367   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:29.033196   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:29.347570   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:29.351744   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:29.353104   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:29.534166   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:29.848797   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:29.852557   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:29.853164   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:30.033924   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:30.350086   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:30.353365   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:30.353841   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:30.534032   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:30.847632   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:30.850743   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:30.850807   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:31.034089   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:31.348560   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:31.351074   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:31.352426   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:31.534387   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:31.848980   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:31.851305   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:31.852287   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:32.033808   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:32.348001   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:32.351658   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:32.352556   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:32.533437   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:32.855761   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:32.857438   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:32.858348   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:33.039174   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:33.350521   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:33.350703   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:33.351114   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:33.534361   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:33.851786   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:33.857074   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:33.859110   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:34.033676   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:34.347549   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:34.352477   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:34.352982   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:34.533457   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:34.848746   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:34.850203   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:34.850812   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:35.034186   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:35.352677   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:35.357913   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:35.358705   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:35.534412   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:35.853567   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:35.855679   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:35.856004   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:36.034429   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:36.349488   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:36.352523   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:36.353361   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:36.539580   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:36.854307   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:36.857663   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:36.858186   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:37.035242   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:37.586108   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:37.586647   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:37.589501   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:37.591939   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:37.848645   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:37.850797   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:37.854099   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:38.033480   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:38.347555   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:38.349983   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:38.351182   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:38.540293   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:38.852068   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:38.854100   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:38.854637   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:39.033661   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:39.349951   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:39.352147   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:39.355468   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:39.534190   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:39.976897   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:39.977454   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:39.977691   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:40.035416   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:40.349046   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:40.353737   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:40.353948   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:40.533536   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:40.850787   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:40.851707   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:40.852826   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:41.034758   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:41.351402   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:41.352316   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:41.353292   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:41.534487   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:41.856032   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:41.856834   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:41.857830   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:42.034772   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:42.352930   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:42.353334   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:42.353756   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:42.533592   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:42.852617   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:42.853014   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:42.857826   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:43.384837   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:43.385490   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:43.385523   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:43.386767   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:43.534089   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:43.848395   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:43.855593   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:43.856280   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:44.035508   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:44.348670   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:44.352744   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:44.353865   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:44.534299   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:44.850263   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:44.852499   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:44.853253   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:45.035437   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:45.353222   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:45.353230   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:45.353476   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:45.533974   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:45.848441   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:45.851053   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:45.851658   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:46.033458   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:46.348754   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:46.352045   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:46.352306   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:46.533364   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:46.848354   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:46.851442   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:46.851773   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:47.035152   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:47.348226   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:47.350062   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:47.351398   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:47.534979   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:47.849529   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:47.851547   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:47.852382   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:48.033357   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:48.348475   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:48.350959   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:48.351648   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:48.534440   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:48.849302   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:48.851306   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:48.851360   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:49.033951   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:49.347596   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:49.351024   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:49.353326   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:49.534112   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:49.848434   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:49.852686   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:49.853251   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:50.501578   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:50.521026   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:50.524657   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:50.524883   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:50.539948   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:50.848082   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:50.850508   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:50.850981   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:51.032555   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:51.349433   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:51.351446   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:51.352257   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:51.533082   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:51.848207   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:51.849760   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:51.851156   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:52.034147   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:52.349417   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:52.351817   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:52.353965   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:52.534599   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:52.848151   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:52.850956   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:52.851145   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:53.034039   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:53.350437   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:53.352294   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:53.355296   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:53.535005   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:53.848811   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:53.851636   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:53.852381   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:54.033840   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:54.348718   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:54.351045   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:54.351785   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:54.826161   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:54.854654   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:54.854989   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:54.855705   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:55.034372   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:55.347925   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:55.349843   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:55.350667   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:55.534319   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:55.848469   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:55.850628   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:55.850929   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:56.033854   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:56.347793   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:56.349846   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:56.350379   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:56.534943   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:56.862432   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:56.862966   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:56.863312   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:57.033924   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:57.359796   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:57.368249   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:57.371151   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:57.533854   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:57.848380   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:57.850870   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:57.852490   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:58.035140   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:58.348872   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:58.351487   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:58.352270   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:58.533633   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:58.846753   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:58.852161   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:58.852307   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:59.034053   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:59.348946   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:59.352643   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:59.353252   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:59.534215   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:59.851321   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:59.853295   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:59.854023   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:10:00.034025   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:00.350643   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:10:00.358624   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:00.358827   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:10:00.678022   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:00.855611   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:00.861319   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:10:00.861430   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:10:01.042088   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:01.350571   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:10:01.351285   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:01.351585   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:10:01.534254   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:01.849712   21421 kapi.go:107] duration metric: took 59.008389808s to wait for kubernetes.io/minikube-addons=registry ...
	I0501 02:10:01.852061   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:10:01.852649   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:02.034979   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:02.351431   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:02.351810   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:10:02.542990   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:02.851388   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:02.851705   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:10:03.034497   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:03.353937   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:03.354227   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:10:03.534513   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:03.855917   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:10:03.859387   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:04.035327   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:04.352128   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:04.354318   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:10:04.539166   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:04.851244   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:10:04.851590   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:05.034933   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:05.351548   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:05.351998   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:10:05.534182   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:05.852561   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:05.852718   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:10:06.034464   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:06.355789   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:06.356955   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:10:06.534454   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:06.850752   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:06.851049   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:10:07.034132   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:07.352356   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:07.353142   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:10:07.534098   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:07.852618   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:07.853113   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:10:08.045853   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:08.351751   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:08.352064   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:10:08.535018   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:08.850819   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:08.851536   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:10:09.035087   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:09.352544   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:10:09.353926   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:09.535722   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:09.850368   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:10:09.852043   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:10.037336   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:10.352997   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:10.353223   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:10:10.535588   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:10.852064   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:10.852242   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:10:11.034391   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:11.353931   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:11.354039   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:10:11.537803   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:11.852251   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:11.853099   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:10:12.042513   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:12.352852   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:10:12.353364   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:12.549443   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:12.856741   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:10:12.857460   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:13.041564   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:13.358750   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:13.358917   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:10:13.539477   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:13.851822   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:13.852466   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:10:14.034484   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:14.356258   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:14.356489   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:10:14.533730   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:14.851854   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:14.857079   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:10:15.036002   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:15.352291   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:15.353211   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:10:15.539150   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:15.856345   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:15.856735   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:10:16.058900   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:16.351967   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:16.357810   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:10:16.536954   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:16.851762   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:16.852952   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:10:17.034113   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:17.351743   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:10:17.352162   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:17.533930   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:17.852556   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:10:17.852752   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:18.035543   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:18.772704   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:10:18.774651   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:18.780267   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:18.851153   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:10:18.852478   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:19.039429   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:19.351656   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:19.351916   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:10:19.534584   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:19.856658   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:19.857492   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:10:20.035488   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:20.353124   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:20.359879   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:10:20.534553   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:20.850493   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:20.850774   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:10:21.055086   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:21.355882   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:21.356106   21421 kapi.go:107] duration metric: took 1m18.509978097s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0501 02:10:21.534496   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:21.851201   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:22.034755   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:22.350744   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:22.533804   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:22.850911   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:23.034726   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:23.350869   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:23.540177   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:23.851795   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:24.034565   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:24.351529   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:24.534154   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:24.850740   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:25.045172   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:25.351440   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:25.534600   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:25.851426   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:26.033630   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:26.351409   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:26.534481   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:26.851008   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:27.034691   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:27.350997   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:27.535752   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:27.850455   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:28.035244   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:28.350584   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:28.535056   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:28.851604   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:29.034987   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:29.351003   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:29.533975   21421 kapi.go:107] duration metric: took 1m26.006142357s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0501 02:10:29.851326   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:30.350882   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:30.851739   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:31.350590   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:31.850375   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:32.351233   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:32.851240   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:33.352081   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:33.852173   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:34.352633   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:34.850461   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:35.350886   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:35.851033   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:36.351169   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:36.852021   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:37.351335   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:37.850791   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:38.350512   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:38.851213   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:39.351143   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:39.851202   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:40.351553   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:40.850844   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:41.350827   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:41.867204   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:42.351434   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:42.851865   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:43.352116   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:43.851343   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:44.351992   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:44.851836   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:45.351676   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:45.851190   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:46.351119   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:46.850726   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:47.350329   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:47.852543   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:48.351952   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:48.852299   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:49.351591   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:49.850165   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:50.351503   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:50.850811   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:51.350453   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:51.851111   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:52.350350   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:52.852117   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:53.351435   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:53.854509   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:54.351422   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:54.851332   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:55.350269   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:55.850578   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:56.350202   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:56.851153   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:57.351245   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:57.850921   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:58.351225   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:58.851448   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:59.351225   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:59.852150   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:00.351655   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:00.852020   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:01.352548   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:01.851598   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:02.351074   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:02.851507   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:03.353134   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:03.851104   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:04.350825   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:04.851533   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:05.350557   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:05.851239   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:06.351691   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:06.850540   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:07.351668   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:07.851662   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:08.349948   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:08.851320   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:09.353401   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:09.851374   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:10.351003   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:10.851166   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:11.350983   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:11.850985   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:12.351679   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:12.851119   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:13.351738   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:13.851073   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:14.351672   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:14.851098   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:15.351690   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:15.850661   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:16.351040   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:16.850854   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:17.350797   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:17.850510   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:18.350961   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:18.851654   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:19.350706   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:19.850457   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:20.350022   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:20.851666   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:21.351527   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:21.851128   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:22.351195   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:22.851066   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:23.355687   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:23.850924   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:24.350129   21421 kapi.go:107] duration metric: took 2m18.503442071s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0501 02:11:24.351556   21421 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-286595 cluster.
	I0501 02:11:24.352718   21421 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0501 02:11:24.353793   21421 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0501 02:11:24.354947   21421 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, storage-provisioner, default-storageclass, ingress-dns, helm-tiller, yakd, metrics-server, inspektor-gadget, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0501 02:11:24.356093   21421 addons.go:505] duration metric: took 2m31.405039024s for enable addons: enabled=[nvidia-device-plugin cloud-spanner storage-provisioner default-storageclass ingress-dns helm-tiller yakd metrics-server inspektor-gadget volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0501 02:11:24.356135   21421 start.go:245] waiting for cluster config update ...
	I0501 02:11:24.356162   21421 start.go:254] writing updated cluster config ...
	I0501 02:11:24.356406   21421 ssh_runner.go:195] Run: rm -f paused
	I0501 02:11:24.408013   21421 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0501 02:11:24.409678   21421 out.go:177] * Done! kubectl is now configured to use "addons-286595" cluster and "default" namespace by default
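Note on the gcp-auth messages above: the opt-out is per pod and only the `gcp-auth-skip-secret` label key matters, per the log text. A minimal, illustrative sketch (pod name and image tag are made up for the example; only the label key comes from the log) of creating a pod that skips credential mounting:

    kubectl --context addons-286595 run skip-gcp-demo \
        --image=gcr.io/google-samples/hello-app:1.0 \
        --labels=gcp-auth-skip-secret=true

Because the addon injects the credentials when a pod is admitted, the label only takes effect on newly created pods; existing pods have to be recreated, or the addon re-enabled with --refresh, exactly as the message above states.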
	
	
	==> CRI-O <==
	May 01 02:14:16 addons-286595 crio[678]: time="2024-05-01 02:14:16.151326206Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2008cc0d-29fb-4ea8-9357-b5f7318311ea name=/runtime.v1.RuntimeService/ListContainers
	May 01 02:14:16 addons-286595 crio[678]: time="2024-05-01 02:14:16.152059344Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3dc8451202c845e1cfc4e3c28974631a42f49e34ea0411ce1b1faac0ae57f237,PodSandboxId:8e72cc6f09b3e3fadf9ec75b9310c2d78126d3b97a1d88fd61ef25f8991d9a5f,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1714529649599140523,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-jtwrv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0125296c-388d-4687-96e0-fa6da417e535,},Annotations:map[string]string{io.kubernetes.container.hash: a80b439f,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fa4fb94b082c838ebe8a1532662751e9183f51fb69467cfab3cd6cc237ca435,PodSandboxId:a2dc1241af174a36ec9c93e88afa189e0d7100872a8ca7c508e972b3dff4683c,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7373e995f4086a9db4ce8b2f96af2c2ae7f319e3e7e2ebdc1291e9c50ae4437e,State:CONTAINER_RUNNING,CreatedAt:1714529543354931898,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7559bf459f-844d4,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: d7626782-57eb-48f0-907d-f0d1e86e250c,},Annota
tions:map[string]string{io.kubernetes.container.hash: 6de05c84,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2bc8bf0be7ba2293941b7d54385fbe3193360b4e299b1b731da89f470069a51,PodSandboxId:2e7d62998cc8784acf5c9dec6b82bd83857310927d192fa8b08bee020d42647d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:dd524baac105f5353429a7022c26a02c8c80d95a50cb4d34b6e19a3a4289ff88,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f4215f6ee683f29c0a4611b02d1adc3b7d986a96ab894eb5f7b9437c862c9499,State:CONTAINER_RUNNING,CreatedAt:1714529508994443047,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: defaul
t,io.kubernetes.pod.uid: e8f25648-6f7c-4d88-9b95-89988ad85a6b,},Annotations:map[string]string{io.kubernetes.container.hash: 9da3a0f0,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10db6a0b55c4b872bf2919f3b05779544eabcca22bf61fbb6744de0ab2d8afb5,PodSandboxId:f844e33a463589aea8f33444bb22c7b510e164c021bba4c9c600c3212811974e,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1714529483336772469,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-dgngh,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 73a446bd-5b8b-4a38-a644-68a5bae5a7d3,},Annotations:map[string]string{io.kubernetes.container.hash: 77530408,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:432cc59073fd29047b54aeeb84e6631af1878efef18205659447c86e2699bcb9,PodSandboxId:4d6f33130a9c0d902d918c179687e19ff1af184e85254c8096d42fd3115628cf,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTA
INER_EXITED,CreatedAt:1714529406268095809,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-2kg2q,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3a2a7cc6-b7d8-4cf0-87e7-42eb1da63615,},Annotations:map[string]string{io.kubernetes.container.hash: 681df2d5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c40abe3c7fabbea438a13626a124e09b026a80d64d27c409ea806f4fd413d56c,PodSandboxId:682f29487bc3a6ae58d6e989b63f6f31f87eb549b1af1cce8687405bef6da8db,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f
6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1714529406154275075,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-tflkt,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8531ca67-8098-4ba6-8ddb-d8b5132bcd02,},Annotations:map[string]string{io.kubernetes.container.hash: cafd80cb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a62a664191c4977760c04227d07b9b820559d285509fc2756deed35ae140a10,PodSandboxId:ad1e349c324dba757460ebdcb36722456dd3dfcb30afd93c00934130590bf0f1,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259b
fb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1714529390648983383,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-q2wzp,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 85549b73-ebbe-4fa9-9fe0-72d18004bc71,},Annotations:map[string]string{io.kubernetes.container.hash: a5b1870f,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2e873794e6a54a807a04cb4169d0cdf7072fdc2c36d4eb97fa559d1c1077ce6,PodSandboxId:6294a66cd68bc9c243ae456aea52a5bc0b3ab300e9d2370d2649dfaa8deda9be,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172
e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1714529384391927028,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-gvcdl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9385fe21-53b5-4105-bb14-3008fcd7dc3a,},Annotations:map[string]string{io.kubernetes.container.hash: b4e4ef8,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4410f9180e8b7a76ed62481d4939e86cb64584d5029463b76ac78aba4d683fb2,PodSandboxId:52f085039ab716b6a5764a7f162fb92caeb9f15f16496e0238724151a3bcc477,Metadata:&ContainerMetadata{Name:local-
path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1714529373783972306,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-6wdsq,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: ce525196-ae88-42bb-8519-66775d8bfd11,},Annotations:map[string]string{io.kubernetes.container.hash: 82a9c618,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea17f2d9434251df9401981536acddc1f90957bd5e65bc3d10cd23f2258cecbc,PodSandboxId:cc387497d8c12938de15c71ac1d5667043a9
293348e8a60abdb3c871258371e2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714529340397208355,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b23a96b2-9c34-4d4f-9df5-90dc5195248b,},Annotations:map[string]string{io.kubernetes.container.hash: 13732288,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09d11ea02380a6ff352ea6ce929b940136fe970bfecd9ad03d3100cc98c598b6,PodSandboxId:bb441acccf15e35c28edd043946725c6b690c977da855240
8faeed2463860243,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714529337800103649,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rlvmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9eb9071-e21b-46fc-8605-055d6915f55e,},Annotations:map[string]string{io.kubernetes.container.hash: 69f92d0f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3cfa2da63bbf5b5bf434bebf921cd1711d24a75e5e358306e59c34caf06382f,PodSandboxId:38ef8fa37bf0ee992cf804eea09c31a3645f258cb6483f8bd8e876a77faf5186,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714529334395019868,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7dw4g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7aec44ec-1615-4aa4-9d65-e464831f8518,},Annotations:map[string]string{io.kubernetes.container.hash: 851c5df7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminati
onGracePeriod: 30,},},&Container{Id:f2a049b4c17d6d072b9097aa0071b82d6d4edc2a255d26f724807d4ac369f9c2,PodSandboxId:6085650b010323972a3db452fa956a57d8c7020bd388875734adcffecd114fcb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714529313518598784,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-286595,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d221c44f5de61b31369bfd052ad23bd,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30
,},},&Container{Id:ff3c851c7688d3c9fbb0d390c99ba4b9407c06fff923031bc3115f0c17f49cac,PodSandboxId:c8262f04ef4be0d9d65eee36bdcdd8c16ba76c2ef296e0274e4e12a870d0b39c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714529313487129105,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-286595,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ae6bc0c9189de883182d2bdeaf96bb1,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGrace
Period: 30,},},&Container{Id:976be39bc269736268dbe23a871c448f5827e29fde81ff90e0159d69f9af5bd2,PodSandboxId:f4de0dae893536d69fed3e1ba4efd516bf0ebcb53f31e84ef2e794ec189d1476,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714529313444695735,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-286595,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6fe04d16c604e95cfae2a0539842d3a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a081a71,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&C
ontainer{Id:f5d66ed0ede7ea6abf6b73f76e9bd96372ad218e49de932b1f7d31ddf968ae30,PodSandboxId:2787103be5c6e4a6c4e2799e1eb48b451b4a6b9f490477a0420833d00ec32937,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714529313430592961,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-286595,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e1ad2676189ad08ca8b732b2016bda4,},Annotations:map[string]string{io.kubernetes.container.hash: 9a2629b0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=
2008cc0d-29fb-4ea8-9357-b5f7318311ea name=/runtime.v1.RuntimeService/ListContainers
	May 01 02:14:16 addons-286595 crio[678]: time="2024-05-01 02:14:16.201760233Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5e159f62-ede7-4c2b-ae94-eb6bcb9ec11e name=/runtime.v1.RuntimeService/Version
	May 01 02:14:16 addons-286595 crio[678]: time="2024-05-01 02:14:16.201892551Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5e159f62-ede7-4c2b-ae94-eb6bcb9ec11e name=/runtime.v1.RuntimeService/Version
	May 01 02:14:16 addons-286595 crio[678]: time="2024-05-01 02:14:16.203895414Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4001f498-0ba8-4ea6-9973-2f608f8790e8 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 02:14:16 addons-286595 crio[678]: time="2024-05-01 02:14:16.206116550Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714529656206082924,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579668,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4001f498-0ba8-4ea6-9973-2f608f8790e8 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 02:14:16 addons-286595 crio[678]: time="2024-05-01 02:14:16.207059484Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=151119bb-1146-4784-9b8a-b08af788b5f3 name=/runtime.v1.RuntimeService/ListContainers
	May 01 02:14:16 addons-286595 crio[678]: time="2024-05-01 02:14:16.207150449Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=151119bb-1146-4784-9b8a-b08af788b5f3 name=/runtime.v1.RuntimeService/ListContainers
	May 01 02:14:16 addons-286595 crio[678]: time="2024-05-01 02:14:16.208072703Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3dc8451202c845e1cfc4e3c28974631a42f49e34ea0411ce1b1faac0ae57f237,PodSandboxId:8e72cc6f09b3e3fadf9ec75b9310c2d78126d3b97a1d88fd61ef25f8991d9a5f,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1714529649599140523,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-jtwrv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0125296c-388d-4687-96e0-fa6da417e535,},Annotations:map[string]string{io.kubernetes.container.hash: a80b439f,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fa4fb94b082c838ebe8a1532662751e9183f51fb69467cfab3cd6cc237ca435,PodSandboxId:a2dc1241af174a36ec9c93e88afa189e0d7100872a8ca7c508e972b3dff4683c,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7373e995f4086a9db4ce8b2f96af2c2ae7f319e3e7e2ebdc1291e9c50ae4437e,State:CONTAINER_RUNNING,CreatedAt:1714529543354931898,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7559bf459f-844d4,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: d7626782-57eb-48f0-907d-f0d1e86e250c,},Annota
tions:map[string]string{io.kubernetes.container.hash: 6de05c84,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2bc8bf0be7ba2293941b7d54385fbe3193360b4e299b1b731da89f470069a51,PodSandboxId:2e7d62998cc8784acf5c9dec6b82bd83857310927d192fa8b08bee020d42647d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:dd524baac105f5353429a7022c26a02c8c80d95a50cb4d34b6e19a3a4289ff88,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f4215f6ee683f29c0a4611b02d1adc3b7d986a96ab894eb5f7b9437c862c9499,State:CONTAINER_RUNNING,CreatedAt:1714529508994443047,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: defaul
t,io.kubernetes.pod.uid: e8f25648-6f7c-4d88-9b95-89988ad85a6b,},Annotations:map[string]string{io.kubernetes.container.hash: 9da3a0f0,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10db6a0b55c4b872bf2919f3b05779544eabcca22bf61fbb6744de0ab2d8afb5,PodSandboxId:f844e33a463589aea8f33444bb22c7b510e164c021bba4c9c600c3212811974e,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1714529483336772469,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-dgngh,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 73a446bd-5b8b-4a38-a644-68a5bae5a7d3,},Annotations:map[string]string{io.kubernetes.container.hash: 77530408,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:432cc59073fd29047b54aeeb84e6631af1878efef18205659447c86e2699bcb9,PodSandboxId:4d6f33130a9c0d902d918c179687e19ff1af184e85254c8096d42fd3115628cf,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTA
INER_EXITED,CreatedAt:1714529406268095809,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-2kg2q,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3a2a7cc6-b7d8-4cf0-87e7-42eb1da63615,},Annotations:map[string]string{io.kubernetes.container.hash: 681df2d5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c40abe3c7fabbea438a13626a124e09b026a80d64d27c409ea806f4fd413d56c,PodSandboxId:682f29487bc3a6ae58d6e989b63f6f31f87eb549b1af1cce8687405bef6da8db,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f
6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1714529406154275075,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-tflkt,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8531ca67-8098-4ba6-8ddb-d8b5132bcd02,},Annotations:map[string]string{io.kubernetes.container.hash: cafd80cb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a62a664191c4977760c04227d07b9b820559d285509fc2756deed35ae140a10,PodSandboxId:ad1e349c324dba757460ebdcb36722456dd3dfcb30afd93c00934130590bf0f1,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259b
fb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1714529390648983383,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-q2wzp,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 85549b73-ebbe-4fa9-9fe0-72d18004bc71,},Annotations:map[string]string{io.kubernetes.container.hash: a5b1870f,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2e873794e6a54a807a04cb4169d0cdf7072fdc2c36d4eb97fa559d1c1077ce6,PodSandboxId:6294a66cd68bc9c243ae456aea52a5bc0b3ab300e9d2370d2649dfaa8deda9be,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172
e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1714529384391927028,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-gvcdl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9385fe21-53b5-4105-bb14-3008fcd7dc3a,},Annotations:map[string]string{io.kubernetes.container.hash: b4e4ef8,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4410f9180e8b7a76ed62481d4939e86cb64584d5029463b76ac78aba4d683fb2,PodSandboxId:52f085039ab716b6a5764a7f162fb92caeb9f15f16496e0238724151a3bcc477,Metadata:&ContainerMetadata{Name:local-
path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1714529373783972306,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-6wdsq,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: ce525196-ae88-42bb-8519-66775d8bfd11,},Annotations:map[string]string{io.kubernetes.container.hash: 82a9c618,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea17f2d9434251df9401981536acddc1f90957bd5e65bc3d10cd23f2258cecbc,PodSandboxId:cc387497d8c12938de15c71ac1d5667043a9
293348e8a60abdb3c871258371e2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714529340397208355,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b23a96b2-9c34-4d4f-9df5-90dc5195248b,},Annotations:map[string]string{io.kubernetes.container.hash: 13732288,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09d11ea02380a6ff352ea6ce929b940136fe970bfecd9ad03d3100cc98c598b6,PodSandboxId:bb441acccf15e35c28edd043946725c6b690c977da855240
8faeed2463860243,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714529337800103649,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rlvmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9eb9071-e21b-46fc-8605-055d6915f55e,},Annotations:map[string]string{io.kubernetes.container.hash: 69f92d0f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3cfa2da63bbf5b5bf434bebf921cd1711d24a75e5e358306e59c34caf06382f,PodSandboxId:38ef8fa37bf0ee992cf804eea09c31a3645f258cb6483f8bd8e876a77faf5186,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714529334395019868,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7dw4g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7aec44ec-1615-4aa4-9d65-e464831f8518,},Annotations:map[string]string{io.kubernetes.container.hash: 851c5df7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminati
onGracePeriod: 30,},},&Container{Id:f2a049b4c17d6d072b9097aa0071b82d6d4edc2a255d26f724807d4ac369f9c2,PodSandboxId:6085650b010323972a3db452fa956a57d8c7020bd388875734adcffecd114fcb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714529313518598784,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-286595,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d221c44f5de61b31369bfd052ad23bd,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30
,},},&Container{Id:ff3c851c7688d3c9fbb0d390c99ba4b9407c06fff923031bc3115f0c17f49cac,PodSandboxId:c8262f04ef4be0d9d65eee36bdcdd8c16ba76c2ef296e0274e4e12a870d0b39c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714529313487129105,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-286595,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ae6bc0c9189de883182d2bdeaf96bb1,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGrace
Period: 30,},},&Container{Id:976be39bc269736268dbe23a871c448f5827e29fde81ff90e0159d69f9af5bd2,PodSandboxId:f4de0dae893536d69fed3e1ba4efd516bf0ebcb53f31e84ef2e794ec189d1476,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714529313444695735,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-286595,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6fe04d16c604e95cfae2a0539842d3a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a081a71,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&C
ontainer{Id:f5d66ed0ede7ea6abf6b73f76e9bd96372ad218e49de932b1f7d31ddf968ae30,PodSandboxId:2787103be5c6e4a6c4e2799e1eb48b451b4a6b9f490477a0420833d00ec32937,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714529313430592961,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-286595,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e1ad2676189ad08ca8b732b2016bda4,},Annotations:map[string]string{io.kubernetes.container.hash: 9a2629b0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=
151119bb-1146-4784-9b8a-b08af788b5f3 name=/runtime.v1.RuntimeService/ListContainers
	May 01 02:14:16 addons-286595 crio[678]: time="2024-05-01 02:14:16.250234600Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1ff3b4ee-96e0-4137-b324-65aaf8f6d7c8 name=/runtime.v1.RuntimeService/Version
	May 01 02:14:16 addons-286595 crio[678]: time="2024-05-01 02:14:16.250310668Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1ff3b4ee-96e0-4137-b324-65aaf8f6d7c8 name=/runtime.v1.RuntimeService/Version
	May 01 02:14:16 addons-286595 crio[678]: time="2024-05-01 02:14:16.252062149Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=76c4481d-06fe-4fed-a412-48623d3d7551 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 02:14:16 addons-286595 crio[678]: time="2024-05-01 02:14:16.253625917Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714529656253600584,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579668,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=76c4481d-06fe-4fed-a412-48623d3d7551 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 02:14:16 addons-286595 crio[678]: time="2024-05-01 02:14:16.254311231Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0903e77f-ad9e-461b-bdc2-2fd324506585 name=/runtime.v1.RuntimeService/ListContainers
	May 01 02:14:16 addons-286595 crio[678]: time="2024-05-01 02:14:16.254501529Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0903e77f-ad9e-461b-bdc2-2fd324506585 name=/runtime.v1.RuntimeService/ListContainers
	May 01 02:14:16 addons-286595 crio[678]: time="2024-05-01 02:14:16.254953307Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3dc8451202c845e1cfc4e3c28974631a42f49e34ea0411ce1b1faac0ae57f237,PodSandboxId:8e72cc6f09b3e3fadf9ec75b9310c2d78126d3b97a1d88fd61ef25f8991d9a5f,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1714529649599140523,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-jtwrv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0125296c-388d-4687-96e0-fa6da417e535,},Annotations:map[string]string{io.kubernetes.container.hash: a80b439f,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fa4fb94b082c838ebe8a1532662751e9183f51fb69467cfab3cd6cc237ca435,PodSandboxId:a2dc1241af174a36ec9c93e88afa189e0d7100872a8ca7c508e972b3dff4683c,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7373e995f4086a9db4ce8b2f96af2c2ae7f319e3e7e2ebdc1291e9c50ae4437e,State:CONTAINER_RUNNING,CreatedAt:1714529543354931898,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7559bf459f-844d4,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: d7626782-57eb-48f0-907d-f0d1e86e250c,},Annota
tions:map[string]string{io.kubernetes.container.hash: 6de05c84,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2bc8bf0be7ba2293941b7d54385fbe3193360b4e299b1b731da89f470069a51,PodSandboxId:2e7d62998cc8784acf5c9dec6b82bd83857310927d192fa8b08bee020d42647d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:dd524baac105f5353429a7022c26a02c8c80d95a50cb4d34b6e19a3a4289ff88,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f4215f6ee683f29c0a4611b02d1adc3b7d986a96ab894eb5f7b9437c862c9499,State:CONTAINER_RUNNING,CreatedAt:1714529508994443047,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: defaul
t,io.kubernetes.pod.uid: e8f25648-6f7c-4d88-9b95-89988ad85a6b,},Annotations:map[string]string{io.kubernetes.container.hash: 9da3a0f0,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10db6a0b55c4b872bf2919f3b05779544eabcca22bf61fbb6744de0ab2d8afb5,PodSandboxId:f844e33a463589aea8f33444bb22c7b510e164c021bba4c9c600c3212811974e,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1714529483336772469,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-dgngh,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 73a446bd-5b8b-4a38-a644-68a5bae5a7d3,},Annotations:map[string]string{io.kubernetes.container.hash: 77530408,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:432cc59073fd29047b54aeeb84e6631af1878efef18205659447c86e2699bcb9,PodSandboxId:4d6f33130a9c0d902d918c179687e19ff1af184e85254c8096d42fd3115628cf,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTA
INER_EXITED,CreatedAt:1714529406268095809,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-2kg2q,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3a2a7cc6-b7d8-4cf0-87e7-42eb1da63615,},Annotations:map[string]string{io.kubernetes.container.hash: 681df2d5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c40abe3c7fabbea438a13626a124e09b026a80d64d27c409ea806f4fd413d56c,PodSandboxId:682f29487bc3a6ae58d6e989b63f6f31f87eb549b1af1cce8687405bef6da8db,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f
6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1714529406154275075,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-tflkt,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8531ca67-8098-4ba6-8ddb-d8b5132bcd02,},Annotations:map[string]string{io.kubernetes.container.hash: cafd80cb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a62a664191c4977760c04227d07b9b820559d285509fc2756deed35ae140a10,PodSandboxId:ad1e349c324dba757460ebdcb36722456dd3dfcb30afd93c00934130590bf0f1,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259b
fb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1714529390648983383,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-q2wzp,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 85549b73-ebbe-4fa9-9fe0-72d18004bc71,},Annotations:map[string]string{io.kubernetes.container.hash: a5b1870f,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2e873794e6a54a807a04cb4169d0cdf7072fdc2c36d4eb97fa559d1c1077ce6,PodSandboxId:6294a66cd68bc9c243ae456aea52a5bc0b3ab300e9d2370d2649dfaa8deda9be,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172
e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1714529384391927028,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-gvcdl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9385fe21-53b5-4105-bb14-3008fcd7dc3a,},Annotations:map[string]string{io.kubernetes.container.hash: b4e4ef8,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4410f9180e8b7a76ed62481d4939e86cb64584d5029463b76ac78aba4d683fb2,PodSandboxId:52f085039ab716b6a5764a7f162fb92caeb9f15f16496e0238724151a3bcc477,Metadata:&ContainerMetadata{Name:local-
path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1714529373783972306,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-6wdsq,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: ce525196-ae88-42bb-8519-66775d8bfd11,},Annotations:map[string]string{io.kubernetes.container.hash: 82a9c618,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea17f2d9434251df9401981536acddc1f90957bd5e65bc3d10cd23f2258cecbc,PodSandboxId:cc387497d8c12938de15c71ac1d5667043a9
293348e8a60abdb3c871258371e2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714529340397208355,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b23a96b2-9c34-4d4f-9df5-90dc5195248b,},Annotations:map[string]string{io.kubernetes.container.hash: 13732288,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09d11ea02380a6ff352ea6ce929b940136fe970bfecd9ad03d3100cc98c598b6,PodSandboxId:bb441acccf15e35c28edd043946725c6b690c977da855240
8faeed2463860243,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714529337800103649,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rlvmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9eb9071-e21b-46fc-8605-055d6915f55e,},Annotations:map[string]string{io.kubernetes.container.hash: 69f92d0f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3cfa2da63bbf5b5bf434bebf921cd1711d24a75e5e358306e59c34caf06382f,PodSandboxId:38ef8fa37bf0ee992cf804eea09c31a3645f258cb6483f8bd8e876a77faf5186,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714529334395019868,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7dw4g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7aec44ec-1615-4aa4-9d65-e464831f8518,},Annotations:map[string]string{io.kubernetes.container.hash: 851c5df7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminati
onGracePeriod: 30,},},&Container{Id:f2a049b4c17d6d072b9097aa0071b82d6d4edc2a255d26f724807d4ac369f9c2,PodSandboxId:6085650b010323972a3db452fa956a57d8c7020bd388875734adcffecd114fcb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714529313518598784,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-286595,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d221c44f5de61b31369bfd052ad23bd,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30
,},},&Container{Id:ff3c851c7688d3c9fbb0d390c99ba4b9407c06fff923031bc3115f0c17f49cac,PodSandboxId:c8262f04ef4be0d9d65eee36bdcdd8c16ba76c2ef296e0274e4e12a870d0b39c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714529313487129105,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-286595,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ae6bc0c9189de883182d2bdeaf96bb1,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGrace
Period: 30,},},&Container{Id:976be39bc269736268dbe23a871c448f5827e29fde81ff90e0159d69f9af5bd2,PodSandboxId:f4de0dae893536d69fed3e1ba4efd516bf0ebcb53f31e84ef2e794ec189d1476,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714529313444695735,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-286595,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6fe04d16c604e95cfae2a0539842d3a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a081a71,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&C
ontainer{Id:f5d66ed0ede7ea6abf6b73f76e9bd96372ad218e49de932b1f7d31ddf968ae30,PodSandboxId:2787103be5c6e4a6c4e2799e1eb48b451b4a6b9f490477a0420833d00ec32937,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714529313430592961,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-286595,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e1ad2676189ad08ca8b732b2016bda4,},Annotations:map[string]string{io.kubernetes.container.hash: 9a2629b0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=
0903e77f-ad9e-461b-bdc2-2fd324506585 name=/runtime.v1.RuntimeService/ListContainers
	May 01 02:14:16 addons-286595 crio[678]: time="2024-05-01 02:14:16.299619728Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e3337fd2-80f1-420e-8b9d-afa29fcf8606 name=/runtime.v1.RuntimeService/Version
	May 01 02:14:16 addons-286595 crio[678]: time="2024-05-01 02:14:16.299721030Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e3337fd2-80f1-420e-8b9d-afa29fcf8606 name=/runtime.v1.RuntimeService/Version
	May 01 02:14:16 addons-286595 crio[678]: time="2024-05-01 02:14:16.301144092Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2923b394-6cac-4346-93b4-5c4335285979 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 02:14:16 addons-286595 crio[678]: time="2024-05-01 02:14:16.302697619Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714529656302670260,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579668,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2923b394-6cac-4346-93b4-5c4335285979 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 02:14:16 addons-286595 crio[678]: time="2024-05-01 02:14:16.303855643Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=abee8f72-96be-463b-ba7f-5af5de22d354 name=/runtime.v1.RuntimeService/ListContainers
	May 01 02:14:16 addons-286595 crio[678]: time="2024-05-01 02:14:16.303911298Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=abee8f72-96be-463b-ba7f-5af5de22d354 name=/runtime.v1.RuntimeService/ListContainers
	May 01 02:14:16 addons-286595 crio[678]: time="2024-05-01 02:14:16.304681731Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3dc8451202c845e1cfc4e3c28974631a42f49e34ea0411ce1b1faac0ae57f237,PodSandboxId:8e72cc6f09b3e3fadf9ec75b9310c2d78126d3b97a1d88fd61ef25f8991d9a5f,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1714529649599140523,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-jtwrv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0125296c-388d-4687-96e0-fa6da417e535,},Annotations:map[string]string{io.kubernetes.container.hash: a80b439f,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fa4fb94b082c838ebe8a1532662751e9183f51fb69467cfab3cd6cc237ca435,PodSandboxId:a2dc1241af174a36ec9c93e88afa189e0d7100872a8ca7c508e972b3dff4683c,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7373e995f4086a9db4ce8b2f96af2c2ae7f319e3e7e2ebdc1291e9c50ae4437e,State:CONTAINER_RUNNING,CreatedAt:1714529543354931898,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7559bf459f-844d4,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: d7626782-57eb-48f0-907d-f0d1e86e250c,},Annota
tions:map[string]string{io.kubernetes.container.hash: 6de05c84,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2bc8bf0be7ba2293941b7d54385fbe3193360b4e299b1b731da89f470069a51,PodSandboxId:2e7d62998cc8784acf5c9dec6b82bd83857310927d192fa8b08bee020d42647d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:dd524baac105f5353429a7022c26a02c8c80d95a50cb4d34b6e19a3a4289ff88,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f4215f6ee683f29c0a4611b02d1adc3b7d986a96ab894eb5f7b9437c862c9499,State:CONTAINER_RUNNING,CreatedAt:1714529508994443047,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: defaul
t,io.kubernetes.pod.uid: e8f25648-6f7c-4d88-9b95-89988ad85a6b,},Annotations:map[string]string{io.kubernetes.container.hash: 9da3a0f0,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10db6a0b55c4b872bf2919f3b05779544eabcca22bf61fbb6744de0ab2d8afb5,PodSandboxId:f844e33a463589aea8f33444bb22c7b510e164c021bba4c9c600c3212811974e,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1714529483336772469,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-dgngh,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 73a446bd-5b8b-4a38-a644-68a5bae5a7d3,},Annotations:map[string]string{io.kubernetes.container.hash: 77530408,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:432cc59073fd29047b54aeeb84e6631af1878efef18205659447c86e2699bcb9,PodSandboxId:4d6f33130a9c0d902d918c179687e19ff1af184e85254c8096d42fd3115628cf,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTA
INER_EXITED,CreatedAt:1714529406268095809,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-2kg2q,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3a2a7cc6-b7d8-4cf0-87e7-42eb1da63615,},Annotations:map[string]string{io.kubernetes.container.hash: 681df2d5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c40abe3c7fabbea438a13626a124e09b026a80d64d27c409ea806f4fd413d56c,PodSandboxId:682f29487bc3a6ae58d6e989b63f6f31f87eb549b1af1cce8687405bef6da8db,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f
6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1714529406154275075,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-tflkt,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8531ca67-8098-4ba6-8ddb-d8b5132bcd02,},Annotations:map[string]string{io.kubernetes.container.hash: cafd80cb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a62a664191c4977760c04227d07b9b820559d285509fc2756deed35ae140a10,PodSandboxId:ad1e349c324dba757460ebdcb36722456dd3dfcb30afd93c00934130590bf0f1,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259b
fb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1714529390648983383,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-q2wzp,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 85549b73-ebbe-4fa9-9fe0-72d18004bc71,},Annotations:map[string]string{io.kubernetes.container.hash: a5b1870f,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2e873794e6a54a807a04cb4169d0cdf7072fdc2c36d4eb97fa559d1c1077ce6,PodSandboxId:6294a66cd68bc9c243ae456aea52a5bc0b3ab300e9d2370d2649dfaa8deda9be,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172
e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1714529384391927028,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-gvcdl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9385fe21-53b5-4105-bb14-3008fcd7dc3a,},Annotations:map[string]string{io.kubernetes.container.hash: b4e4ef8,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4410f9180e8b7a76ed62481d4939e86cb64584d5029463b76ac78aba4d683fb2,PodSandboxId:52f085039ab716b6a5764a7f162fb92caeb9f15f16496e0238724151a3bcc477,Metadata:&ContainerMetadata{Name:local-
path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1714529373783972306,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-6wdsq,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: ce525196-ae88-42bb-8519-66775d8bfd11,},Annotations:map[string]string{io.kubernetes.container.hash: 82a9c618,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea17f2d9434251df9401981536acddc1f90957bd5e65bc3d10cd23f2258cecbc,PodSandboxId:cc387497d8c12938de15c71ac1d5667043a9
293348e8a60abdb3c871258371e2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714529340397208355,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b23a96b2-9c34-4d4f-9df5-90dc5195248b,},Annotations:map[string]string{io.kubernetes.container.hash: 13732288,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09d11ea02380a6ff352ea6ce929b940136fe970bfecd9ad03d3100cc98c598b6,PodSandboxId:bb441acccf15e35c28edd043946725c6b690c977da855240
8faeed2463860243,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714529337800103649,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rlvmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9eb9071-e21b-46fc-8605-055d6915f55e,},Annotations:map[string]string{io.kubernetes.container.hash: 69f92d0f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3cfa2da63bbf5b5bf434bebf921cd1711d24a75e5e358306e59c34caf06382f,PodSandboxId:38ef8fa37bf0ee992cf804eea09c31a3645f258cb6483f8bd8e876a77faf5186,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714529334395019868,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7dw4g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7aec44ec-1615-4aa4-9d65-e464831f8518,},Annotations:map[string]string{io.kubernetes.container.hash: 851c5df7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminati
onGracePeriod: 30,},},&Container{Id:f2a049b4c17d6d072b9097aa0071b82d6d4edc2a255d26f724807d4ac369f9c2,PodSandboxId:6085650b010323972a3db452fa956a57d8c7020bd388875734adcffecd114fcb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714529313518598784,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-286595,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d221c44f5de61b31369bfd052ad23bd,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30
,},},&Container{Id:ff3c851c7688d3c9fbb0d390c99ba4b9407c06fff923031bc3115f0c17f49cac,PodSandboxId:c8262f04ef4be0d9d65eee36bdcdd8c16ba76c2ef296e0274e4e12a870d0b39c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714529313487129105,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-286595,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ae6bc0c9189de883182d2bdeaf96bb1,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGrace
Period: 30,},},&Container{Id:976be39bc269736268dbe23a871c448f5827e29fde81ff90e0159d69f9af5bd2,PodSandboxId:f4de0dae893536d69fed3e1ba4efd516bf0ebcb53f31e84ef2e794ec189d1476,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714529313444695735,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-286595,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6fe04d16c604e95cfae2a0539842d3a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a081a71,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&C
ontainer{Id:f5d66ed0ede7ea6abf6b73f76e9bd96372ad218e49de932b1f7d31ddf968ae30,PodSandboxId:2787103be5c6e4a6c4e2799e1eb48b451b4a6b9f490477a0420833d00ec32937,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714529313430592961,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-286595,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e1ad2676189ad08ca8b732b2016bda4,},Annotations:map[string]string{io.kubernetes.container.hash: 9a2629b0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=
abee8f72-96be-463b-ba7f-5af5de22d354 name=/runtime.v1.RuntimeService/ListContainers
	May 01 02:14:16 addons-286595 crio[678]: time="2024-05-01 02:14:16.320658828Z" level=debug msg="Request: &VersionRequest{Version:0.1.0,}" file="otel-collector/interceptors.go:62" id=ea6db7e8-de56-4719-8523-7bc7e0b42728 name=/runtime.v1.RuntimeService/Version
	May 01 02:14:16 addons-286595 crio[678]: time="2024-05-01 02:14:16.320752178Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ea6db7e8-de56-4719-8523-7bc7e0b42728 name=/runtime.v1.RuntimeService/Version
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	3dc8451202c84       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                      6 seconds ago        Running             hello-world-app           0                   8e72cc6f09b3e       hello-world-app-86c47465fc-jtwrv
	9fa4fb94b082c       ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f                        About a minute ago   Running             headlamp                  0                   a2dc1241af174       headlamp-7559bf459f-844d4
	a2bc8bf0be7ba       docker.io/library/nginx@sha256:dd524baac105f5353429a7022c26a02c8c80d95a50cb4d34b6e19a3a4289ff88                              2 minutes ago        Running             nginx                     0                   2e7d62998cc87       nginx
	10db6a0b55c4b       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                 2 minutes ago        Running             gcp-auth                  0                   f844e33a46358       gcp-auth-5db96cd9b4-dgngh
	432cc59073fd2       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2   4 minutes ago        Exited              patch                     0                   4d6f33130a9c0       ingress-nginx-admission-patch-2kg2q
	c40abe3c7fabb       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2   4 minutes ago        Exited              create                    0                   682f29487bc3a       ingress-nginx-admission-create-tflkt
	3a62a664191c4       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                              4 minutes ago        Running             yakd                      0                   ad1e349c324db       yakd-dashboard-5ddbf7d777-q2wzp
	c2e873794e6a5       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872        4 minutes ago        Running             metrics-server            0                   6294a66cd68bc       metrics-server-c59844bb4-gvcdl
	4410f9180e8b7       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             4 minutes ago        Running             local-path-provisioner    0                   52f085039ab71       local-path-provisioner-8d985888d-6wdsq
	ea17f2d943425       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             5 minutes ago        Running             storage-provisioner       0                   cc387497d8c12       storage-provisioner
	09d11ea02380a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                             5 minutes ago        Running             coredns                   0                   bb441acccf15e       coredns-7db6d8ff4d-rlvmm
	e3cfa2da63bbf       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                                             5 minutes ago        Running             kube-proxy                0                   38ef8fa37bf0e       kube-proxy-7dw4g
	f2a049b4c17d6       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                                             5 minutes ago        Running             kube-scheduler            0                   6085650b01032       kube-scheduler-addons-286595
	ff3c851c7688d       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                                             5 minutes ago        Running             kube-controller-manager   0                   c8262f04ef4be       kube-controller-manager-addons-286595
	976be39bc2697       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                                             5 minutes ago        Running             kube-apiserver            0                   f4de0dae89353       kube-apiserver-addons-286595
	f5d66ed0ede7e       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                             5 minutes ago        Running             etcd                      0                   2787103be5c6e       etcd-addons-286595
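
The container status table above is the CRI-level view of the node: one row per CRI-O container, including the exited ingress-nginx admission-webhook jobs. A minimal way to regenerate it directly on the node, assuming crictl inside the minikube VM is already pointed at the CRI-O socket (the default for this image), is:

    out/minikube-linux-amd64 -p addons-286595 ssh "sudo crictl ps -a"

The -a flag is what keeps the Exited create/patch containers visible alongside the running workloads.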
	
	
	==> coredns [09d11ea02380a6ff352ea6ce929b940136fe970bfecd9ad03d3100cc98c598b6] <==
	[INFO] 10.244.0.8:56913 - 16125 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000619599s
	[INFO] 10.244.0.8:33023 - 5277 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000149458s
	[INFO] 10.244.0.8:33023 - 45723 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000373581s
	[INFO] 10.244.0.8:43266 - 36546 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000146545s
	[INFO] 10.244.0.8:43266 - 15812 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000132869s
	[INFO] 10.244.0.8:59983 - 49520 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000113131s
	[INFO] 10.244.0.8:59983 - 29299 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000058142s
	[INFO] 10.244.0.8:41609 - 25749 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000480171s
	[INFO] 10.244.0.8:41609 - 26000 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000547791s
	[INFO] 10.244.0.8:38364 - 15795 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000122158s
	[INFO] 10.244.0.8:38364 - 43952 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000156286s
	[INFO] 10.244.0.8:33956 - 37525 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000074082s
	[INFO] 10.244.0.8:33956 - 60311 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00019137s
	[INFO] 10.244.0.8:35494 - 32708 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000067575s
	[INFO] 10.244.0.8:35494 - 12762 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000033832s
	[INFO] 10.244.0.22:58112 - 64578 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000504346s
	[INFO] 10.244.0.22:57420 - 44570 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000283478s
	[INFO] 10.244.0.22:33874 - 4246 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000185886s
	[INFO] 10.244.0.22:35305 - 60562 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00012197s
	[INFO] 10.244.0.22:38894 - 1320 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000077391s
	[INFO] 10.244.0.22:50127 - 57463 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000345002s
	[INFO] 10.244.0.22:47380 - 48268 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001029882s
	[INFO] 10.244.0.22:55881 - 56349 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.001253411s
	[INFO] 10.244.0.26:45143 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00021357s
	[INFO] 10.244.0.26:49210 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000096037s
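
The NXDOMAIN entries above are not lookup failures of the registry or storage.googleapis.com endpoints; they are the pod resolver walking its search domains (kube-system.svc.cluster.local, svc.cluster.local, cluster.local) before the final NOERROR answer, which is the expected behaviour with the cluster's default ndots setting. A quick way to confirm the search path and resolution from inside the cluster (the dns-check pod name and busybox tag below are illustrative, not taken from this run) is:

    kubectl --context addons-286595 exec nginx -- cat /etc/resolv.conf
    kubectl --context addons-286595 run dns-check --image=busybox:1.36 --rm -it --restart=Never -- nslookup registry.kube-system.svc.cluster.local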
	
	
	==> describe nodes <==
	Name:               addons-286595
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-286595
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e
	                    minikube.k8s.io/name=addons-286595
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_01T02_08_39_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-286595
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 May 2024 02:08:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-286595
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 May 2024 02:14:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 May 2024 02:14:16 +0000   Wed, 01 May 2024 02:08:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 May 2024 02:14:16 +0000   Wed, 01 May 2024 02:08:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 May 2024 02:14:16 +0000   Wed, 01 May 2024 02:08:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 May 2024 02:14:16 +0000   Wed, 01 May 2024 02:08:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.173
	  Hostname:    addons-286595
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 bc112c0dc1d8478892371fb1c7c107fa
	  System UUID:                bc112c0d-c1d8-4788-9237-1fb1c7c107fa
	  Boot ID:                    d6cd403d-3270-41ed-8568-6727e96b7924
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                      ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-86c47465fc-jtwrv          0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m32s
	  gcp-auth                    gcp-auth-5db96cd9b4-dgngh                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m11s
	  headlamp                    headlamp-7559bf459f-844d4                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 coredns-7db6d8ff4d-rlvmm                  100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m24s
	  kube-system                 etcd-addons-286595                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m38s
	  kube-system                 kube-apiserver-addons-286595              250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m38s
	  kube-system                 kube-controller-manager-addons-286595     200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m38s
	  kube-system                 kube-proxy-7dw4g                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m24s
	  kube-system                 kube-scheduler-addons-286595              100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m38s
	  kube-system                 metrics-server-c59844bb4-gvcdl            100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         5m17s
	  kube-system                 storage-provisioner                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m18s
	  local-path-storage          local-path-provisioner-8d985888d-6wdsq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m18s
	  yakd-dashboard              yakd-dashboard-5ddbf7d777-q2wzp           0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     5m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             498Mi (13%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 5m20s  kube-proxy       
	  Normal  Starting                 5m38s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m38s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m38s  kubelet          Node addons-286595 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m38s  kubelet          Node addons-286595 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m38s  kubelet          Node addons-286595 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m37s  kubelet          Node addons-286595 status is now: NodeReady
	  Normal  RegisteredNode           5m25s  node-controller  Node addons-286595 event: Registered Node addons-286595 in Controller
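
The node description above (labels, allocatable resources, per-pod requests, and events) is the standard kubectl view and can be re-queried at any point while the cluster is still up with:

    kubectl --context addons-286595 describe node addons-286595

The request percentages are relative to the allocatable values shown above, e.g. the 850m CPU total against the node's 2 allocatable CPUs gives the reported 42%.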
	
	
	==> dmesg <==
	[  +5.077667] kauditd_printk_skb: 101 callbacks suppressed
	[May 1 02:09] kauditd_printk_skb: 136 callbacks suppressed
	[  +5.233590] kauditd_printk_skb: 89 callbacks suppressed
	[ +22.125333] kauditd_printk_skb: 4 callbacks suppressed
	[ +10.667585] kauditd_printk_skb: 2 callbacks suppressed
	[  +9.336300] kauditd_printk_skb: 30 callbacks suppressed
	[  +9.842048] kauditd_printk_skb: 4 callbacks suppressed
	[May 1 02:10] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.144770] kauditd_printk_skb: 39 callbacks suppressed
	[  +6.587913] kauditd_printk_skb: 50 callbacks suppressed
	[  +5.576870] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.225177] kauditd_printk_skb: 7 callbacks suppressed
	[ +21.336193] kauditd_printk_skb: 28 callbacks suppressed
	[May 1 02:11] kauditd_printk_skb: 24 callbacks suppressed
	[  +5.585570] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.429410] kauditd_printk_skb: 25 callbacks suppressed
	[  +5.103066] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.146607] kauditd_printk_skb: 20 callbacks suppressed
	[  +6.879216] kauditd_printk_skb: 71 callbacks suppressed
	[May 1 02:12] kauditd_printk_skb: 3 callbacks suppressed
	[  +6.401210] kauditd_printk_skb: 23 callbacks suppressed
	[  +7.766567] kauditd_printk_skb: 53 callbacks suppressed
	[  +5.041797] kauditd_printk_skb: 21 callbacks suppressed
	[May 1 02:14] kauditd_printk_skb: 8 callbacks suppressed
	[  +5.768945] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [f5d66ed0ede7ea6abf6b73f76e9bd96372ad218e49de932b1f7d31ddf968ae30] <==
	{"level":"info","ts":"2024-05-01T02:10:00.667256Z","caller":"traceutil/trace.go:171","msg":"trace[1763656184] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:982; }","duration":"138.376765ms","start":"2024-05-01T02:10:00.528866Z","end":"2024-05-01T02:10:00.667243Z","steps":["trace[1763656184] 'range keys from in-memory index tree'  (duration: 138.080651ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-01T02:10:18.654654Z","caller":"traceutil/trace.go:171","msg":"trace[708212347] transaction","detail":"{read_only:false; response_revision:1090; number_of_response:1; }","duration":"428.323682ms","start":"2024-05-01T02:10:18.226294Z","end":"2024-05-01T02:10:18.654618Z","steps":["trace[708212347] 'process raft request'  (duration: 428.203604ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-01T02:10:18.654923Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-01T02:10:18.226274Z","time spent":"428.519597ms","remote":"127.0.0.1:53582","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":764,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/gadget/gadget-2kks5.17cb3b6c022d329e\" mod_revision:0 > success:<request_put:<key:\"/registry/events/gadget/gadget-2kks5.17cb3b6c022d329e\" value_size:693 lease:2165825915339397124 >> failure:<>"}
	{"level":"info","ts":"2024-05-01T02:10:18.75593Z","caller":"traceutil/trace.go:171","msg":"trace[229671378] transaction","detail":"{read_only:false; response_revision:1091; number_of_response:1; }","duration":"510.596733ms","start":"2024-05-01T02:10:18.245317Z","end":"2024-05-01T02:10:18.755913Z","steps":["trace[229671378] 'process raft request'  (duration: 509.963049ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-01T02:10:18.75607Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-01T02:10:18.245298Z","time spent":"510.71107ms","remote":"127.0.0.1:53680","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":11080,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/gadget/gadget-2kks5\" mod_revision:1070 > success:<request_put:<key:\"/registry/pods/gadget/gadget-2kks5\" value_size:11038 >> failure:<request_range:<key:\"/registry/pods/gadget/gadget-2kks5\" > >"}
	{"level":"warn","ts":"2024-05-01T02:10:18.75611Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"415.398582ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14363"}
	{"level":"info","ts":"2024-05-01T02:10:18.756141Z","caller":"traceutil/trace.go:171","msg":"trace[1491624114] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1092; }","duration":"415.458887ms","start":"2024-05-01T02:10:18.340676Z","end":"2024-05-01T02:10:18.756135Z","steps":["trace[1491624114] 'agreement among raft nodes before linearized reading'  (duration: 415.294191ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-01T02:10:18.75616Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-01T02:10:18.340662Z","time spent":"415.492252ms","remote":"127.0.0.1:53680","response type":"/etcdserverpb.KV/Range","request count":0,"request size":62,"response count":3,"response size":14387,"request content":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" "}
	{"level":"info","ts":"2024-05-01T02:10:18.755929Z","caller":"traceutil/trace.go:171","msg":"trace[305672440] linearizableReadLoop","detail":"{readStateIndex:1127; appliedIndex:1126; }","duration":"415.217462ms","start":"2024-05-01T02:10:18.3407Z","end":"2024-05-01T02:10:18.755918Z","steps":["trace[305672440] 'read index received'  (duration: 314.533058ms)","trace[305672440] 'applied index is now lower than readState.Index'  (duration: 100.682167ms)"],"step_count":2}
	{"level":"warn","ts":"2024-05-01T02:10:18.756293Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"415.50259ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11453"}
	{"level":"info","ts":"2024-05-01T02:10:18.75631Z","caller":"traceutil/trace.go:171","msg":"trace[1404787745] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1092; }","duration":"415.540287ms","start":"2024-05-01T02:10:18.340764Z","end":"2024-05-01T02:10:18.756305Z","steps":["trace[1404787745] 'agreement among raft nodes before linearized reading'  (duration: 415.463297ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-01T02:10:18.756325Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-01T02:10:18.340727Z","time spent":"415.594813ms","remote":"127.0.0.1:53680","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":3,"response size":11477,"request content":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" "}
	{"level":"info","ts":"2024-05-01T02:10:18.756468Z","caller":"traceutil/trace.go:171","msg":"trace[1388601756] transaction","detail":"{read_only:false; response_revision:1092; number_of_response:1; }","duration":"169.339832ms","start":"2024-05-01T02:10:18.587119Z","end":"2024-05-01T02:10:18.756458Z","steps":["trace[1388601756] 'process raft request'  (duration: 168.729753ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-01T02:10:18.756666Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"235.014365ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:85556"}
	{"level":"info","ts":"2024-05-01T02:10:18.756686Z","caller":"traceutil/trace.go:171","msg":"trace[2085779939] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1092; }","duration":"235.056712ms","start":"2024-05-01T02:10:18.521624Z","end":"2024-05-01T02:10:18.756681Z","steps":["trace[2085779939] 'agreement among raft nodes before linearized reading'  (duration: 234.888839ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-01T02:10:18.75678Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"378.100245ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2024-05-01T02:10:18.756802Z","caller":"traceutil/trace.go:171","msg":"trace[528246052] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1092; }","duration":"378.147187ms","start":"2024-05-01T02:10:18.378649Z","end":"2024-05-01T02:10:18.756796Z","steps":["trace[528246052] 'agreement among raft nodes before linearized reading'  (duration: 378.077291ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-01T02:10:18.756824Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-01T02:10:18.378635Z","time spent":"378.185809ms","remote":"127.0.0.1:53768","response type":"/etcdserverpb.KV/Range","request count":0,"request size":57,"response count":1,"response size":523,"request content":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" "}
	{"level":"warn","ts":"2024-05-01T02:11:34.265514Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"162.139762ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/podtemplates/\" range_end:\"/registry/podtemplates0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-05-01T02:11:34.265602Z","caller":"traceutil/trace.go:171","msg":"trace[46202076] range","detail":"{range_begin:/registry/podtemplates/; range_end:/registry/podtemplates0; response_count:0; response_revision:1338; }","duration":"162.252899ms","start":"2024-05-01T02:11:34.103335Z","end":"2024-05-01T02:11:34.265587Z","steps":["trace[46202076] 'count revisions from in-memory index tree'  (duration: 161.969468ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-01T02:11:34.265553Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"203.679123ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-05-01T02:11:34.265658Z","caller":"traceutil/trace.go:171","msg":"trace[330583683] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1338; }","duration":"203.841499ms","start":"2024-05-01T02:11:34.061807Z","end":"2024-05-01T02:11:34.265648Z","steps":["trace[330583683] 'range keys from in-memory index tree'  (duration: 203.66836ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-01T02:11:40.250535Z","caller":"traceutil/trace.go:171","msg":"trace[1595426452] transaction","detail":"{read_only:false; response_revision:1361; number_of_response:1; }","duration":"135.472331ms","start":"2024-05-01T02:11:40.115041Z","end":"2024-05-01T02:11:40.250513Z","steps":["trace[1595426452] 'process raft request'  (duration: 134.905036ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-01T02:12:22.340283Z","caller":"traceutil/trace.go:171","msg":"trace[1906510856] transaction","detail":"{read_only:false; response_revision:1741; number_of_response:1; }","duration":"425.738233ms","start":"2024-05-01T02:12:21.913606Z","end":"2024-05-01T02:12:22.339344Z","steps":["trace[1906510856] 'process raft request'  (duration: 424.340141ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-01T02:12:22.341499Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-01T02:12:21.913588Z","time spent":"427.204429ms","remote":"127.0.0.1:53676","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1731 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
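
The repeated "apply request took too long" warnings and multi-hundred-millisecond raft traces in the etcd log above point at I/O and CPU pressure on the test VM rather than at any single addon. A simple way to pull just those slow-apply warnings from this run, assuming the static-pod name shown in this report, is:

    kubectl --context addons-286595 -n kube-system logs etcd-addons-286595 | grep "apply request took too long"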
	
	
	==> gcp-auth [10db6a0b55c4b872bf2919f3b05779544eabcca22bf61fbb6744de0ab2d8afb5] <==
	2024/05/01 02:11:23 GCP Auth Webhook started!
	2024/05/01 02:11:24 Ready to marshal response ...
	2024/05/01 02:11:24 Ready to write response ...
	2024/05/01 02:11:24 Ready to marshal response ...
	2024/05/01 02:11:24 Ready to write response ...
	2024/05/01 02:11:26 Ready to marshal response ...
	2024/05/01 02:11:26 Ready to write response ...
	2024/05/01 02:11:35 Ready to marshal response ...
	2024/05/01 02:11:35 Ready to write response ...
	2024/05/01 02:11:40 Ready to marshal response ...
	2024/05/01 02:11:40 Ready to write response ...
	2024/05/01 02:11:44 Ready to marshal response ...
	2024/05/01 02:11:44 Ready to write response ...
	2024/05/01 02:12:02 Ready to marshal response ...
	2024/05/01 02:12:02 Ready to write response ...
	2024/05/01 02:12:02 Ready to marshal response ...
	2024/05/01 02:12:02 Ready to write response ...
	2024/05/01 02:12:16 Ready to marshal response ...
	2024/05/01 02:12:16 Ready to write response ...
	2024/05/01 02:12:16 Ready to marshal response ...
	2024/05/01 02:12:16 Ready to write response ...
	2024/05/01 02:12:16 Ready to marshal response ...
	2024/05/01 02:12:16 Ready to write response ...
	2024/05/01 02:14:05 Ready to marshal response ...
	2024/05/01 02:14:05 Ready to write response ...
	
	
	==> kernel <==
	 02:14:16 up 6 min,  0 users,  load average: 0.48, 1.01, 0.59
	Linux addons-286595 5.10.207 #1 SMP Tue Apr 30 22:38:43 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [976be39bc269736268dbe23a871c448f5827e29fde81ff90e0159d69f9af5bd2] <==
	E0501 02:10:50.075819       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.100.129:443/apis/metrics.k8s.io/v1beta1: Get "https://10.99.100.129:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.99.100.129:443: connect: connection refused
	E0501 02:10:50.077983       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.100.129:443/apis/metrics.k8s.io/v1beta1: Get "https://10.99.100.129:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.99.100.129:443: connect: connection refused
	E0501 02:10:50.082957       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.100.129:443/apis/metrics.k8s.io/v1beta1: Get "https://10.99.100.129:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.99.100.129:443: connect: connection refused
	I0501 02:10:50.197599       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0501 02:11:42.839672       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0501 02:11:43.854833       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0501 02:11:44.085132       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.104.170.104"}
	I0501 02:11:46.578526       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0501 02:11:47.618719       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	E0501 02:12:07.258988       1 upgradeaware.go:427] Error proxying data from client to backend: read tcp 192.168.39.173:8443->10.244.0.30:36490: read: connection reset by peer
	I0501 02:12:16.601233       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.104.104.63"}
	I0501 02:12:18.783704       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0501 02:12:18.783754       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0501 02:12:18.812604       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0501 02:12:18.812671       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0501 02:12:18.872882       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0501 02:12:18.872914       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0501 02:12:18.906782       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0501 02:12:18.906876       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0501 02:12:18.995006       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0501 02:12:18.995035       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0501 02:12:19.906915       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0501 02:12:19.996042       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0501 02:12:20.003716       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0501 02:14:05.655996       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.108.167.24"}
	
	
	==> kube-controller-manager [ff3c851c7688d3c9fbb0d390c99ba4b9407c06fff923031bc3115f0c17f49cac] <==
	E0501 02:12:40.928436       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0501 02:12:54.380475       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0501 02:12:54.380517       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0501 02:12:56.387773       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0501 02:12:56.387829       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0501 02:13:03.816647       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0501 02:13:03.816842       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0501 02:13:06.645719       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0501 02:13:06.645751       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0501 02:13:32.109169       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0501 02:13:32.109197       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0501 02:13:32.837937       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0501 02:13:32.837995       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0501 02:13:33.397529       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0501 02:13:33.397585       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0501 02:13:50.323349       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0501 02:13:50.323536       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0501 02:14:05.490226       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="44.112301ms"
	I0501 02:14:05.510715       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="20.458293ms"
	I0501 02:14:05.510818       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="30.48µs"
	I0501 02:14:08.276834       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create"
	I0501 02:14:08.289746       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
	I0501 02:14:08.300165       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-768f948f8f" duration="7.233µs"
	I0501 02:14:10.367840       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="21.863844ms"
	I0501 02:14:10.368585       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="39.634µs"
	
	
	==> kube-proxy [e3cfa2da63bbf5b5bf434bebf921cd1711d24a75e5e358306e59c34caf06382f] <==
	I0501 02:08:55.440577       1 server_linux.go:69] "Using iptables proxy"
	I0501 02:08:55.468633       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.173"]
	I0501 02:08:55.591758       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0501 02:08:55.591795       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0501 02:08:55.591817       1 server_linux.go:165] "Using iptables Proxier"
	I0501 02:08:55.601494       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0501 02:08:55.601722       1 server.go:872] "Version info" version="v1.30.0"
	I0501 02:08:55.601733       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 02:08:55.602732       1 config.go:192] "Starting service config controller"
	I0501 02:08:55.602776       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0501 02:08:55.602800       1 config.go:101] "Starting endpoint slice config controller"
	I0501 02:08:55.602803       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0501 02:08:55.603241       1 config.go:319] "Starting node config controller"
	I0501 02:08:55.603279       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0501 02:08:55.703149       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0501 02:08:55.703190       1 shared_informer.go:320] Caches are synced for service config
	I0501 02:08:55.703455       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [f2a049b4c17d6d072b9097aa0071b82d6d4edc2a255d26f724807d4ac369f9c2] <==
	W0501 02:08:36.058229       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0501 02:08:36.062463       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0501 02:08:37.002461       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0501 02:08:37.002515       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0501 02:08:37.118477       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0501 02:08:37.118529       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0501 02:08:37.136128       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0501 02:08:37.136190       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0501 02:08:37.194929       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0501 02:08:37.195453       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0501 02:08:37.214862       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0501 02:08:37.214917       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0501 02:08:37.219246       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0501 02:08:37.219331       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0501 02:08:37.244163       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0501 02:08:37.244225       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0501 02:08:37.337586       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0501 02:08:37.337683       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0501 02:08:37.341063       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0501 02:08:37.341149       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0501 02:08:37.371774       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0501 02:08:37.371831       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0501 02:08:37.584546       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0501 02:08:37.584878       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0501 02:08:39.348743       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 01 02:14:05 addons-286595 kubelet[1272]: I0501 02:14:05.485595    1272 memory_manager.go:354] "RemoveStaleState removing state" podUID="e70de7d8-e03e-4147-b633-6fec7dbe1e88" containerName="volume-snapshot-controller"
	May 01 02:14:05 addons-286595 kubelet[1272]: I0501 02:14:05.610719    1272 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/0125296c-388d-4687-96e0-fa6da417e535-gcp-creds\") pod \"hello-world-app-86c47465fc-jtwrv\" (UID: \"0125296c-388d-4687-96e0-fa6da417e535\") " pod="default/hello-world-app-86c47465fc-jtwrv"
	May 01 02:14:05 addons-286595 kubelet[1272]: I0501 02:14:05.610765    1272 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58h28\" (UniqueName: \"kubernetes.io/projected/0125296c-388d-4687-96e0-fa6da417e535-kube-api-access-58h28\") pod \"hello-world-app-86c47465fc-jtwrv\" (UID: \"0125296c-388d-4687-96e0-fa6da417e535\") " pod="default/hello-world-app-86c47465fc-jtwrv"
	May 01 02:14:06 addons-286595 kubelet[1272]: I0501 02:14:06.823134    1272 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hpl9q\" (UniqueName: \"kubernetes.io/projected/2c0204aa-5d9f-4c78-a423-4378e147abf4-kube-api-access-hpl9q\") pod \"2c0204aa-5d9f-4c78-a423-4378e147abf4\" (UID: \"2c0204aa-5d9f-4c78-a423-4378e147abf4\") "
	May 01 02:14:06 addons-286595 kubelet[1272]: I0501 02:14:06.826922    1272 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c0204aa-5d9f-4c78-a423-4378e147abf4-kube-api-access-hpl9q" (OuterVolumeSpecName: "kube-api-access-hpl9q") pod "2c0204aa-5d9f-4c78-a423-4378e147abf4" (UID: "2c0204aa-5d9f-4c78-a423-4378e147abf4"). InnerVolumeSpecName "kube-api-access-hpl9q". PluginName "kubernetes.io/projected", VolumeGidValue ""
	May 01 02:14:06 addons-286595 kubelet[1272]: I0501 02:14:06.924204    1272 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-hpl9q\" (UniqueName: \"kubernetes.io/projected/2c0204aa-5d9f-4c78-a423-4378e147abf4-kube-api-access-hpl9q\") on node \"addons-286595\" DevicePath \"\""
	May 01 02:14:07 addons-286595 kubelet[1272]: I0501 02:14:07.264699    1272 scope.go:117] "RemoveContainer" containerID="d508c00526c092bcf28b27a52276a8677df6c8e9c57478977d630377d6db4627"
	May 01 02:14:07 addons-286595 kubelet[1272]: I0501 02:14:07.312479    1272 scope.go:117] "RemoveContainer" containerID="d508c00526c092bcf28b27a52276a8677df6c8e9c57478977d630377d6db4627"
	May 01 02:14:07 addons-286595 kubelet[1272]: E0501 02:14:07.313142    1272 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d508c00526c092bcf28b27a52276a8677df6c8e9c57478977d630377d6db4627\": container with ID starting with d508c00526c092bcf28b27a52276a8677df6c8e9c57478977d630377d6db4627 not found: ID does not exist" containerID="d508c00526c092bcf28b27a52276a8677df6c8e9c57478977d630377d6db4627"
	May 01 02:14:07 addons-286595 kubelet[1272]: I0501 02:14:07.313327    1272 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d508c00526c092bcf28b27a52276a8677df6c8e9c57478977d630377d6db4627"} err="failed to get container status \"d508c00526c092bcf28b27a52276a8677df6c8e9c57478977d630377d6db4627\": rpc error: code = NotFound desc = could not find container \"d508c00526c092bcf28b27a52276a8677df6c8e9c57478977d630377d6db4627\": container with ID starting with d508c00526c092bcf28b27a52276a8677df6c8e9c57478977d630377d6db4627 not found: ID does not exist"
	May 01 02:14:08 addons-286595 kubelet[1272]: I0501 02:14:08.502012    1272 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c0204aa-5d9f-4c78-a423-4378e147abf4" path="/var/lib/kubelet/pods/2c0204aa-5d9f-4c78-a423-4378e147abf4/volumes"
	May 01 02:14:08 addons-286595 kubelet[1272]: I0501 02:14:08.502473    1272 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a2a7cc6-b7d8-4cf0-87e7-42eb1da63615" path="/var/lib/kubelet/pods/3a2a7cc6-b7d8-4cf0-87e7-42eb1da63615/volumes"
	May 01 02:14:08 addons-286595 kubelet[1272]: I0501 02:14:08.502862    1272 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8531ca67-8098-4ba6-8ddb-d8b5132bcd02" path="/var/lib/kubelet/pods/8531ca67-8098-4ba6-8ddb-d8b5132bcd02/volumes"
	May 01 02:14:10 addons-286595 kubelet[1272]: I0501 02:14:10.346923    1272 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-86c47465fc-jtwrv" podStartSLOduration=1.863701096 podStartE2EDuration="5.346891827s" podCreationTimestamp="2024-05-01 02:14:05 +0000 UTC" firstStartedPulling="2024-05-01 02:14:06.096336666 +0000 UTC m=+327.733614489" lastFinishedPulling="2024-05-01 02:14:09.579527396 +0000 UTC m=+331.216805220" observedRunningTime="2024-05-01 02:14:10.343789378 +0000 UTC m=+331.981067221" watchObservedRunningTime="2024-05-01 02:14:10.346891827 +0000 UTC m=+331.984169665"
	May 01 02:14:11 addons-286595 kubelet[1272]: I0501 02:14:11.668535    1272 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7fcf68af-3746-4e04-b99d-d825463f5d3b-webhook-cert\") pod \"7fcf68af-3746-4e04-b99d-d825463f5d3b\" (UID: \"7fcf68af-3746-4e04-b99d-d825463f5d3b\") "
	May 01 02:14:11 addons-286595 kubelet[1272]: I0501 02:14:11.668582    1272 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tnq28\" (UniqueName: \"kubernetes.io/projected/7fcf68af-3746-4e04-b99d-d825463f5d3b-kube-api-access-tnq28\") pod \"7fcf68af-3746-4e04-b99d-d825463f5d3b\" (UID: \"7fcf68af-3746-4e04-b99d-d825463f5d3b\") "
	May 01 02:14:11 addons-286595 kubelet[1272]: I0501 02:14:11.674066    1272 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fcf68af-3746-4e04-b99d-d825463f5d3b-kube-api-access-tnq28" (OuterVolumeSpecName: "kube-api-access-tnq28") pod "7fcf68af-3746-4e04-b99d-d825463f5d3b" (UID: "7fcf68af-3746-4e04-b99d-d825463f5d3b"). InnerVolumeSpecName "kube-api-access-tnq28". PluginName "kubernetes.io/projected", VolumeGidValue ""
	May 01 02:14:11 addons-286595 kubelet[1272]: I0501 02:14:11.674323    1272 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcf68af-3746-4e04-b99d-d825463f5d3b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "7fcf68af-3746-4e04-b99d-d825463f5d3b" (UID: "7fcf68af-3746-4e04-b99d-d825463f5d3b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	May 01 02:14:11 addons-286595 kubelet[1272]: I0501 02:14:11.769525    1272 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-tnq28\" (UniqueName: \"kubernetes.io/projected/7fcf68af-3746-4e04-b99d-d825463f5d3b-kube-api-access-tnq28\") on node \"addons-286595\" DevicePath \"\""
	May 01 02:14:11 addons-286595 kubelet[1272]: I0501 02:14:11.769580    1272 reconciler_common.go:289] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7fcf68af-3746-4e04-b99d-d825463f5d3b-webhook-cert\") on node \"addons-286595\" DevicePath \"\""
	May 01 02:14:12 addons-286595 kubelet[1272]: I0501 02:14:12.336254    1272 scope.go:117] "RemoveContainer" containerID="97e736f339ff91cd971e7360196d6866e467f58194fcdebf17fb4b91edc79332"
	May 01 02:14:12 addons-286595 kubelet[1272]: I0501 02:14:12.361781    1272 scope.go:117] "RemoveContainer" containerID="97e736f339ff91cd971e7360196d6866e467f58194fcdebf17fb4b91edc79332"
	May 01 02:14:12 addons-286595 kubelet[1272]: E0501 02:14:12.362477    1272 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97e736f339ff91cd971e7360196d6866e467f58194fcdebf17fb4b91edc79332\": container with ID starting with 97e736f339ff91cd971e7360196d6866e467f58194fcdebf17fb4b91edc79332 not found: ID does not exist" containerID="97e736f339ff91cd971e7360196d6866e467f58194fcdebf17fb4b91edc79332"
	May 01 02:14:12 addons-286595 kubelet[1272]: I0501 02:14:12.362523    1272 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97e736f339ff91cd971e7360196d6866e467f58194fcdebf17fb4b91edc79332"} err="failed to get container status \"97e736f339ff91cd971e7360196d6866e467f58194fcdebf17fb4b91edc79332\": rpc error: code = NotFound desc = could not find container \"97e736f339ff91cd971e7360196d6866e467f58194fcdebf17fb4b91edc79332\": container with ID starting with 97e736f339ff91cd971e7360196d6866e467f58194fcdebf17fb4b91edc79332 not found: ID does not exist"
	May 01 02:14:12 addons-286595 kubelet[1272]: I0501 02:14:12.500950    1272 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fcf68af-3746-4e04-b99d-d825463f5d3b" path="/var/lib/kubelet/pods/7fcf68af-3746-4e04-b99d-d825463f5d3b/volumes"
	
	
	==> storage-provisioner [ea17f2d9434251df9401981536acddc1f90957bd5e65bc3d10cd23f2258cecbc] <==
	I0501 02:09:00.917737       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0501 02:09:00.928035       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0501 02:09:00.928072       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0501 02:09:00.943117       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0501 02:09:00.943339       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-286595_5d8efd10-74c8-4326-b5c8-ec5c064e6fc1!
	I0501 02:09:00.944530       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7ba41ed1-cb3f-4e11-b6c3-df3b8bded704", APIVersion:"v1", ResourceVersion:"625", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-286595_5d8efd10-74c8-4326-b5c8-ec5c064e6fc1 became leader
	I0501 02:09:01.043748       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-286595_5d8efd10-74c8-4326-b5c8-ec5c064e6fc1!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-286595 -n addons-286595
helpers_test.go:261: (dbg) Run:  kubectl --context addons-286595 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (153.97s)
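
Not part of the recorded run, but a hedged triage sketch for this kind of ingress timeout against the same cluster. The namespace and deployment names are taken from the kube-controller-manager log above; the Ingress object's name is not shown in this report, so the last command simply lists all Ingresses:

    kubectl --context addons-286595 -n ingress-nginx get pods
    kubectl --context addons-286595 -n ingress-nginx logs deploy/ingress-nginx-controller
    kubectl --context addons-286595 get ingress -A

If the controller pod is missing or its access log shows no matching request, the failure happened before traffic ever reached the backend pod.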

                                                
                                    
TestAddons/parallel/MetricsServer (335.44s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 22.997374ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-gvcdl" [9385fe21-53b5-4105-bb14-3008fcd7dc3a] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.006063181s
addons_test.go:415: (dbg) Run:  kubectl --context addons-286595 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-286595 top pods -n kube-system: exit status 1 (96.703738ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-rlvmm, age: 2m38.544452774s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-286595 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-286595 top pods -n kube-system: exit status 1 (67.449253ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-rlvmm, age: 2m43.107454591s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-286595 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-286595 top pods -n kube-system: exit status 1 (72.179765ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-rlvmm, age: 2m49.180523766s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-286595 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-286595 top pods -n kube-system: exit status 1 (91.314859ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-rlvmm, age: 2m54.81746624s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-286595 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-286595 top pods -n kube-system: exit status 1 (66.2513ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-rlvmm, age: 3m1.819273138s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-286595 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-286595 top pods -n kube-system: exit status 1 (71.278504ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-rlvmm, age: 3m23.411934604s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-286595 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-286595 top pods -n kube-system: exit status 1 (63.107274ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-rlvmm, age: 3m36.074791759s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-286595 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-286595 top pods -n kube-system: exit status 1 (64.960748ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-rlvmm, age: 4m12.134557907s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-286595 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-286595 top pods -n kube-system: exit status 1 (64.305104ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-rlvmm, age: 5m11.208377624s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-286595 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-286595 top pods -n kube-system: exit status 1 (66.039763ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-rlvmm, age: 6m7.255581167s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-286595 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-286595 top pods -n kube-system: exit status 1 (72.640926ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-rlvmm, age: 7m23.695358188s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-286595 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-286595 top pods -n kube-system: exit status 1 (65.283805ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-rlvmm, age: 8m4.730021788s

                                                
                                                
** /stderr **
addons_test.go:429: failed checking metric server: exit status 1
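
Not part of the recorded run, but a minimal debugging sketch for this failure mode: when kubectl top keeps returning "Metrics not available", a first step is to check whether the aggregated metrics API is registered and actually serving. The context name matches this run; the deployment name metrics-server is assumed from the pod name logged above:

    kubectl --context addons-286595 get apiservice v1beta1.metrics.k8s.io
    kubectl --context addons-286595 get --raw /apis/metrics.k8s.io/v1beta1/nodes
    kubectl --context addons-286595 -n kube-system logs deploy/metrics-server

An APIService stuck at Available=False points at the metrics-server service or pod rather than at kubectl or the test harness.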
addons_test.go:432: (dbg) Run:  out/minikube-linux-amd64 -p addons-286595 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-286595 -n addons-286595
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-286595 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-286595 logs -n 25: (1.63702412s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube             | jenkins | v1.33.0 | 01 May 24 02:07 UTC | 01 May 24 02:07 UTC |
	| delete  | -p download-only-686563                                                                     | download-only-686563 | jenkins | v1.33.0 | 01 May 24 02:07 UTC | 01 May 24 02:07 UTC |
	| delete  | -p download-only-099811                                                                     | download-only-099811 | jenkins | v1.33.0 | 01 May 24 02:07 UTC | 01 May 24 02:07 UTC |
	| delete  | -p download-only-686563                                                                     | download-only-686563 | jenkins | v1.33.0 | 01 May 24 02:07 UTC | 01 May 24 02:07 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-940490 | jenkins | v1.33.0 | 01 May 24 02:07 UTC |                     |
	|         | binary-mirror-940490                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:33553                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-940490                                                                     | binary-mirror-940490 | jenkins | v1.33.0 | 01 May 24 02:07 UTC | 01 May 24 02:07 UTC |
	| addons  | disable dashboard -p                                                                        | addons-286595        | jenkins | v1.33.0 | 01 May 24 02:07 UTC |                     |
	|         | addons-286595                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-286595        | jenkins | v1.33.0 | 01 May 24 02:07 UTC |                     |
	|         | addons-286595                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-286595 --wait=true                                                                | addons-286595        | jenkins | v1.33.0 | 01 May 24 02:07 UTC | 01 May 24 02:11 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --driver=kvm2                                                                 |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                                   |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-286595 ssh cat                                                                       | addons-286595        | jenkins | v1.33.0 | 01 May 24 02:11 UTC | 01 May 24 02:11 UTC |
	|         | /opt/local-path-provisioner/pvc-e2a3e7ab-0856-4130-bea1-c8089bb4ffec_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-286595 addons disable                                                                | addons-286595        | jenkins | v1.33.0 | 01 May 24 02:11 UTC |                     |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-286595 ip                                                                            | addons-286595        | jenkins | v1.33.0 | 01 May 24 02:11 UTC | 01 May 24 02:11 UTC |
	| addons  | addons-286595 addons disable                                                                | addons-286595        | jenkins | v1.33.0 | 01 May 24 02:11 UTC | 01 May 24 02:11 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-286595        | jenkins | v1.33.0 | 01 May 24 02:11 UTC | 01 May 24 02:11 UTC |
	|         | addons-286595                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-286595 ssh curl -s                                                                   | addons-286595        | jenkins | v1.33.0 | 01 May 24 02:11 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-286595        | jenkins | v1.33.0 | 01 May 24 02:11 UTC | 01 May 24 02:11 UTC |
	|         | addons-286595                                                                               |                      |         |         |                     |                     |
	| addons  | addons-286595 addons disable                                                                | addons-286595        | jenkins | v1.33.0 | 01 May 24 02:12 UTC | 01 May 24 02:12 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-286595 addons                                                                        | addons-286595        | jenkins | v1.33.0 | 01 May 24 02:12 UTC | 01 May 24 02:12 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-286595        | jenkins | v1.33.0 | 01 May 24 02:12 UTC | 01 May 24 02:12 UTC |
	|         | -p addons-286595                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-286595 addons                                                                        | addons-286595        | jenkins | v1.33.0 | 01 May 24 02:12 UTC | 01 May 24 02:12 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-286595        | jenkins | v1.33.0 | 01 May 24 02:12 UTC | 01 May 24 02:12 UTC |
	|         | -p addons-286595                                                                            |                      |         |         |                     |                     |
	| ip      | addons-286595 ip                                                                            | addons-286595        | jenkins | v1.33.0 | 01 May 24 02:14 UTC | 01 May 24 02:14 UTC |
	| addons  | addons-286595 addons disable                                                                | addons-286595        | jenkins | v1.33.0 | 01 May 24 02:14 UTC | 01 May 24 02:14 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-286595 addons disable                                                                | addons-286595        | jenkins | v1.33.0 | 01 May 24 02:14 UTC | 01 May 24 02:14 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-286595 addons                                                                        | addons-286595        | jenkins | v1.33.0 | 01 May 24 02:16 UTC | 01 May 24 02:16 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/01 02:07:55
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0501 02:07:55.315587   21421 out.go:291] Setting OutFile to fd 1 ...
	I0501 02:07:55.315833   21421 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:07:55.315843   21421 out.go:304] Setting ErrFile to fd 2...
	I0501 02:07:55.315848   21421 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:07:55.316039   21421 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18779-13391/.minikube/bin
	I0501 02:07:55.316668   21421 out.go:298] Setting JSON to false
	I0501 02:07:55.317527   21421 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3018,"bootTime":1714526257,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0501 02:07:55.317589   21421 start.go:139] virtualization: kvm guest
	I0501 02:07:55.319630   21421 out.go:177] * [addons-286595] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0501 02:07:55.320952   21421 out.go:177]   - MINIKUBE_LOCATION=18779
	I0501 02:07:55.320988   21421 notify.go:220] Checking for updates...
	I0501 02:07:55.322264   21421 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0501 02:07:55.323862   21421 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18779-13391/kubeconfig
	I0501 02:07:55.325233   21421 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18779-13391/.minikube
	I0501 02:07:55.326613   21421 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0501 02:07:55.327920   21421 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0501 02:07:55.329277   21421 driver.go:392] Setting default libvirt URI to qemu:///system
	I0501 02:07:55.361108   21421 out.go:177] * Using the kvm2 driver based on user configuration
	I0501 02:07:55.362520   21421 start.go:297] selected driver: kvm2
	I0501 02:07:55.362541   21421 start.go:901] validating driver "kvm2" against <nil>
	I0501 02:07:55.362554   21421 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0501 02:07:55.363265   21421 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0501 02:07:55.363341   21421 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18779-13391/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0501 02:07:55.378569   21421 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0501 02:07:55.378651   21421 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0501 02:07:55.378911   21421 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0501 02:07:55.378971   21421 cni.go:84] Creating CNI manager for ""
	I0501 02:07:55.378990   21421 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0501 02:07:55.378998   21421 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0501 02:07:55.379064   21421 start.go:340] cluster config:
	{Name:addons-286595 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-286595 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 02:07:55.379155   21421 iso.go:125] acquiring lock: {Name:mkcd2496aadb29931e179193d707c731bfda98b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0501 02:07:55.381034   21421 out.go:177] * Starting "addons-286595" primary control-plane node in "addons-286595" cluster
	I0501 02:07:55.382342   21421 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0501 02:07:55.382389   21421 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0501 02:07:55.382424   21421 cache.go:56] Caching tarball of preloaded images
	I0501 02:07:55.382538   21421 preload.go:173] Found /home/jenkins/minikube-integration/18779-13391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0501 02:07:55.382551   21421 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0501 02:07:55.382853   21421 profile.go:143] Saving config to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/config.json ...
	I0501 02:07:55.382875   21421 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/config.json: {Name:mk5c1f83b71f5f2c1ef1b19fc5b8782100690a28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:07:55.383027   21421 start.go:360] acquireMachinesLock for addons-286595: {Name:mkc4548e5afe1d4b0a833f6c522103562b0cefff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0501 02:07:55.383077   21421 start.go:364] duration metric: took 35.997µs to acquireMachinesLock for "addons-286595"
	I0501 02:07:55.383098   21421 start.go:93] Provisioning new machine with config: &{Name:addons-286595 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-286595 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0501 02:07:55.383157   21421 start.go:125] createHost starting for "" (driver="kvm2")
	I0501 02:07:55.384926   21421 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0501 02:07:55.385071   21421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:07:55.385111   21421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:07:55.399944   21421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38141
	I0501 02:07:55.400401   21421 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:07:55.400917   21421 main.go:141] libmachine: Using API Version  1
	I0501 02:07:55.400945   21421 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:07:55.401337   21421 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:07:55.401519   21421 main.go:141] libmachine: (addons-286595) Calling .GetMachineName
	I0501 02:07:55.401671   21421 main.go:141] libmachine: (addons-286595) Calling .DriverName
	I0501 02:07:55.401878   21421 start.go:159] libmachine.API.Create for "addons-286595" (driver="kvm2")
	I0501 02:07:55.401929   21421 client.go:168] LocalClient.Create starting
	I0501 02:07:55.401972   21421 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem
	I0501 02:07:55.469501   21421 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem
	I0501 02:07:55.584020   21421 main.go:141] libmachine: Running pre-create checks...
	I0501 02:07:55.584044   21421 main.go:141] libmachine: (addons-286595) Calling .PreCreateCheck
	I0501 02:07:55.584546   21421 main.go:141] libmachine: (addons-286595) Calling .GetConfigRaw
	I0501 02:07:55.584947   21421 main.go:141] libmachine: Creating machine...
	I0501 02:07:55.584961   21421 main.go:141] libmachine: (addons-286595) Calling .Create
	I0501 02:07:55.585110   21421 main.go:141] libmachine: (addons-286595) Creating KVM machine...
	I0501 02:07:55.586374   21421 main.go:141] libmachine: (addons-286595) DBG | found existing default KVM network
	I0501 02:07:55.587079   21421 main.go:141] libmachine: (addons-286595) DBG | I0501 02:07:55.586933   21443 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ad0}
	I0501 02:07:55.587105   21421 main.go:141] libmachine: (addons-286595) DBG | created network xml: 
	I0501 02:07:55.587118   21421 main.go:141] libmachine: (addons-286595) DBG | <network>
	I0501 02:07:55.587138   21421 main.go:141] libmachine: (addons-286595) DBG |   <name>mk-addons-286595</name>
	I0501 02:07:55.587176   21421 main.go:141] libmachine: (addons-286595) DBG |   <dns enable='no'/>
	I0501 02:07:55.587203   21421 main.go:141] libmachine: (addons-286595) DBG |   
	I0501 02:07:55.587215   21421 main.go:141] libmachine: (addons-286595) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0501 02:07:55.587226   21421 main.go:141] libmachine: (addons-286595) DBG |     <dhcp>
	I0501 02:07:55.587240   21421 main.go:141] libmachine: (addons-286595) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0501 02:07:55.587251   21421 main.go:141] libmachine: (addons-286595) DBG |     </dhcp>
	I0501 02:07:55.587262   21421 main.go:141] libmachine: (addons-286595) DBG |   </ip>
	I0501 02:07:55.587270   21421 main.go:141] libmachine: (addons-286595) DBG |   
	I0501 02:07:55.587275   21421 main.go:141] libmachine: (addons-286595) DBG | </network>
	I0501 02:07:55.587282   21421 main.go:141] libmachine: (addons-286595) DBG | 
	I0501 02:07:55.592433   21421 main.go:141] libmachine: (addons-286595) DBG | trying to create private KVM network mk-addons-286595 192.168.39.0/24...
	I0501 02:07:55.656406   21421 main.go:141] libmachine: (addons-286595) DBG | private KVM network mk-addons-286595 192.168.39.0/24 created
	I0501 02:07:55.656442   21421 main.go:141] libmachine: (addons-286595) DBG | I0501 02:07:55.656374   21443 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18779-13391/.minikube
	I0501 02:07:55.656456   21421 main.go:141] libmachine: (addons-286595) Setting up store path in /home/jenkins/minikube-integration/18779-13391/.minikube/machines/addons-286595 ...
	I0501 02:07:55.656472   21421 main.go:141] libmachine: (addons-286595) Building disk image from file:///home/jenkins/minikube-integration/18779-13391/.minikube/cache/iso/amd64/minikube-v1.33.0-1714498396-18779-amd64.iso
	I0501 02:07:55.656501   21421 main.go:141] libmachine: (addons-286595) Downloading /home/jenkins/minikube-integration/18779-13391/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18779-13391/.minikube/cache/iso/amd64/minikube-v1.33.0-1714498396-18779-amd64.iso...
	I0501 02:07:55.887225   21421 main.go:141] libmachine: (addons-286595) DBG | I0501 02:07:55.887056   21443 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/addons-286595/id_rsa...
	I0501 02:07:56.006973   21421 main.go:141] libmachine: (addons-286595) DBG | I0501 02:07:56.006861   21443 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/addons-286595/addons-286595.rawdisk...
	I0501 02:07:56.006998   21421 main.go:141] libmachine: (addons-286595) DBG | Writing magic tar header
	I0501 02:07:56.007012   21421 main.go:141] libmachine: (addons-286595) DBG | Writing SSH key tar header
	I0501 02:07:56.007022   21421 main.go:141] libmachine: (addons-286595) DBG | I0501 02:07:56.006979   21443 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18779-13391/.minikube/machines/addons-286595 ...
	I0501 02:07:56.007086   21421 main.go:141] libmachine: (addons-286595) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/addons-286595
	I0501 02:07:56.007185   21421 main.go:141] libmachine: (addons-286595) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18779-13391/.minikube/machines
	I0501 02:07:56.007215   21421 main.go:141] libmachine: (addons-286595) Setting executable bit set on /home/jenkins/minikube-integration/18779-13391/.minikube/machines/addons-286595 (perms=drwx------)
	I0501 02:07:56.007226   21421 main.go:141] libmachine: (addons-286595) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18779-13391/.minikube
	I0501 02:07:56.007237   21421 main.go:141] libmachine: (addons-286595) Setting executable bit set on /home/jenkins/minikube-integration/18779-13391/.minikube/machines (perms=drwxr-xr-x)
	I0501 02:07:56.007250   21421 main.go:141] libmachine: (addons-286595) Setting executable bit set on /home/jenkins/minikube-integration/18779-13391/.minikube (perms=drwxr-xr-x)
	I0501 02:07:56.007261   21421 main.go:141] libmachine: (addons-286595) Setting executable bit set on /home/jenkins/minikube-integration/18779-13391 (perms=drwxrwxr-x)
	I0501 02:07:56.007275   21421 main.go:141] libmachine: (addons-286595) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0501 02:07:56.007290   21421 main.go:141] libmachine: (addons-286595) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0501 02:07:56.007297   21421 main.go:141] libmachine: (addons-286595) Creating domain...
	I0501 02:07:56.007309   21421 main.go:141] libmachine: (addons-286595) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18779-13391
	I0501 02:07:56.007319   21421 main.go:141] libmachine: (addons-286595) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0501 02:07:56.007333   21421 main.go:141] libmachine: (addons-286595) DBG | Checking permissions on dir: /home/jenkins
	I0501 02:07:56.007342   21421 main.go:141] libmachine: (addons-286595) DBG | Checking permissions on dir: /home
	I0501 02:07:56.007350   21421 main.go:141] libmachine: (addons-286595) DBG | Skipping /home - not owner
	I0501 02:07:56.008303   21421 main.go:141] libmachine: (addons-286595) define libvirt domain using xml: 
	I0501 02:07:56.008324   21421 main.go:141] libmachine: (addons-286595) <domain type='kvm'>
	I0501 02:07:56.008335   21421 main.go:141] libmachine: (addons-286595)   <name>addons-286595</name>
	I0501 02:07:56.008344   21421 main.go:141] libmachine: (addons-286595)   <memory unit='MiB'>4000</memory>
	I0501 02:07:56.008350   21421 main.go:141] libmachine: (addons-286595)   <vcpu>2</vcpu>
	I0501 02:07:56.008357   21421 main.go:141] libmachine: (addons-286595)   <features>
	I0501 02:07:56.008362   21421 main.go:141] libmachine: (addons-286595)     <acpi/>
	I0501 02:07:56.008369   21421 main.go:141] libmachine: (addons-286595)     <apic/>
	I0501 02:07:56.008374   21421 main.go:141] libmachine: (addons-286595)     <pae/>
	I0501 02:07:56.008378   21421 main.go:141] libmachine: (addons-286595)     
	I0501 02:07:56.008384   21421 main.go:141] libmachine: (addons-286595)   </features>
	I0501 02:07:56.008391   21421 main.go:141] libmachine: (addons-286595)   <cpu mode='host-passthrough'>
	I0501 02:07:56.008396   21421 main.go:141] libmachine: (addons-286595)   
	I0501 02:07:56.008406   21421 main.go:141] libmachine: (addons-286595)   </cpu>
	I0501 02:07:56.008412   21421 main.go:141] libmachine: (addons-286595)   <os>
	I0501 02:07:56.008419   21421 main.go:141] libmachine: (addons-286595)     <type>hvm</type>
	I0501 02:07:56.008444   21421 main.go:141] libmachine: (addons-286595)     <boot dev='cdrom'/>
	I0501 02:07:56.008467   21421 main.go:141] libmachine: (addons-286595)     <boot dev='hd'/>
	I0501 02:07:56.008500   21421 main.go:141] libmachine: (addons-286595)     <bootmenu enable='no'/>
	I0501 02:07:56.008522   21421 main.go:141] libmachine: (addons-286595)   </os>
	I0501 02:07:56.008534   21421 main.go:141] libmachine: (addons-286595)   <devices>
	I0501 02:07:56.008543   21421 main.go:141] libmachine: (addons-286595)     <disk type='file' device='cdrom'>
	I0501 02:07:56.008553   21421 main.go:141] libmachine: (addons-286595)       <source file='/home/jenkins/minikube-integration/18779-13391/.minikube/machines/addons-286595/boot2docker.iso'/>
	I0501 02:07:56.008560   21421 main.go:141] libmachine: (addons-286595)       <target dev='hdc' bus='scsi'/>
	I0501 02:07:56.008566   21421 main.go:141] libmachine: (addons-286595)       <readonly/>
	I0501 02:07:56.008574   21421 main.go:141] libmachine: (addons-286595)     </disk>
	I0501 02:07:56.008580   21421 main.go:141] libmachine: (addons-286595)     <disk type='file' device='disk'>
	I0501 02:07:56.008588   21421 main.go:141] libmachine: (addons-286595)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0501 02:07:56.008597   21421 main.go:141] libmachine: (addons-286595)       <source file='/home/jenkins/minikube-integration/18779-13391/.minikube/machines/addons-286595/addons-286595.rawdisk'/>
	I0501 02:07:56.008611   21421 main.go:141] libmachine: (addons-286595)       <target dev='hda' bus='virtio'/>
	I0501 02:07:56.008629   21421 main.go:141] libmachine: (addons-286595)     </disk>
	I0501 02:07:56.008642   21421 main.go:141] libmachine: (addons-286595)     <interface type='network'>
	I0501 02:07:56.008654   21421 main.go:141] libmachine: (addons-286595)       <source network='mk-addons-286595'/>
	I0501 02:07:56.008670   21421 main.go:141] libmachine: (addons-286595)       <model type='virtio'/>
	I0501 02:07:56.008684   21421 main.go:141] libmachine: (addons-286595)     </interface>
	I0501 02:07:56.008696   21421 main.go:141] libmachine: (addons-286595)     <interface type='network'>
	I0501 02:07:56.008709   21421 main.go:141] libmachine: (addons-286595)       <source network='default'/>
	I0501 02:07:56.008724   21421 main.go:141] libmachine: (addons-286595)       <model type='virtio'/>
	I0501 02:07:56.008737   21421 main.go:141] libmachine: (addons-286595)     </interface>
	I0501 02:07:56.008749   21421 main.go:141] libmachine: (addons-286595)     <serial type='pty'>
	I0501 02:07:56.008778   21421 main.go:141] libmachine: (addons-286595)       <target port='0'/>
	I0501 02:07:56.008801   21421 main.go:141] libmachine: (addons-286595)     </serial>
	I0501 02:07:56.008814   21421 main.go:141] libmachine: (addons-286595)     <console type='pty'>
	I0501 02:07:56.008824   21421 main.go:141] libmachine: (addons-286595)       <target type='serial' port='0'/>
	I0501 02:07:56.008832   21421 main.go:141] libmachine: (addons-286595)     </console>
	I0501 02:07:56.008840   21421 main.go:141] libmachine: (addons-286595)     <rng model='virtio'>
	I0501 02:07:56.008855   21421 main.go:141] libmachine: (addons-286595)       <backend model='random'>/dev/random</backend>
	I0501 02:07:56.008866   21421 main.go:141] libmachine: (addons-286595)     </rng>
	I0501 02:07:56.008879   21421 main.go:141] libmachine: (addons-286595)     
	I0501 02:07:56.008893   21421 main.go:141] libmachine: (addons-286595)     
	I0501 02:07:56.008908   21421 main.go:141] libmachine: (addons-286595)   </devices>
	I0501 02:07:56.008921   21421 main.go:141] libmachine: (addons-286595) </domain>
	I0501 02:07:56.008938   21421 main.go:141] libmachine: (addons-286595) 
	I0501 02:07:56.014732   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:22:3f:81 in network default
	I0501 02:07:56.015256   21421 main.go:141] libmachine: (addons-286595) Ensuring networks are active...
	I0501 02:07:56.015274   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:07:56.015867   21421 main.go:141] libmachine: (addons-286595) Ensuring network default is active
	I0501 02:07:56.016102   21421 main.go:141] libmachine: (addons-286595) Ensuring network mk-addons-286595 is active
	I0501 02:07:56.016566   21421 main.go:141] libmachine: (addons-286595) Getting domain xml...
	I0501 02:07:56.017210   21421 main.go:141] libmachine: (addons-286595) Creating domain...
	I0501 02:07:57.377993   21421 main.go:141] libmachine: (addons-286595) Waiting to get IP...
	I0501 02:07:57.378764   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:07:57.379157   21421 main.go:141] libmachine: (addons-286595) DBG | unable to find current IP address of domain addons-286595 in network mk-addons-286595
	I0501 02:07:57.379200   21421 main.go:141] libmachine: (addons-286595) DBG | I0501 02:07:57.379149   21443 retry.go:31] will retry after 254.326066ms: waiting for machine to come up
	I0501 02:07:57.634700   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:07:57.635120   21421 main.go:141] libmachine: (addons-286595) DBG | unable to find current IP address of domain addons-286595 in network mk-addons-286595
	I0501 02:07:57.635153   21421 main.go:141] libmachine: (addons-286595) DBG | I0501 02:07:57.635064   21443 retry.go:31] will retry after 249.868559ms: waiting for machine to come up
	I0501 02:07:57.886647   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:07:57.887035   21421 main.go:141] libmachine: (addons-286595) DBG | unable to find current IP address of domain addons-286595 in network mk-addons-286595
	I0501 02:07:57.887069   21421 main.go:141] libmachine: (addons-286595) DBG | I0501 02:07:57.887001   21443 retry.go:31] will retry after 445.355301ms: waiting for machine to come up
	I0501 02:07:58.333589   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:07:58.334022   21421 main.go:141] libmachine: (addons-286595) DBG | unable to find current IP address of domain addons-286595 in network mk-addons-286595
	I0501 02:07:58.334051   21421 main.go:141] libmachine: (addons-286595) DBG | I0501 02:07:58.333975   21443 retry.go:31] will retry after 487.078231ms: waiting for machine to come up
	I0501 02:07:58.822615   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:07:58.823027   21421 main.go:141] libmachine: (addons-286595) DBG | unable to find current IP address of domain addons-286595 in network mk-addons-286595
	I0501 02:07:58.823050   21421 main.go:141] libmachine: (addons-286595) DBG | I0501 02:07:58.822999   21443 retry.go:31] will retry after 637.55693ms: waiting for machine to come up
	I0501 02:07:59.461947   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:07:59.462373   21421 main.go:141] libmachine: (addons-286595) DBG | unable to find current IP address of domain addons-286595 in network mk-addons-286595
	I0501 02:07:59.462422   21421 main.go:141] libmachine: (addons-286595) DBG | I0501 02:07:59.462326   21443 retry.go:31] will retry after 711.50572ms: waiting for machine to come up
	I0501 02:08:00.175263   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:00.175675   21421 main.go:141] libmachine: (addons-286595) DBG | unable to find current IP address of domain addons-286595 in network mk-addons-286595
	I0501 02:08:00.175700   21421 main.go:141] libmachine: (addons-286595) DBG | I0501 02:08:00.175617   21443 retry.go:31] will retry after 1.097804426s: waiting for machine to come up
	I0501 02:08:01.275302   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:01.275754   21421 main.go:141] libmachine: (addons-286595) DBG | unable to find current IP address of domain addons-286595 in network mk-addons-286595
	I0501 02:08:01.276330   21421 main.go:141] libmachine: (addons-286595) DBG | I0501 02:08:01.276098   21443 retry.go:31] will retry after 1.219199563s: waiting for machine to come up
	I0501 02:08:02.496666   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:02.497066   21421 main.go:141] libmachine: (addons-286595) DBG | unable to find current IP address of domain addons-286595 in network mk-addons-286595
	I0501 02:08:02.497094   21421 main.go:141] libmachine: (addons-286595) DBG | I0501 02:08:02.497031   21443 retry.go:31] will retry after 1.494167654s: waiting for machine to come up
	I0501 02:08:03.993680   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:03.994088   21421 main.go:141] libmachine: (addons-286595) DBG | unable to find current IP address of domain addons-286595 in network mk-addons-286595
	I0501 02:08:03.994115   21421 main.go:141] libmachine: (addons-286595) DBG | I0501 02:08:03.994048   21443 retry.go:31] will retry after 2.157364528s: waiting for machine to come up
	I0501 02:08:06.152699   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:06.153083   21421 main.go:141] libmachine: (addons-286595) DBG | unable to find current IP address of domain addons-286595 in network mk-addons-286595
	I0501 02:08:06.153106   21421 main.go:141] libmachine: (addons-286595) DBG | I0501 02:08:06.153060   21443 retry.go:31] will retry after 2.06631124s: waiting for machine to come up
	I0501 02:08:08.222546   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:08.222962   21421 main.go:141] libmachine: (addons-286595) DBG | unable to find current IP address of domain addons-286595 in network mk-addons-286595
	I0501 02:08:08.222985   21421 main.go:141] libmachine: (addons-286595) DBG | I0501 02:08:08.222927   21443 retry.go:31] will retry after 2.959305142s: waiting for machine to come up
	I0501 02:08:11.183544   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:11.183944   21421 main.go:141] libmachine: (addons-286595) DBG | unable to find current IP address of domain addons-286595 in network mk-addons-286595
	I0501 02:08:11.183971   21421 main.go:141] libmachine: (addons-286595) DBG | I0501 02:08:11.183898   21443 retry.go:31] will retry after 4.259579563s: waiting for machine to come up
	I0501 02:08:15.445367   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:15.445760   21421 main.go:141] libmachine: (addons-286595) DBG | unable to find current IP address of domain addons-286595 in network mk-addons-286595
	I0501 02:08:15.445790   21421 main.go:141] libmachine: (addons-286595) DBG | I0501 02:08:15.445719   21443 retry.go:31] will retry after 4.682748792s: waiting for machine to come up
	I0501 02:08:20.133571   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:20.134012   21421 main.go:141] libmachine: (addons-286595) Found IP for machine: 192.168.39.173
	I0501 02:08:20.134042   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has current primary IP address 192.168.39.173 and MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:20.134051   21421 main.go:141] libmachine: (addons-286595) Reserving static IP address...
	I0501 02:08:20.134449   21421 main.go:141] libmachine: (addons-286595) DBG | unable to find host DHCP lease matching {name: "addons-286595", mac: "52:54:00:74:55:7e", ip: "192.168.39.173"} in network mk-addons-286595
	I0501 02:08:20.207290   21421 main.go:141] libmachine: (addons-286595) DBG | Getting to WaitForSSH function...
	I0501 02:08:20.207320   21421 main.go:141] libmachine: (addons-286595) Reserved static IP address: 192.168.39.173
	I0501 02:08:20.207332   21421 main.go:141] libmachine: (addons-286595) Waiting for SSH to be available...
	I0501 02:08:20.209437   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:20.209892   21421 main.go:141] libmachine: (addons-286595) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:55:7e", ip: ""} in network mk-addons-286595: {Iface:virbr1 ExpiryTime:2024-05-01 03:08:11 +0000 UTC Type:0 Mac:52:54:00:74:55:7e Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:minikube Clientid:01:52:54:00:74:55:7e}
	I0501 02:08:20.209937   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined IP address 192.168.39.173 and MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:20.209961   21421 main.go:141] libmachine: (addons-286595) DBG | Using SSH client type: external
	I0501 02:08:20.209991   21421 main.go:141] libmachine: (addons-286595) DBG | Using SSH private key: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/addons-286595/id_rsa (-rw-------)
	I0501 02:08:20.210051   21421 main.go:141] libmachine: (addons-286595) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.173 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18779-13391/.minikube/machines/addons-286595/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0501 02:08:20.210071   21421 main.go:141] libmachine: (addons-286595) DBG | About to run SSH command:
	I0501 02:08:20.210083   21421 main.go:141] libmachine: (addons-286595) DBG | exit 0
	I0501 02:08:20.342966   21421 main.go:141] libmachine: (addons-286595) DBG | SSH cmd err, output: <nil>: 
	I0501 02:08:20.343259   21421 main.go:141] libmachine: (addons-286595) KVM machine creation complete!
	I0501 02:08:20.343571   21421 main.go:141] libmachine: (addons-286595) Calling .GetConfigRaw
	I0501 02:08:20.344111   21421 main.go:141] libmachine: (addons-286595) Calling .DriverName
	I0501 02:08:20.344283   21421 main.go:141] libmachine: (addons-286595) Calling .DriverName
	I0501 02:08:20.344484   21421 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0501 02:08:20.344498   21421 main.go:141] libmachine: (addons-286595) Calling .GetState
	I0501 02:08:20.345605   21421 main.go:141] libmachine: Detecting operating system of created instance...
	I0501 02:08:20.345619   21421 main.go:141] libmachine: Waiting for SSH to be available...
	I0501 02:08:20.345625   21421 main.go:141] libmachine: Getting to WaitForSSH function...
	I0501 02:08:20.345630   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHHostname
	I0501 02:08:20.347918   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:20.348255   21421 main.go:141] libmachine: (addons-286595) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:55:7e", ip: ""} in network mk-addons-286595: {Iface:virbr1 ExpiryTime:2024-05-01 03:08:11 +0000 UTC Type:0 Mac:52:54:00:74:55:7e Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:addons-286595 Clientid:01:52:54:00:74:55:7e}
	I0501 02:08:20.348278   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined IP address 192.168.39.173 and MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:20.348442   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHPort
	I0501 02:08:20.348626   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHKeyPath
	I0501 02:08:20.348805   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHKeyPath
	I0501 02:08:20.348945   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHUsername
	I0501 02:08:20.349105   21421 main.go:141] libmachine: Using SSH client type: native
	I0501 02:08:20.349335   21421 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.173 22 <nil> <nil>}
	I0501 02:08:20.349350   21421 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0501 02:08:20.458068   21421 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0501 02:08:20.458095   21421 main.go:141] libmachine: Detecting the provisioner...
	I0501 02:08:20.458102   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHHostname
	I0501 02:08:20.460903   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:20.461288   21421 main.go:141] libmachine: (addons-286595) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:55:7e", ip: ""} in network mk-addons-286595: {Iface:virbr1 ExpiryTime:2024-05-01 03:08:11 +0000 UTC Type:0 Mac:52:54:00:74:55:7e Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:addons-286595 Clientid:01:52:54:00:74:55:7e}
	I0501 02:08:20.461349   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined IP address 192.168.39.173 and MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:20.461481   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHPort
	I0501 02:08:20.461663   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHKeyPath
	I0501 02:08:20.461803   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHKeyPath
	I0501 02:08:20.461927   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHUsername
	I0501 02:08:20.462056   21421 main.go:141] libmachine: Using SSH client type: native
	I0501 02:08:20.462205   21421 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.173 22 <nil> <nil>}
	I0501 02:08:20.462216   21421 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0501 02:08:20.572508   21421 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0501 02:08:20.572576   21421 main.go:141] libmachine: found compatible host: buildroot
	I0501 02:08:20.572583   21421 main.go:141] libmachine: Provisioning with buildroot...
	I0501 02:08:20.572591   21421 main.go:141] libmachine: (addons-286595) Calling .GetMachineName
	I0501 02:08:20.572852   21421 buildroot.go:166] provisioning hostname "addons-286595"
	I0501 02:08:20.572896   21421 main.go:141] libmachine: (addons-286595) Calling .GetMachineName
	I0501 02:08:20.573062   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHHostname
	I0501 02:08:20.575445   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:20.575772   21421 main.go:141] libmachine: (addons-286595) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:55:7e", ip: ""} in network mk-addons-286595: {Iface:virbr1 ExpiryTime:2024-05-01 03:08:11 +0000 UTC Type:0 Mac:52:54:00:74:55:7e Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:addons-286595 Clientid:01:52:54:00:74:55:7e}
	I0501 02:08:20.575805   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined IP address 192.168.39.173 and MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:20.575903   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHPort
	I0501 02:08:20.576089   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHKeyPath
	I0501 02:08:20.576245   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHKeyPath
	I0501 02:08:20.576387   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHUsername
	I0501 02:08:20.576530   21421 main.go:141] libmachine: Using SSH client type: native
	I0501 02:08:20.576681   21421 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.173 22 <nil> <nil>}
	I0501 02:08:20.576693   21421 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-286595 && echo "addons-286595" | sudo tee /etc/hostname
	I0501 02:08:20.711261   21421 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-286595
	
	I0501 02:08:20.711290   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHHostname
	I0501 02:08:20.713949   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:20.714242   21421 main.go:141] libmachine: (addons-286595) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:55:7e", ip: ""} in network mk-addons-286595: {Iface:virbr1 ExpiryTime:2024-05-01 03:08:11 +0000 UTC Type:0 Mac:52:54:00:74:55:7e Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:addons-286595 Clientid:01:52:54:00:74:55:7e}
	I0501 02:08:20.714279   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined IP address 192.168.39.173 and MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:20.714472   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHPort
	I0501 02:08:20.714682   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHKeyPath
	I0501 02:08:20.714848   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHKeyPath
	I0501 02:08:20.714989   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHUsername
	I0501 02:08:20.715167   21421 main.go:141] libmachine: Using SSH client type: native
	I0501 02:08:20.715314   21421 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.173 22 <nil> <nil>}
	I0501 02:08:20.715330   21421 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-286595' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-286595/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-286595' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0501 02:08:20.839461   21421 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0501 02:08:20.839491   21421 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18779-13391/.minikube CaCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18779-13391/.minikube}
	I0501 02:08:20.839533   21421 buildroot.go:174] setting up certificates
	I0501 02:08:20.839544   21421 provision.go:84] configureAuth start
	I0501 02:08:20.839553   21421 main.go:141] libmachine: (addons-286595) Calling .GetMachineName
	I0501 02:08:20.839811   21421 main.go:141] libmachine: (addons-286595) Calling .GetIP
	I0501 02:08:20.842034   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:20.842436   21421 main.go:141] libmachine: (addons-286595) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:55:7e", ip: ""} in network mk-addons-286595: {Iface:virbr1 ExpiryTime:2024-05-01 03:08:11 +0000 UTC Type:0 Mac:52:54:00:74:55:7e Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:addons-286595 Clientid:01:52:54:00:74:55:7e}
	I0501 02:08:20.842466   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined IP address 192.168.39.173 and MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:20.842579   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHHostname
	I0501 02:08:20.844673   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:20.844962   21421 main.go:141] libmachine: (addons-286595) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:55:7e", ip: ""} in network mk-addons-286595: {Iface:virbr1 ExpiryTime:2024-05-01 03:08:11 +0000 UTC Type:0 Mac:52:54:00:74:55:7e Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:addons-286595 Clientid:01:52:54:00:74:55:7e}
	I0501 02:08:20.844986   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined IP address 192.168.39.173 and MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:20.845168   21421 provision.go:143] copyHostCerts
	I0501 02:08:20.845250   21421 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem (1123 bytes)
	I0501 02:08:20.845405   21421 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem (1675 bytes)
	I0501 02:08:20.845489   21421 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem (1078 bytes)
	I0501 02:08:20.845560   21421 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem org=jenkins.addons-286595 san=[127.0.0.1 192.168.39.173 addons-286595 localhost minikube]
	I0501 02:08:20.925369   21421 provision.go:177] copyRemoteCerts
	I0501 02:08:20.925429   21421 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0501 02:08:20.925456   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHHostname
	I0501 02:08:20.927667   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:20.927959   21421 main.go:141] libmachine: (addons-286595) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:55:7e", ip: ""} in network mk-addons-286595: {Iface:virbr1 ExpiryTime:2024-05-01 03:08:11 +0000 UTC Type:0 Mac:52:54:00:74:55:7e Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:addons-286595 Clientid:01:52:54:00:74:55:7e}
	I0501 02:08:20.927988   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined IP address 192.168.39.173 and MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:20.928146   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHPort
	I0501 02:08:20.928339   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHKeyPath
	I0501 02:08:20.928485   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHUsername
	I0501 02:08:20.928610   21421 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/addons-286595/id_rsa Username:docker}
	I0501 02:08:21.013616   21421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0501 02:08:21.041583   21421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0501 02:08:21.068530   21421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0501 02:08:21.096204   21421 provision.go:87] duration metric: took 256.648307ms to configureAuth
	I0501 02:08:21.096233   21421 buildroot.go:189] setting minikube options for container-runtime
	I0501 02:08:21.096457   21421 config.go:182] Loaded profile config "addons-286595": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 02:08:21.096555   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHHostname
	I0501 02:08:21.099048   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:21.099437   21421 main.go:141] libmachine: (addons-286595) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:55:7e", ip: ""} in network mk-addons-286595: {Iface:virbr1 ExpiryTime:2024-05-01 03:08:11 +0000 UTC Type:0 Mac:52:54:00:74:55:7e Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:addons-286595 Clientid:01:52:54:00:74:55:7e}
	I0501 02:08:21.099468   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined IP address 192.168.39.173 and MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:21.099637   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHPort
	I0501 02:08:21.099862   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHKeyPath
	I0501 02:08:21.100018   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHKeyPath
	I0501 02:08:21.100172   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHUsername
	I0501 02:08:21.100313   21421 main.go:141] libmachine: Using SSH client type: native
	I0501 02:08:21.100533   21421 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.173 22 <nil> <nil>}
	I0501 02:08:21.100557   21421 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0501 02:08:21.383701   21421 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0501 02:08:21.383728   21421 main.go:141] libmachine: Checking connection to Docker...
	I0501 02:08:21.383741   21421 main.go:141] libmachine: (addons-286595) Calling .GetURL
	I0501 02:08:21.384973   21421 main.go:141] libmachine: (addons-286595) DBG | Using libvirt version 6000000
	I0501 02:08:21.387017   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:21.387287   21421 main.go:141] libmachine: (addons-286595) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:55:7e", ip: ""} in network mk-addons-286595: {Iface:virbr1 ExpiryTime:2024-05-01 03:08:11 +0000 UTC Type:0 Mac:52:54:00:74:55:7e Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:addons-286595 Clientid:01:52:54:00:74:55:7e}
	I0501 02:08:21.387334   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined IP address 192.168.39.173 and MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:21.387419   21421 main.go:141] libmachine: Docker is up and running!
	I0501 02:08:21.387443   21421 main.go:141] libmachine: Reticulating splines...
	I0501 02:08:21.387451   21421 client.go:171] duration metric: took 25.98551086s to LocalClient.Create
	I0501 02:08:21.387475   21421 start.go:167] duration metric: took 25.985598472s to libmachine.API.Create "addons-286595"
	I0501 02:08:21.387485   21421 start.go:293] postStartSetup for "addons-286595" (driver="kvm2")
	I0501 02:08:21.387494   21421 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0501 02:08:21.387538   21421 main.go:141] libmachine: (addons-286595) Calling .DriverName
	I0501 02:08:21.387828   21421 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0501 02:08:21.387854   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHHostname
	I0501 02:08:21.389686   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:21.389904   21421 main.go:141] libmachine: (addons-286595) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:55:7e", ip: ""} in network mk-addons-286595: {Iface:virbr1 ExpiryTime:2024-05-01 03:08:11 +0000 UTC Type:0 Mac:52:54:00:74:55:7e Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:addons-286595 Clientid:01:52:54:00:74:55:7e}
	I0501 02:08:21.389928   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined IP address 192.168.39.173 and MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:21.390024   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHPort
	I0501 02:08:21.390178   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHKeyPath
	I0501 02:08:21.390336   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHUsername
	I0501 02:08:21.390475   21421 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/addons-286595/id_rsa Username:docker}
	I0501 02:08:21.473577   21421 ssh_runner.go:195] Run: cat /etc/os-release
	I0501 02:08:21.478614   21421 info.go:137] Remote host: Buildroot 2023.02.9
	I0501 02:08:21.478641   21421 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13391/.minikube/addons for local assets ...
	I0501 02:08:21.478720   21421 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13391/.minikube/files for local assets ...
	I0501 02:08:21.478751   21421 start.go:296] duration metric: took 91.260746ms for postStartSetup
	I0501 02:08:21.478789   21421 main.go:141] libmachine: (addons-286595) Calling .GetConfigRaw
	I0501 02:08:21.479330   21421 main.go:141] libmachine: (addons-286595) Calling .GetIP
	I0501 02:08:21.481594   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:21.481921   21421 main.go:141] libmachine: (addons-286595) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:55:7e", ip: ""} in network mk-addons-286595: {Iface:virbr1 ExpiryTime:2024-05-01 03:08:11 +0000 UTC Type:0 Mac:52:54:00:74:55:7e Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:addons-286595 Clientid:01:52:54:00:74:55:7e}
	I0501 02:08:21.481964   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined IP address 192.168.39.173 and MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:21.482147   21421 profile.go:143] Saving config to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/config.json ...
	I0501 02:08:21.482344   21421 start.go:128] duration metric: took 26.099176468s to createHost
	I0501 02:08:21.482371   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHHostname
	I0501 02:08:21.484256   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:21.484574   21421 main.go:141] libmachine: (addons-286595) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:55:7e", ip: ""} in network mk-addons-286595: {Iface:virbr1 ExpiryTime:2024-05-01 03:08:11 +0000 UTC Type:0 Mac:52:54:00:74:55:7e Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:addons-286595 Clientid:01:52:54:00:74:55:7e}
	I0501 02:08:21.484596   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined IP address 192.168.39.173 and MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:21.484720   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHPort
	I0501 02:08:21.484866   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHKeyPath
	I0501 02:08:21.485016   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHKeyPath
	I0501 02:08:21.485166   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHUsername
	I0501 02:08:21.485304   21421 main.go:141] libmachine: Using SSH client type: native
	I0501 02:08:21.485458   21421 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.173 22 <nil> <nil>}
	I0501 02:08:21.485469   21421 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0501 02:08:21.595702   21421 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714529301.579854174
	
	I0501 02:08:21.595726   21421 fix.go:216] guest clock: 1714529301.579854174
	I0501 02:08:21.595734   21421 fix.go:229] Guest: 2024-05-01 02:08:21.579854174 +0000 UTC Remote: 2024-05-01 02:08:21.482357717 +0000 UTC m=+26.213215784 (delta=97.496457ms)
	I0501 02:08:21.595754   21421 fix.go:200] guest clock delta is within tolerance: 97.496457ms
	I0501 02:08:21.595759   21421 start.go:83] releasing machines lock for "addons-286595", held for 26.212671298s
	I0501 02:08:21.595776   21421 main.go:141] libmachine: (addons-286595) Calling .DriverName
	I0501 02:08:21.596009   21421 main.go:141] libmachine: (addons-286595) Calling .GetIP
	I0501 02:08:21.598718   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:21.599049   21421 main.go:141] libmachine: (addons-286595) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:55:7e", ip: ""} in network mk-addons-286595: {Iface:virbr1 ExpiryTime:2024-05-01 03:08:11 +0000 UTC Type:0 Mac:52:54:00:74:55:7e Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:addons-286595 Clientid:01:52:54:00:74:55:7e}
	I0501 02:08:21.599080   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined IP address 192.168.39.173 and MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:21.599195   21421 main.go:141] libmachine: (addons-286595) Calling .DriverName
	I0501 02:08:21.599965   21421 main.go:141] libmachine: (addons-286595) Calling .DriverName
	I0501 02:08:21.600153   21421 main.go:141] libmachine: (addons-286595) Calling .DriverName
	I0501 02:08:21.600247   21421 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0501 02:08:21.600286   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHHostname
	I0501 02:08:21.600366   21421 ssh_runner.go:195] Run: cat /version.json
	I0501 02:08:21.600385   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHHostname
	I0501 02:08:21.602839   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:21.602862   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:21.603239   21421 main.go:141] libmachine: (addons-286595) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:55:7e", ip: ""} in network mk-addons-286595: {Iface:virbr1 ExpiryTime:2024-05-01 03:08:11 +0000 UTC Type:0 Mac:52:54:00:74:55:7e Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:addons-286595 Clientid:01:52:54:00:74:55:7e}
	I0501 02:08:21.603268   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined IP address 192.168.39.173 and MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:21.603372   21421 main.go:141] libmachine: (addons-286595) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:55:7e", ip: ""} in network mk-addons-286595: {Iface:virbr1 ExpiryTime:2024-05-01 03:08:11 +0000 UTC Type:0 Mac:52:54:00:74:55:7e Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:addons-286595 Clientid:01:52:54:00:74:55:7e}
	I0501 02:08:21.603398   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined IP address 192.168.39.173 and MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:21.603401   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHPort
	I0501 02:08:21.603575   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHKeyPath
	I0501 02:08:21.603590   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHPort
	I0501 02:08:21.603757   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHUsername
	I0501 02:08:21.603757   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHKeyPath
	I0501 02:08:21.603958   21421 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/addons-286595/id_rsa Username:docker}
	I0501 02:08:21.603970   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHUsername
	I0501 02:08:21.604092   21421 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/addons-286595/id_rsa Username:docker}
	I0501 02:08:21.711583   21421 ssh_runner.go:195] Run: systemctl --version
	I0501 02:08:21.717909   21421 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0501 02:08:21.885783   21421 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0501 02:08:21.892692   21421 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0501 02:08:21.892747   21421 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0501 02:08:21.910176   21421 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0501 02:08:21.910190   21421 start.go:494] detecting cgroup driver to use...
	I0501 02:08:21.910245   21421 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0501 02:08:21.928118   21421 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 02:08:21.942966   21421 docker.go:217] disabling cri-docker service (if available) ...
	I0501 02:08:21.943041   21421 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0501 02:08:21.956917   21421 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0501 02:08:21.970716   21421 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0501 02:08:22.086139   21421 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0501 02:08:22.253424   21421 docker.go:233] disabling docker service ...
	I0501 02:08:22.253495   21421 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0501 02:08:22.269905   21421 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0501 02:08:22.283870   21421 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0501 02:08:22.407277   21421 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0501 02:08:22.525776   21421 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0501 02:08:22.541243   21421 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 02:08:22.561820   21421 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0501 02:08:22.561901   21421 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 02:08:22.573493   21421 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0501 02:08:22.573564   21421 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 02:08:22.585200   21421 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 02:08:22.596927   21421 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 02:08:22.608362   21421 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0501 02:08:22.619974   21421 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 02:08:22.631160   21421 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 02:08:22.649639   21421 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
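
The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: set the pause image to registry.k8s.io/pause:3.9, switch cgroup_manager to cgroupfs, and open net.ipv4.ip_unprivileged_port_start via default_sysctls. A rough standard-library sketch of the first two substitutions, operating on a local scratch copy of the file rather than over SSH (not minikube's crio.go):

// crio_conf_sketch.go - local illustration of the pause_image / cgroup_manager edits.
package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const path = "02-crio.conf" // hypothetical scratch copy; the node path is /etc/crio/crio.conf.d/02-crio.conf
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	conf := string(data)
	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
		log.Fatal(err)
	}
}
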
	I0501 02:08:22.660488   21421 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0501 02:08:22.670506   21421 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0501 02:08:22.670555   21421 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0501 02:08:22.685256   21421 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0501 02:08:22.696083   21421 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:08:22.834876   21421 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0501 02:08:22.982747   21421 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0501 02:08:22.982844   21421 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0501 02:08:22.988641   21421 start.go:562] Will wait 60s for crictl version
	I0501 02:08:22.988721   21421 ssh_runner.go:195] Run: which crictl
	I0501 02:08:22.993315   21421 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0501 02:08:23.034485   21421 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0501 02:08:23.034604   21421 ssh_runner.go:195] Run: crio --version
	I0501 02:08:23.065303   21421 ssh_runner.go:195] Run: crio --version
	I0501 02:08:23.097679   21421 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0501 02:08:23.099042   21421 main.go:141] libmachine: (addons-286595) Calling .GetIP
	I0501 02:08:23.101897   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:23.102290   21421 main.go:141] libmachine: (addons-286595) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:55:7e", ip: ""} in network mk-addons-286595: {Iface:virbr1 ExpiryTime:2024-05-01 03:08:11 +0000 UTC Type:0 Mac:52:54:00:74:55:7e Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:addons-286595 Clientid:01:52:54:00:74:55:7e}
	I0501 02:08:23.102318   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined IP address 192.168.39.173 and MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:23.102542   21421 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0501 02:08:23.107669   21421 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
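
The bash one-liner above updates /etc/hosts idempotently: drop any stale host.minikube.internal line, append the fresh 192.168.39.1 mapping, and copy the result back over /etc/hosts. A small sketch of the same pattern against a scratch file (standard library only; not minikube's code):

// hosts_entry_sketch.go - filter-then-append pattern from the log, on a local file.
package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const path = "hosts.test" // scratch copy; the real target in the log is /etc/hosts
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue // remove any stale entry first
		}
		kept = append(kept, line)
	}
	kept = append(kept, "192.168.39.1\thost.minikube.internal")
	if err := os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		log.Fatal(err)
	}
}
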
	I0501 02:08:23.122255   21421 kubeadm.go:877] updating cluster {Name:addons-286595 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
0 ClusterName:addons-286595 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.173 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0501 02:08:23.122380   21421 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0501 02:08:23.122451   21421 ssh_runner.go:195] Run: sudo crictl images --output json
	I0501 02:08:23.157908   21421 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0501 02:08:23.157978   21421 ssh_runner.go:195] Run: which lz4
	I0501 02:08:23.162627   21421 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0501 02:08:23.167516   21421 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0501 02:08:23.167546   21421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0501 02:08:24.648981   21421 crio.go:462] duration metric: took 1.48639053s to copy over tarball
	I0501 02:08:24.649045   21421 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0501 02:08:27.328224   21421 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.679148159s)
	I0501 02:08:27.328275   21421 crio.go:469] duration metric: took 2.679269008s to extract the tarball
	I0501 02:08:27.328287   21421 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0501 02:08:27.366840   21421 ssh_runner.go:195] Run: sudo crictl images --output json
	I0501 02:08:27.416579   21421 crio.go:514] all images are preloaded for cri-o runtime.
	I0501 02:08:27.416601   21421 cache_images.go:84] Images are preloaded, skipping loading
	I0501 02:08:27.416609   21421 kubeadm.go:928] updating node { 192.168.39.173 8443 v1.30.0 crio true true} ...
	I0501 02:08:27.416721   21421 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-286595 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.173
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:addons-286595 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0501 02:08:27.416782   21421 ssh_runner.go:195] Run: crio config
	I0501 02:08:27.468311   21421 cni.go:84] Creating CNI manager for ""
	I0501 02:08:27.468334   21421 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0501 02:08:27.468345   21421 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0501 02:08:27.468365   21421 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.173 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-286595 NodeName:addons-286595 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.173"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.173 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0501 02:08:27.468496   21421 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.173
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-286595"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.173
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.173"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
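
The generated KubeletConfiguration above pins cgroupDriver to cgroupfs, which has to agree with the cgroup_manager = "cgroupfs" value written into the CRI-O config earlier, and points containerRuntimeEndpoint at the CRI-O socket. A tiny sketch that unmarshals just those fields, assuming the widely used gopkg.in/yaml.v3 package (this is not the parser minikube itself uses here):

// kubelet_config_sketch.go - read a subset of the KubeletConfiguration shown above.
package main

import (
	"fmt"
	"log"

	"gopkg.in/yaml.v3"
)

// Subset copied from the kubeadm config dump above.
const kubeletYAML = `
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
hairpinMode: hairpin-veth
`

type kubeletConfig struct {
	CgroupDriver             string `yaml:"cgroupDriver"`
	ContainerRuntimeEndpoint string `yaml:"containerRuntimeEndpoint"`
	HairpinMode              string `yaml:"hairpinMode"`
}

func main() {
	var cfg kubeletConfig
	if err := yaml.Unmarshal([]byte(kubeletYAML), &cfg); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("cgroupDriver=%s runtime=%s hairpin=%s\n",
		cfg.CgroupDriver, cfg.ContainerRuntimeEndpoint, cfg.HairpinMode)
}
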
	I0501 02:08:27.468554   21421 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0501 02:08:27.479727   21421 binaries.go:44] Found k8s binaries, skipping transfer
	I0501 02:08:27.479796   21421 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0501 02:08:27.490057   21421 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0501 02:08:27.508749   21421 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0501 02:08:27.528607   21421 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0501 02:08:27.547096   21421 ssh_runner.go:195] Run: grep 192.168.39.173	control-plane.minikube.internal$ /etc/hosts
	I0501 02:08:27.551283   21421 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.173	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0501 02:08:27.564410   21421 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:08:27.682977   21421 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 02:08:27.701299   21421 certs.go:68] Setting up /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595 for IP: 192.168.39.173
	I0501 02:08:27.701317   21421 certs.go:194] generating shared ca certs ...
	I0501 02:08:27.701342   21421 certs.go:226] acquiring lock for ca certs: {Name:mk85a75d14c00780d4bf09cd9bdaf0f96f1b8fce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:08:27.701485   21421 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key
	I0501 02:08:27.978339   21421 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt ...
	I0501 02:08:27.978369   21421 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt: {Name:mk2aa64ed3ffa43baef26cb76f6975fb66c3c12e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:08:27.978567   21421 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key ...
	I0501 02:08:27.978583   21421 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key: {Name:mk6b15aedf9e8fb8b4e2dafe20ce2c834eb1faff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:08:27.978682   21421 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key
	I0501 02:08:28.112877   21421 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.crt ...
	I0501 02:08:28.112905   21421 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.crt: {Name:mkf177ac27a2dfe775a48543cda735a9e19f5da0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:08:28.113070   21421 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key ...
	I0501 02:08:28.113087   21421 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key: {Name:mk29b0e94925d4b16264e43f4a48d33fd9427cf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:08:28.113193   21421 certs.go:256] generating profile certs ...
	I0501 02:08:28.113246   21421 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/client.key
	I0501 02:08:28.113262   21421 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/client.crt with IP's: []
	I0501 02:08:28.314901   21421 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/client.crt ...
	I0501 02:08:28.314934   21421 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/client.crt: {Name:mk5a02944d247a426dd8a7e06384f15984cfa36e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:08:28.315117   21421 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/client.key ...
	I0501 02:08:28.315131   21421 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/client.key: {Name:mk1afc697d58cae69a9e0addf4c201cb1879cde9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:08:28.315222   21421 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/apiserver.key.3e9978e6
	I0501 02:08:28.315247   21421 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/apiserver.crt.3e9978e6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.173]
	I0501 02:08:28.542717   21421 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/apiserver.crt.3e9978e6 ...
	I0501 02:08:28.542750   21421 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/apiserver.crt.3e9978e6: {Name:mk0b5fe76d0e797a2b8e7d8e7a73a27288ed48cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:08:28.542936   21421 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/apiserver.key.3e9978e6 ...
	I0501 02:08:28.542958   21421 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/apiserver.key.3e9978e6: {Name:mkc457c7f86361301c073c2f383c901b6fd9431d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:08:28.543049   21421 certs.go:381] copying /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/apiserver.crt.3e9978e6 -> /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/apiserver.crt
	I0501 02:08:28.543139   21421 certs.go:385] copying /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/apiserver.key.3e9978e6 -> /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/apiserver.key
	I0501 02:08:28.543199   21421 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/proxy-client.key
	I0501 02:08:28.543222   21421 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/proxy-client.crt with IP's: []
	I0501 02:08:28.682021   21421 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/proxy-client.crt ...
	I0501 02:08:28.682050   21421 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/proxy-client.crt: {Name:mk75acd4b588454a97ed7ee5f8ba7ad77e58f89e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:08:28.682220   21421 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/proxy-client.key ...
	I0501 02:08:28.682236   21421 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/proxy-client.key: {Name:mkecc1c15c34afdbf2add76596b544294bb88da5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
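
The certs.go steps above first create shared CAs (minikubeCA, proxyClientCA) and then profile certificates signed for the API server IPs. Below is a minimal, standard-library sketch of the first step, generating a self-signed CA certificate and key in PEM form; it is an illustration of the idea, not minikube's crypto.go, and key size and validity period are arbitrary choices here.

// ca_sketch.go - generate a self-signed CA roughly like the "minikubeCA" step above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
		IsCA:                  true,
	}
	// Self-signed: the template is its own parent.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	crtPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	keyPEM := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	if err := os.WriteFile("ca.crt", crtPEM, 0o644); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("ca.key", keyPEM, 0o600); err != nil {
		log.Fatal(err)
	}
	log.Println("wrote ca.crt and ca.key")
}
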
	I0501 02:08:28.682581   21421 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem (1679 bytes)
	I0501 02:08:28.682636   21421 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem (1078 bytes)
	I0501 02:08:28.682669   21421 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem (1123 bytes)
	I0501 02:08:28.682693   21421 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem (1675 bytes)
	I0501 02:08:28.683253   21421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0501 02:08:28.715610   21421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0501 02:08:28.746121   21421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0501 02:08:28.777013   21421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0501 02:08:28.805255   21421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0501 02:08:28.833273   21421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0501 02:08:28.862002   21421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0501 02:08:28.891400   21421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0501 02:08:28.919259   21421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0501 02:08:28.946617   21421 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0501 02:08:28.965525   21421 ssh_runner.go:195] Run: openssl version
	I0501 02:08:28.972778   21421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0501 02:08:28.984998   21421 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:08:28.990298   21421 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  1 02:08 /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:08:28.990356   21421 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:08:28.996864   21421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0501 02:08:29.008760   21421 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0501 02:08:29.013721   21421 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0501 02:08:29.013782   21421 kubeadm.go:391] StartCluster: {Name:addons-286595 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 C
lusterName:addons-286595 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.173 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 02:08:29.013879   21421 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0501 02:08:29.013948   21421 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0501 02:08:29.057509   21421 cri.go:89] found id: ""
	I0501 02:08:29.057585   21421 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0501 02:08:29.069506   21421 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0501 02:08:29.081025   21421 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0501 02:08:29.092429   21421 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0501 02:08:29.092449   21421 kubeadm.go:156] found existing configuration files:
	
	I0501 02:08:29.092490   21421 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0501 02:08:29.103039   21421 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0501 02:08:29.103096   21421 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0501 02:08:29.113759   21421 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0501 02:08:29.124976   21421 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0501 02:08:29.125033   21421 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0501 02:08:29.135573   21421 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0501 02:08:29.145326   21421 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0501 02:08:29.145394   21421 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0501 02:08:29.155480   21421 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0501 02:08:29.165170   21421 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0501 02:08:29.165231   21421 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0501 02:08:29.176312   21421 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0501 02:08:29.231689   21421 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0501 02:08:29.231827   21421 kubeadm.go:309] [preflight] Running pre-flight checks
	I0501 02:08:29.375556   21421 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0501 02:08:29.375656   21421 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0501 02:08:29.375746   21421 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0501 02:08:29.632538   21421 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0501 02:08:29.831558   21421 out.go:204]   - Generating certificates and keys ...
	I0501 02:08:29.831679   21421 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0501 02:08:29.831744   21421 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0501 02:08:29.831840   21421 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0501 02:08:30.124526   21421 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0501 02:08:30.263886   21421 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0501 02:08:30.440710   21421 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0501 02:08:30.593916   21421 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0501 02:08:30.594214   21421 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-286595 localhost] and IPs [192.168.39.173 127.0.0.1 ::1]
	I0501 02:08:30.816866   21421 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0501 02:08:30.817065   21421 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-286595 localhost] and IPs [192.168.39.173 127.0.0.1 ::1]
	I0501 02:08:30.980470   21421 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0501 02:08:31.061720   21421 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0501 02:08:31.231023   21421 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0501 02:08:31.231276   21421 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0501 02:08:31.382794   21421 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0501 02:08:31.495288   21421 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0501 02:08:31.695041   21421 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0501 02:08:31.771181   21421 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0501 02:08:32.089643   21421 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0501 02:08:32.090114   21421 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0501 02:08:32.092550   21421 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0501 02:08:32.094418   21421 out.go:204]   - Booting up control plane ...
	I0501 02:08:32.094530   21421 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0501 02:08:32.094623   21421 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0501 02:08:32.094727   21421 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0501 02:08:32.110779   21421 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0501 02:08:32.114283   21421 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0501 02:08:32.114325   21421 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0501 02:08:32.265813   21421 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0501 02:08:32.265914   21421 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0501 02:08:32.767536   21421 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.777678ms
	I0501 02:08:32.767646   21421 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0501 02:08:37.768609   21421 kubeadm.go:309] [api-check] The API server is healthy after 5.002253197s
	I0501 02:08:37.784310   21421 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0501 02:08:37.801495   21421 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0501 02:08:37.831816   21421 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0501 02:08:37.832011   21421 kubeadm.go:309] [mark-control-plane] Marking the node addons-286595 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0501 02:08:37.846923   21421 kubeadm.go:309] [bootstrap-token] Using token: 7px8y6.mfs7lhrgb9xogpi0
	I0501 02:08:37.848423   21421 out.go:204]   - Configuring RBAC rules ...
	I0501 02:08:37.848544   21421 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0501 02:08:37.860832   21421 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0501 02:08:37.868527   21421 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0501 02:08:37.872634   21421 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0501 02:08:37.876166   21421 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0501 02:08:37.879538   21421 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0501 02:08:38.177708   21421 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0501 02:08:38.626673   21421 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0501 02:08:39.176479   21421 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0501 02:08:39.176502   21421 kubeadm.go:309] 
	I0501 02:08:39.176578   21421 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0501 02:08:39.176587   21421 kubeadm.go:309] 
	I0501 02:08:39.176662   21421 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0501 02:08:39.176670   21421 kubeadm.go:309] 
	I0501 02:08:39.176711   21421 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0501 02:08:39.176797   21421 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0501 02:08:39.176873   21421 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0501 02:08:39.176886   21421 kubeadm.go:309] 
	I0501 02:08:39.176937   21421 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0501 02:08:39.176946   21421 kubeadm.go:309] 
	I0501 02:08:39.176997   21421 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0501 02:08:39.177004   21421 kubeadm.go:309] 
	I0501 02:08:39.177072   21421 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0501 02:08:39.177168   21421 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0501 02:08:39.177266   21421 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0501 02:08:39.177295   21421 kubeadm.go:309] 
	I0501 02:08:39.177421   21421 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0501 02:08:39.177530   21421 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0501 02:08:39.177540   21421 kubeadm.go:309] 
	I0501 02:08:39.177643   21421 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 7px8y6.mfs7lhrgb9xogpi0 \
	I0501 02:08:39.177775   21421 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:bd94cc6acfb95fe36f70b12c6a1f2980601b8235e7c4dea65cebdf35eb514754 \
	I0501 02:08:39.177815   21421 kubeadm.go:309] 	--control-plane 
	I0501 02:08:39.177824   21421 kubeadm.go:309] 
	I0501 02:08:39.177929   21421 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0501 02:08:39.177942   21421 kubeadm.go:309] 
	I0501 02:08:39.178048   21421 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 7px8y6.mfs7lhrgb9xogpi0 \
	I0501 02:08:39.178191   21421 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:bd94cc6acfb95fe36f70b12c6a1f2980601b8235e7c4dea65cebdf35eb514754 
	I0501 02:08:39.178346   21421 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
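
The --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 digest of the DER-encoded Subject Public Key Info of the cluster CA certificate. A short standard-library sketch that recomputes it from a ca.crt file (the on-node location is noted only as a comment):

// token_ca_hash_sketch.go - recompute kubeadm's sha256:<hex> CA public key hash.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("ca.crt") // e.g. /var/lib/minikube/certs/ca.crt on the node
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// DER-encode the Subject Public Key Info and hash it.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
}

Its output should match the sha256:bd94cc6a... value in the join command above when run against this cluster's CA certificate.
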
	I0501 02:08:39.178363   21421 cni.go:84] Creating CNI manager for ""
	I0501 02:08:39.178372   21421 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0501 02:08:39.180320   21421 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0501 02:08:39.181537   21421 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0501 02:08:39.194728   21421 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0501 02:08:39.216034   21421 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0501 02:08:39.216121   21421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:08:39.216127   21421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-286595 minikube.k8s.io/updated_at=2024_05_01T02_08_39_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e minikube.k8s.io/name=addons-286595 minikube.k8s.io/primary=true
	I0501 02:08:39.249433   21421 ops.go:34] apiserver oom_adj: -16
	I0501 02:08:39.366801   21421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:08:39.867705   21421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:08:40.367573   21421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:08:40.867598   21421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:08:41.367658   21421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:08:41.867114   21421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:08:42.367729   21421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:08:42.866964   21421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:08:43.366888   21421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:08:43.867647   21421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:08:44.366934   21421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:08:44.866907   21421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:08:45.367710   21421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:08:45.867637   21421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:08:46.367271   21421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:08:46.867261   21421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:08:47.366939   21421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:08:47.867905   21421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:08:48.367168   21421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:08:48.866957   21421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:08:49.367396   21421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:08:49.867589   21421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:08:50.366996   21421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:08:50.867510   21421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:08:51.366907   21421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:08:51.867645   21421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:08:52.367085   21421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:08:52.950168   21421 kubeadm.go:1107] duration metric: took 13.734115773s to wait for elevateKubeSystemPrivileges
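
The burst of `kubectl get sa default` commands above is a roughly 500 ms polling loop: after creating the minikube-rbac clusterrolebinding, minikube keeps checking until the default service account exists (the elevateKubeSystemPrivileges wait took about 13.7 s here). A rough sketch of the same wait pattern, assuming kubectl is on PATH; this is plain os/exec, not minikube's own retry helpers.

// poll_sa_sketch.go - poll until `kubectl get sa default` succeeds or a deadline passes.
package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		if err := exec.Command("kubectl", "get", "sa", "default").Run(); err == nil {
			log.Println("default service account exists")
			return
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500 ms cadence in the log
	}
	log.Fatal("timed out waiting for default service account")
}
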
	W0501 02:08:52.950209   21421 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0501 02:08:52.950219   21421 kubeadm.go:393] duration metric: took 23.936442112s to StartCluster
	I0501 02:08:52.950248   21421 settings.go:142] acquiring lock: {Name:mkcfe08bcadd45d99c00ed1eaf4e9c226a489fe2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:08:52.950388   21421 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18779-13391/kubeconfig
	I0501 02:08:52.950761   21421 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/kubeconfig: {Name:mk5d1131557f885800877622151c0915337cb23a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:08:52.950959   21421 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0501 02:08:52.950987   21421 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.173 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0501 02:08:52.952716   21421 out.go:177] * Verifying Kubernetes components...
	I0501 02:08:52.951055   21421 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0501 02:08:52.952764   21421 addons.go:69] Setting cloud-spanner=true in profile "addons-286595"
	I0501 02:08:52.954426   21421 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:08:52.954449   21421 addons.go:234] Setting addon cloud-spanner=true in "addons-286595"
	I0501 02:08:52.951208   21421 config.go:182] Loaded profile config "addons-286595": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 02:08:52.952782   21421 addons.go:69] Setting inspektor-gadget=true in profile "addons-286595"
	I0501 02:08:52.954606   21421 addons.go:234] Setting addon inspektor-gadget=true in "addons-286595"
	I0501 02:08:52.954649   21421 host.go:66] Checking if "addons-286595" exists ...
	I0501 02:08:52.952780   21421 addons.go:69] Setting yakd=true in profile "addons-286595"
	I0501 02:08:52.954687   21421 addons.go:234] Setting addon yakd=true in "addons-286595"
	I0501 02:08:52.954711   21421 host.go:66] Checking if "addons-286595" exists ...
	I0501 02:08:52.952791   21421 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-286595"
	I0501 02:08:52.954806   21421 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-286595"
	I0501 02:08:52.954846   21421 host.go:66] Checking if "addons-286595" exists ...
	I0501 02:08:52.952793   21421 addons.go:69] Setting metrics-server=true in profile "addons-286595"
	I0501 02:08:52.952801   21421 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-286595"
	I0501 02:08:52.952801   21421 addons.go:69] Setting default-storageclass=true in profile "addons-286595"
	I0501 02:08:52.952808   21421 addons.go:69] Setting volumesnapshots=true in profile "addons-286595"
	I0501 02:08:52.952810   21421 addons.go:69] Setting gcp-auth=true in profile "addons-286595"
	I0501 02:08:52.952817   21421 addons.go:69] Setting helm-tiller=true in profile "addons-286595"
	I0501 02:08:52.952809   21421 addons.go:69] Setting storage-provisioner=true in profile "addons-286595"
	I0501 02:08:52.952824   21421 addons.go:69] Setting ingress=true in profile "addons-286595"
	I0501 02:08:52.952829   21421 addons.go:69] Setting ingress-dns=true in profile "addons-286595"
	I0501 02:08:52.952848   21421 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-286595"
	I0501 02:08:52.952873   21421 addons.go:69] Setting registry=true in profile "addons-286595"
	I0501 02:08:52.954495   21421 host.go:66] Checking if "addons-286595" exists ...
	I0501 02:08:52.954906   21421 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-286595"
	I0501 02:08:52.954915   21421 addons.go:234] Setting addon volumesnapshots=true in "addons-286595"
	I0501 02:08:52.954940   21421 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-286595"
	I0501 02:08:52.954949   21421 host.go:66] Checking if "addons-286595" exists ...
	I0501 02:08:52.955104   21421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:08:52.955112   21421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:08:52.955128   21421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:08:52.955132   21421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:08:52.955261   21421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:08:52.955291   21421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:08:52.955296   21421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:08:52.955300   21421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:08:52.955313   21421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:08:52.955319   21421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:08:52.955340   21421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:08:52.955363   21421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:08:52.955375   21421 mustload.go:65] Loading cluster: addons-286595
	I0501 02:08:52.955382   21421 addons.go:234] Setting addon metrics-server=true in "addons-286595"
	I0501 02:08:52.955400   21421 addons.go:234] Setting addon helm-tiller=true in "addons-286595"
	I0501 02:08:52.955401   21421 addons.go:234] Setting addon ingress-dns=true in "addons-286595"
	I0501 02:08:52.955418   21421 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-286595"
	I0501 02:08:52.955419   21421 addons.go:234] Setting addon storage-provisioner=true in "addons-286595"
	I0501 02:08:52.955435   21421 addons.go:234] Setting addon registry=true in "addons-286595"
	I0501 02:08:52.955439   21421 addons.go:234] Setting addon ingress=true in "addons-286595"
	I0501 02:08:52.955543   21421 host.go:66] Checking if "addons-286595" exists ...
	I0501 02:08:52.955595   21421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:08:52.955907   21421 host.go:66] Checking if "addons-286595" exists ...
	I0501 02:08:52.955907   21421 host.go:66] Checking if "addons-286595" exists ...
	I0501 02:08:52.955826   21421 config.go:182] Loaded profile config "addons-286595": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 02:08:52.956331   21421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:08:52.956365   21421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:08:52.956393   21421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:08:52.955851   21421 host.go:66] Checking if "addons-286595" exists ...
	I0501 02:08:52.956468   21421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:08:52.956484   21421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:08:52.956514   21421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:08:52.955866   21421 host.go:66] Checking if "addons-286595" exists ...
	I0501 02:08:52.956905   21421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:08:52.956924   21421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:08:52.955875   21421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:08:52.957167   21421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:08:52.955885   21421 host.go:66] Checking if "addons-286595" exists ...
	I0501 02:08:52.959060   21421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:08:52.959085   21421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:08:52.955651   21421 host.go:66] Checking if "addons-286595" exists ...
	I0501 02:08:52.955921   21421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:08:52.976530   21421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46043
	I0501 02:08:52.976624   21421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46045
	I0501 02:08:52.977157   21421 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:08:52.977673   21421 main.go:141] libmachine: Using API Version  1
	I0501 02:08:52.977694   21421 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:08:52.977704   21421 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:08:52.978031   21421 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:08:52.978170   21421 main.go:141] libmachine: Using API Version  1
	I0501 02:08:52.978195   21421 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:08:52.978807   21421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:08:52.978850   21421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:08:52.979041   21421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32787
	I0501 02:08:52.979061   21421 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:08:52.979462   21421 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:08:52.979738   21421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:08:52.979770   21421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:08:52.979948   21421 main.go:141] libmachine: Using API Version  1
	I0501 02:08:52.979971   21421 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:08:52.980275   21421 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:08:52.980456   21421 main.go:141] libmachine: (addons-286595) Calling .GetState
	I0501 02:08:52.982292   21421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38547
	I0501 02:08:52.982687   21421 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:08:52.983189   21421 main.go:141] libmachine: Using API Version  1
	I0501 02:08:52.983210   21421 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:08:52.983586   21421 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:08:52.984139   21421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:08:52.984163   21421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:08:52.984845   21421 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-286595"
	I0501 02:08:52.984892   21421 host.go:66] Checking if "addons-286595" exists ...
	I0501 02:08:52.985244   21421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:08:52.985270   21421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:08:52.986838   21421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:08:52.986878   21421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:08:52.986923   21421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:08:52.986958   21421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:08:52.999537   21421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44279
	I0501 02:08:53.000238   21421 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:08:53.000810   21421 main.go:141] libmachine: Using API Version  1
	I0501 02:08:53.000828   21421 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:08:53.001203   21421 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:08:53.001409   21421 main.go:141] libmachine: (addons-286595) Calling .GetState
	I0501 02:08:53.002043   21421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37591
	I0501 02:08:53.002411   21421 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:08:53.003276   21421 main.go:141] libmachine: Using API Version  1
	I0501 02:08:53.003300   21421 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:08:53.003626   21421 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:08:53.003718   21421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37107
	I0501 02:08:53.004908   21421 addons.go:234] Setting addon default-storageclass=true in "addons-286595"
	I0501 02:08:53.004956   21421 host.go:66] Checking if "addons-286595" exists ...
	I0501 02:08:53.005318   21421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:08:53.005362   21421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:08:53.005553   21421 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:08:53.005783   21421 main.go:141] libmachine: (addons-286595) Calling .GetState
	I0501 02:08:53.006671   21421 main.go:141] libmachine: Using API Version  1
	I0501 02:08:53.006693   21421 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:08:53.007072   21421 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:08:53.007295   21421 main.go:141] libmachine: (addons-286595) Calling .GetState
	I0501 02:08:53.008023   21421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43813
	I0501 02:08:53.008420   21421 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:08:53.008630   21421 main.go:141] libmachine: (addons-286595) Calling .DriverName
	I0501 02:08:53.008914   21421 main.go:141] libmachine: Using API Version  1
	I0501 02:08:53.008936   21421 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:08:53.010849   21421 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0501 02:08:53.009292   21421 main.go:141] libmachine: (addons-286595) Calling .DriverName
	I0501 02:08:53.009461   21421 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:08:53.012148   21421 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0501 02:08:53.012163   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0501 02:08:53.012181   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHHostname
	I0501 02:08:53.012311   21421 main.go:141] libmachine: (addons-286595) Calling .GetState
	I0501 02:08:53.016169   21421 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.27.0
	I0501 02:08:53.014837   21421 host.go:66] Checking if "addons-286595" exists ...
	I0501 02:08:53.015944   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:53.016508   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHPort
	I0501 02:08:53.017514   21421 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0501 02:08:53.017528   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0501 02:08:53.017549   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHHostname
	I0501 02:08:53.017603   21421 main.go:141] libmachine: (addons-286595) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:55:7e", ip: ""} in network mk-addons-286595: {Iface:virbr1 ExpiryTime:2024-05-01 03:08:11 +0000 UTC Type:0 Mac:52:54:00:74:55:7e Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:addons-286595 Clientid:01:52:54:00:74:55:7e}
	I0501 02:08:53.017637   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined IP address 192.168.39.173 and MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:53.017787   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHKeyPath
	I0501 02:08:53.017855   21421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33571
	I0501 02:08:53.017940   21421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:08:53.017976   21421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:08:53.018193   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHUsername
	I0501 02:08:53.018193   21421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44443
	I0501 02:08:53.018443   21421 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/addons-286595/id_rsa Username:docker}
	I0501 02:08:53.018931   21421 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:08:53.019000   21421 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:08:53.019487   21421 main.go:141] libmachine: Using API Version  1
	I0501 02:08:53.019504   21421 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:08:53.019726   21421 main.go:141] libmachine: Using API Version  1
	I0501 02:08:53.019741   21421 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:08:53.020029   21421 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:08:53.020648   21421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:08:53.020673   21421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:08:53.021148   21421 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:08:53.021408   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:53.021852   21421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:08:53.021894   21421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:08:53.022480   21421 main.go:141] libmachine: (addons-286595) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:55:7e", ip: ""} in network mk-addons-286595: {Iface:virbr1 ExpiryTime:2024-05-01 03:08:11 +0000 UTC Type:0 Mac:52:54:00:74:55:7e Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:addons-286595 Clientid:01:52:54:00:74:55:7e}
	I0501 02:08:53.022802   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined IP address 192.168.39.173 and MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:53.022993   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHPort
	I0501 02:08:53.023181   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHKeyPath
	I0501 02:08:53.023372   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHUsername
	I0501 02:08:53.023518   21421 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/addons-286595/id_rsa Username:docker}
	I0501 02:08:53.024019   21421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46597
	I0501 02:08:53.024418   21421 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:08:53.024941   21421 main.go:141] libmachine: Using API Version  1
	I0501 02:08:53.024958   21421 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:08:53.025306   21421 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:08:53.026060   21421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:08:53.026107   21421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:08:53.040585   21421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37537
	I0501 02:08:53.041240   21421 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:08:53.041741   21421 main.go:141] libmachine: Using API Version  1
	I0501 02:08:53.041758   21421 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:08:53.042075   21421 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:08:53.042632   21421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:08:53.042656   21421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:08:53.042855   21421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43447
	I0501 02:08:53.043295   21421 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:08:53.043815   21421 main.go:141] libmachine: Using API Version  1
	I0501 02:08:53.043834   21421 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:08:53.044041   21421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36633
	I0501 02:08:53.044257   21421 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:08:53.044459   21421 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:08:53.044880   21421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:08:53.044912   21421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:08:53.045117   21421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39211
	I0501 02:08:53.045126   21421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34383
	I0501 02:08:53.045649   21421 main.go:141] libmachine: Using API Version  1
	I0501 02:08:53.045665   21421 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:08:53.045725   21421 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:08:53.045792   21421 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:08:53.046133   21421 main.go:141] libmachine: Using API Version  1
	I0501 02:08:53.046149   21421 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:08:53.046265   21421 main.go:141] libmachine: Using API Version  1
	I0501 02:08:53.046274   21421 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:08:53.046624   21421 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:08:53.047109   21421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:08:53.047143   21421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:08:53.047333   21421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33389
	I0501 02:08:53.047346   21421 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:08:53.047607   21421 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:08:53.047835   21421 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:08:53.048072   21421 main.go:141] libmachine: (addons-286595) Calling .GetState
	I0501 02:08:53.048545   21421 main.go:141] libmachine: Using API Version  1
	I0501 02:08:53.048562   21421 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:08:53.048880   21421 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:08:53.049067   21421 main.go:141] libmachine: (addons-286595) Calling .DriverName
	I0501 02:08:53.050573   21421 main.go:141] libmachine: (addons-286595) Calling .DriverName
	I0501 02:08:53.050638   21421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35749
	I0501 02:08:53.050947   21421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:08:53.050970   21421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:08:53.052844   21421 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0501 02:08:53.051743   21421 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:08:53.054009   21421 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0501 02:08:53.054034   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0501 02:08:53.054058   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHHostname
	I0501 02:08:53.054511   21421 main.go:141] libmachine: Using API Version  1
	I0501 02:08:53.054536   21421 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:08:53.054913   21421 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:08:53.055379   21421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:08:53.055414   21421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:08:53.057397   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:53.057793   21421 main.go:141] libmachine: (addons-286595) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:55:7e", ip: ""} in network mk-addons-286595: {Iface:virbr1 ExpiryTime:2024-05-01 03:08:11 +0000 UTC Type:0 Mac:52:54:00:74:55:7e Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:addons-286595 Clientid:01:52:54:00:74:55:7e}
	I0501 02:08:53.057813   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined IP address 192.168.39.173 and MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:53.057967   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHPort
	I0501 02:08:53.058115   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHKeyPath
	I0501 02:08:53.058255   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHUsername
	I0501 02:08:53.058365   21421 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/addons-286595/id_rsa Username:docker}
	I0501 02:08:53.059864   21421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35727
	I0501 02:08:53.060173   21421 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:08:53.060568   21421 main.go:141] libmachine: Using API Version  1
	I0501 02:08:53.060580   21421 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:08:53.060863   21421 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:08:53.061329   21421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:08:53.061358   21421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:08:53.065868   21421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39679
	I0501 02:08:53.066186   21421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43523
	I0501 02:08:53.066410   21421 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:08:53.067271   21421 main.go:141] libmachine: Using API Version  1
	I0501 02:08:53.067289   21421 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:08:53.067687   21421 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:08:53.067886   21421 main.go:141] libmachine: (addons-286595) Calling .GetState
	I0501 02:08:53.068877   21421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38351
	I0501 02:08:53.069175   21421 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:08:53.069308   21421 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:08:53.069726   21421 main.go:141] libmachine: Using API Version  1
	I0501 02:08:53.069748   21421 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:08:53.070136   21421 main.go:141] libmachine: Using API Version  1
	I0501 02:08:53.070157   21421 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:08:53.070853   21421 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:08:53.071021   21421 main.go:141] libmachine: (addons-286595) Calling .DriverName
	I0501 02:08:53.072922   21421 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0501 02:08:53.072950   21421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40605
	I0501 02:08:53.074350   21421 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0501 02:08:53.074365   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0501 02:08:53.074383   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHHostname
	I0501 02:08:53.072848   21421 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:08:53.071561   21421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:08:53.074504   21421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:08:53.074613   21421 main.go:141] libmachine: (addons-286595) Calling .GetState
	I0501 02:08:53.074935   21421 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:08:53.075681   21421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37751
	I0501 02:08:53.076515   21421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42399
	I0501 02:08:53.076531   21421 main.go:141] libmachine: (addons-286595) Calling .DriverName
	I0501 02:08:53.078095   21421 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0501 02:08:53.079211   21421 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0501 02:08:53.079232   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0501 02:08:53.079245   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHHostname
	I0501 02:08:53.077249   21421 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:08:53.079211   21421 main.go:141] libmachine: Using API Version  1
	I0501 02:08:53.079319   21421 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:08:53.077413   21421 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:08:53.078185   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:53.079430   21421 main.go:141] libmachine: (addons-286595) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:55:7e", ip: ""} in network mk-addons-286595: {Iface:virbr1 ExpiryTime:2024-05-01 03:08:11 +0000 UTC Type:0 Mac:52:54:00:74:55:7e Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:addons-286595 Clientid:01:52:54:00:74:55:7e}
	I0501 02:08:53.079456   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined IP address 192.168.39.173 and MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:53.078872   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHPort
	I0501 02:08:53.079878   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHKeyPath
	I0501 02:08:53.079878   21421 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:08:53.080093   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHUsername
	I0501 02:08:53.080298   21421 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/addons-286595/id_rsa Username:docker}
	I0501 02:08:53.080714   21421 main.go:141] libmachine: Using API Version  1
	I0501 02:08:53.080741   21421 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:08:53.080892   21421 main.go:141] libmachine: Using API Version  1
	I0501 02:08:53.080905   21421 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:08:53.081187   21421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:08:53.081217   21421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:08:53.081418   21421 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:08:53.081632   21421 main.go:141] libmachine: (addons-286595) Calling .GetState
	I0501 02:08:53.082390   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:53.083515   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHPort
	I0501 02:08:53.083565   21421 main.go:141] libmachine: (addons-286595) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:55:7e", ip: ""} in network mk-addons-286595: {Iface:virbr1 ExpiryTime:2024-05-01 03:08:11 +0000 UTC Type:0 Mac:52:54:00:74:55:7e Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:addons-286595 Clientid:01:52:54:00:74:55:7e}
	I0501 02:08:53.083584   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined IP address 192.168.39.173 and MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:53.083617   21421 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:08:53.083755   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHKeyPath
	I0501 02:08:53.084035   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHUsername
	I0501 02:08:53.084158   21421 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/addons-286595/id_rsa Username:docker}
	I0501 02:08:53.084440   21421 main.go:141] libmachine: (addons-286595) Calling .DriverName
	I0501 02:08:53.084577   21421 main.go:141] libmachine: (addons-286595) Calling .GetState
	I0501 02:08:53.086064   21421 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.16
	I0501 02:08:53.084795   21421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42675
	I0501 02:08:53.086447   21421 main.go:141] libmachine: (addons-286595) Calling .DriverName
	I0501 02:08:53.087326   21421 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0501 02:08:53.087334   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0501 02:08:53.087345   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHHostname
	I0501 02:08:53.087748   21421 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:08:53.089339   21421 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 02:08:53.090598   21421 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0501 02:08:53.090617   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0501 02:08:53.090634   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHHostname
	I0501 02:08:53.088434   21421 main.go:141] libmachine: Using API Version  1
	I0501 02:08:53.089584   21421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32885
	I0501 02:08:53.090703   21421 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:08:53.090331   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:53.090985   21421 main.go:141] libmachine: (addons-286595) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:55:7e", ip: ""} in network mk-addons-286595: {Iface:virbr1 ExpiryTime:2024-05-01 03:08:11 +0000 UTC Type:0 Mac:52:54:00:74:55:7e Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:addons-286595 Clientid:01:52:54:00:74:55:7e}
	I0501 02:08:53.091019   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined IP address 192.168.39.173 and MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:53.091047   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHPort
	I0501 02:08:53.091254   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHKeyPath
	I0501 02:08:53.091428   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHUsername
	I0501 02:08:53.091804   21421 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/addons-286595/id_rsa Username:docker}
	I0501 02:08:53.092199   21421 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:08:53.092268   21421 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:08:53.092999   21421 main.go:141] libmachine: Using API Version  1
	I0501 02:08:53.093019   21421 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:08:53.093382   21421 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:08:53.093533   21421 main.go:141] libmachine: (addons-286595) Calling .GetState
	I0501 02:08:53.094190   21421 main.go:141] libmachine: (addons-286595) Calling .GetState
	I0501 02:08:53.094258   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:53.095041   21421 main.go:141] libmachine: (addons-286595) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:55:7e", ip: ""} in network mk-addons-286595: {Iface:virbr1 ExpiryTime:2024-05-01 03:08:11 +0000 UTC Type:0 Mac:52:54:00:74:55:7e Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:addons-286595 Clientid:01:52:54:00:74:55:7e}
	I0501 02:08:53.095082   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined IP address 192.168.39.173 and MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:53.095272   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHPort
	I0501 02:08:53.095332   21421 main.go:141] libmachine: (addons-286595) Calling .DriverName
	I0501 02:08:53.095602   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHKeyPath
	I0501 02:08:53.097068   21421 out.go:177]   - Using image docker.io/busybox:stable
	I0501 02:08:53.096505   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHUsername
	I0501 02:08:53.096615   21421 main.go:141] libmachine: (addons-286595) Calling .DriverName
	I0501 02:08:53.098160   21421 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0501 02:08:53.098455   21421 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/addons-286595/id_rsa Username:docker}
	I0501 02:08:53.099682   21421 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0501 02:08:53.099696   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0501 02:08:53.099715   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHHostname
	I0501 02:08:53.100903   21421 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.15.0
	I0501 02:08:53.100366   21421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45035
	I0501 02:08:53.102133   21421 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0501 02:08:53.102145   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0501 02:08:53.102160   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHHostname
	I0501 02:08:53.102754   21421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43689
	I0501 02:08:53.103197   21421 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:08:53.103366   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:53.103419   21421 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:08:53.104385   21421 main.go:141] libmachine: Using API Version  1
	I0501 02:08:53.104401   21421 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:08:53.104523   21421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41789
	I0501 02:08:53.104990   21421 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:08:53.105080   21421 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:08:53.105318   21421 main.go:141] libmachine: (addons-286595) Calling .GetState
	I0501 02:08:53.105727   21421 main.go:141] libmachine: (addons-286595) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:55:7e", ip: ""} in network mk-addons-286595: {Iface:virbr1 ExpiryTime:2024-05-01 03:08:11 +0000 UTC Type:0 Mac:52:54:00:74:55:7e Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:addons-286595 Clientid:01:52:54:00:74:55:7e}
	I0501 02:08:53.105762   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined IP address 192.168.39.173 and MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:53.105912   21421 main.go:141] libmachine: Using API Version  1
	I0501 02:08:53.105934   21421 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:08:53.106604   21421 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:08:53.107098   21421 main.go:141] libmachine: Using API Version  1
	I0501 02:08:53.107117   21421 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:08:53.107198   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHPort
	I0501 02:08:53.107252   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:53.107312   21421 main.go:141] libmachine: (addons-286595) Calling .GetState
	I0501 02:08:53.107355   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHKeyPath
	I0501 02:08:53.107453   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHUsername
	I0501 02:08:53.107484   21421 main.go:141] libmachine: (addons-286595) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:55:7e", ip: ""} in network mk-addons-286595: {Iface:virbr1 ExpiryTime:2024-05-01 03:08:11 +0000 UTC Type:0 Mac:52:54:00:74:55:7e Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:addons-286595 Clientid:01:52:54:00:74:55:7e}
	I0501 02:08:53.107496   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined IP address 192.168.39.173 and MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:53.107565   21421 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/addons-286595/id_rsa Username:docker}
	I0501 02:08:53.107889   21421 main.go:141] libmachine: (addons-286595) Calling .DriverName
	I0501 02:08:53.107981   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHPort
	I0501 02:08:53.108210   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHKeyPath
	I0501 02:08:53.108284   21421 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0501 02:08:53.108293   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0501 02:08:53.108296   21421 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:08:53.108303   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHHostname
	I0501 02:08:53.108423   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHUsername
	I0501 02:08:53.108853   21421 main.go:141] libmachine: (addons-286595) Calling .GetState
	I0501 02:08:53.109109   21421 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/addons-286595/id_rsa Username:docker}
	I0501 02:08:53.109226   21421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46267
	I0501 02:08:53.109338   21421 main.go:141] libmachine: (addons-286595) Calling .DriverName
	I0501 02:08:53.110861   21421 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.1
	I0501 02:08:53.109877   21421 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:08:53.110992   21421 main.go:141] libmachine: (addons-286595) Calling .DriverName
	I0501 02:08:53.111932   21421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40941
	I0501 02:08:53.111953   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:53.113332   21421 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0501 02:08:53.112565   21421 main.go:141] libmachine: (addons-286595) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:55:7e", ip: ""} in network mk-addons-286595: {Iface:virbr1 ExpiryTime:2024-05-01 03:08:11 +0000 UTC Type:0 Mac:52:54:00:74:55:7e Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:addons-286595 Clientid:01:52:54:00:74:55:7e}
	I0501 02:08:53.112614   21421 main.go:141] libmachine: Using API Version  1
	I0501 02:08:53.112632   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHPort
	I0501 02:08:53.112861   21421 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:08:53.114538   21421 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0501 02:08:53.114554   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined IP address 192.168.39.173 and MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:53.115688   21421 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0501 02:08:53.116859   21421 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0501 02:08:53.116874   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0501 02:08:53.116885   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHHostname
	I0501 02:08:53.115705   21421 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:08:53.115882   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHKeyPath
	I0501 02:08:53.116112   21421 main.go:141] libmachine: Using API Version  1
	I0501 02:08:53.118023   21421 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:08:53.117206   21421 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:08:53.118135   21421 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0501 02:08:53.118153   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0501 02:08:53.118173   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHHostname
	I0501 02:08:53.118295   21421 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:08:53.118310   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHUsername
	I0501 02:08:53.118469   21421 main.go:141] libmachine: (addons-286595) Calling .GetState
	I0501 02:08:53.118706   21421 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/addons-286595/id_rsa Username:docker}
	I0501 02:08:53.118912   21421 main.go:141] libmachine: (addons-286595) Calling .GetState
	I0501 02:08:53.119891   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:53.120448   21421 main.go:141] libmachine: (addons-286595) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:55:7e", ip: ""} in network mk-addons-286595: {Iface:virbr1 ExpiryTime:2024-05-01 03:08:11 +0000 UTC Type:0 Mac:52:54:00:74:55:7e Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:addons-286595 Clientid:01:52:54:00:74:55:7e}
	I0501 02:08:53.120483   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined IP address 192.168.39.173 and MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:53.120692   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHPort
	I0501 02:08:53.120848   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHKeyPath
	I0501 02:08:53.121004   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHUsername
	I0501 02:08:53.121155   21421 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/addons-286595/id_rsa Username:docker}
	I0501 02:08:53.121163   21421 main.go:141] libmachine: (addons-286595) Calling .DriverName
	I0501 02:08:53.121424   21421 main.go:141] libmachine: (addons-286595) Calling .DriverName
	I0501 02:08:53.122658   21421 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0501 02:08:53.122366   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:53.122953   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHPort
	I0501 02:08:53.123778   21421 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0501 02:08:53.123819   21421 main.go:141] libmachine: (addons-286595) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:55:7e", ip: ""} in network mk-addons-286595: {Iface:virbr1 ExpiryTime:2024-05-01 03:08:11 +0000 UTC Type:0 Mac:52:54:00:74:55:7e Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:addons-286595 Clientid:01:52:54:00:74:55:7e}
	I0501 02:08:53.123937   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHKeyPath
	I0501 02:08:53.125076   21421 out.go:177]   - Using image docker.io/registry:2.8.3
	I0501 02:08:53.126440   21421 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0501 02:08:53.126458   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0501 02:08:53.126474   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHHostname
	I0501 02:08:53.125146   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined IP address 192.168.39.173 and MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:53.125133   21421 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0501 02:08:53.125279   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHUsername
	I0501 02:08:53.129000   21421 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0501 02:08:53.127965   21421 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/addons-286595/id_rsa Username:docker}
	I0501 02:08:53.128940   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:53.129447   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHPort
	I0501 02:08:53.130124   21421 main.go:141] libmachine: (addons-286595) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:55:7e", ip: ""} in network mk-addons-286595: {Iface:virbr1 ExpiryTime:2024-05-01 03:08:11 +0000 UTC Type:0 Mac:52:54:00:74:55:7e Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:addons-286595 Clientid:01:52:54:00:74:55:7e}
	I0501 02:08:53.130141   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined IP address 192.168.39.173 and MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:53.131239   21421 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0501 02:08:53.132405   21421 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0501 02:08:53.130280   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHKeyPath
	I0501 02:08:53.134568   21421 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0501 02:08:53.133679   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHUsername
	I0501 02:08:53.135673   21421 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0501 02:08:53.136848   21421 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0501 02:08:53.135846   21421 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/addons-286595/id_rsa Username:docker}
	I0501 02:08:53.138036   21421 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0501 02:08:53.138048   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0501 02:08:53.138058   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHHostname
	I0501 02:08:53.140593   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:53.141472   21421 main.go:141] libmachine: (addons-286595) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:55:7e", ip: ""} in network mk-addons-286595: {Iface:virbr1 ExpiryTime:2024-05-01 03:08:11 +0000 UTC Type:0 Mac:52:54:00:74:55:7e Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:addons-286595 Clientid:01:52:54:00:74:55:7e}
	I0501 02:08:53.141496   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined IP address 192.168.39.173 and MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:08:53.141630   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHPort
	I0501 02:08:53.141803   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHKeyPath
	I0501 02:08:53.141956   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHUsername
	I0501 02:08:53.142067   21421 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/addons-286595/id_rsa Username:docker}
	W0501 02:08:53.142721   21421 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:58574->192.168.39.173:22: read: connection reset by peer
	I0501 02:08:53.142742   21421 retry.go:31] will retry after 198.557266ms: ssh: handshake failed: read tcp 192.168.39.1:58574->192.168.39.173:22: read: connection reset by peer
	I0501 02:08:53.358930   21421 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 02:08:53.358947   21421 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0501 02:08:53.419948   21421 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0501 02:08:53.419969   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0501 02:08:53.427400   21421 node_ready.go:35] waiting up to 6m0s for node "addons-286595" to be "Ready" ...
	I0501 02:08:53.430796   21421 node_ready.go:49] node "addons-286595" has status "Ready":"True"
	I0501 02:08:53.430823   21421 node_ready.go:38] duration metric: took 3.387168ms for node "addons-286595" to be "Ready" ...
	I0501 02:08:53.430834   21421 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 02:08:53.440520   21421 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-rlvmm" in "kube-system" namespace to be "Ready" ...
	I0501 02:08:53.533751   21421 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0501 02:08:53.533761   21421 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0501 02:08:53.533780   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0501 02:08:53.562822   21421 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0501 02:08:53.562844   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0501 02:08:53.562855   21421 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0501 02:08:53.592680   21421 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0501 02:08:53.592702   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0501 02:08:53.614415   21421 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0501 02:08:53.625729   21421 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0501 02:08:53.633610   21421 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0501 02:08:53.657973   21421 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0501 02:08:53.671985   21421 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0501 02:08:53.687836   21421 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0501 02:08:53.687857   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0501 02:08:53.691866   21421 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0501 02:08:53.691883   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0501 02:08:53.713548   21421 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0501 02:08:53.713565   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0501 02:08:53.718699   21421 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0501 02:08:53.718713   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0501 02:08:53.782102   21421 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0501 02:08:53.782125   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0501 02:08:53.872289   21421 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0501 02:08:53.872312   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0501 02:08:53.872899   21421 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0501 02:08:53.872922   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0501 02:08:53.918979   21421 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0501 02:08:53.919011   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0501 02:08:53.934642   21421 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0501 02:08:53.934657   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0501 02:08:53.969760   21421 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0501 02:08:53.969785   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0501 02:08:53.987439   21421 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0501 02:08:53.987461   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0501 02:08:54.018839   21421 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0501 02:08:54.037591   21421 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0501 02:08:54.037617   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0501 02:08:54.103205   21421 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0501 02:08:54.103227   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0501 02:08:54.123025   21421 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0501 02:08:54.123063   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0501 02:08:54.163101   21421 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0501 02:08:54.306026   21421 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0501 02:08:54.306053   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0501 02:08:54.386426   21421 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0501 02:08:54.395755   21421 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0501 02:08:54.395777   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0501 02:08:54.581706   21421 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0501 02:08:54.581726   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0501 02:08:54.621478   21421 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0501 02:08:54.621496   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0501 02:08:54.768813   21421 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0501 02:08:54.791137   21421 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0501 02:08:54.791158   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0501 02:08:54.861158   21421 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0501 02:08:54.861187   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0501 02:08:55.084156   21421 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0501 02:08:55.084181   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0501 02:08:55.140919   21421 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0501 02:08:55.140942   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0501 02:08:55.293710   21421 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0501 02:08:55.293738   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0501 02:08:55.324404   21421 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0501 02:08:55.324425   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0501 02:08:55.362335   21421 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0501 02:08:55.362355   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0501 02:08:55.453799   21421 pod_ready.go:102] pod "coredns-7db6d8ff4d-rlvmm" in "kube-system" namespace has status "Ready":"False"
	I0501 02:08:55.618628   21421 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0501 02:08:55.618649   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0501 02:08:55.634771   21421 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0501 02:08:55.634793   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0501 02:08:55.845672   21421 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0501 02:08:55.984163   21421 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0501 02:08:56.040917   21421 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0501 02:08:56.040956   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0501 02:08:56.145915   21421 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.78693674s)
	I0501 02:08:56.145955   21421 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
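	[Note] The sed pipeline that just completed rewrites the CoreDNS Corefile so that host.minikube.internal resolves to the host-side IP. Reconstructed purely from the command's own arguments (not read back from the cluster), the injected block is:

	        hosts {
	           192.168.39.1 host.minikube.internal
	           fallthrough
	        }

	The fallthrough line lets every other name continue to CoreDNS's normal forwarders.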
	I0501 02:08:56.357841   21421 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0501 02:08:56.357862   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0501 02:08:56.519223   21421 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.985428529s)
	I0501 02:08:56.519243   21421 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.95636659s)
	I0501 02:08:56.519290   21421 main.go:141] libmachine: Making call to close driver server
	I0501 02:08:56.519307   21421 main.go:141] libmachine: (addons-286595) Calling .Close
	I0501 02:08:56.519325   21421 main.go:141] libmachine: Making call to close driver server
	I0501 02:08:56.519341   21421 main.go:141] libmachine: (addons-286595) Calling .Close
	I0501 02:08:56.519599   21421 main.go:141] libmachine: Successfully made call to close driver server
	I0501 02:08:56.519611   21421 main.go:141] libmachine: (addons-286595) DBG | Closing plugin on server side
	I0501 02:08:56.519622   21421 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 02:08:56.519632   21421 main.go:141] libmachine: Making call to close driver server
	I0501 02:08:56.519642   21421 main.go:141] libmachine: Successfully made call to close driver server
	I0501 02:08:56.519658   21421 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 02:08:56.519665   21421 main.go:141] libmachine: Making call to close driver server
	I0501 02:08:56.519676   21421 main.go:141] libmachine: (addons-286595) Calling .Close
	I0501 02:08:56.519646   21421 main.go:141] libmachine: (addons-286595) Calling .Close
	I0501 02:08:56.519633   21421 main.go:141] libmachine: (addons-286595) DBG | Closing plugin on server side
	I0501 02:08:56.519963   21421 main.go:141] libmachine: (addons-286595) DBG | Closing plugin on server side
	I0501 02:08:56.519976   21421 main.go:141] libmachine: Successfully made call to close driver server
	I0501 02:08:56.519989   21421 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 02:08:56.519988   21421 main.go:141] libmachine: (addons-286595) DBG | Closing plugin on server side
	I0501 02:08:56.519995   21421 main.go:141] libmachine: Successfully made call to close driver server
	I0501 02:08:56.520003   21421 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 02:08:56.555755   21421 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0501 02:08:56.654133   21421 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-286595" context rescaled to 1 replicas
	I0501 02:08:57.536717   21421 pod_ready.go:102] pod "coredns-7db6d8ff4d-rlvmm" in "kube-system" namespace has status "Ready":"False"
	I0501 02:08:58.891225   21421 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.276755478s)
	I0501 02:08:58.891238   21421 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.265478039s)
	I0501 02:08:58.891280   21421 main.go:141] libmachine: Making call to close driver server
	I0501 02:08:58.891295   21421 main.go:141] libmachine: (addons-286595) Calling .Close
	I0501 02:08:58.891307   21421 main.go:141] libmachine: Making call to close driver server
	I0501 02:08:58.891322   21421 main.go:141] libmachine: (addons-286595) Calling .Close
	I0501 02:08:58.891265   21421 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.257632377s)
	I0501 02:08:58.891379   21421 main.go:141] libmachine: Making call to close driver server
	I0501 02:08:58.891392   21421 main.go:141] libmachine: (addons-286595) Calling .Close
	I0501 02:08:58.891553   21421 main.go:141] libmachine: Successfully made call to close driver server
	I0501 02:08:58.891570   21421 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 02:08:58.891579   21421 main.go:141] libmachine: Making call to close driver server
	I0501 02:08:58.891588   21421 main.go:141] libmachine: (addons-286595) Calling .Close
	I0501 02:08:58.891781   21421 main.go:141] libmachine: (addons-286595) DBG | Closing plugin on server side
	I0501 02:08:58.891800   21421 main.go:141] libmachine: (addons-286595) DBG | Closing plugin on server side
	I0501 02:08:58.891806   21421 main.go:141] libmachine: Successfully made call to close driver server
	I0501 02:08:58.891823   21421 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 02:08:58.891824   21421 main.go:141] libmachine: Successfully made call to close driver server
	I0501 02:08:58.891829   21421 main.go:141] libmachine: (addons-286595) DBG | Closing plugin on server side
	I0501 02:08:58.891834   21421 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 02:08:58.891843   21421 main.go:141] libmachine: Making call to close driver server
	I0501 02:08:58.891851   21421 main.go:141] libmachine: (addons-286595) Calling .Close
	I0501 02:08:58.891857   21421 main.go:141] libmachine: Successfully made call to close driver server
	I0501 02:08:58.891863   21421 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 02:08:58.891872   21421 main.go:141] libmachine: Making call to close driver server
	I0501 02:08:58.891878   21421 main.go:141] libmachine: (addons-286595) Calling .Close
	I0501 02:08:58.892089   21421 main.go:141] libmachine: Successfully made call to close driver server
	I0501 02:08:58.892102   21421 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 02:08:58.892203   21421 main.go:141] libmachine: Successfully made call to close driver server
	I0501 02:08:58.892215   21421 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 02:08:58.892275   21421 main.go:141] libmachine: (addons-286595) DBG | Closing plugin on server side
	I0501 02:08:59.049359   21421 main.go:141] libmachine: Making call to close driver server
	I0501 02:08:59.049376   21421 main.go:141] libmachine: Making call to close driver server
	I0501 02:08:59.049382   21421 main.go:141] libmachine: (addons-286595) Calling .Close
	I0501 02:08:59.049389   21421 main.go:141] libmachine: (addons-286595) Calling .Close
	I0501 02:08:59.049655   21421 main.go:141] libmachine: Successfully made call to close driver server
	I0501 02:08:59.049675   21421 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 02:08:59.049739   21421 main.go:141] libmachine: Successfully made call to close driver server
	I0501 02:08:59.049784   21421 main.go:141] libmachine: Making call to close connection to plugin binary
	W0501 02:08:59.049894   21421 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
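	[Note] The warning above is a routine optimistic-concurrency conflict: the local-path StorageClass was modified between the read and the write that marks it default, so the stale update is rejected and must be retried against the latest version, exactly as the error text suggests. As an illustrative sketch only (minikube does this through its addon callback, not this exact command), the equivalent manual step would be:

	        kubectl patch storageclass local-path \
	          -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'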
	I0501 02:08:59.977632   21421 pod_ready.go:92] pod "coredns-7db6d8ff4d-rlvmm" in "kube-system" namespace has status "Ready":"True"
	I0501 02:08:59.977657   21421 pod_ready.go:81] duration metric: took 6.53710628s for pod "coredns-7db6d8ff4d-rlvmm" in "kube-system" namespace to be "Ready" ...
	I0501 02:08:59.977668   21421 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-s2t68" in "kube-system" namespace to be "Ready" ...
	I0501 02:09:00.014149   21421 pod_ready.go:92] pod "coredns-7db6d8ff4d-s2t68" in "kube-system" namespace has status "Ready":"True"
	I0501 02:09:00.014189   21421 pod_ready.go:81] duration metric: took 36.512612ms for pod "coredns-7db6d8ff4d-s2t68" in "kube-system" namespace to be "Ready" ...
	I0501 02:09:00.014203   21421 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-286595" in "kube-system" namespace to be "Ready" ...
	I0501 02:09:00.051925   21421 pod_ready.go:92] pod "etcd-addons-286595" in "kube-system" namespace has status "Ready":"True"
	I0501 02:09:00.051953   21421 pod_ready.go:81] duration metric: took 37.741297ms for pod "etcd-addons-286595" in "kube-system" namespace to be "Ready" ...
	I0501 02:09:00.051966   21421 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-286595" in "kube-system" namespace to be "Ready" ...
	I0501 02:09:00.078344   21421 pod_ready.go:92] pod "kube-apiserver-addons-286595" in "kube-system" namespace has status "Ready":"True"
	I0501 02:09:00.078374   21421 pod_ready.go:81] duration metric: took 26.399132ms for pod "kube-apiserver-addons-286595" in "kube-system" namespace to be "Ready" ...
	I0501 02:09:00.078387   21421 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-286595" in "kube-system" namespace to be "Ready" ...
	I0501 02:09:00.088111   21421 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0501 02:09:00.088145   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHHostname
	I0501 02:09:00.091564   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:09:00.091990   21421 main.go:141] libmachine: (addons-286595) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:55:7e", ip: ""} in network mk-addons-286595: {Iface:virbr1 ExpiryTime:2024-05-01 03:08:11 +0000 UTC Type:0 Mac:52:54:00:74:55:7e Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:addons-286595 Clientid:01:52:54:00:74:55:7e}
	I0501 02:09:00.092021   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined IP address 192.168.39.173 and MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:09:00.092209   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHPort
	I0501 02:09:00.092405   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHKeyPath
	I0501 02:09:00.092575   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHUsername
	I0501 02:09:00.092713   21421 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/addons-286595/id_rsa Username:docker}
	I0501 02:09:00.094243   21421 pod_ready.go:92] pod "kube-controller-manager-addons-286595" in "kube-system" namespace has status "Ready":"True"
	I0501 02:09:00.094258   21421 pod_ready.go:81] duration metric: took 15.863807ms for pod "kube-controller-manager-addons-286595" in "kube-system" namespace to be "Ready" ...
	I0501 02:09:00.094267   21421 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7dw4g" in "kube-system" namespace to be "Ready" ...
	I0501 02:09:00.347230   21421 pod_ready.go:92] pod "kube-proxy-7dw4g" in "kube-system" namespace has status "Ready":"True"
	I0501 02:09:00.347255   21421 pod_ready.go:81] duration metric: took 252.978049ms for pod "kube-proxy-7dw4g" in "kube-system" namespace to be "Ready" ...
	I0501 02:09:00.347267   21421 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-286595" in "kube-system" namespace to be "Ready" ...
	I0501 02:09:00.749600   21421 pod_ready.go:92] pod "kube-scheduler-addons-286595" in "kube-system" namespace has status "Ready":"True"
	I0501 02:09:00.749630   21421 pod_ready.go:81] duration metric: took 402.354526ms for pod "kube-scheduler-addons-286595" in "kube-system" namespace to be "Ready" ...
	I0501 02:09:00.749640   21421 pod_ready.go:38] duration metric: took 7.318788702s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 02:09:00.749658   21421 api_server.go:52] waiting for apiserver process to appear ...
	I0501 02:09:00.749732   21421 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 02:09:01.021122   21421 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0501 02:09:01.218110   21421 addons.go:234] Setting addon gcp-auth=true in "addons-286595"
	I0501 02:09:01.218166   21421 host.go:66] Checking if "addons-286595" exists ...
	I0501 02:09:01.218591   21421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:09:01.218627   21421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:09:01.233848   21421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39917
	I0501 02:09:01.234328   21421 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:09:01.234771   21421 main.go:141] libmachine: Using API Version  1
	I0501 02:09:01.234787   21421 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:09:01.235074   21421 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:09:01.235711   21421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:09:01.235748   21421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:09:01.251180   21421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39717
	I0501 02:09:01.251616   21421 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:09:01.252112   21421 main.go:141] libmachine: Using API Version  1
	I0501 02:09:01.252143   21421 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:09:01.252445   21421 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:09:01.252603   21421 main.go:141] libmachine: (addons-286595) Calling .GetState
	I0501 02:09:01.254079   21421 main.go:141] libmachine: (addons-286595) Calling .DriverName
	I0501 02:09:01.254281   21421 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0501 02:09:01.254302   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHHostname
	I0501 02:09:01.257104   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:09:01.257524   21421 main.go:141] libmachine: (addons-286595) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:55:7e", ip: ""} in network mk-addons-286595: {Iface:virbr1 ExpiryTime:2024-05-01 03:08:11 +0000 UTC Type:0 Mac:52:54:00:74:55:7e Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:addons-286595 Clientid:01:52:54:00:74:55:7e}
	I0501 02:09:01.257554   21421 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined IP address 192.168.39.173 and MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:09:01.257689   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHPort
	I0501 02:09:01.257884   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHKeyPath
	I0501 02:09:01.258085   21421 main.go:141] libmachine: (addons-286595) Calling .GetSSHUsername
	I0501 02:09:01.258235   21421 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/addons-286595/id_rsa Username:docker}
	I0501 02:09:02.835709   21421 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.177699111s)
	I0501 02:09:02.835779   21421 main.go:141] libmachine: Making call to close driver server
	I0501 02:09:02.835786   21421 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (9.163771093s)
	I0501 02:09:02.835831   21421 main.go:141] libmachine: Making call to close driver server
	I0501 02:09:02.835833   21421 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.816965849s)
	I0501 02:09:02.835848   21421 main.go:141] libmachine: (addons-286595) Calling .Close
	I0501 02:09:02.835793   21421 main.go:141] libmachine: (addons-286595) Calling .Close
	I0501 02:09:02.835862   21421 main.go:141] libmachine: Making call to close driver server
	I0501 02:09:02.835934   21421 main.go:141] libmachine: (addons-286595) Calling .Close
	I0501 02:09:02.835945   21421 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.672804964s)
	I0501 02:09:02.835973   21421 main.go:141] libmachine: Making call to close driver server
	I0501 02:09:02.835978   21421 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.449516824s)
	I0501 02:09:02.836001   21421 main.go:141] libmachine: Making call to close driver server
	I0501 02:09:02.836011   21421 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (8.067170489s)
	I0501 02:09:02.835984   21421 main.go:141] libmachine: (addons-286595) Calling .Close
	I0501 02:09:02.836030   21421 main.go:141] libmachine: Making call to close driver server
	I0501 02:09:02.836040   21421 main.go:141] libmachine: (addons-286595) Calling .Close
	I0501 02:09:02.836014   21421 main.go:141] libmachine: (addons-286595) Calling .Close
	I0501 02:09:02.836149   21421 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.990437694s)
	I0501 02:09:02.836171   21421 main.go:141] libmachine: Making call to close driver server
	I0501 02:09:02.836182   21421 main.go:141] libmachine: (addons-286595) Calling .Close
	I0501 02:09:02.836316   21421 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.852116618s)
	I0501 02:09:02.836328   21421 main.go:141] libmachine: Successfully made call to close driver server
	I0501 02:09:02.836343   21421 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 02:09:02.836364   21421 main.go:141] libmachine: Making call to close driver server
	I0501 02:09:02.836372   21421 main.go:141] libmachine: (addons-286595) Calling .Close
	W0501 02:09:02.836371   21421 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0501 02:09:02.836423   21421 retry.go:31] will retry after 149.528632ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
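	[Note] Both failures above share one cause: the VolumeSnapshotClass object is applied in the same kubectl invocation that creates the snapshot.storage.k8s.io CRDs, so the new kind is not yet registered when the class is submitted, hence "ensure CRDs are installed first". minikube simply retries; the re-apply with --force at 02:09:02.987 completes at 02:09:05.837 once the CRD is established. As a hedged sketch only, and not something minikube itself runs, the same ordering problem could be avoided by waiting for the CRD before applying the class:

	        kubectl wait --for condition=established --timeout=60s \
	          crd/volumesnapshotclasses.snapshot.storage.k8s.io
	        kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml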
	I0501 02:09:02.836502   21421 main.go:141] libmachine: (addons-286595) DBG | Closing plugin on server side
	I0501 02:09:02.836506   21421 main.go:141] libmachine: (addons-286595) DBG | Closing plugin on server side
	I0501 02:09:02.836516   21421 main.go:141] libmachine: Successfully made call to close driver server
	I0501 02:09:02.836527   21421 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 02:09:02.836535   21421 main.go:141] libmachine: Making call to close driver server
	I0501 02:09:02.836536   21421 main.go:141] libmachine: (addons-286595) DBG | Closing plugin on server side
	I0501 02:09:02.836537   21421 main.go:141] libmachine: Successfully made call to close driver server
	I0501 02:09:02.836542   21421 main.go:141] libmachine: (addons-286595) Calling .Close
	I0501 02:09:02.836548   21421 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 02:09:02.836558   21421 main.go:141] libmachine: Making call to close driver server
	I0501 02:09:02.836565   21421 main.go:141] libmachine: Successfully made call to close driver server
	I0501 02:09:02.836574   21421 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 02:09:02.836583   21421 main.go:141] libmachine: Making call to close driver server
	I0501 02:09:02.836588   21421 main.go:141] libmachine: Successfully made call to close driver server
	I0501 02:09:02.836591   21421 main.go:141] libmachine: (addons-286595) Calling .Close
	I0501 02:09:02.836597   21421 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 02:09:02.836607   21421 main.go:141] libmachine: Making call to close driver server
	I0501 02:09:02.836614   21421 main.go:141] libmachine: (addons-286595) Calling .Close
	I0501 02:09:02.836641   21421 main.go:141] libmachine: (addons-286595) DBG | Closing plugin on server side
	I0501 02:09:02.836567   21421 main.go:141] libmachine: (addons-286595) Calling .Close
	I0501 02:09:02.836920   21421 main.go:141] libmachine: (addons-286595) DBG | Closing plugin on server side
	I0501 02:09:02.836953   21421 main.go:141] libmachine: (addons-286595) DBG | Closing plugin on server side
	I0501 02:09:02.836976   21421 main.go:141] libmachine: Successfully made call to close driver server
	I0501 02:09:02.836986   21421 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 02:09:02.837043   21421 main.go:141] libmachine: (addons-286595) DBG | Closing plugin on server side
	I0501 02:09:02.837070   21421 main.go:141] libmachine: Successfully made call to close driver server
	I0501 02:09:02.837090   21421 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 02:09:02.837123   21421 addons.go:470] Verifying addon registry=true in "addons-286595"
	I0501 02:09:02.837135   21421 main.go:141] libmachine: Successfully made call to close driver server
	I0501 02:09:02.837153   21421 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 02:09:02.838883   21421 out.go:177] * Verifying registry addon...
	I0501 02:09:02.838371   21421 main.go:141] libmachine: (addons-286595) DBG | Closing plugin on server side
	I0501 02:09:02.838375   21421 main.go:141] libmachine: Successfully made call to close driver server
	I0501 02:09:02.840470   21421 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 02:09:02.840481   21421 main.go:141] libmachine: Making call to close driver server
	I0501 02:09:02.840493   21421 main.go:141] libmachine: (addons-286595) Calling .Close
	I0501 02:09:02.838392   21421 main.go:141] libmachine: Successfully made call to close driver server
	I0501 02:09:02.840518   21421 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 02:09:02.841920   21421 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-286595 service yakd-dashboard -n yakd-dashboard
	
	I0501 02:09:02.839697   21421 main.go:141] libmachine: (addons-286595) DBG | Closing plugin on server side
	I0501 02:09:02.839708   21421 main.go:141] libmachine: Successfully made call to close driver server
	I0501 02:09:02.839973   21421 main.go:141] libmachine: Successfully made call to close driver server
	I0501 02:09:02.840032   21421 main.go:141] libmachine: (addons-286595) DBG | Closing plugin on server side
	I0501 02:09:02.840800   21421 main.go:141] libmachine: (addons-286595) DBG | Closing plugin on server side
	I0501 02:09:02.840828   21421 main.go:141] libmachine: Successfully made call to close driver server
	I0501 02:09:02.841317   21421 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0501 02:09:02.843023   21421 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 02:09:02.843036   21421 main.go:141] libmachine: Making call to close driver server
	I0501 02:09:02.843048   21421 main.go:141] libmachine: (addons-286595) Calling .Close
	I0501 02:09:02.843050   21421 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 02:09:02.843065   21421 addons.go:470] Verifying addon metrics-server=true in "addons-286595"
	I0501 02:09:02.843094   21421 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 02:09:02.843107   21421 addons.go:470] Verifying addon ingress=true in "addons-286595"
	I0501 02:09:02.844154   21421 out.go:177] * Verifying ingress addon...
	I0501 02:09:02.843361   21421 main.go:141] libmachine: Successfully made call to close driver server
	I0501 02:09:02.845508   21421 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 02:09:02.843381   21421 main.go:141] libmachine: (addons-286595) DBG | Closing plugin on server side
	I0501 02:09:02.846124   21421 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0501 02:09:02.851653   21421 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0501 02:09:02.851669   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:02.852280   21421 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0501 02:09:02.852303   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:02.987058   21421 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0501 02:09:03.354373   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:03.356921   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:03.520272   21421 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.770514591s)
	I0501 02:09:03.520309   21421 api_server.go:72] duration metric: took 10.569297355s to wait for apiserver process to appear ...
	I0501 02:09:03.520318   21421 api_server.go:88] waiting for apiserver healthz status ...
	I0501 02:09:03.520342   21421 api_server.go:253] Checking apiserver healthz at https://192.168.39.173:8443/healthz ...
	I0501 02:09:03.520383   21421 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.266082714s)
	I0501 02:09:03.521848   21421 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0501 02:09:03.520585   21421 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.964777698s)
	I0501 02:09:03.521892   21421 main.go:141] libmachine: Making call to close driver server
	I0501 02:09:03.521902   21421 main.go:141] libmachine: (addons-286595) Calling .Close
	I0501 02:09:03.523196   21421 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0501 02:09:03.522196   21421 main.go:141] libmachine: Successfully made call to close driver server
	I0501 02:09:03.522222   21421 main.go:141] libmachine: (addons-286595) DBG | Closing plugin on server side
	I0501 02:09:03.524355   21421 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0501 02:09:03.524368   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0501 02:09:03.523235   21421 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 02:09:03.524420   21421 main.go:141] libmachine: Making call to close driver server
	I0501 02:09:03.524434   21421 main.go:141] libmachine: (addons-286595) Calling .Close
	I0501 02:09:03.524682   21421 main.go:141] libmachine: (addons-286595) DBG | Closing plugin on server side
	I0501 02:09:03.524696   21421 main.go:141] libmachine: Successfully made call to close driver server
	I0501 02:09:03.524709   21421 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 02:09:03.524728   21421 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-286595"
	I0501 02:09:03.525940   21421 out.go:177] * Verifying csi-hostpath-driver addon...
	I0501 02:09:03.527831   21421 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0501 02:09:03.536113   21421 api_server.go:279] https://192.168.39.173:8443/healthz returned 200:
	ok
	I0501 02:09:03.537095   21421 api_server.go:141] control plane version: v1.30.0
	I0501 02:09:03.537123   21421 api_server.go:131] duration metric: took 16.797746ms to wait for apiserver health ...
	I0501 02:09:03.537134   21421 system_pods.go:43] waiting for kube-system pods to appear ...
	I0501 02:09:03.557169   21421 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0501 02:09:03.557195   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:03.559708   21421 system_pods.go:59] 19 kube-system pods found
	I0501 02:09:03.559750   21421 system_pods.go:61] "coredns-7db6d8ff4d-rlvmm" [b9eb9071-e21b-46fc-8605-055d6915f55e] Running
	I0501 02:09:03.559760   21421 system_pods.go:61] "coredns-7db6d8ff4d-s2t68" [7bc229a2-c453-440a-99d1-ed6eca63a179] Running
	I0501 02:09:03.559768   21421 system_pods.go:61] "csi-hostpath-attacher-0" [1171b8a4-c4ea-44f6-b440-b20e6789c3c1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0501 02:09:03.559775   21421 system_pods.go:61] "csi-hostpath-resizer-0" [0c417652-a924-43e6-ad18-0d1adc827868] Pending
	I0501 02:09:03.559781   21421 system_pods.go:61] "csi-hostpathplugin-h96nk" [406dcf80-86a8-4b1d-8c1a-c3e446a15d47] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0501 02:09:03.559787   21421 system_pods.go:61] "etcd-addons-286595" [95d9e711-65f9-4dce-84e8-2cff4b9c00dd] Running
	I0501 02:09:03.559790   21421 system_pods.go:61] "kube-apiserver-addons-286595" [d7533d69-f88a-4772-8292-76367bc8ef2f] Running
	I0501 02:09:03.559794   21421 system_pods.go:61] "kube-controller-manager-addons-286595" [59918d31-1d66-43ab-bfd8-319ca2366ae1] Running
	I0501 02:09:03.559801   21421 system_pods.go:61] "kube-ingress-dns-minikube" [2c0204aa-5d9f-4c78-a423-4378e147abf4] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0501 02:09:03.559807   21421 system_pods.go:61] "kube-proxy-7dw4g" [7aec44ec-1615-4aa4-9d65-e464831f8518] Running
	I0501 02:09:03.559811   21421 system_pods.go:61] "kube-scheduler-addons-286595" [37f73d9c-b5ac-4946-92b5-b826a3cf9ed1] Running
	I0501 02:09:03.559817   21421 system_pods.go:61] "metrics-server-c59844bb4-gvcdl" [9385fe21-53b5-4105-bb14-3008fcd7dc3a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0501 02:09:03.559826   21421 system_pods.go:61] "nvidia-device-plugin-daemonset-rkmjq" [ed0cb4b4-ad39-4ba6-8e70-771dffc9b32e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0501 02:09:03.559837   21421 system_pods.go:61] "registry-f6tfr" [cf6f5911-c14d-4b26-9767-c66913822a34] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0501 02:09:03.559846   21421 system_pods.go:61] "registry-proxy-6hksn" [f6f624f2-3e51-4453-b84e-7d908b7736fa] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0501 02:09:03.559852   21421 system_pods.go:61] "snapshot-controller-745499f584-blqww" [6c914d0a-4f6b-458b-9601-41d41a96d448] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0501 02:09:03.559860   21421 system_pods.go:61] "snapshot-controller-745499f584-cnc7j" [e70de7d8-e03e-4147-b633-6fec7dbe1e88] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0501 02:09:03.559867   21421 system_pods.go:61] "storage-provisioner" [b23a96b2-9c34-4d4f-9df5-90dc5195248b] Running
	I0501 02:09:03.559872   21421 system_pods.go:61] "tiller-deploy-6677d64bcd-btpph" [f3632fb8-1c95-4630-b3ce-f08c09d4a4ff] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0501 02:09:03.559879   21421 system_pods.go:74] duration metric: took 22.734985ms to wait for pod list to return data ...
	I0501 02:09:03.559889   21421 default_sa.go:34] waiting for default service account to be created ...
	I0501 02:09:03.591890   21421 default_sa.go:45] found service account: "default"
	I0501 02:09:03.591912   21421 default_sa.go:55] duration metric: took 32.017511ms for default service account to be created ...
	I0501 02:09:03.591923   21421 system_pods.go:116] waiting for k8s-apps to be running ...
	I0501 02:09:03.614860   21421 system_pods.go:86] 19 kube-system pods found
	I0501 02:09:03.614886   21421 system_pods.go:89] "coredns-7db6d8ff4d-rlvmm" [b9eb9071-e21b-46fc-8605-055d6915f55e] Running
	I0501 02:09:03.614891   21421 system_pods.go:89] "coredns-7db6d8ff4d-s2t68" [7bc229a2-c453-440a-99d1-ed6eca63a179] Running
	I0501 02:09:03.614899   21421 system_pods.go:89] "csi-hostpath-attacher-0" [1171b8a4-c4ea-44f6-b440-b20e6789c3c1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0501 02:09:03.614904   21421 system_pods.go:89] "csi-hostpath-resizer-0" [0c417652-a924-43e6-ad18-0d1adc827868] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0501 02:09:03.614917   21421 system_pods.go:89] "csi-hostpathplugin-h96nk" [406dcf80-86a8-4b1d-8c1a-c3e446a15d47] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0501 02:09:03.614924   21421 system_pods.go:89] "etcd-addons-286595" [95d9e711-65f9-4dce-84e8-2cff4b9c00dd] Running
	I0501 02:09:03.614931   21421 system_pods.go:89] "kube-apiserver-addons-286595" [d7533d69-f88a-4772-8292-76367bc8ef2f] Running
	I0501 02:09:03.614941   21421 system_pods.go:89] "kube-controller-manager-addons-286595" [59918d31-1d66-43ab-bfd8-319ca2366ae1] Running
	I0501 02:09:03.614955   21421 system_pods.go:89] "kube-ingress-dns-minikube" [2c0204aa-5d9f-4c78-a423-4378e147abf4] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0501 02:09:03.614962   21421 system_pods.go:89] "kube-proxy-7dw4g" [7aec44ec-1615-4aa4-9d65-e464831f8518] Running
	I0501 02:09:03.614971   21421 system_pods.go:89] "kube-scheduler-addons-286595" [37f73d9c-b5ac-4946-92b5-b826a3cf9ed1] Running
	I0501 02:09:03.614980   21421 system_pods.go:89] "metrics-server-c59844bb4-gvcdl" [9385fe21-53b5-4105-bb14-3008fcd7dc3a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0501 02:09:03.614988   21421 system_pods.go:89] "nvidia-device-plugin-daemonset-rkmjq" [ed0cb4b4-ad39-4ba6-8e70-771dffc9b32e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0501 02:09:03.614998   21421 system_pods.go:89] "registry-f6tfr" [cf6f5911-c14d-4b26-9767-c66913822a34] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0501 02:09:03.615007   21421 system_pods.go:89] "registry-proxy-6hksn" [f6f624f2-3e51-4453-b84e-7d908b7736fa] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0501 02:09:03.615013   21421 system_pods.go:89] "snapshot-controller-745499f584-blqww" [6c914d0a-4f6b-458b-9601-41d41a96d448] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0501 02:09:03.615023   21421 system_pods.go:89] "snapshot-controller-745499f584-cnc7j" [e70de7d8-e03e-4147-b633-6fec7dbe1e88] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0501 02:09:03.615032   21421 system_pods.go:89] "storage-provisioner" [b23a96b2-9c34-4d4f-9df5-90dc5195248b] Running
	I0501 02:09:03.615045   21421 system_pods.go:89] "tiller-deploy-6677d64bcd-btpph" [f3632fb8-1c95-4630-b3ce-f08c09d4a4ff] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0501 02:09:03.615060   21421 system_pods.go:126] duration metric: took 23.130136ms to wait for k8s-apps to be running ...
	I0501 02:09:03.615073   21421 system_svc.go:44] waiting for kubelet service to be running ....
	I0501 02:09:03.615115   21421 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 02:09:03.670208   21421 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0501 02:09:03.670228   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0501 02:09:03.850025   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:03.851011   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:03.858310   21421 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0501 02:09:03.858329   21421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0501 02:09:03.935569   21421 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0501 02:09:04.033534   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:04.347212   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:04.351655   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:04.540386   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:04.849999   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:04.856268   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:05.050362   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:05.347903   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:05.351994   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:05.535886   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:05.837272   21421 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.222127757s)
	I0501 02:09:05.837309   21421 system_svc.go:56] duration metric: took 2.222232995s WaitForService to wait for kubelet
	I0501 02:09:05.837320   21421 kubeadm.go:576] duration metric: took 12.886308147s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0501 02:09:05.837347   21421 node_conditions.go:102] verifying NodePressure condition ...
	I0501 02:09:05.837389   21421 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.850287785s)
	I0501 02:09:05.837433   21421 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.901781058s)
	I0501 02:09:05.837462   21421 main.go:141] libmachine: Making call to close driver server
	I0501 02:09:05.837472   21421 main.go:141] libmachine: Making call to close driver server
	I0501 02:09:05.837484   21421 main.go:141] libmachine: (addons-286595) Calling .Close
	I0501 02:09:05.837489   21421 main.go:141] libmachine: (addons-286595) Calling .Close
	I0501 02:09:05.837756   21421 main.go:141] libmachine: Successfully made call to close driver server
	I0501 02:09:05.837780   21421 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 02:09:05.837789   21421 main.go:141] libmachine: Making call to close driver server
	I0501 02:09:05.837796   21421 main.go:141] libmachine: (addons-286595) Calling .Close
	I0501 02:09:05.839516   21421 main.go:141] libmachine: (addons-286595) DBG | Closing plugin on server side
	I0501 02:09:05.839522   21421 main.go:141] libmachine: (addons-286595) DBG | Closing plugin on server side
	I0501 02:09:05.839530   21421 main.go:141] libmachine: Successfully made call to close driver server
	I0501 02:09:05.839542   21421 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 02:09:05.839532   21421 main.go:141] libmachine: Successfully made call to close driver server
	I0501 02:09:05.839590   21421 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 02:09:05.839609   21421 main.go:141] libmachine: Making call to close driver server
	I0501 02:09:05.839622   21421 main.go:141] libmachine: (addons-286595) Calling .Close
	I0501 02:09:05.839803   21421 main.go:141] libmachine: Successfully made call to close driver server
	I0501 02:09:05.839817   21421 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 02:09:05.841843   21421 addons.go:470] Verifying addon gcp-auth=true in "addons-286595"
	I0501 02:09:05.844672   21421 out.go:177] * Verifying gcp-auth addon...
	I0501 02:09:05.842414   21421 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 02:09:05.846020   21421 node_conditions.go:123] node cpu capacity is 2
	I0501 02:09:05.846060   21421 node_conditions.go:105] duration metric: took 8.705183ms to run NodePressure ...
	I0501 02:09:05.846078   21421 start.go:240] waiting for startup goroutines ...
	I0501 02:09:05.846684   21421 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0501 02:09:05.848170   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:05.855974   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:05.856795   21421 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0501 02:09:05.856811   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:06.034156   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:06.348380   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:06.352404   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:06.353196   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:06.538870   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:06.848324   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:06.853902   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:06.854479   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:07.034379   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:07.348756   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:07.355884   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:07.356291   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:07.538562   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:07.848051   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:07.851436   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:07.851873   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:08.034390   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:08.349312   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:08.352692   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:08.353142   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:08.534385   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:08.850117   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:08.853366   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:08.854279   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:09.036789   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:09.346973   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:09.350654   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:09.351578   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:09.772287   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:09.849975   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:09.853462   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:09.854058   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:10.034323   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:10.348146   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:10.350393   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:10.351895   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:10.533987   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:10.847392   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:10.852765   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:10.853317   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:11.033889   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:11.348464   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:11.353081   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:11.353580   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:11.533688   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:11.848467   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:11.862613   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:11.863353   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:12.037430   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:12.348707   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:12.351206   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:12.351588   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:12.533906   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:12.850524   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:12.851394   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:12.852427   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:13.034904   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:13.348493   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:13.350879   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:13.351580   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:13.534481   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:13.848066   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:13.850094   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:13.850957   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:14.033930   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:14.348673   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:14.349986   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:14.351472   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:14.533389   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:14.849263   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:14.850498   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:14.851743   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:15.036301   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:15.350035   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:15.357342   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:15.357864   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:15.536337   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:15.848788   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:15.851336   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:15.852369   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:16.034189   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:16.349607   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:16.350435   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:16.353669   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:16.534389   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:16.850325   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:16.853233   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:16.854624   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:17.033618   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:17.348218   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:17.352125   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:17.352206   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:17.533426   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:17.848544   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:17.853423   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:17.853941   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:18.034354   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:18.351855   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:18.352672   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:18.354657   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:18.898953   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:18.899095   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:18.899631   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:18.901011   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:19.034465   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:19.350005   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:19.350797   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:19.352127   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:19.534191   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:19.848237   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:19.850637   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:19.851864   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:20.034530   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:20.355179   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:20.356063   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:20.359358   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:20.534931   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:20.848879   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:20.850657   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:20.852092   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:21.037008   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:21.350432   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:21.350487   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:21.351417   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:21.548758   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:21.848156   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:21.852489   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:21.853211   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:22.033909   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:22.353887   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:22.355430   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:22.355900   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:22.534273   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:22.848335   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:22.852672   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:22.853059   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:23.033649   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:23.349904   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:23.352068   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:23.352833   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:23.534439   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:23.850408   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:23.852949   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:23.856123   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:24.033560   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:24.348155   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:24.350705   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:24.351200   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:24.534628   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:24.848032   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:24.850120   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:24.851373   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:25.033826   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:25.349232   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:25.351132   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:25.352042   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:25.533445   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:25.848837   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:25.850778   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:25.851234   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:26.033982   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:26.348587   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:26.349831   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:26.351628   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:26.533729   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:26.854068   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:26.854128   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:26.854803   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:27.034655   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:27.349852   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:27.353330   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:27.353784   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:27.547238   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:27.850801   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:27.853955   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:27.855458   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:28.033724   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:28.348220   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:28.355197   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:28.356921   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:28.533711   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:28.851253   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:28.851797   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:28.853367   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:29.033196   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:29.347570   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:29.351744   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:29.353104   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:29.534166   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:29.848797   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:29.852557   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:29.853164   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:30.033924   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:30.350086   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:30.353365   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:30.353841   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:30.534032   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:30.847632   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:30.850743   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:30.850807   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:31.034089   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:31.348560   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:31.351074   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:31.352426   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:31.534387   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:31.848980   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:31.851305   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:31.852287   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:32.033808   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:32.348001   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:32.351658   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:32.352556   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:32.533437   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:32.855761   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:32.857438   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:32.858348   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:33.039174   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:33.350521   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:33.350703   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:33.351114   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:33.534361   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:33.851786   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:33.857074   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:33.859110   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:34.033676   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:34.347549   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:34.352477   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:34.352982   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:34.533457   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:34.848746   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:34.850203   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:34.850812   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:35.034186   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:35.352677   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:35.357913   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:35.358705   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:35.534412   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:35.853567   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:35.855679   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:35.856004   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:36.034429   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:36.349488   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:36.352523   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:36.353361   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:36.539580   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:36.854307   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:36.857663   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:36.858186   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:37.035242   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:37.586108   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:37.586647   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:37.589501   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:37.591939   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:37.848645   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:37.850797   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:37.854099   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:38.033480   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:38.347555   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:38.349983   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:38.351182   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:38.540293   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:38.852068   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:38.854100   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:38.854637   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:39.033661   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:39.349951   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:39.352147   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:39.355468   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:39.534190   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:39.976897   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:39.977454   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:39.977691   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:40.035416   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:40.349046   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:40.353737   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:40.353948   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:40.533536   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:40.850787   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:40.851707   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:40.852826   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:41.034758   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:41.351402   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:41.352316   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:41.353292   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:41.534487   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:41.856032   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:41.856834   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:41.857830   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:42.034772   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:42.352930   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:42.353334   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:42.353756   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:42.533592   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:42.852617   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:42.853014   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:42.857826   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:43.384837   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:43.385490   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:43.385523   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:43.386767   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:43.534089   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:43.848395   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:43.855593   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:43.856280   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:44.035508   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:44.348670   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:44.352744   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:44.353865   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:44.534299   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:44.850263   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:44.852499   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:44.853253   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:45.035437   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:45.353222   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:45.353230   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:45.353476   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:45.533974   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:45.848441   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:45.851053   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:45.851658   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:46.033458   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:46.348754   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:46.352045   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:46.352306   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:46.533364   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:46.848354   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:46.851442   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:46.851773   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:47.035152   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:47.348226   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:47.350062   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:47.351398   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:47.534979   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:47.849529   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:47.851547   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:47.852382   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:48.033357   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:48.348475   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:48.350959   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:48.351648   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:48.534440   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:48.849302   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:48.851306   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:48.851360   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:49.033951   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:49.347596   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:49.351024   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:49.353326   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:49.534112   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:49.848434   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:49.852686   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:49.853251   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:50.501578   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:50.521026   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:50.524657   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:50.524883   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:50.539948   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:50.848082   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:50.850508   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:50.850981   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:51.032555   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:51.349433   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:51.351446   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:51.352257   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:51.533082   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:51.848207   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:51.849760   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:51.851156   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:52.034147   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:52.349417   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:52.351817   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:52.353965   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:52.534599   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:52.848151   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:52.850956   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:52.851145   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:53.034039   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:53.350437   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:53.352294   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:53.355296   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:53.535005   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:53.848811   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:53.851636   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:53.852381   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:54.033840   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:54.348718   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:54.351045   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:54.351785   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:54.826161   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:54.854654   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:54.854989   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:54.855705   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:55.034372   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:55.347925   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:55.349843   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:55.350667   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:55.534319   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:55.848469   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:55.850628   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:55.850929   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:56.033854   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:56.347793   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:56.349846   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:56.350379   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:56.534943   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:56.862432   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:56.862966   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:56.863312   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:57.033924   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:57.359796   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:57.368249   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:57.371151   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:57.533854   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:57.848380   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:57.850870   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:57.852490   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:58.035140   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:58.348872   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:58.351487   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:58.352270   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:58.533633   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:58.846753   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:58.852161   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:58.852307   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:59.034053   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:59.348946   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:59.352643   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:59.353252   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:09:59.534215   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:09:59.851321   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:09:59.853295   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:09:59.854023   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:10:00.034025   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:00.350643   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:10:00.358624   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:00.358827   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:10:00.678022   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:00.855611   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:00.861319   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:10:00.861430   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:10:01.042088   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:01.350571   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:10:01.351285   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:01.351585   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:10:01.534254   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:01.849712   21421 kapi.go:107] duration metric: took 59.008389808s to wait for kubernetes.io/minikube-addons=registry ...
	I0501 02:10:01.852061   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:10:01.852649   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:02.034979   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:02.351431   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:02.351810   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:10:02.542990   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:02.851388   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:02.851705   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:10:03.034497   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:03.353937   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:03.354227   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:10:03.534513   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:03.855917   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:10:03.859387   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:04.035327   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:04.352128   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:04.354318   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:10:04.539166   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:04.851244   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:10:04.851590   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:05.034933   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:05.351548   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:05.351998   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:10:05.534182   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:05.852561   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:05.852718   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:10:06.034464   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:06.355789   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:06.356955   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:10:06.534454   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:06.850752   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:06.851049   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:10:07.034132   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:07.352356   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:07.353142   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:10:07.534098   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:07.852618   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:07.853113   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:10:08.045853   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:08.351751   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:08.352064   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:10:08.535018   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:08.850819   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:08.851536   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:10:09.035087   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:09.352544   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:10:09.353926   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:09.535722   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:09.850368   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:10:09.852043   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:10.037336   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:10.352997   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:10.353223   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:10:10.535588   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:10.852064   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:10.852242   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:10:11.034391   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:11.353931   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:11.354039   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:10:11.537803   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:11.852251   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:11.853099   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:10:12.042513   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:12.352852   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:10:12.353364   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:12.549443   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:12.856741   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:10:12.857460   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:13.041564   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:13.358750   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:13.358917   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:10:13.539477   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:13.851822   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:13.852466   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:10:14.034484   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:14.356258   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:14.356489   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:10:14.533730   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:14.851854   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:14.857079   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:10:15.036002   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:15.352291   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:15.353211   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:10:15.539150   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:15.856345   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:15.856735   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:10:16.058900   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:16.351967   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:16.357810   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:10:16.536954   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:16.851762   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:16.852952   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:10:17.034113   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:17.351743   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:10:17.352162   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:17.533930   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:17.852556   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:10:17.852752   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:18.035543   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:18.772704   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:10:18.774651   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:18.780267   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:18.851153   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:10:18.852478   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:19.039429   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:19.351656   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:19.351916   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:10:19.534584   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:19.856658   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:19.857492   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:10:20.035488   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:20.353124   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:20.359879   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:10:20.534553   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:20.850493   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:20.850774   21421 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:10:21.055086   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:21.355882   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:21.356106   21421 kapi.go:107] duration metric: took 1m18.509978097s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0501 02:10:21.534496   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:21.851201   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:22.034755   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:22.350744   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:22.533804   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:22.850911   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:23.034726   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:23.350869   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:23.540177   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:23.851795   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:24.034565   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:24.351529   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:24.534154   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:24.850740   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:25.045172   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:25.351440   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:25.534600   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:25.851426   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:26.033630   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:26.351409   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:26.534481   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:26.851008   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:27.034691   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:27.350997   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:27.535752   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:27.850455   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:28.035244   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:28.350584   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:28.535056   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:28.851604   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:29.034987   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:10:29.351003   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:29.533975   21421 kapi.go:107] duration metric: took 1m26.006142357s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0501 02:10:29.851326   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:30.350882   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:30.851739   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:31.350590   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:31.850375   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:32.351233   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:32.851240   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:33.352081   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:33.852173   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:34.352633   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:34.850461   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:35.350886   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:35.851033   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:36.351169   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:36.852021   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:37.351335   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:37.850791   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:38.350512   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:38.851213   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:39.351143   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:39.851202   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:40.351553   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:40.850844   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:41.350827   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:41.867204   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:42.351434   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:42.851865   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:43.352116   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:43.851343   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:44.351992   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:44.851836   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:45.351676   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:45.851190   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:46.351119   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:46.850726   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:47.350329   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:47.852543   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:48.351952   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:48.852299   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:49.351591   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:49.850165   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:50.351503   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:50.850811   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:51.350453   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:51.851111   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:52.350350   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:52.852117   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:53.351435   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:53.854509   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:54.351422   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:54.851332   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:55.350269   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:55.850578   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:56.350202   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:56.851153   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:57.351245   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:57.850921   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:58.351225   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:58.851448   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:59.351225   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:10:59.852150   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:00.351655   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:00.852020   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:01.352548   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:01.851598   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:02.351074   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:02.851507   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:03.353134   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:03.851104   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:04.350825   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:04.851533   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:05.350557   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:05.851239   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:06.351691   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:06.850540   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:07.351668   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:07.851662   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:08.349948   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:08.851320   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:09.353401   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:09.851374   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:10.351003   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:10.851166   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:11.350983   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:11.850985   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:12.351679   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:12.851119   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:13.351738   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:13.851073   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:14.351672   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:14.851098   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:15.351690   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:15.850661   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:16.351040   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:16.850854   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:17.350797   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:17.850510   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:18.350961   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:18.851654   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:19.350706   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:19.850457   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:20.350022   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:20.851666   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:21.351527   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:21.851128   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:22.351195   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:22.851066   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:23.355687   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:23.850924   21421 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:11:24.350129   21421 kapi.go:107] duration metric: took 2m18.503442071s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0501 02:11:24.351556   21421 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-286595 cluster.
	I0501 02:11:24.352718   21421 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0501 02:11:24.353793   21421 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0501 02:11:24.354947   21421 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, storage-provisioner, default-storageclass, ingress-dns, helm-tiller, yakd, metrics-server, inspektor-gadget, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0501 02:11:24.356093   21421 addons.go:505] duration metric: took 2m31.405039024s for enable addons: enabled=[nvidia-device-plugin cloud-spanner storage-provisioner default-storageclass ingress-dns helm-tiller yakd metrics-server inspektor-gadget volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0501 02:11:24.356135   21421 start.go:245] waiting for cluster config update ...
	I0501 02:11:24.356162   21421 start.go:254] writing updated cluster config ...
	I0501 02:11:24.356406   21421 ssh_runner.go:195] Run: rm -f paused
	I0501 02:11:24.408013   21421 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0501 02:11:24.409678   21421 out.go:177] * Done! kubectl is now configured to use "addons-286595" cluster and "default" namespace by default
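
The gcp-auth messages in the log above describe opting a pod out of credential injection by adding a label with the `gcp-auth-skip-secret` key. The manifest below is a minimal sketch of what that looks like; the pod name and image are hypothetical placeholders, and the label value "true" is an assumption based on the addon's documented convention rather than something shown in this log.

    apiVersion: v1
    kind: Pod
    metadata:
      name: no-gcp-creds                 # hypothetical pod name, for illustration only
      labels:
        gcp-auth-skip-secret: "true"     # assumed value; the key is what the log output refers to
    spec:
      containers:
      - name: app
        image: nginx                     # placeholder image

As the log output notes, pods created before the addon finished enabling only pick up the mounted credentials if they are recreated or if `addons enable` is rerun with --refresh.
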
	
	
	==> CRI-O <==
	May 01 02:16:58 addons-286595 crio[678]: time="2024-05-01 02:16:58.419992250Z" level=debug msg="Event: CREATE        \"/var/run/crio/exits/c2e873794e6a54a807a04cb4169d0cdf7072fdc2c36d4eb97fa559d1c1077ce6\"" file="server/server.go:805"
	May 01 02:16:58 addons-286595 crio[678]: time="2024-05-01 02:16:58.420010367Z" level=debug msg="Container or sandbox exited: c2e873794e6a54a807a04cb4169d0cdf7072fdc2c36d4eb97fa559d1c1077ce6" file="server/server.go:810"
	May 01 02:16:58 addons-286595 crio[678]: time="2024-05-01 02:16:58.420030030Z" level=debug msg="container exited and found: c2e873794e6a54a807a04cb4169d0cdf7072fdc2c36d4eb97fa559d1c1077ce6" file="server/server.go:825"
	May 01 02:16:58 addons-286595 crio[678]: time="2024-05-01 02:16:58.421890422Z" level=debug msg="Event: RENAME        \"/var/run/crio/exits/c2e873794e6a54a807a04cb4169d0cdf7072fdc2c36d4eb97fa559d1c1077ce6.OQ6AN2\"" file="server/server.go:805"
	May 01 02:16:58 addons-286595 crio[678]: time="2024-05-01 02:16:58.459093250Z" level=debug msg="Found exit code for c2e873794e6a54a807a04cb4169d0cdf7072fdc2c36d4eb97fa559d1c1077ce6: 0" file="oci/runtime_oci.go:1022" id=0f895560-ad47-4377-9421-3e09864d66c8 name=/runtime.v1.RuntimeService/StopContainer
	May 01 02:16:58 addons-286595 crio[678]: time="2024-05-01 02:16:58.459302483Z" level=debug msg="Skipping status update for: &{State:{Version:1.0.2-dev ID:c2e873794e6a54a807a04cb4169d0cdf7072fdc2c36d4eb97fa559d1c1077ce6 Status:stopped Pid:0 Bundle:/run/containers/storage/overlay-containers/c2e873794e6a54a807a04cb4169d0cdf7072fdc2c36d4eb97fa559d1c1077ce6/userdata Annotations:map[io.container.manager:cri-o io.kubernetes.container.hash:b4e4ef8 io.kubernetes.container.name:metrics-server io.kubernetes.container.ports:[{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}] io.kubernetes.container.restartCount:0 io.kubernetes.container.terminationMessagePath:/dev/termination-log io.kubernetes.container.terminationMessagePolicy:File io.kubernetes.cri-o.Annotations:{\"io.kubernetes.container.hash\":\"b4e4ef8\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"https\\\",\\\"containerPort\\\":4443,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.con
tainer.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"} io.kubernetes.cri-o.ContainerID:c2e873794e6a54a807a04cb4169d0cdf7072fdc2c36d4eb97fa559d1c1077ce6 io.kubernetes.cri-o.ContainerType:container io.kubernetes.cri-o.Created:2024-05-01T02:09:44.392024143Z io.kubernetes.cri-o.IP.0:10.244.0.10 io.kubernetes.cri-o.Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872 io.kubernetes.cri-o.ImageName:registry.k8s.io/metrics-server/metrics-server@sha256:db3800085a0957083930c3932b17580eec652cfb6156a05c0f79c7543e80d17a io.kubernetes.cri-o.ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62 io.kubernetes.cri-o.Labels:{\"io.kubernetes.container.name\":\"metrics-server\",\"io.kubernetes.pod.name\":\"metrics-server-c59844bb4-gvcdl\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"9385fe21-53
b5-4105-bb14-3008fcd7dc3a\"} io.kubernetes.cri-o.LogPath:/var/log/pods/kube-system_metrics-server-c59844bb4-gvcdl_9385fe21-53b5-4105-bb14-3008fcd7dc3a/metrics-server/0.log io.kubernetes.cri-o.Metadata:{\"name\":\"metrics-server\"} io.kubernetes.cri-o.MountPoint:/var/lib/containers/storage/overlay/ac0a575e9b1e9b2a80bbd64c38f89090b694748f37e015988b2cf0a7e9e7c256/merged io.kubernetes.cri-o.Name:k8s_metrics-server_metrics-server-c59844bb4-gvcdl_kube-system_9385fe21-53b5-4105-bb14-3008fcd7dc3a_0 io.kubernetes.cri-o.PlatformRuntimePath: io.kubernetes.cri-o.ResolvPath:/var/run/containers/storage/overlay-containers/6294a66cd68bc9c243ae456aea52a5bc0b3ab300e9d2370d2649dfaa8deda9be/userdata/resolv.conf io.kubernetes.cri-o.SandboxID:6294a66cd68bc9c243ae456aea52a5bc0b3ab300e9d2370d2649dfaa8deda9be io.kubernetes.cri-o.SandboxName:k8s_metrics-server-c59844bb4-gvcdl_kube-system_9385fe21-53b5-4105-bb14-3008fcd7dc3a_0 io.kubernetes.cri-o.SeccompProfilePath:Unconfined io.kubernetes.cri-o.Stdin:false io.kubernetes.cri-o.StdinOnc
e:false io.kubernetes.cri-o.TTY:false io.kubernetes.cri-o.Volumes:[{\"container_path\":\"/tmp\",\"host_path\":\"/var/lib/kubelet/pods/9385fe21-53b5-4105-bb14-3008fcd7dc3a/volumes/kubernetes.io~empty-dir/tmp-dir\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/9385fe21-53b5-4105-bb14-3008fcd7dc3a/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/9385fe21-53b5-4105-bb14-3008fcd7dc3a/containers/metrics-server/158c65f2\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/9385fe21-53b5-4105-bb14-3008fcd7dc3a/volumes/kubernetes.io~projected/kube-api-access-bhpp8\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}] io.kubernetes.pod.name:metrics-server-c59844bb4-gvcdl io.kubernetes.pod.nam
espace:kube-system io.kubernetes.pod.terminationGracePeriod:30 io.kubernetes.pod.uid:9385fe21-53b5-4105-bb14-3008fcd7dc3a kubernetes.io/config.seen:2024-05-01T02:08:59.417316527Z kubernetes.io/config.source:api]} Created:2024-05-01 02:09:44.439293001 +0000 UTC Started:2024-05-01 02:09:44.467053601 +0000 UTC m=+81.562017126 Finished:2024-05-01 02:16:58.4193404 +0000 UTC ExitCode:0xc000ed1290 OOMKilled:false SeccompKilled:false Error: InitPid:4854 InitStartTime:10476 CheckpointedAt:0001-01-01 00:00:00 +0000 UTC}" file="oci/runtime_oci.go:946"
	May 01 02:16:58 addons-286595 crio[678]: time="2024-05-01 02:16:58.463536589Z" level=info msg="Stopped container c2e873794e6a54a807a04cb4169d0cdf7072fdc2c36d4eb97fa559d1c1077ce6: kube-system/metrics-server-c59844bb4-gvcdl/metrics-server" file="server/container_stop.go:29" id=0f895560-ad47-4377-9421-3e09864d66c8 name=/runtime.v1.RuntimeService/StopContainer
	May 01 02:16:58 addons-286595 crio[678]: time="2024-05-01 02:16:58.463681378Z" level=debug msg="Response: &StopContainerResponse{}" file="otel-collector/interceptors.go:74" id=0f895560-ad47-4377-9421-3e09864d66c8 name=/runtime.v1.RuntimeService/StopContainer
	May 01 02:16:58 addons-286595 crio[678]: time="2024-05-01 02:16:58.464675975Z" level=debug msg="Request: &StopPodSandboxRequest{PodSandboxId:6294a66cd68bc9c243ae456aea52a5bc0b3ab300e9d2370d2649dfaa8deda9be,}" file="otel-collector/interceptors.go:62" id=d08aac81-acca-4874-93d6-bee09c23821b name=/runtime.v1.RuntimeService/StopPodSandbox
	May 01 02:16:58 addons-286595 crio[678]: time="2024-05-01 02:16:58.464732021Z" level=info msg="Stopping pod sandbox: 6294a66cd68bc9c243ae456aea52a5bc0b3ab300e9d2370d2649dfaa8deda9be" file="server/sandbox_stop.go:18" id=d08aac81-acca-4874-93d6-bee09c23821b name=/runtime.v1.RuntimeService/StopPodSandbox
	May 01 02:16:58 addons-286595 crio[678]: time="2024-05-01 02:16:58.465125562Z" level=info msg="Got pod network &{Name:metrics-server-c59844bb4-gvcdl Namespace:kube-system ID:6294a66cd68bc9c243ae456aea52a5bc0b3ab300e9d2370d2649dfaa8deda9be UID:9385fe21-53b5-4105-bb14-3008fcd7dc3a NetNS:/var/run/netns/18cefb95-f992-495d-ba72-04b3f7acea40 Networks:[{Name:bridge Ifname:eth0}] RuntimeConfig:map[bridge:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath:/kubepods/burstable/pod9385fe21-53b5-4105-bb14-3008fcd7dc3a PodAnnotations:0xc0006868c8}] Aliases:map[]}" file="ocicni/ocicni.go:795"
	May 01 02:16:58 addons-286595 crio[678]: time="2024-05-01 02:16:58.466304764Z" level=debug msg="Event: REMOVE        \"/var/run/crio/exits/c2e873794e6a54a807a04cb4169d0cdf7072fdc2c36d4eb97fa559d1c1077ce6\"" file="server/server.go:805"
	May 01 02:16:58 addons-286595 crio[678]: time="2024-05-01 02:16:58.466545163Z" level=info msg="Deleting pod kube-system_metrics-server-c59844bb4-gvcdl from CNI network \"bridge\" (type=bridge)" file="ocicni/ocicni.go:667"
	May 01 02:16:58 addons-286595 crio[678]: time="2024-05-01 02:16:58.469221090Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1dee7a12-4815-4a06-a10c-7eb56c059e0c name=/runtime.v1.RuntimeService/Version
	May 01 02:16:58 addons-286595 crio[678]: time="2024-05-01 02:16:58.469282000Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1dee7a12-4815-4a06-a10c-7eb56c059e0c name=/runtime.v1.RuntimeService/Version
	May 01 02:16:58 addons-286595 crio[678]: time="2024-05-01 02:16:58.472491238Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dbc89b87-af2b-4ce4-9c41-c7c61006b850 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 02:16:58 addons-286595 crio[678]: time="2024-05-01 02:16:58.474291551Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714529818474214377,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579668,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dbc89b87-af2b-4ce4-9c41-c7c61006b850 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 02:16:58 addons-286595 crio[678]: time="2024-05-01 02:16:58.474852375Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a908dd9b-6736-4062-bb78-836bec6c56ef name=/runtime.v1.RuntimeService/ListContainers
	May 01 02:16:58 addons-286595 crio[678]: time="2024-05-01 02:16:58.474910517Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a908dd9b-6736-4062-bb78-836bec6c56ef name=/runtime.v1.RuntimeService/ListContainers
	May 01 02:16:58 addons-286595 crio[678]: time="2024-05-01 02:16:58.475501966Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3dc8451202c845e1cfc4e3c28974631a42f49e34ea0411ce1b1faac0ae57f237,PodSandboxId:8e72cc6f09b3e3fadf9ec75b9310c2d78126d3b97a1d88fd61ef25f8991d9a5f,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1714529649599140523,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-jtwrv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0125296c-388d-4687-96e0-fa6da417e535,},Annotations:map[string]string{io.kubernetes.container.hash: a80b439f,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fa4fb94b082c838ebe8a1532662751e9183f51fb69467cfab3cd6cc237ca435,PodSandboxId:a2dc1241af174a36ec9c93e88afa189e0d7100872a8ca7c508e972b3dff4683c,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7373e995f4086a9db4ce8b2f96af2c2ae7f319e3e7e2ebdc1291e9c50ae4437e,State:CONTAINER_RUNNING,CreatedAt:1714529543354931898,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7559bf459f-844d4,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: d7626782-57eb-48f0-907d-f0d1e86e250c,},Annota
tions:map[string]string{io.kubernetes.container.hash: 6de05c84,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2bc8bf0be7ba2293941b7d54385fbe3193360b4e299b1b731da89f470069a51,PodSandboxId:2e7d62998cc8784acf5c9dec6b82bd83857310927d192fa8b08bee020d42647d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:dd524baac105f5353429a7022c26a02c8c80d95a50cb4d34b6e19a3a4289ff88,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f4215f6ee683f29c0a4611b02d1adc3b7d986a96ab894eb5f7b9437c862c9499,State:CONTAINER_RUNNING,CreatedAt:1714529508994443047,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: defaul
t,io.kubernetes.pod.uid: e8f25648-6f7c-4d88-9b95-89988ad85a6b,},Annotations:map[string]string{io.kubernetes.container.hash: 9da3a0f0,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10db6a0b55c4b872bf2919f3b05779544eabcca22bf61fbb6744de0ab2d8afb5,PodSandboxId:f844e33a463589aea8f33444bb22c7b510e164c021bba4c9c600c3212811974e,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1714529483336772469,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-dgngh,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 73a446bd-5b8b-4a38-a644-68a5bae5a7d3,},Annotations:map[string]string{io.kubernetes.container.hash: 77530408,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a62a664191c4977760c04227d07b9b820559d285509fc2756deed35ae140a10,PodSandboxId:ad1e349c324dba757460ebdcb36722456dd3dfcb30afd93c00934130590bf0f1,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:17145
29390648983383,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-q2wzp,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 85549b73-ebbe-4fa9-9fe0-72d18004bc71,},Annotations:map[string]string{io.kubernetes.container.hash: a5b1870f,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2e873794e6a54a807a04cb4169d0cdf7072fdc2c36d4eb97fa559d1c1077ce6,PodSandboxId:6294a66cd68bc9c243ae456aea52a5bc0b3ab300e9d2370d2649dfaa8deda9be,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage
:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_EXITED,CreatedAt:1714529384391927028,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-gvcdl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9385fe21-53b5-4105-bb14-3008fcd7dc3a,},Annotations:map[string]string{io.kubernetes.container.hash: b4e4ef8,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4410f9180e8b7a76ed62481d4939e86cb64584d5029463b76ac78aba4d683fb2,PodSandboxId:52f085039ab716b6a5764a7f162fb92caeb9f15f16496e0238724151a3bcc477,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/loca
l-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1714529373783972306,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-6wdsq,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: ce525196-ae88-42bb-8519-66775d8bfd11,},Annotations:map[string]string{io.kubernetes.container.hash: 82a9c618,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea17f2d9434251df9401981536acddc1f90957bd5e65bc3d10cd23f2258cecbc,PodSandboxId:cc387497d8c12938de15c71ac1d5667043a9293348e8a60abdb3c871258371e2,Metadata:&ContainerMetadata{Name:storage-prov
isioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714529340397208355,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b23a96b2-9c34-4d4f-9df5-90dc5195248b,},Annotations:map[string]string{io.kubernetes.container.hash: 13732288,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09d11ea02380a6ff352ea6ce929b940136fe970bfecd9ad03d3100cc98c598b6,PodSandboxId:bb441acccf15e35c28edd043946725c6b690c977da8552408faeed2463860243,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Imag
e:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714529337800103649,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rlvmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9eb9071-e21b-46fc-8605-055d6915f55e,},Annotations:map[string]string{io.kubernetes.container.hash: 69f92d0f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3cfa
2da63bbf5b5bf434bebf921cd1711d24a75e5e358306e59c34caf06382f,PodSandboxId:38ef8fa37bf0ee992cf804eea09c31a3645f258cb6483f8bd8e876a77faf5186,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714529334395019868,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7dw4g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7aec44ec-1615-4aa4-9d65-e464831f8518,},Annotations:map[string]string{io.kubernetes.container.hash: 851c5df7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2a049b4c17d6d072b9097aa0071b82d6d4edc
2a255d26f724807d4ac369f9c2,PodSandboxId:6085650b010323972a3db452fa956a57d8c7020bd388875734adcffecd114fcb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714529313518598784,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-286595,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d221c44f5de61b31369bfd052ad23bd,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff3c851c7688d3c9fbb0d390c99ba4b9407c06fff923031bc3115f0
c17f49cac,PodSandboxId:c8262f04ef4be0d9d65eee36bdcdd8c16ba76c2ef296e0274e4e12a870d0b39c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714529313487129105,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-286595,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ae6bc0c9189de883182d2bdeaf96bb1,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:976be39bc269736268dbe23a871c448f5827e29fde81f
f90e0159d69f9af5bd2,PodSandboxId:f4de0dae893536d69fed3e1ba4efd516bf0ebcb53f31e84ef2e794ec189d1476,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714529313444695735,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-286595,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6fe04d16c604e95cfae2a0539842d3a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a081a71,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5d66ed0ede7ea6abf6b73f76e9bd96372ad218e49de932b1f7d31ddf968ae
30,PodSandboxId:2787103be5c6e4a6c4e2799e1eb48b451b4a6b9f490477a0420833d00ec32937,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714529313430592961,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-286595,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e1ad2676189ad08ca8b732b2016bda4,},Annotations:map[string]string{io.kubernetes.container.hash: 9a2629b0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a908dd9b-6736-4062-bb78-836bec6c56ef name=/runtime.v1.RuntimeService/ListC
ontainers
	May 01 02:16:58 addons-286595 crio[678]: time="2024-05-01 02:16:58.498524350Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9d0f7b36-a9e1-4234-b066-104880babfa8 name=/runtime.v1.RuntimeService/ListPodSandbox
	May 01 02:16:58 addons-286595 crio[678]: time="2024-05-01 02:16:58.499837385Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:8e72cc6f09b3e3fadf9ec75b9310c2d78126d3b97a1d88fd61ef25f8991d9a5f,Metadata:&PodSandboxMetadata{Name:hello-world-app-86c47465fc-jtwrv,Uid:0125296c-388d-4687-96e0-fa6da417e535,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714529645801152447,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-86c47465fc-jtwrv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0125296c-388d-4687-96e0-fa6da417e535,pod-template-hash: 86c47465fc,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-01T02:14:05.483840703Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a2dc1241af174a36ec9c93e88afa189e0d7100872a8ca7c508e972b3dff4683c,Metadata:&PodSandboxMetadata{Name:headlamp-7559bf459f-844d4,Uid:d7626782-57eb-48f0-907d-f0d1e86e250c,Namespace
:headlamp,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714529536998640779,Labels:map[string]string{app.kubernetes.io/instance: headlamp,app.kubernetes.io/name: headlamp,io.kubernetes.container.name: POD,io.kubernetes.pod.name: headlamp-7559bf459f-844d4,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: d7626782-57eb-48f0-907d-f0d1e86e250c,pod-template-hash: 7559bf459f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-01T02:12:16.682103368Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2e7d62998cc8784acf5c9dec6b82bd83857310927d192fa8b08bee020d42647d,Metadata:&PodSandboxMetadata{Name:nginx,Uid:e8f25648-6f7c-4d88-9b95-89988ad85a6b,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714529504341411039,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e8f25648-6f7c-4d88-9b95-89988ad85a6b,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-
05-01T02:11:44.029462250Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f844e33a463589aea8f33444bb22c7b510e164c021bba4c9c600c3212811974e,Metadata:&PodSandboxMetadata{Name:gcp-auth-5db96cd9b4-dgngh,Uid:73a446bd-5b8b-4a38-a644-68a5bae5a7d3,Namespace:gcp-auth,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714529479507722097,Labels:map[string]string{app: gcp-auth,io.kubernetes.container.name: POD,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-dgngh,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 73a446bd-5b8b-4a38-a644-68a5bae5a7d3,kubernetes.io/minikube-addons: gcp-auth,pod-template-hash: 5db96cd9b4,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-01T02:09:05.651010315Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ad1e349c324dba757460ebdcb36722456dd3dfcb30afd93c00934130590bf0f1,Metadata:&PodSandboxMetadata{Name:yakd-dashboard-5ddbf7d777-q2wzp,Uid:85549b73-ebbe-4fa9-9fe0-72d18004bc71,Namespace:yakd-dashboard,Attempt:0,},State:SANDBOX_READY,Creat
edAt:1714529340423867048,Labels:map[string]string{app.kubernetes.io/instance: yakd-dashboard,app.kubernetes.io/name: yakd-dashboard,gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-q2wzp,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 85549b73-ebbe-4fa9-9fe0-72d18004bc71,pod-template-hash: 5ddbf7d777,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-01T02:08:59.811522164Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6294a66cd68bc9c243ae456aea52a5bc0b3ab300e9d2370d2649dfaa8deda9be,Metadata:&PodSandboxMetadata{Name:metrics-server-c59844bb4-gvcdl,Uid:9385fe21-53b5-4105-bb14-3008fcd7dc3a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714529340067572178,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-c59844bb4-gvcdl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9385fe21-53b5-4105-bb14-3008fcd7dc3a,k8s-app: metr
ics-server,pod-template-hash: c59844bb4,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-01T02:08:59.417316527Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:cc387497d8c12938de15c71ac1d5667043a9293348e8a60abdb3c871258371e2,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:b23a96b2-9c34-4d4f-9df5-90dc5195248b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714529339609004805,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b23a96b2-9c34-4d4f-9df5-90dc5195248b,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\
"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-05-01T02:08:58.952881464Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:52f085039ab716b6a5764a7f162fb92caeb9f15f16496e0238724151a3bcc477,Metadata:&PodSandboxMetadata{Name:local-path-provisioner-8d985888d-6wdsq,Uid:ce525196-ae88-42bb-8519-66775d8bfd11,Namespace:local-path-storage,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714529338981992950,Labels:map[string]string{app: local-path-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: local-path-provisioner-8d985888d-6wdsq,io.kubernetes.pod.namespace: lo
cal-path-storage,io.kubernetes.pod.uid: ce525196-ae88-42bb-8519-66775d8bfd11,pod-template-hash: 8d985888d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-01T02:08:58.634815838Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:38ef8fa37bf0ee992cf804eea09c31a3645f258cb6483f8bd8e876a77faf5186,Metadata:&PodSandboxMetadata{Name:kube-proxy-7dw4g,Uid:7aec44ec-1615-4aa4-9d65-e464831f8518,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714529334004243923,Labels:map[string]string{controller-revision-hash: 79cf874c65,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-7dw4g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7aec44ec-1615-4aa4-9d65-e464831f8518,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-01T02:08:52.181628049Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:bb441acccf15e35c28edd043946725c6b690c977da8552408faeed2463860243,Metadata:&P
odSandboxMetadata{Name:coredns-7db6d8ff4d-rlvmm,Uid:b9eb9071-e21b-46fc-8605-055d6915f55e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714529333975872633,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-rlvmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9eb9071-e21b-46fc-8605-055d6915f55e,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-01T02:08:52.459963634Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c8262f04ef4be0d9d65eee36bdcdd8c16ba76c2ef296e0274e4e12a870d0b39c,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-addons-286595,Uid:6ae6bc0c9189de883182d2bdeaf96bb1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714529313242663877,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-addons-286595,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: 6ae6bc0c9189de883182d2bdeaf96bb1,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 6ae6bc0c9189de883182d2bdeaf96bb1,kubernetes.io/config.seen: 2024-05-01T02:08:32.752264794Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f4de0dae893536d69fed3e1ba4efd516bf0ebcb53f31e84ef2e794ec189d1476,Metadata:&PodSandboxMetadata{Name:kube-apiserver-addons-286595,Uid:d6fe04d16c604e95cfae2a0539842d3a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714529313241989951,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-addons-286595,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6fe04d16c604e95cfae2a0539842d3a,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.173:8443,kubernetes.io/config.hash: d6fe04d16c604e95cfae2a0539842d3a,kubernetes.io/config.seen: 20
24-05-01T02:08:32.752263878Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2787103be5c6e4a6c4e2799e1eb48b451b4a6b9f490477a0420833d00ec32937,Metadata:&PodSandboxMetadata{Name:etcd-addons-286595,Uid:4e1ad2676189ad08ca8b732b2016bda4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714529313207093235,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-addons-286595,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e1ad2676189ad08ca8b732b2016bda4,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.173:2379,kubernetes.io/config.hash: 4e1ad2676189ad08ca8b732b2016bda4,kubernetes.io/config.seen: 2024-05-01T02:08:32.752262657Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:6085650b010323972a3db452fa956a57d8c7020bd388875734adcffecd114fcb,Metadata:&PodSandboxMetadata{Name:kube-scheduler-addons-286595,Uid:6d221c44f5de61b31369bfd
052ad23bd,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714529313205970014,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-addons-286595,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d221c44f5de61b31369bfd052ad23bd,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 6d221c44f5de61b31369bfd052ad23bd,kubernetes.io/config.seen: 2024-05-01T02:08:32.752258643Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=9d0f7b36-a9e1-4234-b066-104880babfa8 name=/runtime.v1.RuntimeService/ListPodSandbox
	May 01 02:16:58 addons-286595 crio[678]: time="2024-05-01 02:16:58.501252255Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=06af305c-07f9-484b-9d5b-078f7c407071 name=/runtime.v1.RuntimeService/ListContainers
	May 01 02:16:58 addons-286595 crio[678]: time="2024-05-01 02:16:58.501308962Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=06af305c-07f9-484b-9d5b-078f7c407071 name=/runtime.v1.RuntimeService/ListContainers
	May 01 02:16:58 addons-286595 crio[678]: time="2024-05-01 02:16:58.501992072Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3dc8451202c845e1cfc4e3c28974631a42f49e34ea0411ce1b1faac0ae57f237,PodSandboxId:8e72cc6f09b3e3fadf9ec75b9310c2d78126d3b97a1d88fd61ef25f8991d9a5f,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1714529649599140523,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-jtwrv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0125296c-388d-4687-96e0-fa6da417e535,},Annotations:map[string]string{io.kubernetes.container.hash: a80b439f,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fa4fb94b082c838ebe8a1532662751e9183f51fb69467cfab3cd6cc237ca435,PodSandboxId:a2dc1241af174a36ec9c93e88afa189e0d7100872a8ca7c508e972b3dff4683c,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7373e995f4086a9db4ce8b2f96af2c2ae7f319e3e7e2ebdc1291e9c50ae4437e,State:CONTAINER_RUNNING,CreatedAt:1714529543354931898,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7559bf459f-844d4,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: d7626782-57eb-48f0-907d-f0d1e86e250c,},Annota
tions:map[string]string{io.kubernetes.container.hash: 6de05c84,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2bc8bf0be7ba2293941b7d54385fbe3193360b4e299b1b731da89f470069a51,PodSandboxId:2e7d62998cc8784acf5c9dec6b82bd83857310927d192fa8b08bee020d42647d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:dd524baac105f5353429a7022c26a02c8c80d95a50cb4d34b6e19a3a4289ff88,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f4215f6ee683f29c0a4611b02d1adc3b7d986a96ab894eb5f7b9437c862c9499,State:CONTAINER_RUNNING,CreatedAt:1714529508994443047,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: defaul
t,io.kubernetes.pod.uid: e8f25648-6f7c-4d88-9b95-89988ad85a6b,},Annotations:map[string]string{io.kubernetes.container.hash: 9da3a0f0,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10db6a0b55c4b872bf2919f3b05779544eabcca22bf61fbb6744de0ab2d8afb5,PodSandboxId:f844e33a463589aea8f33444bb22c7b510e164c021bba4c9c600c3212811974e,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1714529483336772469,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-dgngh,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 73a446bd-5b8b-4a38-a644-68a5bae5a7d3,},Annotations:map[string]string{io.kubernetes.container.hash: 77530408,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a62a664191c4977760c04227d07b9b820559d285509fc2756deed35ae140a10,PodSandboxId:ad1e349c324dba757460ebdcb36722456dd3dfcb30afd93c00934130590bf0f1,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:17145
29390648983383,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-q2wzp,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 85549b73-ebbe-4fa9-9fe0-72d18004bc71,},Annotations:map[string]string{io.kubernetes.container.hash: a5b1870f,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4410f9180e8b7a76ed62481d4939e86cb64584d5029463b76ac78aba4d683fb2,PodSandboxId:52f085039ab716b6a5764a7f162fb92caeb9f15f16496e0238724151a3bcc477,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedIm
age:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1714529373783972306,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-6wdsq,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: ce525196-ae88-42bb-8519-66775d8bfd11,},Annotations:map[string]string{io.kubernetes.container.hash: 82a9c618,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea17f2d9434251df9401981536acddc1f90957bd5e65bc3d10cd23f2258cecbc,PodSandboxId:cc387497d8c12938de15c71ac1d5667043a9293348e8a60abdb3c871258371e2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{
},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714529340397208355,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b23a96b2-9c34-4d4f-9df5-90dc5195248b,},Annotations:map[string]string{io.kubernetes.container.hash: 13732288,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09d11ea02380a6ff352ea6ce929b940136fe970bfecd9ad03d3100cc98c598b6,PodSandboxId:bb441acccf15e35c28edd043946725c6b690c977da8552408faeed2463860243,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,Ru
ntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714529337800103649,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rlvmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9eb9071-e21b-46fc-8605-055d6915f55e,},Annotations:map[string]string{io.kubernetes.container.hash: 69f92d0f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3cfa2da63bbf5b5bf434bebf921cd1711d24a75e5e358306e59c34caf06382f,PodSandboxId:38ef8fa37bf0ee992cf804eea09c31a3645f258cb6483f8bd8e876a77faf5186,
Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714529334395019868,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7dw4g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7aec44ec-1615-4aa4-9d65-e464831f8518,},Annotations:map[string]string{io.kubernetes.container.hash: 851c5df7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2a049b4c17d6d072b9097aa0071b82d6d4edc2a255d26f724807d4ac369f9c2,PodSandboxId:6085650b010323972a3db452fa956a57d8c7020bd388875734adcffecd114fcb,Metadata:&ContainerMetadata{Name:
kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714529313518598784,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-286595,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d221c44f5de61b31369bfd052ad23bd,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff3c851c7688d3c9fbb0d390c99ba4b9407c06fff923031bc3115f0c17f49cac,PodSandboxId:c8262f04ef4be0d9d65eee36bdcdd8c16ba76c2ef296e0274e4e12a870d0b39c,Metadata:&ContainerMetadata{Name:kube-controller-m
anager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714529313487129105,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-286595,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ae6bc0c9189de883182d2bdeaf96bb1,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:976be39bc269736268dbe23a871c448f5827e29fde81ff90e0159d69f9af5bd2,PodSandboxId:f4de0dae893536d69fed3e1ba4efd516bf0ebcb53f31e84ef2e794ec189d1476,Metadata:&ContainerMetadata{Name:kube-ap
iserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714529313444695735,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-286595,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6fe04d16c604e95cfae2a0539842d3a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a081a71,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5d66ed0ede7ea6abf6b73f76e9bd96372ad218e49de932b1f7d31ddf968ae30,PodSandboxId:2787103be5c6e4a6c4e2799e1eb48b451b4a6b9f490477a0420833d00ec32937,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&
ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714529313430592961,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-286595,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e1ad2676189ad08ca8b732b2016bda4,},Annotations:map[string]string{io.kubernetes.container.hash: 9a2629b0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=06af305c-07f9-484b-9d5b-078f7c407071 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3dc8451202c84       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                 2 minutes ago       Running             hello-world-app           0                   8e72cc6f09b3e       hello-world-app-86c47465fc-jtwrv
	9fa4fb94b082c       ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f                   4 minutes ago       Running             headlamp                  0                   a2dc1241af174       headlamp-7559bf459f-844d4
	a2bc8bf0be7ba       docker.io/library/nginx@sha256:dd524baac105f5353429a7022c26a02c8c80d95a50cb4d34b6e19a3a4289ff88                         5 minutes ago       Running             nginx                     0                   2e7d62998cc87       nginx
	10db6a0b55c4b       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b            5 minutes ago       Running             gcp-auth                  0                   f844e33a46358       gcp-auth-5db96cd9b4-dgngh
	3a62a664191c4       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                         7 minutes ago       Running             yakd                      0                   ad1e349c324db       yakd-dashboard-5ddbf7d777-q2wzp
	c2e873794e6a5       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872   7 minutes ago       Exited              metrics-server            0                   6294a66cd68bc       metrics-server-c59844bb4-gvcdl
	4410f9180e8b7       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef        7 minutes ago       Running             local-path-provisioner    0                   52f085039ab71       local-path-provisioner-8d985888d-6wdsq
	ea17f2d943425       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        7 minutes ago       Running             storage-provisioner       0                   cc387497d8c12       storage-provisioner
	09d11ea02380a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                        8 minutes ago       Running             coredns                   0                   bb441acccf15e       coredns-7db6d8ff4d-rlvmm
	e3cfa2da63bbf       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                                        8 minutes ago       Running             kube-proxy                0                   38ef8fa37bf0e       kube-proxy-7dw4g
	f2a049b4c17d6       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                                        8 minutes ago       Running             kube-scheduler            0                   6085650b01032       kube-scheduler-addons-286595
	ff3c851c7688d       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                                        8 minutes ago       Running             kube-controller-manager   0                   c8262f04ef4be       kube-controller-manager-addons-286595
	976be39bc2697       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                                        8 minutes ago       Running             kube-apiserver            0                   f4de0dae89353       kube-apiserver-addons-286595
	f5d66ed0ede7e       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                        8 minutes ago       Running             etcd                      0                   2787103be5c6e       etcd-addons-286595
	
	
	==> coredns [09d11ea02380a6ff352ea6ce929b940136fe970bfecd9ad03d3100cc98c598b6] <==
	[INFO] 10.244.0.8:56913 - 16125 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000619599s
	[INFO] 10.244.0.8:33023 - 5277 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000149458s
	[INFO] 10.244.0.8:33023 - 45723 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000373581s
	[INFO] 10.244.0.8:43266 - 36546 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000146545s
	[INFO] 10.244.0.8:43266 - 15812 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000132869s
	[INFO] 10.244.0.8:59983 - 49520 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000113131s
	[INFO] 10.244.0.8:59983 - 29299 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000058142s
	[INFO] 10.244.0.8:41609 - 25749 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000480171s
	[INFO] 10.244.0.8:41609 - 26000 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000547791s
	[INFO] 10.244.0.8:38364 - 15795 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000122158s
	[INFO] 10.244.0.8:38364 - 43952 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000156286s
	[INFO] 10.244.0.8:33956 - 37525 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000074082s
	[INFO] 10.244.0.8:33956 - 60311 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00019137s
	[INFO] 10.244.0.8:35494 - 32708 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000067575s
	[INFO] 10.244.0.8:35494 - 12762 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000033832s
	[INFO] 10.244.0.22:58112 - 64578 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000504346s
	[INFO] 10.244.0.22:57420 - 44570 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000283478s
	[INFO] 10.244.0.22:33874 - 4246 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000185886s
	[INFO] 10.244.0.22:35305 - 60562 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00012197s
	[INFO] 10.244.0.22:38894 - 1320 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000077391s
	[INFO] 10.244.0.22:50127 - 57463 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000345002s
	[INFO] 10.244.0.22:47380 - 48268 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001029882s
	[INFO] 10.244.0.22:55881 - 56349 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.001253411s
	[INFO] 10.244.0.26:45143 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00021357s
	[INFO] 10.244.0.26:49210 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000096037s
	
	
	==> describe nodes <==
	Name:               addons-286595
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-286595
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e
	                    minikube.k8s.io/name=addons-286595
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_01T02_08_39_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-286595
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 May 2024 02:08:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-286595
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 May 2024 02:16:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 May 2024 02:14:16 +0000   Wed, 01 May 2024 02:08:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 May 2024 02:14:16 +0000   Wed, 01 May 2024 02:08:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 May 2024 02:14:16 +0000   Wed, 01 May 2024 02:08:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 May 2024 02:14:16 +0000   Wed, 01 May 2024 02:08:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.173
	  Hostname:    addons-286595
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 bc112c0dc1d8478892371fb1c7c107fa
	  System UUID:                bc112c0d-c1d8-4788-9237-1fb1c7c107fa
	  Boot ID:                    d6cd403d-3270-41ed-8568-6727e96b7924
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                      ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-86c47465fc-jtwrv          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m53s
	  default                     nginx                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m14s
	  gcp-auth                    gcp-auth-5db96cd9b4-dgngh                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m53s
	  headlamp                    headlamp-7559bf459f-844d4                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m42s
	  kube-system                 coredns-7db6d8ff4d-rlvmm                  100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     8m6s
	  kube-system                 etcd-addons-286595                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         8m20s
	  kube-system                 kube-apiserver-addons-286595              250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m20s
	  kube-system                 kube-controller-manager-addons-286595     200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m20s
	  kube-system                 kube-proxy-7dw4g                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m6s
	  kube-system                 kube-scheduler-addons-286595              100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m20s
	  kube-system                 storage-provisioner                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m
	  local-path-storage          local-path-provisioner-8d985888d-6wdsq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m
	  yakd-dashboard              yakd-dashboard-5ddbf7d777-q2wzp           0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     7m59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             298Mi (7%)  426Mi (11%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 8m3s   kube-proxy       
	  Normal  Starting                 8m20s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m20s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m20s  kubelet          Node addons-286595 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m20s  kubelet          Node addons-286595 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m20s  kubelet          Node addons-286595 status is now: NodeHasSufficientPID
	  Normal  NodeReady                8m19s  kubelet          Node addons-286595 status is now: NodeReady
	  Normal  RegisteredNode           8m7s   node-controller  Node addons-286595 event: Registered Node addons-286595 in Controller
	
	
	==> dmesg <==
	[  +5.077667] kauditd_printk_skb: 101 callbacks suppressed
	[May 1 02:09] kauditd_printk_skb: 136 callbacks suppressed
	[  +5.233590] kauditd_printk_skb: 89 callbacks suppressed
	[ +22.125333] kauditd_printk_skb: 4 callbacks suppressed
	[ +10.667585] kauditd_printk_skb: 2 callbacks suppressed
	[  +9.336300] kauditd_printk_skb: 30 callbacks suppressed
	[  +9.842048] kauditd_printk_skb: 4 callbacks suppressed
	[May 1 02:10] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.144770] kauditd_printk_skb: 39 callbacks suppressed
	[  +6.587913] kauditd_printk_skb: 50 callbacks suppressed
	[  +5.576870] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.225177] kauditd_printk_skb: 7 callbacks suppressed
	[ +21.336193] kauditd_printk_skb: 28 callbacks suppressed
	[May 1 02:11] kauditd_printk_skb: 24 callbacks suppressed
	[  +5.585570] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.429410] kauditd_printk_skb: 25 callbacks suppressed
	[  +5.103066] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.146607] kauditd_printk_skb: 20 callbacks suppressed
	[  +6.879216] kauditd_printk_skb: 71 callbacks suppressed
	[May 1 02:12] kauditd_printk_skb: 3 callbacks suppressed
	[  +6.401210] kauditd_printk_skb: 23 callbacks suppressed
	[  +7.766567] kauditd_printk_skb: 53 callbacks suppressed
	[  +5.041797] kauditd_printk_skb: 21 callbacks suppressed
	[May 1 02:14] kauditd_printk_skb: 8 callbacks suppressed
	[  +5.768945] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [f5d66ed0ede7ea6abf6b73f76e9bd96372ad218e49de932b1f7d31ddf968ae30] <==
	{"level":"info","ts":"2024-05-01T02:10:00.667256Z","caller":"traceutil/trace.go:171","msg":"trace[1763656184] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:982; }","duration":"138.376765ms","start":"2024-05-01T02:10:00.528866Z","end":"2024-05-01T02:10:00.667243Z","steps":["trace[1763656184] 'range keys from in-memory index tree'  (duration: 138.080651ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-01T02:10:18.654654Z","caller":"traceutil/trace.go:171","msg":"trace[708212347] transaction","detail":"{read_only:false; response_revision:1090; number_of_response:1; }","duration":"428.323682ms","start":"2024-05-01T02:10:18.226294Z","end":"2024-05-01T02:10:18.654618Z","steps":["trace[708212347] 'process raft request'  (duration: 428.203604ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-01T02:10:18.654923Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-01T02:10:18.226274Z","time spent":"428.519597ms","remote":"127.0.0.1:53582","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":764,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/gadget/gadget-2kks5.17cb3b6c022d329e\" mod_revision:0 > success:<request_put:<key:\"/registry/events/gadget/gadget-2kks5.17cb3b6c022d329e\" value_size:693 lease:2165825915339397124 >> failure:<>"}
	{"level":"info","ts":"2024-05-01T02:10:18.75593Z","caller":"traceutil/trace.go:171","msg":"trace[229671378] transaction","detail":"{read_only:false; response_revision:1091; number_of_response:1; }","duration":"510.596733ms","start":"2024-05-01T02:10:18.245317Z","end":"2024-05-01T02:10:18.755913Z","steps":["trace[229671378] 'process raft request'  (duration: 509.963049ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-01T02:10:18.75607Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-01T02:10:18.245298Z","time spent":"510.71107ms","remote":"127.0.0.1:53680","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":11080,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/gadget/gadget-2kks5\" mod_revision:1070 > success:<request_put:<key:\"/registry/pods/gadget/gadget-2kks5\" value_size:11038 >> failure:<request_range:<key:\"/registry/pods/gadget/gadget-2kks5\" > >"}
	{"level":"warn","ts":"2024-05-01T02:10:18.75611Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"415.398582ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14363"}
	{"level":"info","ts":"2024-05-01T02:10:18.756141Z","caller":"traceutil/trace.go:171","msg":"trace[1491624114] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1092; }","duration":"415.458887ms","start":"2024-05-01T02:10:18.340676Z","end":"2024-05-01T02:10:18.756135Z","steps":["trace[1491624114] 'agreement among raft nodes before linearized reading'  (duration: 415.294191ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-01T02:10:18.75616Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-01T02:10:18.340662Z","time spent":"415.492252ms","remote":"127.0.0.1:53680","response type":"/etcdserverpb.KV/Range","request count":0,"request size":62,"response count":3,"response size":14387,"request content":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" "}
	{"level":"info","ts":"2024-05-01T02:10:18.755929Z","caller":"traceutil/trace.go:171","msg":"trace[305672440] linearizableReadLoop","detail":"{readStateIndex:1127; appliedIndex:1126; }","duration":"415.217462ms","start":"2024-05-01T02:10:18.3407Z","end":"2024-05-01T02:10:18.755918Z","steps":["trace[305672440] 'read index received'  (duration: 314.533058ms)","trace[305672440] 'applied index is now lower than readState.Index'  (duration: 100.682167ms)"],"step_count":2}
	{"level":"warn","ts":"2024-05-01T02:10:18.756293Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"415.50259ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11453"}
	{"level":"info","ts":"2024-05-01T02:10:18.75631Z","caller":"traceutil/trace.go:171","msg":"trace[1404787745] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1092; }","duration":"415.540287ms","start":"2024-05-01T02:10:18.340764Z","end":"2024-05-01T02:10:18.756305Z","steps":["trace[1404787745] 'agreement among raft nodes before linearized reading'  (duration: 415.463297ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-01T02:10:18.756325Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-01T02:10:18.340727Z","time spent":"415.594813ms","remote":"127.0.0.1:53680","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":3,"response size":11477,"request content":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" "}
	{"level":"info","ts":"2024-05-01T02:10:18.756468Z","caller":"traceutil/trace.go:171","msg":"trace[1388601756] transaction","detail":"{read_only:false; response_revision:1092; number_of_response:1; }","duration":"169.339832ms","start":"2024-05-01T02:10:18.587119Z","end":"2024-05-01T02:10:18.756458Z","steps":["trace[1388601756] 'process raft request'  (duration: 168.729753ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-01T02:10:18.756666Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"235.014365ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:85556"}
	{"level":"info","ts":"2024-05-01T02:10:18.756686Z","caller":"traceutil/trace.go:171","msg":"trace[2085779939] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1092; }","duration":"235.056712ms","start":"2024-05-01T02:10:18.521624Z","end":"2024-05-01T02:10:18.756681Z","steps":["trace[2085779939] 'agreement among raft nodes before linearized reading'  (duration: 234.888839ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-01T02:10:18.75678Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"378.100245ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2024-05-01T02:10:18.756802Z","caller":"traceutil/trace.go:171","msg":"trace[528246052] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1092; }","duration":"378.147187ms","start":"2024-05-01T02:10:18.378649Z","end":"2024-05-01T02:10:18.756796Z","steps":["trace[528246052] 'agreement among raft nodes before linearized reading'  (duration: 378.077291ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-01T02:10:18.756824Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-01T02:10:18.378635Z","time spent":"378.185809ms","remote":"127.0.0.1:53768","response type":"/etcdserverpb.KV/Range","request count":0,"request size":57,"response count":1,"response size":523,"request content":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" "}
	{"level":"warn","ts":"2024-05-01T02:11:34.265514Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"162.139762ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/podtemplates/\" range_end:\"/registry/podtemplates0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-05-01T02:11:34.265602Z","caller":"traceutil/trace.go:171","msg":"trace[46202076] range","detail":"{range_begin:/registry/podtemplates/; range_end:/registry/podtemplates0; response_count:0; response_revision:1338; }","duration":"162.252899ms","start":"2024-05-01T02:11:34.103335Z","end":"2024-05-01T02:11:34.265587Z","steps":["trace[46202076] 'count revisions from in-memory index tree'  (duration: 161.969468ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-01T02:11:34.265553Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"203.679123ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-05-01T02:11:34.265658Z","caller":"traceutil/trace.go:171","msg":"trace[330583683] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1338; }","duration":"203.841499ms","start":"2024-05-01T02:11:34.061807Z","end":"2024-05-01T02:11:34.265648Z","steps":["trace[330583683] 'range keys from in-memory index tree'  (duration: 203.66836ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-01T02:11:40.250535Z","caller":"traceutil/trace.go:171","msg":"trace[1595426452] transaction","detail":"{read_only:false; response_revision:1361; number_of_response:1; }","duration":"135.472331ms","start":"2024-05-01T02:11:40.115041Z","end":"2024-05-01T02:11:40.250513Z","steps":["trace[1595426452] 'process raft request'  (duration: 134.905036ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-01T02:12:22.340283Z","caller":"traceutil/trace.go:171","msg":"trace[1906510856] transaction","detail":"{read_only:false; response_revision:1741; number_of_response:1; }","duration":"425.738233ms","start":"2024-05-01T02:12:21.913606Z","end":"2024-05-01T02:12:22.339344Z","steps":["trace[1906510856] 'process raft request'  (duration: 424.340141ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-01T02:12:22.341499Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-01T02:12:21.913588Z","time spent":"427.204429ms","remote":"127.0.0.1:53676","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1731 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	
	
	==> gcp-auth [10db6a0b55c4b872bf2919f3b05779544eabcca22bf61fbb6744de0ab2d8afb5] <==
	2024/05/01 02:11:23 GCP Auth Webhook started!
	2024/05/01 02:11:24 Ready to marshal response ...
	2024/05/01 02:11:24 Ready to write response ...
	2024/05/01 02:11:24 Ready to marshal response ...
	2024/05/01 02:11:24 Ready to write response ...
	2024/05/01 02:11:26 Ready to marshal response ...
	2024/05/01 02:11:26 Ready to write response ...
	2024/05/01 02:11:35 Ready to marshal response ...
	2024/05/01 02:11:35 Ready to write response ...
	2024/05/01 02:11:40 Ready to marshal response ...
	2024/05/01 02:11:40 Ready to write response ...
	2024/05/01 02:11:44 Ready to marshal response ...
	2024/05/01 02:11:44 Ready to write response ...
	2024/05/01 02:12:02 Ready to marshal response ...
	2024/05/01 02:12:02 Ready to write response ...
	2024/05/01 02:12:02 Ready to marshal response ...
	2024/05/01 02:12:02 Ready to write response ...
	2024/05/01 02:12:16 Ready to marshal response ...
	2024/05/01 02:12:16 Ready to write response ...
	2024/05/01 02:12:16 Ready to marshal response ...
	2024/05/01 02:12:16 Ready to write response ...
	2024/05/01 02:12:16 Ready to marshal response ...
	2024/05/01 02:12:16 Ready to write response ...
	2024/05/01 02:14:05 Ready to marshal response ...
	2024/05/01 02:14:05 Ready to write response ...
	
	
	==> kernel <==
	 02:16:58 up 8 min,  0 users,  load average: 0.03, 0.59, 0.49
	Linux addons-286595 5.10.207 #1 SMP Tue Apr 30 22:38:43 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [976be39bc269736268dbe23a871c448f5827e29fde81ff90e0159d69f9af5bd2] <==
	E0501 02:10:50.075819       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.100.129:443/apis/metrics.k8s.io/v1beta1: Get "https://10.99.100.129:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.99.100.129:443: connect: connection refused
	E0501 02:10:50.077983       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.100.129:443/apis/metrics.k8s.io/v1beta1: Get "https://10.99.100.129:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.99.100.129:443: connect: connection refused
	E0501 02:10:50.082957       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.100.129:443/apis/metrics.k8s.io/v1beta1: Get "https://10.99.100.129:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.99.100.129:443: connect: connection refused
	I0501 02:10:50.197599       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0501 02:11:42.839672       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0501 02:11:43.854833       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0501 02:11:44.085132       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.104.170.104"}
	I0501 02:11:46.578526       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0501 02:11:47.618719       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	E0501 02:12:07.258988       1 upgradeaware.go:427] Error proxying data from client to backend: read tcp 192.168.39.173:8443->10.244.0.30:36490: read: connection reset by peer
	I0501 02:12:16.601233       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.104.104.63"}
	I0501 02:12:18.783704       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0501 02:12:18.783754       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0501 02:12:18.812604       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0501 02:12:18.812671       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0501 02:12:18.872882       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0501 02:12:18.872914       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0501 02:12:18.906782       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0501 02:12:18.906876       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0501 02:12:18.995006       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0501 02:12:18.995035       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0501 02:12:19.906915       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0501 02:12:19.996042       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0501 02:12:20.003716       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0501 02:14:05.655996       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.108.167.24"}
	
	
	==> kube-controller-manager [ff3c851c7688d3c9fbb0d390c99ba4b9407c06fff923031bc3115f0c17f49cac] <==
	W0501 02:14:39.076755       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0501 02:14:39.076833       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0501 02:14:59.213874       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0501 02:14:59.213979       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0501 02:15:00.572878       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0501 02:15:00.573078       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0501 02:15:03.907108       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0501 02:15:03.907170       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0501 02:15:34.649511       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0501 02:15:34.649565       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0501 02:15:49.176045       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0501 02:15:49.176159       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0501 02:15:49.677578       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0501 02:15:49.677639       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0501 02:15:57.805129       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0501 02:15:57.805161       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0501 02:16:21.335564       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0501 02:16:21.335701       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0501 02:16:21.507070       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0501 02:16:21.507256       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0501 02:16:22.574078       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0501 02:16:22.574113       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0501 02:16:56.952614       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0501 02:16:56.952643       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0501 02:16:57.257504       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-c59844bb4" duration="15.946µs"
	
	
	==> kube-proxy [e3cfa2da63bbf5b5bf434bebf921cd1711d24a75e5e358306e59c34caf06382f] <==
	I0501 02:08:55.440577       1 server_linux.go:69] "Using iptables proxy"
	I0501 02:08:55.468633       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.173"]
	I0501 02:08:55.591758       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0501 02:08:55.591795       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0501 02:08:55.591817       1 server_linux.go:165] "Using iptables Proxier"
	I0501 02:08:55.601494       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0501 02:08:55.601722       1 server.go:872] "Version info" version="v1.30.0"
	I0501 02:08:55.601733       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 02:08:55.602732       1 config.go:192] "Starting service config controller"
	I0501 02:08:55.602776       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0501 02:08:55.602800       1 config.go:101] "Starting endpoint slice config controller"
	I0501 02:08:55.602803       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0501 02:08:55.603241       1 config.go:319] "Starting node config controller"
	I0501 02:08:55.603279       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0501 02:08:55.703149       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0501 02:08:55.703190       1 shared_informer.go:320] Caches are synced for service config
	I0501 02:08:55.703455       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [f2a049b4c17d6d072b9097aa0071b82d6d4edc2a255d26f724807d4ac369f9c2] <==
	W0501 02:08:36.058229       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0501 02:08:36.062463       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0501 02:08:37.002461       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0501 02:08:37.002515       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0501 02:08:37.118477       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0501 02:08:37.118529       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0501 02:08:37.136128       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0501 02:08:37.136190       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0501 02:08:37.194929       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0501 02:08:37.195453       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0501 02:08:37.214862       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0501 02:08:37.214917       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0501 02:08:37.219246       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0501 02:08:37.219331       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0501 02:08:37.244163       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0501 02:08:37.244225       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0501 02:08:37.337586       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0501 02:08:37.337683       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0501 02:08:37.341063       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0501 02:08:37.341149       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0501 02:08:37.371774       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0501 02:08:37.371831       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0501 02:08:37.584546       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0501 02:08:37.584878       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0501 02:08:39.348743       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 01 02:14:38 addons-286595 kubelet[1272]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 01 02:14:38 addons-286595 kubelet[1272]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 01 02:14:38 addons-286595 kubelet[1272]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 01 02:14:41 addons-286595 kubelet[1272]: I0501 02:14:41.313938    1272 scope.go:117] "RemoveContainer" containerID="432cc59073fd29047b54aeeb84e6631af1878efef18205659447c86e2699bcb9"
	May 01 02:14:41 addons-286595 kubelet[1272]: I0501 02:14:41.341929    1272 scope.go:117] "RemoveContainer" containerID="c40abe3c7fabbea438a13626a124e09b026a80d64d27c409ea806f4fd413d56c"
	May 01 02:15:38 addons-286595 kubelet[1272]: E0501 02:15:38.545818    1272 iptables.go:577] "Could not set up iptables canary" err=<
	May 01 02:15:38 addons-286595 kubelet[1272]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 01 02:15:38 addons-286595 kubelet[1272]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 01 02:15:38 addons-286595 kubelet[1272]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 01 02:15:38 addons-286595 kubelet[1272]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 01 02:16:38 addons-286595 kubelet[1272]: E0501 02:16:38.545758    1272 iptables.go:577] "Could not set up iptables canary" err=<
	May 01 02:16:38 addons-286595 kubelet[1272]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 01 02:16:38 addons-286595 kubelet[1272]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 01 02:16:38 addons-286595 kubelet[1272]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 01 02:16:38 addons-286595 kubelet[1272]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 01 02:16:58 addons-286595 kubelet[1272]: I0501 02:16:58.775031    1272 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9385fe21-53b5-4105-bb14-3008fcd7dc3a-tmp-dir\") pod \"9385fe21-53b5-4105-bb14-3008fcd7dc3a\" (UID: \"9385fe21-53b5-4105-bb14-3008fcd7dc3a\") "
	May 01 02:16:58 addons-286595 kubelet[1272]: I0501 02:16:58.775074    1272 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhpp8\" (UniqueName: \"kubernetes.io/projected/9385fe21-53b5-4105-bb14-3008fcd7dc3a-kube-api-access-bhpp8\") pod \"9385fe21-53b5-4105-bb14-3008fcd7dc3a\" (UID: \"9385fe21-53b5-4105-bb14-3008fcd7dc3a\") "
	May 01 02:16:58 addons-286595 kubelet[1272]: I0501 02:16:58.776003    1272 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9385fe21-53b5-4105-bb14-3008fcd7dc3a-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "9385fe21-53b5-4105-bb14-3008fcd7dc3a" (UID: "9385fe21-53b5-4105-bb14-3008fcd7dc3a"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	May 01 02:16:58 addons-286595 kubelet[1272]: I0501 02:16:58.784447    1272 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9385fe21-53b5-4105-bb14-3008fcd7dc3a-kube-api-access-bhpp8" (OuterVolumeSpecName: "kube-api-access-bhpp8") pod "9385fe21-53b5-4105-bb14-3008fcd7dc3a" (UID: "9385fe21-53b5-4105-bb14-3008fcd7dc3a"). InnerVolumeSpecName "kube-api-access-bhpp8". PluginName "kubernetes.io/projected", VolumeGidValue ""
	May 01 02:16:58 addons-286595 kubelet[1272]: I0501 02:16:58.875546    1272 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-bhpp8\" (UniqueName: \"kubernetes.io/projected/9385fe21-53b5-4105-bb14-3008fcd7dc3a-kube-api-access-bhpp8\") on node \"addons-286595\" DevicePath \"\""
	May 01 02:16:58 addons-286595 kubelet[1272]: I0501 02:16:58.875604    1272 reconciler_common.go:289] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9385fe21-53b5-4105-bb14-3008fcd7dc3a-tmp-dir\") on node \"addons-286595\" DevicePath \"\""
	May 01 02:16:59 addons-286595 kubelet[1272]: I0501 02:16:59.136012    1272 scope.go:117] "RemoveContainer" containerID="c2e873794e6a54a807a04cb4169d0cdf7072fdc2c36d4eb97fa559d1c1077ce6"
	May 01 02:16:59 addons-286595 kubelet[1272]: I0501 02:16:59.218297    1272 scope.go:117] "RemoveContainer" containerID="c2e873794e6a54a807a04cb4169d0cdf7072fdc2c36d4eb97fa559d1c1077ce6"
	May 01 02:16:59 addons-286595 kubelet[1272]: E0501 02:16:59.223084    1272 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2e873794e6a54a807a04cb4169d0cdf7072fdc2c36d4eb97fa559d1c1077ce6\": container with ID starting with c2e873794e6a54a807a04cb4169d0cdf7072fdc2c36d4eb97fa559d1c1077ce6 not found: ID does not exist" containerID="c2e873794e6a54a807a04cb4169d0cdf7072fdc2c36d4eb97fa559d1c1077ce6"
	May 01 02:16:59 addons-286595 kubelet[1272]: I0501 02:16:59.223128    1272 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2e873794e6a54a807a04cb4169d0cdf7072fdc2c36d4eb97fa559d1c1077ce6"} err="failed to get container status \"c2e873794e6a54a807a04cb4169d0cdf7072fdc2c36d4eb97fa559d1c1077ce6\": rpc error: code = NotFound desc = could not find container \"c2e873794e6a54a807a04cb4169d0cdf7072fdc2c36d4eb97fa559d1c1077ce6\": container with ID starting with c2e873794e6a54a807a04cb4169d0cdf7072fdc2c36d4eb97fa559d1c1077ce6 not found: ID does not exist"
	
	
	==> storage-provisioner [ea17f2d9434251df9401981536acddc1f90957bd5e65bc3d10cd23f2258cecbc] <==
	I0501 02:09:00.917737       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0501 02:09:00.928035       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0501 02:09:00.928072       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0501 02:09:00.943117       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0501 02:09:00.943339       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-286595_5d8efd10-74c8-4326-b5c8-ec5c064e6fc1!
	I0501 02:09:00.944530       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7ba41ed1-cb3f-4e11-b6c3-df3b8bded704", APIVersion:"v1", ResourceVersion:"625", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-286595_5d8efd10-74c8-4326-b5c8-ec5c064e6fc1 became leader
	I0501 02:09:01.043748       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-286595_5d8efd10-74c8-4326-b5c8-ec5c064e6fc1!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-286595 -n addons-286595
helpers_test.go:261: (dbg) Run:  kubectl --context addons-286595 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (335.44s)
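The kube-apiserver log above shows the v1beta1.metrics.k8s.io APIService repeatedly unreachable at 10.99.100.129:443, and the controller-manager trace ends with the metrics-server ReplicaSet being synced away, so the aggregated metrics API most likely never turned Available within the test's timeout. A manual spot-check along the same lines might look as follows (the k8s-app=metrics-server label selector is an assumption; it is the label the minikube addon normally applies):

kubectl --context addons-286595 get apiservice v1beta1.metrics.k8s.io
kubectl --context addons-286595 -n kube-system get pods -l k8s-app=metrics-server -o wide
kubectl --context addons-286595 top nodes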

TestAddons/parallel/LocalPath (16.61s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-286595 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-286595 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-286595 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-286595 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-286595 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-286595 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-286595 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-286595 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-286595 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-286595 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [8918c463-0bcb-4276-a0d5-2c2f244e3c53] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [8918c463-0bcb-4276-a0d5-2c2f244e3c53] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [8918c463-0bcb-4276-a0d5-2c2f244e3c53] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 8.004075117s
addons_test.go:891: (dbg) Run:  kubectl --context addons-286595 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-amd64 -p addons-286595 ssh "cat /opt/local-path-provisioner/pvc-e2a3e7ab-0856-4130-bea1-c8089bb4ffec_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-286595 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-286595 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-amd64 -p addons-286595 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-286595 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (581.089106ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0501 02:11:40.512115   22842 out.go:291] Setting OutFile to fd 1 ...
	I0501 02:11:40.512329   22842 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:11:40.512343   22842 out.go:304] Setting ErrFile to fd 2...
	I0501 02:11:40.512350   22842 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:11:40.512629   22842 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18779-13391/.minikube/bin
	I0501 02:11:40.512983   22842 mustload.go:65] Loading cluster: addons-286595
	I0501 02:11:40.513463   22842 config.go:182] Loaded profile config "addons-286595": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 02:11:40.513493   22842 addons.go:597] checking whether the cluster is paused
	I0501 02:11:40.513674   22842 config.go:182] Loaded profile config "addons-286595": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 02:11:40.513695   22842 host.go:66] Checking if "addons-286595" exists ...
	I0501 02:11:40.514227   22842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:11:40.514283   22842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:11:40.529517   22842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34905
	I0501 02:11:40.529982   22842 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:11:40.530651   22842 main.go:141] libmachine: Using API Version  1
	I0501 02:11:40.530680   22842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:11:40.531036   22842 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:11:40.531224   22842 main.go:141] libmachine: (addons-286595) Calling .GetState
	I0501 02:11:40.532969   22842 main.go:141] libmachine: (addons-286595) Calling .DriverName
	I0501 02:11:40.533186   22842 ssh_runner.go:195] Run: systemctl --version
	I0501 02:11:40.533208   22842 main.go:141] libmachine: (addons-286595) Calling .GetSSHHostname
	I0501 02:11:40.535648   22842 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:11:40.536035   22842 main.go:141] libmachine: (addons-286595) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:55:7e", ip: ""} in network mk-addons-286595: {Iface:virbr1 ExpiryTime:2024-05-01 03:08:11 +0000 UTC Type:0 Mac:52:54:00:74:55:7e Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:addons-286595 Clientid:01:52:54:00:74:55:7e}
	I0501 02:11:40.536072   22842 main.go:141] libmachine: (addons-286595) DBG | domain addons-286595 has defined IP address 192.168.39.173 and MAC address 52:54:00:74:55:7e in network mk-addons-286595
	I0501 02:11:40.536285   22842 main.go:141] libmachine: (addons-286595) Calling .GetSSHPort
	I0501 02:11:40.536458   22842 main.go:141] libmachine: (addons-286595) Calling .GetSSHKeyPath
	I0501 02:11:40.536611   22842 main.go:141] libmachine: (addons-286595) Calling .GetSSHUsername
	I0501 02:11:40.536734   22842 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/addons-286595/id_rsa Username:docker}
	I0501 02:11:40.739438   22842 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0501 02:11:40.739506   22842 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0501 02:11:40.938576   22842 cri.go:89] found id: "d8fd9103c4060dbb31c40b9c51c939f22d92b63ce154e367ad5eeee06d662d38"
	I0501 02:11:40.938598   22842 cri.go:89] found id: "69133ab9fd516839b74bc8c6bbd0f54b6ecfd1d8be2b75835d4cd9867bddd1c1"
	I0501 02:11:40.938602   22842 cri.go:89] found id: "394a45731b9f591d94acb8859885358095319b7df65e17d6e09fa5adbae039cb"
	I0501 02:11:40.938605   22842 cri.go:89] found id: "93687e00a8ee019d2ee0789649c720b250f94473de7be160ab1808b3524d1ec9"
	I0501 02:11:40.938607   22842 cri.go:89] found id: "06a25e3f43a7c554df14b5ca9e2e6f4cdb4ac91eddac2f5e2b70bd9cc0e800fe"
	I0501 02:11:40.938610   22842 cri.go:89] found id: "7cd11a1b0cb4dd23614397952699ac10b32ed24126fabfe61085d8ee3f5a5ad6"
	I0501 02:11:40.938615   22842 cri.go:89] found id: "3a69a7d0b1f50875b228730cbd0d1bbaeb3bddd64a31cf4478aa833fee761506"
	I0501 02:11:40.938618   22842 cri.go:89] found id: "7a0836f80845e7961645d8ded773ba83d1ce35c82763863f089127b09e3bbe42"
	I0501 02:11:40.938620   22842 cri.go:89] found id: "0cd2d1c7adfd0fbf51a6772fbb30a2a6b7ffaf0b609b30f14928b036d76eb412"
	I0501 02:11:40.938626   22842 cri.go:89] found id: "fae1d7c1b6664150d2bb3ead1dbd29a0eec9cdac7ca0b475e4e348aeafed508f"
	I0501 02:11:40.938629   22842 cri.go:89] found id: "e3a075f6c8cbfed83e2b587a3438a2b211f03eb351954501645afa49b3cd143c"
	I0501 02:11:40.938631   22842 cri.go:89] found id: "ee0a2097ffa65fb7d37d541f9e5267f47e64f0ec220104a69d1b51ba101a7905"
	I0501 02:11:40.938634   22842 cri.go:89] found id: "c2e873794e6a54a807a04cb4169d0cdf7072fdc2c36d4eb97fa559d1c1077ce6"
	I0501 02:11:40.938636   22842 cri.go:89] found id: "c6c88f02d47d238b063909b95a94d0b11c7aa4401fe913ff00d86b7137acfed7"
	I0501 02:11:40.938640   22842 cri.go:89] found id: "d508c00526c092bcf28b27a52276a8677df6c8e9c57478977d630377d6db4627"
	I0501 02:11:40.938642   22842 cri.go:89] found id: "0938068ac1ba8cc0444385d144ec582184069aac645bae3deabb7e6c96984c2b"
	I0501 02:11:40.938644   22842 cri.go:89] found id: "ea17f2d9434251df9401981536acddc1f90957bd5e65bc3d10cd23f2258cecbc"
	I0501 02:11:40.938647   22842 cri.go:89] found id: "09d11ea02380a6ff352ea6ce929b940136fe970bfecd9ad03d3100cc98c598b6"
	I0501 02:11:40.938649   22842 cri.go:89] found id: "e3cfa2da63bbf5b5bf434bebf921cd1711d24a75e5e358306e59c34caf06382f"
	I0501 02:11:40.938652   22842 cri.go:89] found id: "f2a049b4c17d6d072b9097aa0071b82d6d4edc2a255d26f724807d4ac369f9c2"
	I0501 02:11:40.938663   22842 cri.go:89] found id: "ff3c851c7688d3c9fbb0d390c99ba4b9407c06fff923031bc3115f0c17f49cac"
	I0501 02:11:40.938666   22842 cri.go:89] found id: "976be39bc269736268dbe23a871c448f5827e29fde81ff90e0159d69f9af5bd2"
	I0501 02:11:40.938669   22842 cri.go:89] found id: "f5d66ed0ede7ea6abf6b73f76e9bd96372ad218e49de932b1f7d31ddf968ae30"
	I0501 02:11:40.938671   22842 cri.go:89] found id: ""
	I0501 02:11:40.938708   22842 ssh_runner.go:195] Run: sudo runc list -f json
	I0501 02:11:41.018830   22842 main.go:141] libmachine: Making call to close driver server
	I0501 02:11:41.018861   22842 main.go:141] libmachine: (addons-286595) Calling .Close
	I0501 02:11:41.019149   22842 main.go:141] libmachine: Successfully made call to close driver server
	I0501 02:11:41.019170   22842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 02:11:41.021657   22842 out.go:177] 
	W0501 02:11:41.023145   22842 out.go:239] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-05-01T02:11:41Z" level=error msg="stat /run/runc/2e90c798af158df2e1712766e2e7132092c6546761dbcc56c67d62d1599a649b: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-05-01T02:11:41Z" level=error msg="stat /run/runc/2e90c798af158df2e1712766e2e7132092c6546761dbcc56c67d62d1599a649b: no such file or directory"
	
	W0501 02:11:41.023168   22842 out.go:239] * 
	* 
	W0501 02:11:41.024719   22842 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0501 02:11:41.026317   22842 out.go:177] 

** /stderr **
addons_test.go:922: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-amd64 -p addons-286595 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (16.61s)
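The disable step failed inside minikube's paused-state check, which, per the stderr trace above, shells into the node and runs runc over the kube-system containers; runc then stumbled over a /run/runc entry that no longer existed, most plausibly a container being torn down while the listing ran. The two commands from the trace can be replayed by hand to see whether the error reproduces once the teardown has settled (a sketch only, reusing the exact flags shown above):

out/minikube-linux-amd64 -p addons-286595 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
out/minikube-linux-amd64 -p addons-286595 ssh -- sudo runc list -f json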

TestAddons/StoppedEnableDisable (154.39s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-286595
addons_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-286595: exit status 82 (2m0.469473472s)

-- stdout --
	* Stopping node "addons-286595"  ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
addons_test.go:174: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-286595" : exit status 82
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-286595
addons_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-286595: exit status 11 (21.638045531s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.173:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
addons_test.go:178: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-286595" : exit status 11
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-286595
addons_test.go:180: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-286595: exit status 11 (6.143081308s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.173:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
addons_test.go:182: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-286595" : exit status 11
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-286595
addons_test.go:185: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-286595: exit status 11 (6.143245944s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.173:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
addons_test.go:187: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-286595" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.39s)
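All three addon commands here fail for the same upstream reason: the stop timed out (GUEST_STOP_TIMEOUT) with the VM still reported as "Running", and SSH to 192.168.39.173 subsequently returned "no route to host". With the kvm2 driver the guest can also be inspected, and if necessary powered off, directly through libvirt; this is only a sketch and assumes the libvirt domain carries the profile name, as the libmachine DBG lines earlier in this report suggest:

sudo virsh list --all
sudo virsh domstate addons-286595
sudo virsh destroy addons-286595    # hard power-off, only if a graceful stop keeps hanging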

TestFunctional/parallel/ImageCommands/ImageRemove (2.74s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-960026 image rm gcr.io/google-containers/addon-resizer:functional-960026 --alsologtostderr
functional_test.go:391: (dbg) Done: out/minikube-linux-amd64 -p functional-960026 image rm gcr.io/google-containers/addon-resizer:functional-960026 --alsologtostderr: (2.399475389s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-960026 image ls
functional_test.go:402: expected "gcr.io/google-containers/addon-resizer:functional-960026" to be removed from minikube but still exists
--- FAIL: TestFunctional/parallel/ImageCommands/ImageRemove (2.74s)
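The "image rm" call itself completed, but the follow-up "image ls" still listed the tag. Whether the image genuinely survived in the node's CRI-O store can be double-checked by hand (a sketch; crictl is the same tool the traces above already use):

out/minikube-linux-amd64 -p functional-960026 image ls
out/minikube-linux-amd64 -p functional-960026 ssh -- sudo crictl images | grep addon-resizer
out/minikube-linux-amd64 -p functional-960026 ssh -- sudo crictl rmi gcr.io/google-containers/addon-resizer:functional-960026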

TestMultiControlPlane/serial/StopSecondaryNode (142.13s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-329926 node stop m02 -v=7 --alsologtostderr
E0501 02:36:18.122261   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/functional-960026/client.crt: no such file or directory
E0501 02:36:24.419010   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/client.crt: no such file or directory
E0501 02:37:40.043168   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/functional-960026/client.crt: no such file or directory
E0501 02:37:47.467415   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/client.crt: no such file or directory
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-329926 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.484132827s)

-- stdout --
	* Stopping node "ha-329926-m02"  ...

-- /stdout --
** stderr ** 
	I0501 02:35:57.374713   36968 out.go:291] Setting OutFile to fd 1 ...
	I0501 02:35:57.374874   36968 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:35:57.374886   36968 out.go:304] Setting ErrFile to fd 2...
	I0501 02:35:57.374890   36968 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:35:57.375086   36968 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18779-13391/.minikube/bin
	I0501 02:35:57.375330   36968 mustload.go:65] Loading cluster: ha-329926
	I0501 02:35:57.375689   36968 config.go:182] Loaded profile config "ha-329926": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 02:35:57.375702   36968 stop.go:39] StopHost: ha-329926-m02
	I0501 02:35:57.376060   36968 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:35:57.376103   36968 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:35:57.391214   36968 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38239
	I0501 02:35:57.391800   36968 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:35:57.392531   36968 main.go:141] libmachine: Using API Version  1
	I0501 02:35:57.392558   36968 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:35:57.392925   36968 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:35:57.395260   36968 out.go:177] * Stopping node "ha-329926-m02"  ...
	I0501 02:35:57.396507   36968 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0501 02:35:57.396533   36968 main.go:141] libmachine: (ha-329926-m02) Calling .DriverName
	I0501 02:35:57.396773   36968 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0501 02:35:57.396806   36968 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHHostname
	I0501 02:35:57.399647   36968 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:35:57.400043   36968 main.go:141] libmachine: (ha-329926-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:16:5f", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:32:18 +0000 UTC Type:0 Mac:52:54:00:92:16:5f Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-329926-m02 Clientid:01:52:54:00:92:16:5f}
	I0501 02:35:57.400076   36968 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined IP address 192.168.39.79 and MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:35:57.400230   36968 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHPort
	I0501 02:35:57.400407   36968 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHKeyPath
	I0501 02:35:57.400608   36968 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHUsername
	I0501 02:35:57.400765   36968 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m02/id_rsa Username:docker}
	I0501 02:35:57.487353   36968 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0501 02:35:57.544322   36968 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0501 02:35:57.605156   36968 main.go:141] libmachine: Stopping "ha-329926-m02"...
	I0501 02:35:57.605217   36968 main.go:141] libmachine: (ha-329926-m02) Calling .GetState
	I0501 02:35:57.607027   36968 main.go:141] libmachine: (ha-329926-m02) Calling .Stop
	I0501 02:35:57.610840   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 0/120
	I0501 02:35:58.612949   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 1/120
	I0501 02:35:59.615122   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 2/120
	I0501 02:36:00.616319   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 3/120
	I0501 02:36:01.617646   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 4/120
	I0501 02:36:02.619439   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 5/120
	I0501 02:36:03.620783   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 6/120
	I0501 02:36:04.622042   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 7/120
	I0501 02:36:05.623407   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 8/120
	I0501 02:36:06.624806   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 9/120
	I0501 02:36:07.627008   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 10/120
	I0501 02:36:08.628847   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 11/120
	I0501 02:36:09.630693   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 12/120
	I0501 02:36:10.633055   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 13/120
	I0501 02:36:11.634702   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 14/120
	I0501 02:36:12.636476   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 15/120
	I0501 02:36:13.637838   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 16/120
	I0501 02:36:14.640056   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 17/120
	I0501 02:36:15.641262   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 18/120
	I0501 02:36:16.642577   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 19/120
	I0501 02:36:17.644814   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 20/120
	I0501 02:36:18.645991   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 21/120
	I0501 02:36:19.647388   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 22/120
	I0501 02:36:20.648815   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 23/120
	I0501 02:36:21.650170   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 24/120
	I0501 02:36:22.652044   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 25/120
	I0501 02:36:23.653540   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 26/120
	I0501 02:36:24.655295   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 27/120
	I0501 02:36:25.656919   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 28/120
	I0501 02:36:26.659030   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 29/120
	I0501 02:36:27.660953   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 30/120
	I0501 02:36:28.662480   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 31/120
	I0501 02:36:29.663903   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 32/120
	I0501 02:36:30.665349   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 33/120
	I0501 02:36:31.666883   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 34/120
	I0501 02:36:32.669001   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 35/120
	I0501 02:36:33.670545   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 36/120
	I0501 02:36:34.672761   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 37/120
	I0501 02:36:35.674079   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 38/120
	I0501 02:36:36.675293   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 39/120
	I0501 02:36:37.676673   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 40/120
	I0501 02:36:38.678042   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 41/120
	I0501 02:36:39.680373   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 42/120
	I0501 02:36:40.682510   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 43/120
	I0501 02:36:41.683779   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 44/120
	I0501 02:36:42.685238   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 45/120
	I0501 02:36:43.686637   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 46/120
	I0501 02:36:44.687878   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 47/120
	I0501 02:36:45.689285   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 48/120
	I0501 02:36:46.690597   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 49/120
	I0501 02:36:47.692818   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 50/120
	I0501 02:36:48.694331   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 51/120
	I0501 02:36:49.695986   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 52/120
	I0501 02:36:50.697391   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 53/120
	I0501 02:36:51.698995   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 54/120
	I0501 02:36:52.700622   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 55/120
	I0501 02:36:53.702014   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 56/120
	I0501 02:36:54.704368   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 57/120
	I0501 02:36:55.705723   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 58/120
	I0501 02:36:56.707031   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 59/120
	I0501 02:36:57.709076   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 60/120
	I0501 02:36:58.710519   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 61/120
	I0501 02:36:59.711806   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 62/120
	I0501 02:37:00.713176   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 63/120
	I0501 02:37:01.714488   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 64/120
	I0501 02:37:02.716054   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 65/120
	I0501 02:37:03.717387   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 66/120
	I0501 02:37:04.719279   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 67/120
	I0501 02:37:05.720621   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 68/120
	I0501 02:37:06.721941   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 69/120
	I0501 02:37:07.723898   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 70/120
	I0501 02:37:08.725192   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 71/120
	I0501 02:37:09.726371   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 72/120
	I0501 02:37:10.727827   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 73/120
	I0501 02:37:11.729570   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 74/120
	I0501 02:37:12.731260   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 75/120
	I0501 02:37:13.732759   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 76/120
	I0501 02:37:14.733966   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 77/120
	I0501 02:37:15.735181   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 78/120
	I0501 02:37:16.736669   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 79/120
	I0501 02:37:17.738755   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 80/120
	I0501 02:37:18.740174   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 81/120
	I0501 02:37:19.741816   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 82/120
	I0501 02:37:20.743103   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 83/120
	I0501 02:37:21.744450   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 84/120
	I0501 02:37:22.746321   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 85/120
	I0501 02:37:23.747725   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 86/120
	I0501 02:37:24.749122   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 87/120
	I0501 02:37:25.750536   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 88/120
	I0501 02:37:26.752873   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 89/120
	I0501 02:37:27.755055   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 90/120
	I0501 02:37:28.756937   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 91/120
	I0501 02:37:29.758955   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 92/120
	I0501 02:37:30.760848   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 93/120
	I0501 02:37:31.762187   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 94/120
	I0501 02:37:32.763778   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 95/120
	I0501 02:37:33.765103   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 96/120
	I0501 02:37:34.766560   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 97/120
	I0501 02:37:35.768640   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 98/120
	I0501 02:37:36.770598   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 99/120
	I0501 02:37:37.772303   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 100/120
	I0501 02:37:38.774437   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 101/120
	I0501 02:37:39.775636   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 102/120
	I0501 02:37:40.776992   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 103/120
	I0501 02:37:41.778338   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 104/120
	I0501 02:37:42.779844   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 105/120
	I0501 02:37:43.781263   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 106/120
	I0501 02:37:44.782985   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 107/120
	I0501 02:37:45.784965   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 108/120
	I0501 02:37:46.786456   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 109/120
	I0501 02:37:47.788679   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 110/120
	I0501 02:37:48.789853   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 111/120
	I0501 02:37:49.791542   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 112/120
	I0501 02:37:50.792870   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 113/120
	I0501 02:37:51.794104   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 114/120
	I0501 02:37:52.795947   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 115/120
	I0501 02:37:53.798231   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 116/120
	I0501 02:37:54.799796   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 117/120
	I0501 02:37:55.801335   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 118/120
	I0501 02:37:56.802773   36968 main.go:141] libmachine: (ha-329926-m02) Waiting for machine to stop 119/120
	I0501 02:37:57.803480   36968 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0501 02:37:57.803590   36968 out.go:239] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-329926 node stop m02 -v=7 --alsologtostderr": exit status 30
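Editor's note: the failure above is a timeout, not a crash. After issuing the stop request, the kvm2 driver polls the domain state roughly once per second ("Waiting for machine to stop 0/120" through "119/120", matching the 2m0.48s wall time) and gives up while the guest still reports "Running", which surfaces as exit status 30. The Go sketch below only illustrates that poll-with-budget pattern as it appears in the log; the vm interface, the alwaysRunning stand-in, and all names are hypothetical, and this is not minikube's actual implementation.

// Illustrative sketch of the poll-until-stopped pattern seen in the log above.
// Not minikube code; the vm interface and alwaysRunning stand-in are hypothetical.
package main

import (
	"errors"
	"fmt"
	"time"
)

// vm is a stand-in for a libmachine-style driver handle.
type vm interface {
	Stop() error
	State() (string, error)
}

// stopWithTimeout requests a stop, then polls once per second for up to
// maxWait attempts, mirroring the "Waiting for machine to stop N/120" lines.
func stopWithTimeout(m vm, maxWait int) error {
	if err := m.Stop(); err != nil {
		return err
	}
	for i := 0; i < maxWait; i++ {
		state, err := m.State()
		if err != nil {
			return err
		}
		if state == "Stopped" {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxWait)
		time.Sleep(time.Second)
	}
	// Exhausting the budget while the guest is still running is what produces
	// the `stop err: unable to stop vm, current state "Running"` message.
	return errors.New(`unable to stop vm, current state "Running"`)
}

// alwaysRunning simulates a guest that ignores the stop request, which is the
// behaviour this test run hit.
type alwaysRunning struct{}

func (alwaysRunning) Stop() error             { return nil }
func (alwaysRunning) State() (string, error)  { return "Running", nil }

func main() {
	// A short budget of 3 polls reproduces the failure shape quickly.
	if err := stopWithTimeout(alwaysRunning{}, 3); err != nil {
		fmt.Println("X Failed to stop node:", err)
	}
}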
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-329926 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-329926 status -v=7 --alsologtostderr: exit status 3 (19.136772149s)

                                                
                                                
-- stdout --
	ha-329926
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-329926-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-329926-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-329926-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0501 02:37:57.860562   37393 out.go:291] Setting OutFile to fd 1 ...
	I0501 02:37:57.860696   37393 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:37:57.860706   37393 out.go:304] Setting ErrFile to fd 2...
	I0501 02:37:57.860710   37393 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:37:57.860929   37393 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18779-13391/.minikube/bin
	I0501 02:37:57.861150   37393 out.go:298] Setting JSON to false
	I0501 02:37:57.861181   37393 mustload.go:65] Loading cluster: ha-329926
	I0501 02:37:57.861287   37393 notify.go:220] Checking for updates...
	I0501 02:37:57.861644   37393 config.go:182] Loaded profile config "ha-329926": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 02:37:57.861662   37393 status.go:255] checking status of ha-329926 ...
	I0501 02:37:57.862064   37393 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:37:57.862126   37393 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:37:57.876808   37393 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33275
	I0501 02:37:57.877213   37393 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:37:57.877776   37393 main.go:141] libmachine: Using API Version  1
	I0501 02:37:57.877799   37393 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:37:57.878117   37393 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:37:57.878294   37393 main.go:141] libmachine: (ha-329926) Calling .GetState
	I0501 02:37:57.879754   37393 status.go:330] ha-329926 host status = "Running" (err=<nil>)
	I0501 02:37:57.879768   37393 host.go:66] Checking if "ha-329926" exists ...
	I0501 02:37:57.880048   37393 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:37:57.880080   37393 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:37:57.893946   37393 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36649
	I0501 02:37:57.894297   37393 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:37:57.894722   37393 main.go:141] libmachine: Using API Version  1
	I0501 02:37:57.894745   37393 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:37:57.895029   37393 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:37:57.895207   37393 main.go:141] libmachine: (ha-329926) Calling .GetIP
	I0501 02:37:57.897870   37393 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:37:57.898366   37393 main.go:141] libmachine: (ha-329926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:43", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:31:18 +0000 UTC Type:0 Mac:52:54:00:ce:d8:43 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-329926 Clientid:01:52:54:00:ce:d8:43}
	I0501 02:37:57.898412   37393 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:37:57.898524   37393 host.go:66] Checking if "ha-329926" exists ...
	I0501 02:37:57.898808   37393 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:37:57.898847   37393 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:37:57.912431   37393 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40315
	I0501 02:37:57.912830   37393 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:37:57.913334   37393 main.go:141] libmachine: Using API Version  1
	I0501 02:37:57.913356   37393 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:37:57.913642   37393 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:37:57.913799   37393 main.go:141] libmachine: (ha-329926) Calling .DriverName
	I0501 02:37:57.913958   37393 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0501 02:37:57.913986   37393 main.go:141] libmachine: (ha-329926) Calling .GetSSHHostname
	I0501 02:37:57.916331   37393 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:37:57.916804   37393 main.go:141] libmachine: (ha-329926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:43", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:31:18 +0000 UTC Type:0 Mac:52:54:00:ce:d8:43 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-329926 Clientid:01:52:54:00:ce:d8:43}
	I0501 02:37:57.916832   37393 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:37:57.916962   37393 main.go:141] libmachine: (ha-329926) Calling .GetSSHPort
	I0501 02:37:57.917135   37393 main.go:141] libmachine: (ha-329926) Calling .GetSSHKeyPath
	I0501 02:37:57.917279   37393 main.go:141] libmachine: (ha-329926) Calling .GetSSHUsername
	I0501 02:37:57.917424   37393 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926/id_rsa Username:docker}
	I0501 02:37:58.001732   37393 ssh_runner.go:195] Run: systemctl --version
	I0501 02:37:58.015533   37393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 02:37:58.042850   37393 kubeconfig.go:125] found "ha-329926" server: "https://192.168.39.254:8443"
	I0501 02:37:58.042877   37393 api_server.go:166] Checking apiserver status ...
	I0501 02:37:58.042914   37393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 02:37:58.061621   37393 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1174/cgroup
	W0501 02:37:58.077465   37393 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1174/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0501 02:37:58.077506   37393 ssh_runner.go:195] Run: ls
	I0501 02:37:58.083233   37393 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0501 02:37:58.089979   37393 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0501 02:37:58.089998   37393 status.go:422] ha-329926 apiserver status = Running (err=<nil>)
	I0501 02:37:58.090008   37393 status.go:257] ha-329926 status: &{Name:ha-329926 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0501 02:37:58.090030   37393 status.go:255] checking status of ha-329926-m02 ...
	I0501 02:37:58.090316   37393 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:37:58.090349   37393 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:37:58.104719   37393 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35867
	I0501 02:37:58.105061   37393 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:37:58.105500   37393 main.go:141] libmachine: Using API Version  1
	I0501 02:37:58.105518   37393 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:37:58.105857   37393 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:37:58.106013   37393 main.go:141] libmachine: (ha-329926-m02) Calling .GetState
	I0501 02:37:58.107506   37393 status.go:330] ha-329926-m02 host status = "Running" (err=<nil>)
	I0501 02:37:58.107524   37393 host.go:66] Checking if "ha-329926-m02" exists ...
	I0501 02:37:58.107795   37393 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:37:58.107825   37393 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:37:58.121866   37393 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42769
	I0501 02:37:58.122211   37393 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:37:58.122744   37393 main.go:141] libmachine: Using API Version  1
	I0501 02:37:58.122766   37393 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:37:58.123041   37393 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:37:58.123233   37393 main.go:141] libmachine: (ha-329926-m02) Calling .GetIP
	I0501 02:37:58.125679   37393 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:37:58.126043   37393 main.go:141] libmachine: (ha-329926-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:16:5f", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:32:18 +0000 UTC Type:0 Mac:52:54:00:92:16:5f Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-329926-m02 Clientid:01:52:54:00:92:16:5f}
	I0501 02:37:58.126075   37393 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined IP address 192.168.39.79 and MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:37:58.126151   37393 host.go:66] Checking if "ha-329926-m02" exists ...
	I0501 02:37:58.126447   37393 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:37:58.126479   37393 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:37:58.142543   37393 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36391
	I0501 02:37:58.143044   37393 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:37:58.143512   37393 main.go:141] libmachine: Using API Version  1
	I0501 02:37:58.143532   37393 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:37:58.143783   37393 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:37:58.144000   37393 main.go:141] libmachine: (ha-329926-m02) Calling .DriverName
	I0501 02:37:58.144191   37393 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0501 02:37:58.144210   37393 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHHostname
	I0501 02:37:58.146905   37393 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:37:58.147388   37393 main.go:141] libmachine: (ha-329926-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:16:5f", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:32:18 +0000 UTC Type:0 Mac:52:54:00:92:16:5f Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-329926-m02 Clientid:01:52:54:00:92:16:5f}
	I0501 02:37:58.147414   37393 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined IP address 192.168.39.79 and MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:37:58.147601   37393 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHPort
	I0501 02:37:58.147772   37393 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHKeyPath
	I0501 02:37:58.147926   37393 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHUsername
	I0501 02:37:58.148067   37393 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m02/id_rsa Username:docker}
	W0501 02:38:16.550662   37393 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.79:22: connect: no route to host
	W0501 02:38:16.550759   37393 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.79:22: connect: no route to host
	E0501 02:38:16.550787   37393 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.79:22: connect: no route to host
	I0501 02:38:16.550802   37393 status.go:257] ha-329926-m02 status: &{Name:ha-329926-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0501 02:38:16.550827   37393 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.79:22: connect: no route to host
	I0501 02:38:16.550835   37393 status.go:255] checking status of ha-329926-m03 ...
	I0501 02:38:16.551257   37393 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:38:16.551315   37393 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:38:16.568168   37393 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41067
	I0501 02:38:16.568606   37393 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:38:16.569076   37393 main.go:141] libmachine: Using API Version  1
	I0501 02:38:16.569098   37393 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:38:16.569443   37393 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:38:16.569663   37393 main.go:141] libmachine: (ha-329926-m03) Calling .GetState
	I0501 02:38:16.571505   37393 status.go:330] ha-329926-m03 host status = "Running" (err=<nil>)
	I0501 02:38:16.571525   37393 host.go:66] Checking if "ha-329926-m03" exists ...
	I0501 02:38:16.571800   37393 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:38:16.571834   37393 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:38:16.587460   37393 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45881
	I0501 02:38:16.587904   37393 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:38:16.588347   37393 main.go:141] libmachine: Using API Version  1
	I0501 02:38:16.588377   37393 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:38:16.588752   37393 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:38:16.588999   37393 main.go:141] libmachine: (ha-329926-m03) Calling .GetIP
	I0501 02:38:16.592165   37393 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:38:16.592749   37393 main.go:141] libmachine: (ha-329926-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:eb:7d", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:33:44 +0000 UTC Type:0 Mac:52:54:00:f9:eb:7d Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:ha-329926-m03 Clientid:01:52:54:00:f9:eb:7d}
	I0501 02:38:16.592777   37393 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined IP address 192.168.39.115 and MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:38:16.592965   37393 host.go:66] Checking if "ha-329926-m03" exists ...
	I0501 02:38:16.593371   37393 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:38:16.593418   37393 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:38:16.610298   37393 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34895
	I0501 02:38:16.610749   37393 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:38:16.611207   37393 main.go:141] libmachine: Using API Version  1
	I0501 02:38:16.611226   37393 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:38:16.611505   37393 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:38:16.611676   37393 main.go:141] libmachine: (ha-329926-m03) Calling .DriverName
	I0501 02:38:16.611888   37393 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0501 02:38:16.611910   37393 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHHostname
	I0501 02:38:16.614896   37393 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:38:16.615394   37393 main.go:141] libmachine: (ha-329926-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:eb:7d", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:33:44 +0000 UTC Type:0 Mac:52:54:00:f9:eb:7d Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:ha-329926-m03 Clientid:01:52:54:00:f9:eb:7d}
	I0501 02:38:16.615423   37393 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined IP address 192.168.39.115 and MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:38:16.615566   37393 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHPort
	I0501 02:38:16.615730   37393 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHKeyPath
	I0501 02:38:16.615867   37393 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHUsername
	I0501 02:38:16.615982   37393 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m03/id_rsa Username:docker}
	I0501 02:38:16.706003   37393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 02:38:16.726114   37393 kubeconfig.go:125] found "ha-329926" server: "https://192.168.39.254:8443"
	I0501 02:38:16.726138   37393 api_server.go:166] Checking apiserver status ...
	I0501 02:38:16.726167   37393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 02:38:16.744327   37393 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1592/cgroup
	W0501 02:38:16.754757   37393 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1592/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0501 02:38:16.754801   37393 ssh_runner.go:195] Run: ls
	I0501 02:38:16.759616   37393 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0501 02:38:16.764184   37393 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0501 02:38:16.764207   37393 status.go:422] ha-329926-m03 apiserver status = Running (err=<nil>)
	I0501 02:38:16.764217   37393 status.go:257] ha-329926-m03 status: &{Name:ha-329926-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0501 02:38:16.764237   37393 status.go:255] checking status of ha-329926-m04 ...
	I0501 02:38:16.764530   37393 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:38:16.764574   37393 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:38:16.780216   37393 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42303
	I0501 02:38:16.780620   37393 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:38:16.781052   37393 main.go:141] libmachine: Using API Version  1
	I0501 02:38:16.781075   37393 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:38:16.781416   37393 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:38:16.781604   37393 main.go:141] libmachine: (ha-329926-m04) Calling .GetState
	I0501 02:38:16.783379   37393 status.go:330] ha-329926-m04 host status = "Running" (err=<nil>)
	I0501 02:38:16.783395   37393 host.go:66] Checking if "ha-329926-m04" exists ...
	I0501 02:38:16.783663   37393 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:38:16.783715   37393 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:38:16.799937   37393 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43651
	I0501 02:38:16.800484   37393 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:38:16.801020   37393 main.go:141] libmachine: Using API Version  1
	I0501 02:38:16.801040   37393 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:38:16.801354   37393 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:38:16.801544   37393 main.go:141] libmachine: (ha-329926-m04) Calling .GetIP
	I0501 02:38:16.804491   37393 main.go:141] libmachine: (ha-329926-m04) DBG | domain ha-329926-m04 has defined MAC address 52:54:00:95:f4:8b in network mk-ha-329926
	I0501 02:38:16.804937   37393 main.go:141] libmachine: (ha-329926-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:f4:8b", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:35:11 +0000 UTC Type:0 Mac:52:54:00:95:f4:8b Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-329926-m04 Clientid:01:52:54:00:95:f4:8b}
	I0501 02:38:16.804962   37393 main.go:141] libmachine: (ha-329926-m04) DBG | domain ha-329926-m04 has defined IP address 192.168.39.84 and MAC address 52:54:00:95:f4:8b in network mk-ha-329926
	I0501 02:38:16.805104   37393 host.go:66] Checking if "ha-329926-m04" exists ...
	I0501 02:38:16.805492   37393 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:38:16.805538   37393 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:38:16.820995   37393 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37401
	I0501 02:38:16.821438   37393 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:38:16.821929   37393 main.go:141] libmachine: Using API Version  1
	I0501 02:38:16.821950   37393 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:38:16.822225   37393 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:38:16.822441   37393 main.go:141] libmachine: (ha-329926-m04) Calling .DriverName
	I0501 02:38:16.822626   37393 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0501 02:38:16.822649   37393 main.go:141] libmachine: (ha-329926-m04) Calling .GetSSHHostname
	I0501 02:38:16.825437   37393 main.go:141] libmachine: (ha-329926-m04) DBG | domain ha-329926-m04 has defined MAC address 52:54:00:95:f4:8b in network mk-ha-329926
	I0501 02:38:16.825830   37393 main.go:141] libmachine: (ha-329926-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:f4:8b", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:35:11 +0000 UTC Type:0 Mac:52:54:00:95:f4:8b Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-329926-m04 Clientid:01:52:54:00:95:f4:8b}
	I0501 02:38:16.825858   37393 main.go:141] libmachine: (ha-329926-m04) DBG | domain ha-329926-m04 has defined IP address 192.168.39.84 and MAC address 52:54:00:95:f4:8b in network mk-ha-329926
	I0501 02:38:16.826138   37393 main.go:141] libmachine: (ha-329926-m04) Calling .GetSSHPort
	I0501 02:38:16.826338   37393 main.go:141] libmachine: (ha-329926-m04) Calling .GetSSHKeyPath
	I0501 02:38:16.826524   37393 main.go:141] libmachine: (ha-329926-m04) Calling .GetSSHUsername
	I0501 02:38:16.826706   37393 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m04/id_rsa Username:docker}
	I0501 02:38:16.920608   37393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 02:38:16.939540   37393 status.go:257] ha-329926-m04 status: &{Name:ha-329926-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-329926 status -v=7 --alsologtostderr" : exit status 3
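Editor's note: the exit status 3 above follows directly from the stop timeout. The m02 domain is still defined in libvirt, but TCP to 192.168.39.79:22 now fails with "no route to host", so the node is reported as Host:Error with kubelet and apiserver Nonexistent. The rough Go sketch below shows how an unreachable SSH endpoint can be folded into a per-node status like the one printed above; the names and struct are hypothetical and this is not minikube's actual status code.

// Illustrative only: degrade a node's reported status when its SSH port is
// unreachable. Names are hypothetical; this is not minikube's implementation.
package main

import (
	"fmt"
	"net"
	"time"
)

type nodeStatus struct {
	Name, Host, Kubelet, APIServer string
}

// checkNode marks the node Error/Nonexistent when a TCP connection to its SSH
// port cannot be established (e.g. "no route to host" after a failed stop).
func checkNode(name, addr string) nodeStatus {
	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		return nodeStatus{Name: name, Host: "Error", Kubelet: "Nonexistent", APIServer: "Nonexistent"}
	}
	conn.Close()
	// A reachable host would go on to check kubelet via systemctl and probe
	// the apiserver /healthz endpoint, as the log above does.
	return nodeStatus{Name: name, Host: "Running", Kubelet: "Running", APIServer: "Running"}
}

func main() {
	s := checkNode("ha-329926-m02", "192.168.39.79:22")
	fmt.Printf("%s\nhost: %s\nkubelet: %s\napiserver: %s\n", s.Name, s.Host, s.Kubelet, s.APIServer)
}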
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-329926 -n ha-329926
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-329926 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-329926 logs -n 25: (1.560767074s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-329926 cp ha-329926-m03:/home/docker/cp-test.txt                             | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:35 UTC | 01 May 24 02:35 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile895580191/001/cp-test_ha-329926-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-329926 ssh -n                                                                | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:35 UTC | 01 May 24 02:35 UTC |
	|         | ha-329926-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-329926 cp ha-329926-m03:/home/docker/cp-test.txt                             | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:35 UTC | 01 May 24 02:35 UTC |
	|         | ha-329926:/home/docker/cp-test_ha-329926-m03_ha-329926.txt                      |           |         |         |                     |                     |
	| ssh     | ha-329926 ssh -n                                                                | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:35 UTC | 01 May 24 02:35 UTC |
	|         | ha-329926-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-329926 ssh -n ha-329926 sudo cat                                             | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:35 UTC | 01 May 24 02:35 UTC |
	|         | /home/docker/cp-test_ha-329926-m03_ha-329926.txt                                |           |         |         |                     |                     |
	| cp      | ha-329926 cp ha-329926-m03:/home/docker/cp-test.txt                             | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:35 UTC | 01 May 24 02:35 UTC |
	|         | ha-329926-m02:/home/docker/cp-test_ha-329926-m03_ha-329926-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-329926 ssh -n                                                                | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:35 UTC | 01 May 24 02:35 UTC |
	|         | ha-329926-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-329926 ssh -n ha-329926-m02 sudo cat                                         | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:35 UTC | 01 May 24 02:35 UTC |
	|         | /home/docker/cp-test_ha-329926-m03_ha-329926-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-329926 cp ha-329926-m03:/home/docker/cp-test.txt                             | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:35 UTC | 01 May 24 02:35 UTC |
	|         | ha-329926-m04:/home/docker/cp-test_ha-329926-m03_ha-329926-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-329926 ssh -n                                                                | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:35 UTC | 01 May 24 02:35 UTC |
	|         | ha-329926-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-329926 ssh -n ha-329926-m04 sudo cat                                         | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:35 UTC | 01 May 24 02:35 UTC |
	|         | /home/docker/cp-test_ha-329926-m03_ha-329926-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-329926 cp testdata/cp-test.txt                                               | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:35 UTC | 01 May 24 02:35 UTC |
	|         | ha-329926-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-329926 ssh -n                                                                | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:35 UTC | 01 May 24 02:35 UTC |
	|         | ha-329926-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-329926 cp ha-329926-m04:/home/docker/cp-test.txt                             | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:35 UTC | 01 May 24 02:35 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile895580191/001/cp-test_ha-329926-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-329926 ssh -n                                                                | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:35 UTC | 01 May 24 02:35 UTC |
	|         | ha-329926-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-329926 cp ha-329926-m04:/home/docker/cp-test.txt                             | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:35 UTC | 01 May 24 02:35 UTC |
	|         | ha-329926:/home/docker/cp-test_ha-329926-m04_ha-329926.txt                      |           |         |         |                     |                     |
	| ssh     | ha-329926 ssh -n                                                                | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:35 UTC | 01 May 24 02:35 UTC |
	|         | ha-329926-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-329926 ssh -n ha-329926 sudo cat                                             | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:35 UTC | 01 May 24 02:35 UTC |
	|         | /home/docker/cp-test_ha-329926-m04_ha-329926.txt                                |           |         |         |                     |                     |
	| cp      | ha-329926 cp ha-329926-m04:/home/docker/cp-test.txt                             | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:35 UTC | 01 May 24 02:35 UTC |
	|         | ha-329926-m02:/home/docker/cp-test_ha-329926-m04_ha-329926-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-329926 ssh -n                                                                | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:35 UTC | 01 May 24 02:35 UTC |
	|         | ha-329926-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-329926 ssh -n ha-329926-m02 sudo cat                                         | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:35 UTC | 01 May 24 02:35 UTC |
	|         | /home/docker/cp-test_ha-329926-m04_ha-329926-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-329926 cp ha-329926-m04:/home/docker/cp-test.txt                             | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:35 UTC | 01 May 24 02:35 UTC |
	|         | ha-329926-m03:/home/docker/cp-test_ha-329926-m04_ha-329926-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-329926 ssh -n                                                                | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:35 UTC | 01 May 24 02:35 UTC |
	|         | ha-329926-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-329926 ssh -n ha-329926-m03 sudo cat                                         | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:35 UTC | 01 May 24 02:35 UTC |
	|         | /home/docker/cp-test_ha-329926-m04_ha-329926-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-329926 node stop m02 -v=7                                                    | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:35 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/01 02:31:02
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0501 02:31:02.127151   32853 out.go:291] Setting OutFile to fd 1 ...
	I0501 02:31:02.127254   32853 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:31:02.127264   32853 out.go:304] Setting ErrFile to fd 2...
	I0501 02:31:02.127268   32853 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:31:02.127458   32853 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18779-13391/.minikube/bin
	I0501 02:31:02.128001   32853 out.go:298] Setting JSON to false
	I0501 02:31:02.128797   32853 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4405,"bootTime":1714526257,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0501 02:31:02.128859   32853 start.go:139] virtualization: kvm guest
	I0501 02:31:02.130891   32853 out.go:177] * [ha-329926] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0501 02:31:02.132216   32853 out.go:177]   - MINIKUBE_LOCATION=18779
	I0501 02:31:02.133332   32853 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0501 02:31:02.132243   32853 notify.go:220] Checking for updates...
	I0501 02:31:02.135670   32853 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18779-13391/kubeconfig
	I0501 02:31:02.137084   32853 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18779-13391/.minikube
	I0501 02:31:02.138504   32853 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0501 02:31:02.139897   32853 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0501 02:31:02.141367   32853 driver.go:392] Setting default libvirt URI to qemu:///system
	I0501 02:31:02.174964   32853 out.go:177] * Using the kvm2 driver based on user configuration
	I0501 02:31:02.176378   32853 start.go:297] selected driver: kvm2
	I0501 02:31:02.176396   32853 start.go:901] validating driver "kvm2" against <nil>
	I0501 02:31:02.176406   32853 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0501 02:31:02.177100   32853 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0501 02:31:02.177168   32853 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18779-13391/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0501 02:31:02.191961   32853 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0501 02:31:02.192043   32853 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0501 02:31:02.192259   32853 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0501 02:31:02.192310   32853 cni.go:84] Creating CNI manager for ""
	I0501 02:31:02.192331   32853 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0501 02:31:02.192341   32853 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0501 02:31:02.192386   32853 start.go:340] cluster config:
	{Name:ha-329926 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-329926 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 02:31:02.192467   32853 iso.go:125] acquiring lock: {Name:mkcd2496aadb29931e179193d707c731bfda98b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0501 02:31:02.194294   32853 out.go:177] * Starting "ha-329926" primary control-plane node in "ha-329926" cluster
	I0501 02:31:02.195474   32853 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0501 02:31:02.195504   32853 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0501 02:31:02.195513   32853 cache.go:56] Caching tarball of preloaded images
	I0501 02:31:02.195589   32853 preload.go:173] Found /home/jenkins/minikube-integration/18779-13391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0501 02:31:02.195609   32853 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0501 02:31:02.195892   32853 profile.go:143] Saving config to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/config.json ...
	I0501 02:31:02.195913   32853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/config.json: {Name:mkac9273eac834ed61b43bee84b2def140a2e5fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:31:02.196029   32853 start.go:360] acquireMachinesLock for ha-329926: {Name:mkc4548e5afe1d4b0a833f6c522103562b0cefff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0501 02:31:02.196056   32853 start.go:364] duration metric: took 15.002µs to acquireMachinesLock for "ha-329926"
	I0501 02:31:02.196073   32853 start.go:93] Provisioning new machine with config: &{Name:ha-329926 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-329926 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0501 02:31:02.196129   32853 start.go:125] createHost starting for "" (driver="kvm2")
	I0501 02:31:02.197767   32853 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0501 02:31:02.197867   32853 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:31:02.197898   32853 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:31:02.211916   32853 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39605
	I0501 02:31:02.212295   32853 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:31:02.212848   32853 main.go:141] libmachine: Using API Version  1
	I0501 02:31:02.212868   32853 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:31:02.213166   32853 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:31:02.213347   32853 main.go:141] libmachine: (ha-329926) Calling .GetMachineName
	I0501 02:31:02.213482   32853 main.go:141] libmachine: (ha-329926) Calling .DriverName
	I0501 02:31:02.213609   32853 start.go:159] libmachine.API.Create for "ha-329926" (driver="kvm2")
	I0501 02:31:02.213645   32853 client.go:168] LocalClient.Create starting
	I0501 02:31:02.213678   32853 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem
	I0501 02:31:02.213717   32853 main.go:141] libmachine: Decoding PEM data...
	I0501 02:31:02.213747   32853 main.go:141] libmachine: Parsing certificate...
	I0501 02:31:02.213833   32853 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem
	I0501 02:31:02.213874   32853 main.go:141] libmachine: Decoding PEM data...
	I0501 02:31:02.213896   32853 main.go:141] libmachine: Parsing certificate...
	I0501 02:31:02.213928   32853 main.go:141] libmachine: Running pre-create checks...
	I0501 02:31:02.213941   32853 main.go:141] libmachine: (ha-329926) Calling .PreCreateCheck
	I0501 02:31:02.214241   32853 main.go:141] libmachine: (ha-329926) Calling .GetConfigRaw
	I0501 02:31:02.214579   32853 main.go:141] libmachine: Creating machine...
	I0501 02:31:02.214605   32853 main.go:141] libmachine: (ha-329926) Calling .Create
	I0501 02:31:02.214738   32853 main.go:141] libmachine: (ha-329926) Creating KVM machine...
	I0501 02:31:02.216059   32853 main.go:141] libmachine: (ha-329926) DBG | found existing default KVM network
	I0501 02:31:02.216643   32853 main.go:141] libmachine: (ha-329926) DBG | I0501 02:31:02.216531   32876 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012d980}
	I0501 02:31:02.216682   32853 main.go:141] libmachine: (ha-329926) DBG | created network xml: 
	I0501 02:31:02.216705   32853 main.go:141] libmachine: (ha-329926) DBG | <network>
	I0501 02:31:02.216715   32853 main.go:141] libmachine: (ha-329926) DBG |   <name>mk-ha-329926</name>
	I0501 02:31:02.216726   32853 main.go:141] libmachine: (ha-329926) DBG |   <dns enable='no'/>
	I0501 02:31:02.216735   32853 main.go:141] libmachine: (ha-329926) DBG |   
	I0501 02:31:02.216747   32853 main.go:141] libmachine: (ha-329926) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0501 02:31:02.216756   32853 main.go:141] libmachine: (ha-329926) DBG |     <dhcp>
	I0501 02:31:02.216763   32853 main.go:141] libmachine: (ha-329926) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0501 02:31:02.216775   32853 main.go:141] libmachine: (ha-329926) DBG |     </dhcp>
	I0501 02:31:02.216787   32853 main.go:141] libmachine: (ha-329926) DBG |   </ip>
	I0501 02:31:02.216799   32853 main.go:141] libmachine: (ha-329926) DBG |   
	I0501 02:31:02.216819   32853 main.go:141] libmachine: (ha-329926) DBG | </network>
	I0501 02:31:02.216849   32853 main.go:141] libmachine: (ha-329926) DBG | 
	I0501 02:31:02.221819   32853 main.go:141] libmachine: (ha-329926) DBG | trying to create private KVM network mk-ha-329926 192.168.39.0/24...
	I0501 02:31:02.283186   32853 main.go:141] libmachine: (ha-329926) DBG | private KVM network mk-ha-329926 192.168.39.0/24 created
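
	Aside: the private network created here can be reproduced by hand from the XML the driver just logged. The Go sketch below is a hypothetical helper that shells out to virsh (the kvm2 driver itself goes through the libvirt API rather than the CLI); the temp-file handling and panics are illustrative only.

	// Hypothetical helper: define and start the private libvirt network from
	// the XML printed above by invoking virsh against qemu:///system.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	const netXML = `<network>
	  <name>mk-ha-329926</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>`

	func main() {
		f, err := os.CreateTemp("", "mk-ha-329926-*.xml")
		if err != nil {
			panic(err)
		}
		defer os.Remove(f.Name())
		if _, err := f.WriteString(netXML); err != nil {
			panic(err)
		}
		f.Close()

		for _, args := range [][]string{
			{"net-define", f.Name()},         // register the network definition
			{"net-start", "mk-ha-329926"},    // bring up the bridge and dnsmasq DHCP
			{"net-autostart", "mk-ha-329926"},
		} {
			cmd := exec.Command("virsh", append([]string{"--connect", "qemu:///system"}, args...)...)
			out, err := cmd.CombinedOutput()
			fmt.Printf("virsh %v: %s\n", args, out)
			if err != nil {
				panic(err)
			}
		}
	}
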
	I0501 02:31:02.283216   32853 main.go:141] libmachine: (ha-329926) Setting up store path in /home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926 ...
	I0501 02:31:02.283228   32853 main.go:141] libmachine: (ha-329926) DBG | I0501 02:31:02.283155   32876 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18779-13391/.minikube
	I0501 02:31:02.283265   32853 main.go:141] libmachine: (ha-329926) Building disk image from file:///home/jenkins/minikube-integration/18779-13391/.minikube/cache/iso/amd64/minikube-v1.33.0-1714498396-18779-amd64.iso
	I0501 02:31:02.283290   32853 main.go:141] libmachine: (ha-329926) Downloading /home/jenkins/minikube-integration/18779-13391/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18779-13391/.minikube/cache/iso/amd64/minikube-v1.33.0-1714498396-18779-amd64.iso...
	I0501 02:31:02.508576   32853 main.go:141] libmachine: (ha-329926) DBG | I0501 02:31:02.508477   32876 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926/id_rsa...
	I0501 02:31:02.768972   32853 main.go:141] libmachine: (ha-329926) DBG | I0501 02:31:02.768811   32876 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926/ha-329926.rawdisk...
	I0501 02:31:02.769012   32853 main.go:141] libmachine: (ha-329926) DBG | Writing magic tar header
	I0501 02:31:02.769028   32853 main.go:141] libmachine: (ha-329926) DBG | Writing SSH key tar header
	I0501 02:31:02.769049   32853 main.go:141] libmachine: (ha-329926) DBG | I0501 02:31:02.768957   32876 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926 ...
	I0501 02:31:02.769112   32853 main.go:141] libmachine: (ha-329926) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926
	I0501 02:31:02.769152   32853 main.go:141] libmachine: (ha-329926) Setting executable bit set on /home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926 (perms=drwx------)
	I0501 02:31:02.769164   32853 main.go:141] libmachine: (ha-329926) Setting executable bit set on /home/jenkins/minikube-integration/18779-13391/.minikube/machines (perms=drwxr-xr-x)
	I0501 02:31:02.769176   32853 main.go:141] libmachine: (ha-329926) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18779-13391/.minikube/machines
	I0501 02:31:02.769188   32853 main.go:141] libmachine: (ha-329926) Setting executable bit set on /home/jenkins/minikube-integration/18779-13391/.minikube (perms=drwxr-xr-x)
	I0501 02:31:02.769198   32853 main.go:141] libmachine: (ha-329926) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18779-13391/.minikube
	I0501 02:31:02.769211   32853 main.go:141] libmachine: (ha-329926) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18779-13391
	I0501 02:31:02.769218   32853 main.go:141] libmachine: (ha-329926) Setting executable bit set on /home/jenkins/minikube-integration/18779-13391 (perms=drwxrwxr-x)
	I0501 02:31:02.769224   32853 main.go:141] libmachine: (ha-329926) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0501 02:31:02.769253   32853 main.go:141] libmachine: (ha-329926) DBG | Checking permissions on dir: /home/jenkins
	I0501 02:31:02.769266   32853 main.go:141] libmachine: (ha-329926) DBG | Checking permissions on dir: /home
	I0501 02:31:02.769278   32853 main.go:141] libmachine: (ha-329926) DBG | Skipping /home - not owner
	I0501 02:31:02.769298   32853 main.go:141] libmachine: (ha-329926) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0501 02:31:02.769312   32853 main.go:141] libmachine: (ha-329926) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0501 02:31:02.769318   32853 main.go:141] libmachine: (ha-329926) Creating domain...
	I0501 02:31:02.770295   32853 main.go:141] libmachine: (ha-329926) define libvirt domain using xml: 
	I0501 02:31:02.770324   32853 main.go:141] libmachine: (ha-329926) <domain type='kvm'>
	I0501 02:31:02.770331   32853 main.go:141] libmachine: (ha-329926)   <name>ha-329926</name>
	I0501 02:31:02.770336   32853 main.go:141] libmachine: (ha-329926)   <memory unit='MiB'>2200</memory>
	I0501 02:31:02.770342   32853 main.go:141] libmachine: (ha-329926)   <vcpu>2</vcpu>
	I0501 02:31:02.770348   32853 main.go:141] libmachine: (ha-329926)   <features>
	I0501 02:31:02.770359   32853 main.go:141] libmachine: (ha-329926)     <acpi/>
	I0501 02:31:02.770366   32853 main.go:141] libmachine: (ha-329926)     <apic/>
	I0501 02:31:02.770394   32853 main.go:141] libmachine: (ha-329926)     <pae/>
	I0501 02:31:02.770427   32853 main.go:141] libmachine: (ha-329926)     
	I0501 02:31:02.770434   32853 main.go:141] libmachine: (ha-329926)   </features>
	I0501 02:31:02.770444   32853 main.go:141] libmachine: (ha-329926)   <cpu mode='host-passthrough'>
	I0501 02:31:02.770450   32853 main.go:141] libmachine: (ha-329926)   
	I0501 02:31:02.770457   32853 main.go:141] libmachine: (ha-329926)   </cpu>
	I0501 02:31:02.770465   32853 main.go:141] libmachine: (ha-329926)   <os>
	I0501 02:31:02.770470   32853 main.go:141] libmachine: (ha-329926)     <type>hvm</type>
	I0501 02:31:02.770474   32853 main.go:141] libmachine: (ha-329926)     <boot dev='cdrom'/>
	I0501 02:31:02.770481   32853 main.go:141] libmachine: (ha-329926)     <boot dev='hd'/>
	I0501 02:31:02.770486   32853 main.go:141] libmachine: (ha-329926)     <bootmenu enable='no'/>
	I0501 02:31:02.770490   32853 main.go:141] libmachine: (ha-329926)   </os>
	I0501 02:31:02.770495   32853 main.go:141] libmachine: (ha-329926)   <devices>
	I0501 02:31:02.770501   32853 main.go:141] libmachine: (ha-329926)     <disk type='file' device='cdrom'>
	I0501 02:31:02.770511   32853 main.go:141] libmachine: (ha-329926)       <source file='/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926/boot2docker.iso'/>
	I0501 02:31:02.770516   32853 main.go:141] libmachine: (ha-329926)       <target dev='hdc' bus='scsi'/>
	I0501 02:31:02.770521   32853 main.go:141] libmachine: (ha-329926)       <readonly/>
	I0501 02:31:02.770525   32853 main.go:141] libmachine: (ha-329926)     </disk>
	I0501 02:31:02.770533   32853 main.go:141] libmachine: (ha-329926)     <disk type='file' device='disk'>
	I0501 02:31:02.770539   32853 main.go:141] libmachine: (ha-329926)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0501 02:31:02.770549   32853 main.go:141] libmachine: (ha-329926)       <source file='/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926/ha-329926.rawdisk'/>
	I0501 02:31:02.770558   32853 main.go:141] libmachine: (ha-329926)       <target dev='hda' bus='virtio'/>
	I0501 02:31:02.770562   32853 main.go:141] libmachine: (ha-329926)     </disk>
	I0501 02:31:02.770568   32853 main.go:141] libmachine: (ha-329926)     <interface type='network'>
	I0501 02:31:02.770575   32853 main.go:141] libmachine: (ha-329926)       <source network='mk-ha-329926'/>
	I0501 02:31:02.770580   32853 main.go:141] libmachine: (ha-329926)       <model type='virtio'/>
	I0501 02:31:02.770585   32853 main.go:141] libmachine: (ha-329926)     </interface>
	I0501 02:31:02.770590   32853 main.go:141] libmachine: (ha-329926)     <interface type='network'>
	I0501 02:31:02.770597   32853 main.go:141] libmachine: (ha-329926)       <source network='default'/>
	I0501 02:31:02.770602   32853 main.go:141] libmachine: (ha-329926)       <model type='virtio'/>
	I0501 02:31:02.770609   32853 main.go:141] libmachine: (ha-329926)     </interface>
	I0501 02:31:02.770613   32853 main.go:141] libmachine: (ha-329926)     <serial type='pty'>
	I0501 02:31:02.770620   32853 main.go:141] libmachine: (ha-329926)       <target port='0'/>
	I0501 02:31:02.770625   32853 main.go:141] libmachine: (ha-329926)     </serial>
	I0501 02:31:02.770630   32853 main.go:141] libmachine: (ha-329926)     <console type='pty'>
	I0501 02:31:02.770636   32853 main.go:141] libmachine: (ha-329926)       <target type='serial' port='0'/>
	I0501 02:31:02.770652   32853 main.go:141] libmachine: (ha-329926)     </console>
	I0501 02:31:02.770660   32853 main.go:141] libmachine: (ha-329926)     <rng model='virtio'>
	I0501 02:31:02.770665   32853 main.go:141] libmachine: (ha-329926)       <backend model='random'>/dev/random</backend>
	I0501 02:31:02.770673   32853 main.go:141] libmachine: (ha-329926)     </rng>
	I0501 02:31:02.770677   32853 main.go:141] libmachine: (ha-329926)     
	I0501 02:31:02.770688   32853 main.go:141] libmachine: (ha-329926)     
	I0501 02:31:02.770695   32853 main.go:141] libmachine: (ha-329926)   </devices>
	I0501 02:31:02.770699   32853 main.go:141] libmachine: (ha-329926) </domain>
	I0501 02:31:02.770705   32853 main.go:141] libmachine: (ha-329926) 
	I0501 02:31:02.775111   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:b8:ba:6a in network default
	I0501 02:31:02.775622   32853 main.go:141] libmachine: (ha-329926) Ensuring networks are active...
	I0501 02:31:02.775642   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:02.776188   32853 main.go:141] libmachine: (ha-329926) Ensuring network default is active
	I0501 02:31:02.776475   32853 main.go:141] libmachine: (ha-329926) Ensuring network mk-ha-329926 is active
	I0501 02:31:02.776962   32853 main.go:141] libmachine: (ha-329926) Getting domain xml...
	I0501 02:31:02.777590   32853 main.go:141] libmachine: (ha-329926) Creating domain...
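
	The domain definition above has a fixed shape: memory, vCPUs, a CD-ROM boot device for the minikube ISO, a raw data disk, and two virtio NICs (one on the private mk-ha-329926 network, one on libvirt's default network). The sketch below is a simplified, hypothetical reconstruction that renders comparable XML with text/template; it is not the driver's actual template and omits the serial console, RNG, and CPU passthrough settings shown in the log.

	// Simplified, hypothetical reconstruction of the domain XML logged above.
	package main

	import (
		"os"
		"text/template"
	)

	type domain struct {
		Name      string
		MemoryMiB int
		CPUs      int
		ISOPath   string
		DiskPath  string
		Network   string // private network, e.g. mk-ha-329926
	}

	const domainTmpl = `<domain type='kvm'>
	  <name>{{.Name}}</name>
	  <memory unit='MiB'>{{.MemoryMiB}}</memory>
	  <vcpu>{{.CPUs}}</vcpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='{{.ISOPath}}'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads'/>
	      <source file='{{.DiskPath}}'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='{{.Network}}'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	  </devices>
	</domain>
	`

	func main() {
		t := template.Must(template.New("domain").Parse(domainTmpl))
		_ = t.Execute(os.Stdout, domain{
			Name:      "ha-329926",
			MemoryMiB: 2200,
			CPUs:      2,
			ISOPath:   "/path/to/boot2docker.iso", // placeholder paths
			DiskPath:  "/path/to/ha-329926.rawdisk",
			Network:   "mk-ha-329926",
		})
	}
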
	I0501 02:31:03.958358   32853 main.go:141] libmachine: (ha-329926) Waiting to get IP...
	I0501 02:31:03.959186   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:03.959545   32853 main.go:141] libmachine: (ha-329926) DBG | unable to find current IP address of domain ha-329926 in network mk-ha-329926
	I0501 02:31:03.959566   32853 main.go:141] libmachine: (ha-329926) DBG | I0501 02:31:03.959529   32876 retry.go:31] will retry after 238.732907ms: waiting for machine to come up
	I0501 02:31:04.200166   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:04.200557   32853 main.go:141] libmachine: (ha-329926) DBG | unable to find current IP address of domain ha-329926 in network mk-ha-329926
	I0501 02:31:04.200587   32853 main.go:141] libmachine: (ha-329926) DBG | I0501 02:31:04.200531   32876 retry.go:31] will retry after 374.829741ms: waiting for machine to come up
	I0501 02:31:04.576992   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:04.577416   32853 main.go:141] libmachine: (ha-329926) DBG | unable to find current IP address of domain ha-329926 in network mk-ha-329926
	I0501 02:31:04.577449   32853 main.go:141] libmachine: (ha-329926) DBG | I0501 02:31:04.577372   32876 retry.go:31] will retry after 309.413827ms: waiting for machine to come up
	I0501 02:31:04.888766   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:04.889189   32853 main.go:141] libmachine: (ha-329926) DBG | unable to find current IP address of domain ha-329926 in network mk-ha-329926
	I0501 02:31:04.889238   32853 main.go:141] libmachine: (ha-329926) DBG | I0501 02:31:04.889142   32876 retry.go:31] will retry after 366.291711ms: waiting for machine to come up
	I0501 02:31:05.256536   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:05.256930   32853 main.go:141] libmachine: (ha-329926) DBG | unable to find current IP address of domain ha-329926 in network mk-ha-329926
	I0501 02:31:05.256960   32853 main.go:141] libmachine: (ha-329926) DBG | I0501 02:31:05.256882   32876 retry.go:31] will retry after 711.660535ms: waiting for machine to come up
	I0501 02:31:05.969606   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:05.969985   32853 main.go:141] libmachine: (ha-329926) DBG | unable to find current IP address of domain ha-329926 in network mk-ha-329926
	I0501 02:31:05.970044   32853 main.go:141] libmachine: (ha-329926) DBG | I0501 02:31:05.969964   32876 retry.go:31] will retry after 826.819518ms: waiting for machine to come up
	I0501 02:31:06.797981   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:06.798491   32853 main.go:141] libmachine: (ha-329926) DBG | unable to find current IP address of domain ha-329926 in network mk-ha-329926
	I0501 02:31:06.798551   32853 main.go:141] libmachine: (ha-329926) DBG | I0501 02:31:06.798455   32876 retry.go:31] will retry after 766.952141ms: waiting for machine to come up
	I0501 02:31:07.566945   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:07.567298   32853 main.go:141] libmachine: (ha-329926) DBG | unable to find current IP address of domain ha-329926 in network mk-ha-329926
	I0501 02:31:07.567328   32853 main.go:141] libmachine: (ha-329926) DBG | I0501 02:31:07.567254   32876 retry.go:31] will retry after 1.148906462s: waiting for machine to come up
	I0501 02:31:08.717544   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:08.717895   32853 main.go:141] libmachine: (ha-329926) DBG | unable to find current IP address of domain ha-329926 in network mk-ha-329926
	I0501 02:31:08.717921   32853 main.go:141] libmachine: (ha-329926) DBG | I0501 02:31:08.717850   32876 retry.go:31] will retry after 1.572762289s: waiting for machine to come up
	I0501 02:31:10.292539   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:10.292913   32853 main.go:141] libmachine: (ha-329926) DBG | unable to find current IP address of domain ha-329926 in network mk-ha-329926
	I0501 02:31:10.292941   32853 main.go:141] libmachine: (ha-329926) DBG | I0501 02:31:10.292867   32876 retry.go:31] will retry after 2.066139393s: waiting for machine to come up
	I0501 02:31:12.360803   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:12.361151   32853 main.go:141] libmachine: (ha-329926) DBG | unable to find current IP address of domain ha-329926 in network mk-ha-329926
	I0501 02:31:12.361176   32853 main.go:141] libmachine: (ha-329926) DBG | I0501 02:31:12.361095   32876 retry.go:31] will retry after 2.871501826s: waiting for machine to come up
	I0501 02:31:15.236013   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:15.236432   32853 main.go:141] libmachine: (ha-329926) DBG | unable to find current IP address of domain ha-329926 in network mk-ha-329926
	I0501 02:31:15.236459   32853 main.go:141] libmachine: (ha-329926) DBG | I0501 02:31:15.236387   32876 retry.go:31] will retry after 3.153540987s: waiting for machine to come up
	I0501 02:31:18.391419   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:18.391858   32853 main.go:141] libmachine: (ha-329926) DBG | unable to find current IP address of domain ha-329926 in network mk-ha-329926
	I0501 02:31:18.391886   32853 main.go:141] libmachine: (ha-329926) DBG | I0501 02:31:18.391808   32876 retry.go:31] will retry after 4.132363881s: waiting for machine to come up
	I0501 02:31:22.525823   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:22.526223   32853 main.go:141] libmachine: (ha-329926) DBG | unable to find current IP address of domain ha-329926 in network mk-ha-329926
	I0501 02:31:22.526247   32853 main.go:141] libmachine: (ha-329926) DBG | I0501 02:31:22.526172   32876 retry.go:31] will retry after 4.703892793s: waiting for machine to come up
	I0501 02:31:27.231444   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:27.231840   32853 main.go:141] libmachine: (ha-329926) Found IP for machine: 192.168.39.5
	I0501 02:31:27.231868   32853 main.go:141] libmachine: (ha-329926) Reserving static IP address...
	I0501 02:31:27.231882   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has current primary IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:27.232282   32853 main.go:141] libmachine: (ha-329926) DBG | unable to find host DHCP lease matching {name: "ha-329926", mac: "52:54:00:ce:d8:43", ip: "192.168.39.5"} in network mk-ha-329926
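
	The "will retry after ..." lines above come from a retry loop whose sleep grows (with jitter) after each failed attempt to find the domain's DHCP lease. A minimal, generic sketch of that pattern, not minikube's own retry package:

	// Generic retry-with-growing-delay helper, illustrating the pattern behind
	// the "will retry after ..." lines above.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryExpo calls fn until it succeeds or maxElapsed is exceeded, sleeping a
	// jittered, roughly doubling interval between attempts.
	func retryExpo(fn func() error, base, maxElapsed time.Duration) error {
		start := time.Now()
		delay := base
		for attempt := 1; ; attempt++ {
			err := fn()
			if err == nil {
				return nil
			}
			if time.Since(start) > maxElapsed {
				return fmt.Errorf("gave up after %d attempts: %w", attempt, err)
			}
			sleep := time.Duration(float64(delay) * (0.5 + rand.Float64())) // jitter
			fmt.Printf("will retry after %v: %v\n", sleep, err)
			time.Sleep(sleep)
			delay *= 2
		}
	}

	func main() {
		deadline := time.Now().Add(3 * time.Second)
		err := retryExpo(func() error {
			if time.Now().Before(deadline) {
				return errors.New("waiting for machine to come up")
			}
			return nil
		}, 250*time.Millisecond, time.Minute)
		fmt.Println("done:", err)
	}
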
	I0501 02:31:27.306230   32853 main.go:141] libmachine: (ha-329926) DBG | Getting to WaitForSSH function...
	I0501 02:31:27.306257   32853 main.go:141] libmachine: (ha-329926) Reserved static IP address: 192.168.39.5
	I0501 02:31:27.306296   32853 main.go:141] libmachine: (ha-329926) Waiting for SSH to be available...
	I0501 02:31:27.308886   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:27.309237   32853 main.go:141] libmachine: (ha-329926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:43", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:31:18 +0000 UTC Type:0 Mac:52:54:00:ce:d8:43 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ce:d8:43}
	I0501 02:31:27.309262   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:27.309426   32853 main.go:141] libmachine: (ha-329926) DBG | Using SSH client type: external
	I0501 02:31:27.309451   32853 main.go:141] libmachine: (ha-329926) DBG | Using SSH private key: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926/id_rsa (-rw-------)
	I0501 02:31:27.309482   32853 main.go:141] libmachine: (ha-329926) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.5 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0501 02:31:27.309495   32853 main.go:141] libmachine: (ha-329926) DBG | About to run SSH command:
	I0501 02:31:27.309507   32853 main.go:141] libmachine: (ha-329926) DBG | exit 0
	I0501 02:31:27.434752   32853 main.go:141] libmachine: (ha-329926) DBG | SSH cmd err, output: <nil>: 
	I0501 02:31:27.435030   32853 main.go:141] libmachine: (ha-329926) KVM machine creation complete!
	I0501 02:31:27.435317   32853 main.go:141] libmachine: (ha-329926) Calling .GetConfigRaw
	I0501 02:31:27.435956   32853 main.go:141] libmachine: (ha-329926) Calling .DriverName
	I0501 02:31:27.436206   32853 main.go:141] libmachine: (ha-329926) Calling .DriverName
	I0501 02:31:27.436384   32853 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0501 02:31:27.436396   32853 main.go:141] libmachine: (ha-329926) Calling .GetState
	I0501 02:31:27.437585   32853 main.go:141] libmachine: Detecting operating system of created instance...
	I0501 02:31:27.437597   32853 main.go:141] libmachine: Waiting for SSH to be available...
	I0501 02:31:27.437603   32853 main.go:141] libmachine: Getting to WaitForSSH function...
	I0501 02:31:27.437609   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHHostname
	I0501 02:31:27.439934   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:27.440337   32853 main.go:141] libmachine: (ha-329926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:43", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:31:18 +0000 UTC Type:0 Mac:52:54:00:ce:d8:43 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-329926 Clientid:01:52:54:00:ce:d8:43}
	I0501 02:31:27.440369   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:27.440519   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHPort
	I0501 02:31:27.440713   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHKeyPath
	I0501 02:31:27.440852   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHKeyPath
	I0501 02:31:27.440949   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHUsername
	I0501 02:31:27.441092   32853 main.go:141] libmachine: Using SSH client type: native
	I0501 02:31:27.441261   32853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I0501 02:31:27.441271   32853 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0501 02:31:27.542047   32853 main.go:141] libmachine: SSH cmd err, output: <nil>: 
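
	Once an IP is reserved, readiness is probed by running "exit 0" over SSH, first with the external ssh binary (flags above) and then with a native Go client. A minimal sketch of the same probe using golang.org/x/crypto/ssh; the address, user, and key path are taken from this log and would differ in another run:

	// Minimal sketch of the "exit 0" SSH readiness probe using x/crypto/ssh
	// instead of shelling out to /usr/bin/ssh as the log above does.
	package main

	import (
		"fmt"
		"os"
		"time"

		"golang.org/x/crypto/ssh"
	)

	func sshReady(addr, user, keyPath string) error {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return err
		}
		cfg := &ssh.ClientConfig{
			User:            user,
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
			Timeout:         10 * time.Second,
		}
		client, err := ssh.Dial("tcp", addr, cfg)
		if err != nil {
			return err
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			return err
		}
		defer sess.Close()
		return sess.Run("exit 0") // success means sshd is up and the key is accepted
	}

	func main() {
		err := sshReady("192.168.39.5:22", "docker", "/path/to/machines/ha-329926/id_rsa")
		fmt.Println("ssh ready:", err == nil, err)
	}
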
	I0501 02:31:27.542073   32853 main.go:141] libmachine: Detecting the provisioner...
	I0501 02:31:27.542084   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHHostname
	I0501 02:31:27.544546   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:27.544801   32853 main.go:141] libmachine: (ha-329926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:43", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:31:18 +0000 UTC Type:0 Mac:52:54:00:ce:d8:43 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-329926 Clientid:01:52:54:00:ce:d8:43}
	I0501 02:31:27.544823   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:27.544948   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHPort
	I0501 02:31:27.545142   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHKeyPath
	I0501 02:31:27.545293   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHKeyPath
	I0501 02:31:27.545418   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHUsername
	I0501 02:31:27.545555   32853 main.go:141] libmachine: Using SSH client type: native
	I0501 02:31:27.545774   32853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I0501 02:31:27.545791   32853 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0501 02:31:27.651855   32853 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0501 02:31:27.651922   32853 main.go:141] libmachine: found compatible host: buildroot
	I0501 02:31:27.651933   32853 main.go:141] libmachine: Provisioning with buildroot...
	I0501 02:31:27.651942   32853 main.go:141] libmachine: (ha-329926) Calling .GetMachineName
	I0501 02:31:27.652222   32853 buildroot.go:166] provisioning hostname "ha-329926"
	I0501 02:31:27.652254   32853 main.go:141] libmachine: (ha-329926) Calling .GetMachineName
	I0501 02:31:27.652482   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHHostname
	I0501 02:31:27.654880   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:27.655220   32853 main.go:141] libmachine: (ha-329926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:43", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:31:18 +0000 UTC Type:0 Mac:52:54:00:ce:d8:43 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-329926 Clientid:01:52:54:00:ce:d8:43}
	I0501 02:31:27.655237   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:27.655371   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHPort
	I0501 02:31:27.655541   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHKeyPath
	I0501 02:31:27.655687   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHKeyPath
	I0501 02:31:27.655837   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHUsername
	I0501 02:31:27.655996   32853 main.go:141] libmachine: Using SSH client type: native
	I0501 02:31:27.656194   32853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I0501 02:31:27.656209   32853 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-329926 && echo "ha-329926" | sudo tee /etc/hostname
	I0501 02:31:27.775558   32853 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-329926
	
	I0501 02:31:27.775590   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHHostname
	I0501 02:31:27.778154   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:27.778534   32853 main.go:141] libmachine: (ha-329926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:43", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:31:18 +0000 UTC Type:0 Mac:52:54:00:ce:d8:43 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-329926 Clientid:01:52:54:00:ce:d8:43}
	I0501 02:31:27.778586   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:27.778713   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHPort
	I0501 02:31:27.778940   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHKeyPath
	I0501 02:31:27.779113   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHKeyPath
	I0501 02:31:27.779293   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHUsername
	I0501 02:31:27.779460   32853 main.go:141] libmachine: Using SSH client type: native
	I0501 02:31:27.779694   32853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I0501 02:31:27.779714   32853 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-329926' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-329926/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-329926' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0501 02:31:27.893285   32853 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0501 02:31:27.893325   32853 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18779-13391/.minikube CaCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18779-13391/.minikube}
	I0501 02:31:27.893378   32853 buildroot.go:174] setting up certificates
	I0501 02:31:27.893397   32853 provision.go:84] configureAuth start
	I0501 02:31:27.893416   32853 main.go:141] libmachine: (ha-329926) Calling .GetMachineName
	I0501 02:31:27.893706   32853 main.go:141] libmachine: (ha-329926) Calling .GetIP
	I0501 02:31:27.896155   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:27.896491   32853 main.go:141] libmachine: (ha-329926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:43", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:31:18 +0000 UTC Type:0 Mac:52:54:00:ce:d8:43 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-329926 Clientid:01:52:54:00:ce:d8:43}
	I0501 02:31:27.896510   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:27.896597   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHHostname
	I0501 02:31:27.898661   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:27.898974   32853 main.go:141] libmachine: (ha-329926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:43", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:31:18 +0000 UTC Type:0 Mac:52:54:00:ce:d8:43 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-329926 Clientid:01:52:54:00:ce:d8:43}
	I0501 02:31:27.899000   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:27.899156   32853 provision.go:143] copyHostCerts
	I0501 02:31:27.899193   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem
	I0501 02:31:27.899220   32853 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem, removing ...
	I0501 02:31:27.899228   32853 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem
	I0501 02:31:27.899302   32853 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem (1123 bytes)
	I0501 02:31:27.899395   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem
	I0501 02:31:27.899415   32853 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem, removing ...
	I0501 02:31:27.899419   32853 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem
	I0501 02:31:27.899442   32853 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem (1675 bytes)
	I0501 02:31:27.899495   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem
	I0501 02:31:27.899510   32853 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem, removing ...
	I0501 02:31:27.899514   32853 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem
	I0501 02:31:27.899547   32853 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem (1078 bytes)
	I0501 02:31:27.899606   32853 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem org=jenkins.ha-329926 san=[127.0.0.1 192.168.39.5 ha-329926 localhost minikube]
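
	configureAuth issues a server certificate signed by the local minikube CA with exactly the SANs listed above (127.0.0.1, 192.168.39.5, ha-329926, localhost, minikube). A self-contained sketch of issuing such a certificate with crypto/x509, using a throwaway CA instead of the files under .minikube/certs; this is not minikube's provision code, and error handling is elided for brevity:

	// Issue a server certificate with the SANs shown above, signed by a
	// freshly generated CA (illustrative only; errors ignored).
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-329926"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"ha-329926", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.5")},
		}
		srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	}
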
	I0501 02:31:28.044485   32853 provision.go:177] copyRemoteCerts
	I0501 02:31:28.044535   32853 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0501 02:31:28.044556   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHHostname
	I0501 02:31:28.047199   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:28.047648   32853 main.go:141] libmachine: (ha-329926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:43", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:31:18 +0000 UTC Type:0 Mac:52:54:00:ce:d8:43 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-329926 Clientid:01:52:54:00:ce:d8:43}
	I0501 02:31:28.047686   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:28.047841   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHPort
	I0501 02:31:28.048023   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHKeyPath
	I0501 02:31:28.048183   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHUsername
	I0501 02:31:28.048316   32853 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926/id_rsa Username:docker}
	I0501 02:31:28.131981   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0501 02:31:28.132055   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0501 02:31:28.161023   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0501 02:31:28.161097   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0501 02:31:28.190060   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0501 02:31:28.190132   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0501 02:31:28.218808   32853 provision.go:87] duration metric: took 325.394032ms to configureAuth
	I0501 02:31:28.218836   32853 buildroot.go:189] setting minikube options for container-runtime
	I0501 02:31:28.219004   32853 config.go:182] Loaded profile config "ha-329926": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 02:31:28.219110   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHHostname
	I0501 02:31:28.221523   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:28.221859   32853 main.go:141] libmachine: (ha-329926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:43", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:31:18 +0000 UTC Type:0 Mac:52:54:00:ce:d8:43 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-329926 Clientid:01:52:54:00:ce:d8:43}
	I0501 02:31:28.221888   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:28.222053   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHPort
	I0501 02:31:28.222248   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHKeyPath
	I0501 02:31:28.222434   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHKeyPath
	I0501 02:31:28.222567   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHUsername
	I0501 02:31:28.222683   32853 main.go:141] libmachine: Using SSH client type: native
	I0501 02:31:28.222846   32853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I0501 02:31:28.222860   32853 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0501 02:31:28.506794   32853 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0501 02:31:28.506824   32853 main.go:141] libmachine: Checking connection to Docker...
	I0501 02:31:28.506834   32853 main.go:141] libmachine: (ha-329926) Calling .GetURL
	I0501 02:31:28.508069   32853 main.go:141] libmachine: (ha-329926) DBG | Using libvirt version 6000000
	I0501 02:31:28.510048   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:28.510322   32853 main.go:141] libmachine: (ha-329926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:43", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:31:18 +0000 UTC Type:0 Mac:52:54:00:ce:d8:43 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-329926 Clientid:01:52:54:00:ce:d8:43}
	I0501 02:31:28.510343   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:28.510543   32853 main.go:141] libmachine: Docker is up and running!
	I0501 02:31:28.510569   32853 main.go:141] libmachine: Reticulating splines...
	I0501 02:31:28.510575   32853 client.go:171] duration metric: took 26.296922163s to LocalClient.Create
	I0501 02:31:28.510597   32853 start.go:167] duration metric: took 26.296986611s to libmachine.API.Create "ha-329926"
	I0501 02:31:28.510609   32853 start.go:293] postStartSetup for "ha-329926" (driver="kvm2")
	I0501 02:31:28.510624   32853 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0501 02:31:28.510639   32853 main.go:141] libmachine: (ha-329926) Calling .DriverName
	I0501 02:31:28.510865   32853 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0501 02:31:28.510895   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHHostname
	I0501 02:31:28.512814   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:28.513130   32853 main.go:141] libmachine: (ha-329926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:43", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:31:18 +0000 UTC Type:0 Mac:52:54:00:ce:d8:43 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-329926 Clientid:01:52:54:00:ce:d8:43}
	I0501 02:31:28.513152   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:28.513256   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHPort
	I0501 02:31:28.513422   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHKeyPath
	I0501 02:31:28.513566   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHUsername
	I0501 02:31:28.513673   32853 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926/id_rsa Username:docker}
	I0501 02:31:28.593262   32853 ssh_runner.go:195] Run: cat /etc/os-release
	I0501 02:31:28.598118   32853 info.go:137] Remote host: Buildroot 2023.02.9
	I0501 02:31:28.598146   32853 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13391/.minikube/addons for local assets ...
	I0501 02:31:28.598226   32853 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13391/.minikube/files for local assets ...
	I0501 02:31:28.598317   32853 filesync.go:149] local asset: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem -> 207242.pem in /etc/ssl/certs
	I0501 02:31:28.598329   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem -> /etc/ssl/certs/207242.pem
	I0501 02:31:28.598460   32853 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0501 02:31:28.608303   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem --> /etc/ssl/certs/207242.pem (1708 bytes)
	I0501 02:31:28.634374   32853 start.go:296] duration metric: took 123.748542ms for postStartSetup
	I0501 02:31:28.634435   32853 main.go:141] libmachine: (ha-329926) Calling .GetConfigRaw
	I0501 02:31:28.635011   32853 main.go:141] libmachine: (ha-329926) Calling .GetIP
	I0501 02:31:28.637415   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:28.637744   32853 main.go:141] libmachine: (ha-329926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:43", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:31:18 +0000 UTC Type:0 Mac:52:54:00:ce:d8:43 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-329926 Clientid:01:52:54:00:ce:d8:43}
	I0501 02:31:28.637772   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:28.638014   32853 profile.go:143] Saving config to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/config.json ...
	I0501 02:31:28.638164   32853 start.go:128] duration metric: took 26.442026735s to createHost
	I0501 02:31:28.638184   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHHostname
	I0501 02:31:28.640154   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:28.640404   32853 main.go:141] libmachine: (ha-329926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:43", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:31:18 +0000 UTC Type:0 Mac:52:54:00:ce:d8:43 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-329926 Clientid:01:52:54:00:ce:d8:43}
	I0501 02:31:28.640430   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:28.640526   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHPort
	I0501 02:31:28.640720   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHKeyPath
	I0501 02:31:28.640860   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHKeyPath
	I0501 02:31:28.640990   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHUsername
	I0501 02:31:28.641115   32853 main.go:141] libmachine: Using SSH client type: native
	I0501 02:31:28.641289   32853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I0501 02:31:28.641312   32853 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0501 02:31:28.743716   32853 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714530688.716538813
	
	I0501 02:31:28.743739   32853 fix.go:216] guest clock: 1714530688.716538813
	I0501 02:31:28.743746   32853 fix.go:229] Guest: 2024-05-01 02:31:28.716538813 +0000 UTC Remote: 2024-05-01 02:31:28.638174692 +0000 UTC m=+26.560671961 (delta=78.364121ms)
	I0501 02:31:28.743771   32853 fix.go:200] guest clock delta is within tolerance: 78.364121ms
	I0501 02:31:28.743777   32853 start.go:83] releasing machines lock for "ha-329926", held for 26.547711947s
	I0501 02:31:28.743799   32853 main.go:141] libmachine: (ha-329926) Calling .DriverName
	I0501 02:31:28.744031   32853 main.go:141] libmachine: (ha-329926) Calling .GetIP
	I0501 02:31:28.746551   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:28.746896   32853 main.go:141] libmachine: (ha-329926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:43", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:31:18 +0000 UTC Type:0 Mac:52:54:00:ce:d8:43 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-329926 Clientid:01:52:54:00:ce:d8:43}
	I0501 02:31:28.746920   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:28.747070   32853 main.go:141] libmachine: (ha-329926) Calling .DriverName
	I0501 02:31:28.747674   32853 main.go:141] libmachine: (ha-329926) Calling .DriverName
	I0501 02:31:28.747860   32853 main.go:141] libmachine: (ha-329926) Calling .DriverName
	I0501 02:31:28.747973   32853 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0501 02:31:28.748005   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHHostname
	I0501 02:31:28.748095   32853 ssh_runner.go:195] Run: cat /version.json
	I0501 02:31:28.748117   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHHostname
	I0501 02:31:28.750298   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:28.750669   32853 main.go:141] libmachine: (ha-329926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:43", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:31:18 +0000 UTC Type:0 Mac:52:54:00:ce:d8:43 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-329926 Clientid:01:52:54:00:ce:d8:43}
	I0501 02:31:28.750693   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:28.750711   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:28.750864   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHPort
	I0501 02:31:28.751018   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHKeyPath
	I0501 02:31:28.751129   32853 main.go:141] libmachine: (ha-329926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:43", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:31:18 +0000 UTC Type:0 Mac:52:54:00:ce:d8:43 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-329926 Clientid:01:52:54:00:ce:d8:43}
	I0501 02:31:28.751150   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:28.751166   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHUsername
	I0501 02:31:28.751306   32853 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926/id_rsa Username:docker}
	I0501 02:31:28.751389   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHPort
	I0501 02:31:28.751533   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHKeyPath
	I0501 02:31:28.751656   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHUsername
	I0501 02:31:28.751809   32853 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926/id_rsa Username:docker}
	I0501 02:31:28.848869   32853 ssh_runner.go:195] Run: systemctl --version
	I0501 02:31:28.855210   32853 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0501 02:31:29.016256   32853 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0501 02:31:29.023608   32853 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0501 02:31:29.023691   32853 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0501 02:31:29.042085   32853 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0501 02:31:29.042115   32853 start.go:494] detecting cgroup driver to use...
	I0501 02:31:29.042178   32853 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0501 02:31:29.059776   32853 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 02:31:29.075189   32853 docker.go:217] disabling cri-docker service (if available) ...
	I0501 02:31:29.075262   32853 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0501 02:31:29.090216   32853 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0501 02:31:29.105523   32853 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0501 02:31:29.221270   32853 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0501 02:31:29.352751   32853 docker.go:233] disabling docker service ...
	I0501 02:31:29.352848   32853 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0501 02:31:29.369405   32853 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0501 02:31:29.383459   32853 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0501 02:31:29.520606   32853 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0501 02:31:29.660010   32853 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0501 02:31:29.675021   32853 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 02:31:29.695267   32853 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0501 02:31:29.695336   32853 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 02:31:29.707073   32853 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0501 02:31:29.707136   32853 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 02:31:29.718755   32853 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 02:31:29.730541   32853 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 02:31:29.743583   32853 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0501 02:31:29.756320   32853 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 02:31:29.768711   32853 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 02:31:29.788302   32853 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
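For readers unfamiliar with the step above: the run rewrites /etc/crio/crio.conf.d/02-crio.conf in place so CRI-O uses the registry.k8s.io/pause:3.9 pause image and the cgroupfs cgroup manager. A minimal Go sketch of the same in-place substitution (a hypothetical standalone helper, not minikube's own code) could look like this:

// crio_conf_patch.go - illustrative only; mirrors the first two sed edits logged above.
package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf" // path taken from the log
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	conf := string(data)
	// Same effect as: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	// Same effect as: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
		log.Fatal(err)
	}
}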
	I0501 02:31:29.800367   32853 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0501 02:31:29.811307   32853 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0501 02:31:29.811373   32853 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0501 02:31:29.825777   32853 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0501 02:31:29.837371   32853 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:31:29.952518   32853 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0501 02:31:30.093573   32853 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0501 02:31:30.093652   32853 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0501 02:31:30.098671   32853 start.go:562] Will wait 60s for crictl version
	I0501 02:31:30.098708   32853 ssh_runner.go:195] Run: which crictl
	I0501 02:31:30.103137   32853 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0501 02:31:30.139019   32853 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
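The "Will wait 60s for socket path" step above is a simple existence poll on the CRI-O socket. A minimal sketch under that assumption (standalone program, not the test harness itself):

// wait_for_socket.go - poll until the CRI socket exists or a 60s deadline passes.
package main

import (
	"fmt"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/crio/crio.sock" // socket path from the log
	deadline := time.Now().Add(60 * time.Second)
	for {
		if _, err := os.Stat(sock); err == nil {
			fmt.Println("socket ready:", sock)
			return
		}
		if time.Now().After(deadline) {
			fmt.Fprintln(os.Stderr, "timed out waiting for", sock)
			os.Exit(1)
		}
		time.Sleep(500 * time.Millisecond)
	}
}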
	I0501 02:31:30.139117   32853 ssh_runner.go:195] Run: crio --version
	I0501 02:31:30.168469   32853 ssh_runner.go:195] Run: crio --version
	I0501 02:31:30.203703   32853 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0501 02:31:30.205011   32853 main.go:141] libmachine: (ha-329926) Calling .GetIP
	I0501 02:31:30.207922   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:30.208309   32853 main.go:141] libmachine: (ha-329926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:43", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:31:18 +0000 UTC Type:0 Mac:52:54:00:ce:d8:43 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-329926 Clientid:01:52:54:00:ce:d8:43}
	I0501 02:31:30.208340   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:30.208519   32853 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0501 02:31:30.213134   32853 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0501 02:31:30.227739   32853 kubeadm.go:877] updating cluster {Name:ha-329926 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 Cl
usterName:ha-329926 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.5 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0501 02:31:30.227847   32853 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0501 02:31:30.227895   32853 ssh_runner.go:195] Run: sudo crictl images --output json
	I0501 02:31:30.278071   32853 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0501 02:31:30.278133   32853 ssh_runner.go:195] Run: which lz4
	I0501 02:31:30.282738   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0501 02:31:30.282841   32853 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0501 02:31:30.287593   32853 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0501 02:31:30.287625   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0501 02:31:31.899532   32853 crio.go:462] duration metric: took 1.616715499s to copy over tarball
	I0501 02:31:31.899619   32853 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0501 02:31:34.331728   32853 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.432078045s)
	I0501 02:31:34.331754   32853 crio.go:469] duration metric: took 2.432192448s to extract the tarball
	I0501 02:31:34.331761   32853 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0501 02:31:34.372975   32853 ssh_runner.go:195] Run: sudo crictl images --output json
	I0501 02:31:34.421556   32853 crio.go:514] all images are preloaded for cri-o runtime.
	I0501 02:31:34.421580   32853 cache_images.go:84] Images are preloaded, skipping loading
	I0501 02:31:34.421589   32853 kubeadm.go:928] updating node { 192.168.39.5 8443 v1.30.0 crio true true} ...
	I0501 02:31:34.421690   32853 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-329926 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-329926 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0501 02:31:34.421758   32853 ssh_runner.go:195] Run: crio config
	I0501 02:31:34.470851   32853 cni.go:84] Creating CNI manager for ""
	I0501 02:31:34.470875   32853 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0501 02:31:34.470887   32853 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0501 02:31:34.470908   32853 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.5 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-329926 NodeName:ha-329926 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0501 02:31:34.471082   32853 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-329926"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
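If you want to sanity-check a generated kubeadm config like the one rendered above before the real init, kubeadm's --dry-run flag can be driven from a small wrapper. This is an illustrative extra step, not something the harness performs; the config path matches the one used later in this log:

// kubeadm_dryrun.go - validate a kubeadm config with a dry run (assumes kubeadm is on PATH).
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("kubeadm", "init", "--dry-run",
		"--config", "/var/tmp/minikube/kubeadm.yaml")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("kubeadm dry run failed: %v", err)
	}
}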
	I0501 02:31:34.471110   32853 kube-vip.go:111] generating kube-vip config ...
	I0501 02:31:34.471157   32853 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0501 02:31:34.494493   32853 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0501 02:31:34.494609   32853 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
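The manifest above pins the control-plane VIP to 192.168.39.254 on port 8443 (the address and port env values). A quick, hypothetical reachability probe for that endpoint, not part of the test run, might be:

// vip_probe.go - check whether the kube-vip address accepts TCP connections.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "192.168.39.254:8443", 5*time.Second)
	if err != nil {
		fmt.Fprintln(os.Stderr, "VIP not reachable:", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("VIP reachable")
}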
	I0501 02:31:34.494670   32853 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0501 02:31:34.506544   32853 binaries.go:44] Found k8s binaries, skipping transfer
	I0501 02:31:34.506641   32853 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0501 02:31:34.518288   32853 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0501 02:31:34.537345   32853 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0501 02:31:34.556679   32853 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2147 bytes)
	I0501 02:31:34.575628   32853 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1352 bytes)
	I0501 02:31:34.594823   32853 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0501 02:31:34.599305   32853 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
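The bash one-liner above implements an idempotent /etc/hosts update: drop any existing line for the hostname, then append the desired entry. The same pattern in Go (illustrative sketch; hostname and IP taken from the log, error handling kept minimal):

// hosts_entry.go - idempotent "remove then append" update of /etc/hosts.
package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const host = "control-plane.minikube.internal"
	const entry = "192.168.39.254\t" + host

	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		log.Fatal(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// same filter as: grep -v $'\tcontrol-plane.minikube.internal$'
		if strings.HasSuffix(line, "\t"+host) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		log.Fatal(err)
	}
}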
	I0501 02:31:34.613451   32853 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:31:34.737037   32853 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 02:31:34.757717   32853 certs.go:68] Setting up /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926 for IP: 192.168.39.5
	I0501 02:31:34.757740   32853 certs.go:194] generating shared ca certs ...
	I0501 02:31:34.757759   32853 certs.go:226] acquiring lock for ca certs: {Name:mk85a75d14c00780d4bf09cd9bdaf0f96f1b8fce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:31:34.757924   32853 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key
	I0501 02:31:34.757995   32853 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key
	I0501 02:31:34.758010   32853 certs.go:256] generating profile certs ...
	I0501 02:31:34.758085   32853 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/client.key
	I0501 02:31:34.758102   32853 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/client.crt with IP's: []
	I0501 02:31:35.184404   32853 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/client.crt ...
	I0501 02:31:35.184439   32853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/client.crt: {Name:mk7262274ab19f428bd917a3a08a2ab22cf28192 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:31:35.184627   32853 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/client.key ...
	I0501 02:31:35.184641   32853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/client.key: {Name:mk6a4a995038232669fc0f6a17d68762f3b81c49 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:31:35.184741   32853 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.key.a34e53e1
	I0501 02:31:35.184761   32853 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.crt.a34e53e1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.5 192.168.39.254]
	I0501 02:31:35.280636   32853 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.crt.a34e53e1 ...
	I0501 02:31:35.280666   32853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.crt.a34e53e1: {Name:mk4e096a3a58435245d20a768dcb5062bf6dfa7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:31:35.280838   32853 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.key.a34e53e1 ...
	I0501 02:31:35.280854   32853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.key.a34e53e1: {Name:mk47468eb32dd383aceebd71d208491de3b69700 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:31:35.280943   32853 certs.go:381] copying /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.crt.a34e53e1 -> /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.crt
	I0501 02:31:35.281017   32853 certs.go:385] copying /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.key.a34e53e1 -> /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.key
	I0501 02:31:35.281066   32853 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/proxy-client.key
	I0501 02:31:35.281080   32853 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/proxy-client.crt with IP's: []
	I0501 02:31:35.610854   32853 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/proxy-client.crt ...
	I0501 02:31:35.610887   32853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/proxy-client.crt: {Name:mk234aa7e8d9b93676c6aac1337f4aea75086303 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:31:35.611073   32853 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/proxy-client.key ...
	I0501 02:31:35.611088   32853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/proxy-client.key: {Name:mk9b5c622227e136431a0d879f84ae5015bc057c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
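The apiserver certificate generated above is issued for the IP SANs 10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.39.5 and 192.168.39.254. A self-contained sketch of issuing a certificate with those SANs via crypto/x509 (self-signed here for brevity, whereas minikube signs with its profile CA):

// san_cert.go - illustrative certificate with the IP SANs seen in the log above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.5"), net.ParseIP("192.168.39.254"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}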
	I0501 02:31:35.611187   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0501 02:31:35.611205   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0501 02:31:35.611215   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0501 02:31:35.611228   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0501 02:31:35.611241   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0501 02:31:35.611254   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0501 02:31:35.611266   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0501 02:31:35.611283   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0501 02:31:35.611330   32853 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem (1338 bytes)
	W0501 02:31:35.611367   32853 certs.go:480] ignoring /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724_empty.pem, impossibly tiny 0 bytes
	I0501 02:31:35.611377   32853 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem (1679 bytes)
	I0501 02:31:35.611397   32853 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem (1078 bytes)
	I0501 02:31:35.611420   32853 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem (1123 bytes)
	I0501 02:31:35.611442   32853 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem (1675 bytes)
	I0501 02:31:35.611480   32853 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem (1708 bytes)
	I0501 02:31:35.611507   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:31:35.611521   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem -> /usr/share/ca-certificates/20724.pem
	I0501 02:31:35.611533   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem -> /usr/share/ca-certificates/207242.pem
	I0501 02:31:35.612028   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0501 02:31:35.653543   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0501 02:31:35.682002   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0501 02:31:35.728901   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0501 02:31:35.756532   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0501 02:31:35.783290   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0501 02:31:35.810383   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0501 02:31:35.839806   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0501 02:31:35.867844   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0501 02:31:35.895750   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem --> /usr/share/ca-certificates/20724.pem (1338 bytes)
	I0501 02:31:35.921788   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem --> /usr/share/ca-certificates/207242.pem (1708 bytes)
	I0501 02:31:35.947447   32853 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0501 02:31:35.966086   32853 ssh_runner.go:195] Run: openssl version
	I0501 02:31:35.973809   32853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0501 02:31:35.986988   32853 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:31:35.992144   32853 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  1 02:08 /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:31:35.992201   32853 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:31:35.998663   32853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0501 02:31:36.011667   32853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20724.pem && ln -fs /usr/share/ca-certificates/20724.pem /etc/ssl/certs/20724.pem"
	I0501 02:31:36.025015   32853 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20724.pem
	I0501 02:31:36.030258   32853 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  1 02:20 /usr/share/ca-certificates/20724.pem
	I0501 02:31:36.030315   32853 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20724.pem
	I0501 02:31:36.036690   32853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20724.pem /etc/ssl/certs/51391683.0"
	I0501 02:31:36.050321   32853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/207242.pem && ln -fs /usr/share/ca-certificates/207242.pem /etc/ssl/certs/207242.pem"
	I0501 02:31:36.063932   32853 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/207242.pem
	I0501 02:31:36.069293   32853 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  1 02:20 /usr/share/ca-certificates/207242.pem
	I0501 02:31:36.069351   32853 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/207242.pem
	I0501 02:31:36.075994   32853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/207242.pem /etc/ssl/certs/3ec20f2e.0"
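The sequence above computes each certificate's OpenSSL subject hash and links /etc/ssl/certs/<hash>.0 to it so system TLS tooling can find the CA. A hypothetical Go equivalent of that step, shelling out to openssl just as the run itself does:

// cert_hash_link.go - subject-hash symlink for a CA file (mirrors "openssl x509 -hash" + "ln -fs").
package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941, as seen in the log
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // mimic "ln -fs": replace any existing link
	if err := os.Symlink(cert, link); err != nil {
		log.Fatal(err)
	}
}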
	I0501 02:31:36.089356   32853 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0501 02:31:36.094230   32853 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0501 02:31:36.094289   32853 kubeadm.go:391] StartCluster: {Name:ha-329926 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 Clust
erName:ha-329926 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.5 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 02:31:36.094363   32853 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0501 02:31:36.094444   32853 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0501 02:31:36.134470   32853 cri.go:89] found id: ""
	I0501 02:31:36.134545   32853 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0501 02:31:36.146249   32853 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0501 02:31:36.157221   32853 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0501 02:31:36.167977   32853 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0501 02:31:36.167994   32853 kubeadm.go:156] found existing configuration files:
	
	I0501 02:31:36.168026   32853 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0501 02:31:36.178309   32853 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0501 02:31:36.178363   32853 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0501 02:31:36.189152   32853 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0501 02:31:36.199616   32853 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0501 02:31:36.199667   32853 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0501 02:31:36.210379   32853 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0501 02:31:36.220906   32853 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0501 02:31:36.220954   32853 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0501 02:31:36.232914   32853 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0501 02:31:36.244646   32853 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0501 02:31:36.244693   32853 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0501 02:31:36.255737   32853 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0501 02:31:36.512862   32853 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0501 02:31:48.755507   32853 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0501 02:31:48.755566   32853 kubeadm.go:309] [preflight] Running pre-flight checks
	I0501 02:31:48.755657   32853 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0501 02:31:48.755766   32853 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0501 02:31:48.755902   32853 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0501 02:31:48.756000   32853 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0501 02:31:48.757306   32853 out.go:204]   - Generating certificates and keys ...
	I0501 02:31:48.757389   32853 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0501 02:31:48.757467   32853 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0501 02:31:48.757562   32853 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0501 02:31:48.757643   32853 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0501 02:31:48.757721   32853 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0501 02:31:48.757797   32853 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0501 02:31:48.757875   32853 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0501 02:31:48.758036   32853 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-329926 localhost] and IPs [192.168.39.5 127.0.0.1 ::1]
	I0501 02:31:48.758119   32853 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0501 02:31:48.758222   32853 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-329926 localhost] and IPs [192.168.39.5 127.0.0.1 ::1]
	I0501 02:31:48.758282   32853 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0501 02:31:48.758355   32853 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0501 02:31:48.758433   32853 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0501 02:31:48.758499   32853 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0501 02:31:48.758570   32853 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0501 02:31:48.758615   32853 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0501 02:31:48.758656   32853 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0501 02:31:48.758708   32853 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0501 02:31:48.758772   32853 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0501 02:31:48.758885   32853 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0501 02:31:48.758938   32853 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0501 02:31:48.760267   32853 out.go:204]   - Booting up control plane ...
	I0501 02:31:48.760382   32853 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0501 02:31:48.760490   32853 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0501 02:31:48.760544   32853 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0501 02:31:48.760626   32853 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0501 02:31:48.760693   32853 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0501 02:31:48.760758   32853 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0501 02:31:48.760935   32853 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0501 02:31:48.761027   32853 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0501 02:31:48.761120   32853 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.001253757s
	I0501 02:31:48.761214   32853 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0501 02:31:48.761289   32853 kubeadm.go:309] [api-check] The API server is healthy after 6.003159502s
	I0501 02:31:48.761425   32853 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0501 02:31:48.761542   32853 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0501 02:31:48.761627   32853 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0501 02:31:48.761821   32853 kubeadm.go:309] [mark-control-plane] Marking the node ha-329926 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0501 02:31:48.761882   32853 kubeadm.go:309] [bootstrap-token] Using token: ig5cw9.dz3x2efs4246n26l
	I0501 02:31:48.763213   32853 out.go:204]   - Configuring RBAC rules ...
	I0501 02:31:48.763314   32853 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0501 02:31:48.763416   32853 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0501 02:31:48.763542   32853 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0501 02:31:48.763649   32853 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0501 02:31:48.763771   32853 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0501 02:31:48.763903   32853 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0501 02:31:48.764014   32853 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0501 02:31:48.764060   32853 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0501 02:31:48.764132   32853 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0501 02:31:48.764140   32853 kubeadm.go:309] 
	I0501 02:31:48.764226   32853 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0501 02:31:48.764248   32853 kubeadm.go:309] 
	I0501 02:31:48.764346   32853 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0501 02:31:48.764356   32853 kubeadm.go:309] 
	I0501 02:31:48.764401   32853 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0501 02:31:48.764479   32853 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0501 02:31:48.764532   32853 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0501 02:31:48.764538   32853 kubeadm.go:309] 
	I0501 02:31:48.764582   32853 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0501 02:31:48.764588   32853 kubeadm.go:309] 
	I0501 02:31:48.764636   32853 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0501 02:31:48.764644   32853 kubeadm.go:309] 
	I0501 02:31:48.764724   32853 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0501 02:31:48.764814   32853 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0501 02:31:48.764876   32853 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0501 02:31:48.764883   32853 kubeadm.go:309] 
	I0501 02:31:48.764950   32853 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0501 02:31:48.765012   32853 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0501 02:31:48.765019   32853 kubeadm.go:309] 
	I0501 02:31:48.765089   32853 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token ig5cw9.dz3x2efs4246n26l \
	I0501 02:31:48.765173   32853 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:bd94cc6acfb95fe36f70b12c6a1f2980601b8235e7c4dea65cebdf35eb514754 \
	I0501 02:31:48.765194   32853 kubeadm.go:309] 	--control-plane 
	I0501 02:31:48.765200   32853 kubeadm.go:309] 
	I0501 02:31:48.765270   32853 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0501 02:31:48.765278   32853 kubeadm.go:309] 
	I0501 02:31:48.765343   32853 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token ig5cw9.dz3x2efs4246n26l \
	I0501 02:31:48.765445   32853 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:bd94cc6acfb95fe36f70b12c6a1f2980601b8235e7c4dea65cebdf35eb514754 
	I0501 02:31:48.765465   32853 cni.go:84] Creating CNI manager for ""
	I0501 02:31:48.765471   32853 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0501 02:31:48.766782   32853 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0501 02:31:48.767793   32853 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0501 02:31:48.773813   32853 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0501 02:31:48.773830   32853 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0501 02:31:48.796832   32853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0501 02:31:49.171787   32853 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0501 02:31:49.171873   32853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:31:49.171885   32853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-329926 minikube.k8s.io/updated_at=2024_05_01T02_31_49_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e minikube.k8s.io/name=ha-329926 minikube.k8s.io/primary=true
	I0501 02:31:49.199739   32853 ops.go:34] apiserver oom_adj: -16
	I0501 02:31:49.397251   32853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:31:49.898132   32853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:31:50.398273   32853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:31:50.897522   32853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:31:51.397933   32853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:31:51.897587   32853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:31:52.398178   32853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:31:52.898135   32853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:31:53.397977   32853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:31:53.897989   32853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:31:54.398231   32853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:31:54.897911   32853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:31:55.398096   32853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:31:55.897405   32853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:31:56.397928   32853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:31:56.897483   32853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:31:57.397649   32853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:31:57.897882   32853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:31:58.398240   32853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:31:58.897674   32853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:31:59.397723   32853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:31:59.898128   32853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:32:00.397427   32853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:32:00.897398   32853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:32:01.398293   32853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:32:01.500100   32853 kubeadm.go:1107] duration metric: took 12.328290279s to wait for elevateKubeSystemPrivileges
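The burst of "kubectl get sa default" calls above is a retry loop: minikube polls until the default ServiceAccount exists before granting kube-system privileges. A minimal sketch of the same poll (standalone, paths copied from the log, timeout chosen arbitrarily):

// wait_default_sa.go - poll every 500ms until "kubectl get sa default" succeeds.
package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.30.0/kubectl",
			"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig")
		if err := cmd.Run(); err == nil {
			log.Println("default service account is present")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	log.Fatal("timed out waiting for the default service account")
}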
	W0501 02:32:01.500160   32853 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0501 02:32:01.500170   32853 kubeadm.go:393] duration metric: took 25.405886252s to StartCluster
	I0501 02:32:01.500193   32853 settings.go:142] acquiring lock: {Name:mkcfe08bcadd45d99c00ed1eaf4e9c226a489fe2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:32:01.500290   32853 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18779-13391/kubeconfig
	I0501 02:32:01.500970   32853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/kubeconfig: {Name:mk5d1131557f885800877622151c0915337cb23a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:32:01.501171   32853 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.5 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0501 02:32:01.501187   32853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0501 02:32:01.501201   32853 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0501 02:32:01.501279   32853 addons.go:69] Setting storage-provisioner=true in profile "ha-329926"
	I0501 02:32:01.501194   32853 start.go:240] waiting for startup goroutines ...
	I0501 02:32:01.501302   32853 addons.go:69] Setting default-storageclass=true in profile "ha-329926"
	I0501 02:32:01.501314   32853 addons.go:234] Setting addon storage-provisioner=true in "ha-329926"
	I0501 02:32:01.501331   32853 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-329926"
	I0501 02:32:01.501347   32853 host.go:66] Checking if "ha-329926" exists ...
	I0501 02:32:01.501398   32853 config.go:182] Loaded profile config "ha-329926": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 02:32:01.501782   32853 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:32:01.501804   32853 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:32:01.501785   32853 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:32:01.501919   32853 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:32:01.517256   32853 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38955
	I0501 02:32:01.517710   32853 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:32:01.518224   32853 main.go:141] libmachine: Using API Version  1
	I0501 02:32:01.518244   32853 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:32:01.518254   32853 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39407
	I0501 02:32:01.518608   32853 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:32:01.518693   32853 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:32:01.518781   32853 main.go:141] libmachine: (ha-329926) Calling .GetState
	I0501 02:32:01.519264   32853 main.go:141] libmachine: Using API Version  1
	I0501 02:32:01.519290   32853 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:32:01.519642   32853 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:32:01.520167   32853 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:32:01.520195   32853 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:32:01.520971   32853 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18779-13391/kubeconfig
	I0501 02:32:01.521213   32853 kapi.go:59] client config for ha-329926: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/client.crt", KeyFile:"/home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/client.key", CAFile:"/home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02700), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0501 02:32:01.521662   32853 cert_rotation.go:137] Starting client certificate rotation controller
	I0501 02:32:01.521833   32853 addons.go:234] Setting addon default-storageclass=true in "ha-329926"
	I0501 02:32:01.521864   32853 host.go:66] Checking if "ha-329926" exists ...
	I0501 02:32:01.522122   32853 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:32:01.522169   32853 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:32:01.535997   32853 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44213
	I0501 02:32:01.536512   32853 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:32:01.536995   32853 main.go:141] libmachine: Using API Version  1
	I0501 02:32:01.537021   32853 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:32:01.537377   32853 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:32:01.537390   32853 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45103
	I0501 02:32:01.537589   32853 main.go:141] libmachine: (ha-329926) Calling .GetState
	I0501 02:32:01.537774   32853 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:32:01.538266   32853 main.go:141] libmachine: Using API Version  1
	I0501 02:32:01.538289   32853 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:32:01.538674   32853 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:32:01.539243   32853 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:32:01.539274   32853 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:32:01.539513   32853 main.go:141] libmachine: (ha-329926) Calling .DriverName
	I0501 02:32:01.541192   32853 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 02:32:01.542333   32853 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0501 02:32:01.542350   32853 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0501 02:32:01.542363   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHHostname
	I0501 02:32:01.545583   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:32:01.546102   32853 main.go:141] libmachine: (ha-329926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:43", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:31:18 +0000 UTC Type:0 Mac:52:54:00:ce:d8:43 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-329926 Clientid:01:52:54:00:ce:d8:43}
	I0501 02:32:01.546127   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:32:01.546289   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHPort
	I0501 02:32:01.546480   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHKeyPath
	I0501 02:32:01.546654   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHUsername
	I0501 02:32:01.546813   32853 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926/id_rsa Username:docker}
	I0501 02:32:01.555702   32853 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42311
	I0501 02:32:01.556148   32853 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:32:01.556629   32853 main.go:141] libmachine: Using API Version  1
	I0501 02:32:01.556650   32853 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:32:01.556959   32853 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:32:01.557174   32853 main.go:141] libmachine: (ha-329926) Calling .GetState
	I0501 02:32:01.558844   32853 main.go:141] libmachine: (ha-329926) Calling .DriverName
	I0501 02:32:01.559088   32853 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0501 02:32:01.559107   32853 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0501 02:32:01.559125   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHHostname
	I0501 02:32:01.561835   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:32:01.562222   32853 main.go:141] libmachine: (ha-329926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:43", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:31:18 +0000 UTC Type:0 Mac:52:54:00:ce:d8:43 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-329926 Clientid:01:52:54:00:ce:d8:43}
	I0501 02:32:01.562250   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:32:01.562441   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHPort
	I0501 02:32:01.562623   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHKeyPath
	I0501 02:32:01.562773   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHUsername
	I0501 02:32:01.562925   32853 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926/id_rsa Username:docker}
	I0501 02:32:01.751537   32853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0501 02:32:01.785259   32853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0501 02:32:01.792036   32853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0501 02:32:02.564393   32853 main.go:141] libmachine: Making call to close driver server
	I0501 02:32:02.564419   32853 main.go:141] libmachine: (ha-329926) Calling .Close
	I0501 02:32:02.564434   32853 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
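The sed pipeline a few lines above (the coredns ConfigMap replace) splices a hosts block into the Corefile so that host.minikube.internal resolves to the host gateway 192.168.39.1, and adds a log directive ahead of errors. Reconstructed from that command only (other Corefile directives elided, not copied from the cluster), the patched fragment should look roughly like:

        .:53 {
            log
            errors
            ...
            hosts {
               192.168.39.1 host.minikube.internal
               fallthrough
            }
            forward . /etc/resolv.conf
            ...
        }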
	I0501 02:32:02.564490   32853 main.go:141] libmachine: Making call to close driver server
	I0501 02:32:02.564503   32853 main.go:141] libmachine: (ha-329926) Calling .Close
	I0501 02:32:02.564715   32853 main.go:141] libmachine: Successfully made call to close driver server
	I0501 02:32:02.564733   32853 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 02:32:02.564741   32853 main.go:141] libmachine: Making call to close driver server
	I0501 02:32:02.564748   32853 main.go:141] libmachine: (ha-329926) Calling .Close
	I0501 02:32:02.564850   32853 main.go:141] libmachine: Successfully made call to close driver server
	I0501 02:32:02.564860   32853 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 02:32:02.564868   32853 main.go:141] libmachine: Making call to close driver server
	I0501 02:32:02.564876   32853 main.go:141] libmachine: (ha-329926) Calling .Close
	I0501 02:32:02.564993   32853 main.go:141] libmachine: Successfully made call to close driver server
	I0501 02:32:02.565008   32853 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 02:32:02.565110   32853 main.go:141] libmachine: (ha-329926) DBG | Closing plugin on server side
	I0501 02:32:02.565109   32853 main.go:141] libmachine: Successfully made call to close driver server
	I0501 02:32:02.565142   32853 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 02:32:02.565144   32853 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0501 02:32:02.565153   32853 round_trippers.go:469] Request Headers:
	I0501 02:32:02.565164   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:32:02.565169   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:32:02.581629   32853 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0501 02:32:02.582188   32853 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0501 02:32:02.582205   32853 round_trippers.go:469] Request Headers:
	I0501 02:32:02.582212   32853 round_trippers.go:473]     Content-Type: application/json
	I0501 02:32:02.582215   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:32:02.582218   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:32:02.586925   32853 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:32:02.587164   32853 main.go:141] libmachine: Making call to close driver server
	I0501 02:32:02.587180   32853 main.go:141] libmachine: (ha-329926) Calling .Close
	I0501 02:32:02.587421   32853 main.go:141] libmachine: Successfully made call to close driver server
	I0501 02:32:02.587439   32853 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 02:32:02.588760   32853 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0501 02:32:02.589714   32853 addons.go:505] duration metric: took 1.088515167s for enable addons: enabled=[storage-provisioner default-storageclass]
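The GET/PUT pair against /apis/storage.k8s.io/v1/storageclasses/standard just above is the default-storageclass addon updating the existing "standard" StorageClass. Assuming the PUT sets the usual is-default-class annotation (an assumption; the request body is not shown in the log), a minimal client-go sketch of that kind of update follows; the kubeconfig path is a placeholder and this is not minikube's own code:

        package main

        import (
                "context"
                "log"

                metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
                "k8s.io/client-go/kubernetes"
                "k8s.io/client-go/tools/clientcmd"
        )

        func main() {
                // Build a clientset from a kubeconfig (placeholder path).
                cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
                if err != nil {
                        log.Fatal(err)
                }
                cs, err := kubernetes.NewForConfig(cfg)
                if err != nil {
                        log.Fatal(err)
                }

                ctx := context.Background()
                // GET the "standard" StorageClass, as in the round_trippers lines.
                sc, err := cs.StorageV1().StorageClasses().Get(ctx, "standard", metav1.GetOptions{})
                if err != nil {
                        log.Fatal(err)
                }
                if sc.Annotations == nil {
                        sc.Annotations = map[string]string{}
                }
                // Mark it as the cluster default (assumed to be what the PUT changes).
                sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
                if _, err := cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{}); err != nil {
                        log.Fatal(err)
                }
        }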
	I0501 02:32:02.589748   32853 start.go:245] waiting for cluster config update ...
	I0501 02:32:02.589759   32853 start.go:254] writing updated cluster config ...
	I0501 02:32:02.591174   32853 out.go:177] 
	I0501 02:32:02.592511   32853 config.go:182] Loaded profile config "ha-329926": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 02:32:02.592585   32853 profile.go:143] Saving config to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/config.json ...
	I0501 02:32:02.594055   32853 out.go:177] * Starting "ha-329926-m02" control-plane node in "ha-329926" cluster
	I0501 02:32:02.595029   32853 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0501 02:32:02.595055   32853 cache.go:56] Caching tarball of preloaded images
	I0501 02:32:02.595143   32853 preload.go:173] Found /home/jenkins/minikube-integration/18779-13391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0501 02:32:02.595159   32853 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0501 02:32:02.595239   32853 profile.go:143] Saving config to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/config.json ...
	I0501 02:32:02.595457   32853 start.go:360] acquireMachinesLock for ha-329926-m02: {Name:mkc4548e5afe1d4b0a833f6c522103562b0cefff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0501 02:32:02.595506   32853 start.go:364] duration metric: took 27.6µs to acquireMachinesLock for "ha-329926-m02"
	I0501 02:32:02.595540   32853 start.go:93] Provisioning new machine with config: &{Name:ha-329926 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-329926 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.5 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0501 02:32:02.595624   32853 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0501 02:32:02.597000   32853 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0501 02:32:02.597096   32853 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:32:02.597126   32853 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:32:02.611846   32853 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43957
	I0501 02:32:02.612237   32853 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:32:02.612731   32853 main.go:141] libmachine: Using API Version  1
	I0501 02:32:02.612754   32853 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:32:02.613047   32853 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:32:02.613230   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetMachineName
	I0501 02:32:02.613367   32853 main.go:141] libmachine: (ha-329926-m02) Calling .DriverName
	I0501 02:32:02.613524   32853 start.go:159] libmachine.API.Create for "ha-329926" (driver="kvm2")
	I0501 02:32:02.613551   32853 client.go:168] LocalClient.Create starting
	I0501 02:32:02.613580   32853 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem
	I0501 02:32:02.613611   32853 main.go:141] libmachine: Decoding PEM data...
	I0501 02:32:02.613625   32853 main.go:141] libmachine: Parsing certificate...
	I0501 02:32:02.613671   32853 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem
	I0501 02:32:02.613688   32853 main.go:141] libmachine: Decoding PEM data...
	I0501 02:32:02.613698   32853 main.go:141] libmachine: Parsing certificate...
	I0501 02:32:02.613716   32853 main.go:141] libmachine: Running pre-create checks...
	I0501 02:32:02.613724   32853 main.go:141] libmachine: (ha-329926-m02) Calling .PreCreateCheck
	I0501 02:32:02.613900   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetConfigRaw
	I0501 02:32:02.614249   32853 main.go:141] libmachine: Creating machine...
	I0501 02:32:02.614262   32853 main.go:141] libmachine: (ha-329926-m02) Calling .Create
	I0501 02:32:02.614381   32853 main.go:141] libmachine: (ha-329926-m02) Creating KVM machine...
	I0501 02:32:02.615568   32853 main.go:141] libmachine: (ha-329926-m02) DBG | found existing default KVM network
	I0501 02:32:02.615712   32853 main.go:141] libmachine: (ha-329926-m02) DBG | found existing private KVM network mk-ha-329926
	I0501 02:32:02.615805   32853 main.go:141] libmachine: (ha-329926-m02) Setting up store path in /home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m02 ...
	I0501 02:32:02.615836   32853 main.go:141] libmachine: (ha-329926-m02) Building disk image from file:///home/jenkins/minikube-integration/18779-13391/.minikube/cache/iso/amd64/minikube-v1.33.0-1714498396-18779-amd64.iso
	I0501 02:32:02.615905   32853 main.go:141] libmachine: (ha-329926-m02) DBG | I0501 02:32:02.615811   33274 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18779-13391/.minikube
	I0501 02:32:02.615996   32853 main.go:141] libmachine: (ha-329926-m02) Downloading /home/jenkins/minikube-integration/18779-13391/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18779-13391/.minikube/cache/iso/amd64/minikube-v1.33.0-1714498396-18779-amd64.iso...
	I0501 02:32:02.826831   32853 main.go:141] libmachine: (ha-329926-m02) DBG | I0501 02:32:02.826712   33274 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m02/id_rsa...
	I0501 02:32:02.959121   32853 main.go:141] libmachine: (ha-329926-m02) DBG | I0501 02:32:02.958954   33274 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m02/ha-329926-m02.rawdisk...
	I0501 02:32:02.959153   32853 main.go:141] libmachine: (ha-329926-m02) DBG | Writing magic tar header
	I0501 02:32:02.959179   32853 main.go:141] libmachine: (ha-329926-m02) DBG | Writing SSH key tar header
	I0501 02:32:02.959194   32853 main.go:141] libmachine: (ha-329926-m02) DBG | I0501 02:32:02.959067   33274 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m02 ...
	I0501 02:32:02.959239   32853 main.go:141] libmachine: (ha-329926-m02) Setting executable bit set on /home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m02 (perms=drwx------)
	I0501 02:32:02.959258   32853 main.go:141] libmachine: (ha-329926-m02) Setting executable bit set on /home/jenkins/minikube-integration/18779-13391/.minikube/machines (perms=drwxr-xr-x)
	I0501 02:32:02.959266   32853 main.go:141] libmachine: (ha-329926-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m02
	I0501 02:32:02.959279   32853 main.go:141] libmachine: (ha-329926-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18779-13391/.minikube/machines
	I0501 02:32:02.959288   32853 main.go:141] libmachine: (ha-329926-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18779-13391/.minikube
	I0501 02:32:02.959303   32853 main.go:141] libmachine: (ha-329926-m02) Setting executable bit set on /home/jenkins/minikube-integration/18779-13391/.minikube (perms=drwxr-xr-x)
	I0501 02:32:02.959315   32853 main.go:141] libmachine: (ha-329926-m02) Setting executable bit set on /home/jenkins/minikube-integration/18779-13391 (perms=drwxrwxr-x)
	I0501 02:32:02.959325   32853 main.go:141] libmachine: (ha-329926-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0501 02:32:02.959336   32853 main.go:141] libmachine: (ha-329926-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0501 02:32:02.959341   32853 main.go:141] libmachine: (ha-329926-m02) Creating domain...
	I0501 02:32:02.959353   32853 main.go:141] libmachine: (ha-329926-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18779-13391
	I0501 02:32:02.959361   32853 main.go:141] libmachine: (ha-329926-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0501 02:32:02.959372   32853 main.go:141] libmachine: (ha-329926-m02) DBG | Checking permissions on dir: /home/jenkins
	I0501 02:32:02.959402   32853 main.go:141] libmachine: (ha-329926-m02) DBG | Checking permissions on dir: /home
	I0501 02:32:02.959419   32853 main.go:141] libmachine: (ha-329926-m02) DBG | Skipping /home - not owner
	I0501 02:32:02.960337   32853 main.go:141] libmachine: (ha-329926-m02) define libvirt domain using xml: 
	I0501 02:32:02.960360   32853 main.go:141] libmachine: (ha-329926-m02) <domain type='kvm'>
	I0501 02:32:02.960371   32853 main.go:141] libmachine: (ha-329926-m02)   <name>ha-329926-m02</name>
	I0501 02:32:02.960384   32853 main.go:141] libmachine: (ha-329926-m02)   <memory unit='MiB'>2200</memory>
	I0501 02:32:02.960396   32853 main.go:141] libmachine: (ha-329926-m02)   <vcpu>2</vcpu>
	I0501 02:32:02.960402   32853 main.go:141] libmachine: (ha-329926-m02)   <features>
	I0501 02:32:02.960414   32853 main.go:141] libmachine: (ha-329926-m02)     <acpi/>
	I0501 02:32:02.960424   32853 main.go:141] libmachine: (ha-329926-m02)     <apic/>
	I0501 02:32:02.960431   32853 main.go:141] libmachine: (ha-329926-m02)     <pae/>
	I0501 02:32:02.960440   32853 main.go:141] libmachine: (ha-329926-m02)     
	I0501 02:32:02.960450   32853 main.go:141] libmachine: (ha-329926-m02)   </features>
	I0501 02:32:02.960464   32853 main.go:141] libmachine: (ha-329926-m02)   <cpu mode='host-passthrough'>
	I0501 02:32:02.960475   32853 main.go:141] libmachine: (ha-329926-m02)   
	I0501 02:32:02.960484   32853 main.go:141] libmachine: (ha-329926-m02)   </cpu>
	I0501 02:32:02.960493   32853 main.go:141] libmachine: (ha-329926-m02)   <os>
	I0501 02:32:02.960511   32853 main.go:141] libmachine: (ha-329926-m02)     <type>hvm</type>
	I0501 02:32:02.960528   32853 main.go:141] libmachine: (ha-329926-m02)     <boot dev='cdrom'/>
	I0501 02:32:02.960570   32853 main.go:141] libmachine: (ha-329926-m02)     <boot dev='hd'/>
	I0501 02:32:02.960606   32853 main.go:141] libmachine: (ha-329926-m02)     <bootmenu enable='no'/>
	I0501 02:32:02.960620   32853 main.go:141] libmachine: (ha-329926-m02)   </os>
	I0501 02:32:02.960634   32853 main.go:141] libmachine: (ha-329926-m02)   <devices>
	I0501 02:32:02.960651   32853 main.go:141] libmachine: (ha-329926-m02)     <disk type='file' device='cdrom'>
	I0501 02:32:02.960667   32853 main.go:141] libmachine: (ha-329926-m02)       <source file='/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m02/boot2docker.iso'/>
	I0501 02:32:02.960681   32853 main.go:141] libmachine: (ha-329926-m02)       <target dev='hdc' bus='scsi'/>
	I0501 02:32:02.960697   32853 main.go:141] libmachine: (ha-329926-m02)       <readonly/>
	I0501 02:32:02.960722   32853 main.go:141] libmachine: (ha-329926-m02)     </disk>
	I0501 02:32:02.960734   32853 main.go:141] libmachine: (ha-329926-m02)     <disk type='file' device='disk'>
	I0501 02:32:02.960748   32853 main.go:141] libmachine: (ha-329926-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0501 02:32:02.960763   32853 main.go:141] libmachine: (ha-329926-m02)       <source file='/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m02/ha-329926-m02.rawdisk'/>
	I0501 02:32:02.960783   32853 main.go:141] libmachine: (ha-329926-m02)       <target dev='hda' bus='virtio'/>
	I0501 02:32:02.960795   32853 main.go:141] libmachine: (ha-329926-m02)     </disk>
	I0501 02:32:02.960810   32853 main.go:141] libmachine: (ha-329926-m02)     <interface type='network'>
	I0501 02:32:02.960823   32853 main.go:141] libmachine: (ha-329926-m02)       <source network='mk-ha-329926'/>
	I0501 02:32:02.960835   32853 main.go:141] libmachine: (ha-329926-m02)       <model type='virtio'/>
	I0501 02:32:02.960847   32853 main.go:141] libmachine: (ha-329926-m02)     </interface>
	I0501 02:32:02.960855   32853 main.go:141] libmachine: (ha-329926-m02)     <interface type='network'>
	I0501 02:32:02.960868   32853 main.go:141] libmachine: (ha-329926-m02)       <source network='default'/>
	I0501 02:32:02.960884   32853 main.go:141] libmachine: (ha-329926-m02)       <model type='virtio'/>
	I0501 02:32:02.960897   32853 main.go:141] libmachine: (ha-329926-m02)     </interface>
	I0501 02:32:02.960907   32853 main.go:141] libmachine: (ha-329926-m02)     <serial type='pty'>
	I0501 02:32:02.960918   32853 main.go:141] libmachine: (ha-329926-m02)       <target port='0'/>
	I0501 02:32:02.960929   32853 main.go:141] libmachine: (ha-329926-m02)     </serial>
	I0501 02:32:02.960940   32853 main.go:141] libmachine: (ha-329926-m02)     <console type='pty'>
	I0501 02:32:02.960951   32853 main.go:141] libmachine: (ha-329926-m02)       <target type='serial' port='0'/>
	I0501 02:32:02.960960   32853 main.go:141] libmachine: (ha-329926-m02)     </console>
	I0501 02:32:02.960972   32853 main.go:141] libmachine: (ha-329926-m02)     <rng model='virtio'>
	I0501 02:32:02.960985   32853 main.go:141] libmachine: (ha-329926-m02)       <backend model='random'>/dev/random</backend>
	I0501 02:32:02.960998   32853 main.go:141] libmachine: (ha-329926-m02)     </rng>
	I0501 02:32:02.961008   32853 main.go:141] libmachine: (ha-329926-m02)     
	I0501 02:32:02.961017   32853 main.go:141] libmachine: (ha-329926-m02)     
	I0501 02:32:02.961025   32853 main.go:141] libmachine: (ha-329926-m02)   </devices>
	I0501 02:32:02.961043   32853 main.go:141] libmachine: (ha-329926-m02) </domain>
	I0501 02:32:02.961054   32853 main.go:141] libmachine: (ha-329926-m02) 
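The block above is the raw libvirt domain XML for the new ha-329926-m02 VM (CD-ROM boot ISO, raw disk, two virtio NICs on mk-ha-329926 and the default network, serial console, virtio RNG). For orientation, here is a minimal, hypothetical sketch of defining and starting such a domain with the libvirt Go bindings (libvirt.org/go/libvirt); it is not minikube's kvm2 driver code, and the XML file name is a placeholder:

        package main

        import (
                "log"
                "os"

                libvirt "libvirt.org/go/libvirt"
        )

        func main() {
                // Connect to the same system URI the config dump shows (KVMQemuURI).
                conn, err := libvirt.NewConnect("qemu:///system")
                if err != nil {
                        log.Fatal(err)
                }
                defer conn.Close()

                // The <domain> definition, e.g. the XML printed in the log above.
                xml, err := os.ReadFile("ha-329926-m02.xml")
                if err != nil {
                        log.Fatal(err)
                }

                // Persistently define the domain, then boot it ("Creating domain...").
                dom, err := conn.DomainDefineXML(string(xml))
                if err != nil {
                        log.Fatal(err)
                }
                defer dom.Free()

                if err := dom.Create(); err != nil {
                        log.Fatal(err)
                }
        }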
	I0501 02:32:02.967307   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined MAC address 52:54:00:6f:35:48 in network default
	I0501 02:32:02.967939   32853 main.go:141] libmachine: (ha-329926-m02) Ensuring networks are active...
	I0501 02:32:02.967959   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:02.968665   32853 main.go:141] libmachine: (ha-329926-m02) Ensuring network default is active
	I0501 02:32:02.968978   32853 main.go:141] libmachine: (ha-329926-m02) Ensuring network mk-ha-329926 is active
	I0501 02:32:02.969344   32853 main.go:141] libmachine: (ha-329926-m02) Getting domain xml...
	I0501 02:32:02.970049   32853 main.go:141] libmachine: (ha-329926-m02) Creating domain...
	I0501 02:32:04.175671   32853 main.go:141] libmachine: (ha-329926-m02) Waiting to get IP...
	I0501 02:32:04.176721   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:04.177224   32853 main.go:141] libmachine: (ha-329926-m02) DBG | unable to find current IP address of domain ha-329926-m02 in network mk-ha-329926
	I0501 02:32:04.177270   32853 main.go:141] libmachine: (ha-329926-m02) DBG | I0501 02:32:04.177210   33274 retry.go:31] will retry after 291.477557ms: waiting for machine to come up
	I0501 02:32:04.470804   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:04.471377   32853 main.go:141] libmachine: (ha-329926-m02) DBG | unable to find current IP address of domain ha-329926-m02 in network mk-ha-329926
	I0501 02:32:04.471398   32853 main.go:141] libmachine: (ha-329926-m02) DBG | I0501 02:32:04.471334   33274 retry.go:31] will retry after 247.398331ms: waiting for machine to come up
	I0501 02:32:04.720554   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:04.720929   32853 main.go:141] libmachine: (ha-329926-m02) DBG | unable to find current IP address of domain ha-329926-m02 in network mk-ha-329926
	I0501 02:32:04.720959   32853 main.go:141] libmachine: (ha-329926-m02) DBG | I0501 02:32:04.720886   33274 retry.go:31] will retry after 470.735543ms: waiting for machine to come up
	I0501 02:32:05.193520   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:05.193999   32853 main.go:141] libmachine: (ha-329926-m02) DBG | unable to find current IP address of domain ha-329926-m02 in network mk-ha-329926
	I0501 02:32:05.194029   32853 main.go:141] libmachine: (ha-329926-m02) DBG | I0501 02:32:05.193939   33274 retry.go:31] will retry after 376.557887ms: waiting for machine to come up
	I0501 02:32:05.572714   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:05.573167   32853 main.go:141] libmachine: (ha-329926-m02) DBG | unable to find current IP address of domain ha-329926-m02 in network mk-ha-329926
	I0501 02:32:05.573199   32853 main.go:141] libmachine: (ha-329926-m02) DBG | I0501 02:32:05.573101   33274 retry.go:31] will retry after 716.277143ms: waiting for machine to come up
	I0501 02:32:06.291055   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:06.291486   32853 main.go:141] libmachine: (ha-329926-m02) DBG | unable to find current IP address of domain ha-329926-m02 in network mk-ha-329926
	I0501 02:32:06.291515   32853 main.go:141] libmachine: (ha-329926-m02) DBG | I0501 02:32:06.291451   33274 retry.go:31] will retry after 673.420155ms: waiting for machine to come up
	I0501 02:32:06.966230   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:06.966667   32853 main.go:141] libmachine: (ha-329926-m02) DBG | unable to find current IP address of domain ha-329926-m02 in network mk-ha-329926
	I0501 02:32:06.966700   32853 main.go:141] libmachine: (ha-329926-m02) DBG | I0501 02:32:06.966625   33274 retry.go:31] will retry after 763.13328ms: waiting for machine to come up
	I0501 02:32:07.732579   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:07.733018   32853 main.go:141] libmachine: (ha-329926-m02) DBG | unable to find current IP address of domain ha-329926-m02 in network mk-ha-329926
	I0501 02:32:07.733039   32853 main.go:141] libmachine: (ha-329926-m02) DBG | I0501 02:32:07.732998   33274 retry.go:31] will retry after 1.123440141s: waiting for machine to come up
	I0501 02:32:08.858360   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:08.858874   32853 main.go:141] libmachine: (ha-329926-m02) DBG | unable to find current IP address of domain ha-329926-m02 in network mk-ha-329926
	I0501 02:32:08.858907   32853 main.go:141] libmachine: (ha-329926-m02) DBG | I0501 02:32:08.858830   33274 retry.go:31] will retry after 1.476597499s: waiting for machine to come up
	I0501 02:32:10.337562   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:10.337956   32853 main.go:141] libmachine: (ha-329926-m02) DBG | unable to find current IP address of domain ha-329926-m02 in network mk-ha-329926
	I0501 02:32:10.337985   32853 main.go:141] libmachine: (ha-329926-m02) DBG | I0501 02:32:10.337918   33274 retry.go:31] will retry after 2.200841931s: waiting for machine to come up
	I0501 02:32:12.540585   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:12.541052   32853 main.go:141] libmachine: (ha-329926-m02) DBG | unable to find current IP address of domain ha-329926-m02 in network mk-ha-329926
	I0501 02:32:12.541103   32853 main.go:141] libmachine: (ha-329926-m02) DBG | I0501 02:32:12.541026   33274 retry.go:31] will retry after 2.547827016s: waiting for machine to come up
	I0501 02:32:15.091592   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:15.092126   32853 main.go:141] libmachine: (ha-329926-m02) DBG | unable to find current IP address of domain ha-329926-m02 in network mk-ha-329926
	I0501 02:32:15.092158   32853 main.go:141] libmachine: (ha-329926-m02) DBG | I0501 02:32:15.092067   33274 retry.go:31] will retry after 2.718478189s: waiting for machine to come up
	I0501 02:32:17.812506   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:17.812877   32853 main.go:141] libmachine: (ha-329926-m02) DBG | unable to find current IP address of domain ha-329926-m02 in network mk-ha-329926
	I0501 02:32:17.812903   32853 main.go:141] libmachine: (ha-329926-m02) DBG | I0501 02:32:17.812839   33274 retry.go:31] will retry after 3.715125165s: waiting for machine to come up
	I0501 02:32:21.532524   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:21.533034   32853 main.go:141] libmachine: (ha-329926-m02) DBG | unable to find current IP address of domain ha-329926-m02 in network mk-ha-329926
	I0501 02:32:21.533063   32853 main.go:141] libmachine: (ha-329926-m02) DBG | I0501 02:32:21.533000   33274 retry.go:31] will retry after 3.412402033s: waiting for machine to come up
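The repeated "unable to find current IP address ... will retry after ..." lines are a polling loop: ask libvirt for the domain's DHCP lease, and if no address has appeared yet, sleep for a growing, jittered interval until a deadline passes. A generic sketch of that shape (not minikube's retry.go; waitForIP and the lookup callback are invented names):

        package main

        import (
                "errors"
                "fmt"
                "math/rand"
                "time"
        )

        // waitForIP polls lookup until it reports an address or the timeout passes,
        // sleeping for an increasing, jittered interval between attempts.
        func waitForIP(lookup func() (string, bool), timeout time.Duration) (string, error) {
                deadline := time.Now().Add(timeout)
                delay := 250 * time.Millisecond
                for time.Now().Before(deadline) {
                        if ip, ok := lookup(); ok {
                                return ip, nil
                        }
                        sleep := delay + time.Duration(rand.Int63n(int64(delay)))
                        fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
                        time.Sleep(sleep)
                        delay *= 2
                }
                return "", errors.New("timed out waiting for an IP address")
        }

        func main() {
                attempts := 0
                ip, err := waitForIP(func() (string, bool) {
                        attempts++
                        if attempts < 5 {
                                return "", false // no DHCP lease for the MAC yet
                        }
                        return "192.168.39.79", true
                }, 2*time.Minute)
                fmt.Println(ip, err)
        }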
	I0501 02:32:24.948532   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:24.948969   32853 main.go:141] libmachine: (ha-329926-m02) Found IP for machine: 192.168.39.79
	I0501 02:32:24.948994   32853 main.go:141] libmachine: (ha-329926-m02) Reserving static IP address...
	I0501 02:32:24.949009   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has current primary IP address 192.168.39.79 and MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:24.949344   32853 main.go:141] libmachine: (ha-329926-m02) DBG | unable to find host DHCP lease matching {name: "ha-329926-m02", mac: "52:54:00:92:16:5f", ip: "192.168.39.79"} in network mk-ha-329926
	I0501 02:32:25.021976   32853 main.go:141] libmachine: (ha-329926-m02) DBG | Getting to WaitForSSH function...
	I0501 02:32:25.022006   32853 main.go:141] libmachine: (ha-329926-m02) Reserved static IP address: 192.168.39.79
	I0501 02:32:25.022019   32853 main.go:141] libmachine: (ha-329926-m02) Waiting for SSH to be available...
	I0501 02:32:25.024815   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:25.025333   32853 main.go:141] libmachine: (ha-329926-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:92:16:5f", ip: ""} in network mk-ha-329926
	I0501 02:32:25.025376   32853 main.go:141] libmachine: (ha-329926-m02) DBG | unable to find defined IP address of network mk-ha-329926 interface with MAC address 52:54:00:92:16:5f
	I0501 02:32:25.025416   32853 main.go:141] libmachine: (ha-329926-m02) DBG | Using SSH client type: external
	I0501 02:32:25.025449   32853 main.go:141] libmachine: (ha-329926-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m02/id_rsa (-rw-------)
	I0501 02:32:25.025483   32853 main.go:141] libmachine: (ha-329926-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0501 02:32:25.025498   32853 main.go:141] libmachine: (ha-329926-m02) DBG | About to run SSH command:
	I0501 02:32:25.025512   32853 main.go:141] libmachine: (ha-329926-m02) DBG | exit 0
	I0501 02:32:25.029148   32853 main.go:141] libmachine: (ha-329926-m02) DBG | SSH cmd err, output: exit status 255: 
	I0501 02:32:25.029172   32853 main.go:141] libmachine: (ha-329926-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0501 02:32:25.029183   32853 main.go:141] libmachine: (ha-329926-m02) DBG | command : exit 0
	I0501 02:32:25.029213   32853 main.go:141] libmachine: (ha-329926-m02) DBG | err     : exit status 255
	I0501 02:32:25.029227   32853 main.go:141] libmachine: (ha-329926-m02) DBG | output  : 
	I0501 02:32:28.029440   32853 main.go:141] libmachine: (ha-329926-m02) DBG | Getting to WaitForSSH function...
	I0501 02:32:28.031840   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:28.032190   32853 main.go:141] libmachine: (ha-329926-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:16:5f", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:32:18 +0000 UTC Type:0 Mac:52:54:00:92:16:5f Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-329926-m02 Clientid:01:52:54:00:92:16:5f}
	I0501 02:32:28.032214   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined IP address 192.168.39.79 and MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:28.032355   32853 main.go:141] libmachine: (ha-329926-m02) DBG | Using SSH client type: external
	I0501 02:32:28.032375   32853 main.go:141] libmachine: (ha-329926-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m02/id_rsa (-rw-------)
	I0501 02:32:28.032395   32853 main.go:141] libmachine: (ha-329926-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.79 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0501 02:32:28.032402   32853 main.go:141] libmachine: (ha-329926-m02) DBG | About to run SSH command:
	I0501 02:32:28.032413   32853 main.go:141] libmachine: (ha-329926-m02) DBG | exit 0
	I0501 02:32:28.158886   32853 main.go:141] libmachine: (ha-329926-m02) DBG | SSH cmd err, output: <nil>: 
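The probe that finally succeeds here ("SSH cmd err, output: <nil>") is just the external ssh invocation shown in the flag dump above, running "exit 0" as the docker user with host-key checking disabled. A compact, hypothetical equivalent (sshReady is an invented helper; the address and key path are placeholders):

        package main

        import (
                "log"
                "os/exec"
        )

        // sshReady reports whether "ssh ... exit 0" succeeds against the new VM.
        // A non-zero exit (like the "exit status 255" logged before the DHCP lease
        // appeared) simply means the guest is not accepting SSH yet and the caller
        // should retry after a delay.
        func sshReady(addr, keyPath string) bool {
                cmd := exec.Command("/usr/bin/ssh",
                        "-F", "/dev/null",
                        "-o", "ConnectTimeout=10",
                        "-o", "StrictHostKeyChecking=no",
                        "-o", "UserKnownHostsFile=/dev/null",
                        "-o", "IdentitiesOnly=yes",
                        "-i", keyPath,
                        "-p", "22",
                        "docker@"+addr,
                        "exit", "0")
                return cmd.Run() == nil
        }

        func main() {
                if sshReady("192.168.39.79", "/path/to/id_rsa") {
                        log.Println("SSH is available")
                }
        }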
	I0501 02:32:28.159180   32853 main.go:141] libmachine: (ha-329926-m02) KVM machine creation complete!
	I0501 02:32:28.159537   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetConfigRaw
	I0501 02:32:28.160119   32853 main.go:141] libmachine: (ha-329926-m02) Calling .DriverName
	I0501 02:32:28.160324   32853 main.go:141] libmachine: (ha-329926-m02) Calling .DriverName
	I0501 02:32:28.160532   32853 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0501 02:32:28.160546   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetState
	I0501 02:32:28.161848   32853 main.go:141] libmachine: Detecting operating system of created instance...
	I0501 02:32:28.161861   32853 main.go:141] libmachine: Waiting for SSH to be available...
	I0501 02:32:28.161867   32853 main.go:141] libmachine: Getting to WaitForSSH function...
	I0501 02:32:28.161872   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHHostname
	I0501 02:32:28.163988   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:28.164322   32853 main.go:141] libmachine: (ha-329926-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:16:5f", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:32:18 +0000 UTC Type:0 Mac:52:54:00:92:16:5f Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-329926-m02 Clientid:01:52:54:00:92:16:5f}
	I0501 02:32:28.164348   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined IP address 192.168.39.79 and MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:28.164513   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHPort
	I0501 02:32:28.164673   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHKeyPath
	I0501 02:32:28.164816   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHKeyPath
	I0501 02:32:28.164914   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHUsername
	I0501 02:32:28.165101   32853 main.go:141] libmachine: Using SSH client type: native
	I0501 02:32:28.165370   32853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.79 22 <nil> <nil>}
	I0501 02:32:28.165385   32853 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0501 02:32:28.270126   32853 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0501 02:32:28.270150   32853 main.go:141] libmachine: Detecting the provisioner...
	I0501 02:32:28.270157   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHHostname
	I0501 02:32:28.272738   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:28.273164   32853 main.go:141] libmachine: (ha-329926-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:16:5f", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:32:18 +0000 UTC Type:0 Mac:52:54:00:92:16:5f Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-329926-m02 Clientid:01:52:54:00:92:16:5f}
	I0501 02:32:28.273192   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined IP address 192.168.39.79 and MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:28.273354   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHPort
	I0501 02:32:28.273547   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHKeyPath
	I0501 02:32:28.273697   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHKeyPath
	I0501 02:32:28.273825   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHUsername
	I0501 02:32:28.274027   32853 main.go:141] libmachine: Using SSH client type: native
	I0501 02:32:28.274226   32853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.79 22 <nil> <nil>}
	I0501 02:32:28.274240   32853 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0501 02:32:28.375684   32853 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0501 02:32:28.375755   32853 main.go:141] libmachine: found compatible host: buildroot
	I0501 02:32:28.375766   32853 main.go:141] libmachine: Provisioning with buildroot...
	I0501 02:32:28.375782   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetMachineName
	I0501 02:32:28.376055   32853 buildroot.go:166] provisioning hostname "ha-329926-m02"
	I0501 02:32:28.376083   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetMachineName
	I0501 02:32:28.376256   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHHostname
	I0501 02:32:28.378946   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:28.379397   32853 main.go:141] libmachine: (ha-329926-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:16:5f", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:32:18 +0000 UTC Type:0 Mac:52:54:00:92:16:5f Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-329926-m02 Clientid:01:52:54:00:92:16:5f}
	I0501 02:32:28.379428   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined IP address 192.168.39.79 and MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:28.379548   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHPort
	I0501 02:32:28.379708   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHKeyPath
	I0501 02:32:28.379877   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHKeyPath
	I0501 02:32:28.380038   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHUsername
	I0501 02:32:28.380193   32853 main.go:141] libmachine: Using SSH client type: native
	I0501 02:32:28.380382   32853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.79 22 <nil> <nil>}
	I0501 02:32:28.380398   32853 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-329926-m02 && echo "ha-329926-m02" | sudo tee /etc/hostname
	I0501 02:32:28.500197   32853 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-329926-m02
	
	I0501 02:32:28.500220   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHHostname
	I0501 02:32:28.502847   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:28.503142   32853 main.go:141] libmachine: (ha-329926-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:16:5f", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:32:18 +0000 UTC Type:0 Mac:52:54:00:92:16:5f Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-329926-m02 Clientid:01:52:54:00:92:16:5f}
	I0501 02:32:28.503170   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined IP address 192.168.39.79 and MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:28.503352   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHPort
	I0501 02:32:28.503548   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHKeyPath
	I0501 02:32:28.503693   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHKeyPath
	I0501 02:32:28.503858   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHUsername
	I0501 02:32:28.504010   32853 main.go:141] libmachine: Using SSH client type: native
	I0501 02:32:28.504251   32853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.79 22 <nil> <nil>}
	I0501 02:32:28.504288   32853 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-329926-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-329926-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-329926-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0501 02:32:28.619098   32853 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0501 02:32:28.619130   32853 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18779-13391/.minikube CaCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18779-13391/.minikube}
	I0501 02:32:28.619149   32853 buildroot.go:174] setting up certificates
	I0501 02:32:28.619168   32853 provision.go:84] configureAuth start
	I0501 02:32:28.619183   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetMachineName
	I0501 02:32:28.619462   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetIP
	I0501 02:32:28.621888   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:28.622191   32853 main.go:141] libmachine: (ha-329926-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:16:5f", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:32:18 +0000 UTC Type:0 Mac:52:54:00:92:16:5f Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-329926-m02 Clientid:01:52:54:00:92:16:5f}
	I0501 02:32:28.622223   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined IP address 192.168.39.79 and MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:28.622318   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHHostname
	I0501 02:32:28.624655   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:28.624978   32853 main.go:141] libmachine: (ha-329926-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:16:5f", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:32:18 +0000 UTC Type:0 Mac:52:54:00:92:16:5f Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-329926-m02 Clientid:01:52:54:00:92:16:5f}
	I0501 02:32:28.625002   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined IP address 192.168.39.79 and MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:28.625122   32853 provision.go:143] copyHostCerts
	I0501 02:32:28.625148   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem
	I0501 02:32:28.625175   32853 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem, removing ...
	I0501 02:32:28.625184   32853 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem
	I0501 02:32:28.625243   32853 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem (1078 bytes)
	I0501 02:32:28.625313   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem
	I0501 02:32:28.625331   32853 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem, removing ...
	I0501 02:32:28.625336   32853 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem
	I0501 02:32:28.625359   32853 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem (1123 bytes)
	I0501 02:32:28.625400   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem
	I0501 02:32:28.625418   32853 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem, removing ...
	I0501 02:32:28.625424   32853 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem
	I0501 02:32:28.625445   32853 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem (1675 bytes)
	I0501 02:32:28.625498   32853 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem org=jenkins.ha-329926-m02 san=[127.0.0.1 192.168.39.79 ha-329926-m02 localhost minikube]
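The line above generates a server certificate signed by the minikube CA, with SANs covering 127.0.0.1, the VM IP 192.168.39.79, the hostname ha-329926-m02, localhost, and minikube, and a lifetime matching the CertExpiration value (26280h) in the config dump. A minimal, illustrative sketch with Go's crypto/x509 is below; it creates a throwaway CA so it runs standalone, whereas the real flow reuses ca.pem/ca-key.pem from the .minikube certs directory, and it does not mirror minikube's provision code:

        package main

        import (
                "crypto/rand"
                "crypto/rsa"
                "crypto/x509"
                "crypto/x509/pkix"
                "encoding/pem"
                "log"
                "math/big"
                "net"
                "os"
                "time"
        )

        func must(err error) {
                if err != nil {
                        log.Fatal(err)
                }
        }

        func main() {
                // Throwaway CA standing in for .minikube/certs/ca.pem + ca-key.pem.
                caKey, err := rsa.GenerateKey(rand.Reader, 2048)
                must(err)
                caTmpl := &x509.Certificate{
                        SerialNumber:          big.NewInt(1),
                        Subject:               pkix.Name{CommonName: "minikubeCA"},
                        NotBefore:             time.Now(),
                        NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
                        IsCA:                  true,
                        KeyUsage:              x509.KeyUsageCertSign,
                        BasicConstraintsValid: true,
                }
                caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
                must(err)
                caCert, err := x509.ParseCertificate(caDER)
                must(err)

                // Server key and certificate with the SANs listed in the log line above.
                srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
                must(err)
                srvTmpl := &x509.Certificate{
                        SerialNumber: big.NewInt(2),
                        Subject:      pkix.Name{Organization: []string{"jenkins.ha-329926-m02"}},
                        NotBefore:    time.Now(),
                        NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
                        KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
                        ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
                        IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.79")},
                        DNSNames:     []string{"ha-329926-m02", "localhost", "minikube"},
                }
                der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
                must(err)
                must(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}))
        }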
	I0501 02:32:28.707102   32853 provision.go:177] copyRemoteCerts
	I0501 02:32:28.707154   32853 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0501 02:32:28.707177   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHHostname
	I0501 02:32:28.709603   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:28.709910   32853 main.go:141] libmachine: (ha-329926-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:16:5f", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:32:18 +0000 UTC Type:0 Mac:52:54:00:92:16:5f Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-329926-m02 Clientid:01:52:54:00:92:16:5f}
	I0501 02:32:28.709927   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined IP address 192.168.39.79 and MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:28.710078   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHPort
	I0501 02:32:28.710258   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHKeyPath
	I0501 02:32:28.710437   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHUsername
	I0501 02:32:28.710566   32853 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m02/id_rsa Username:docker}
	I0501 02:32:28.793538   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0501 02:32:28.793606   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0501 02:32:28.824782   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0501 02:32:28.824846   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0501 02:32:28.856031   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0501 02:32:28.856095   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0501 02:32:28.886602   32853 provision.go:87] duration metric: took 267.420274ms to configureAuth
	I0501 02:32:28.886636   32853 buildroot.go:189] setting minikube options for container-runtime
	I0501 02:32:28.886827   32853 config.go:182] Loaded profile config "ha-329926": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 02:32:28.886919   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHHostname
	I0501 02:32:28.889589   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:28.889945   32853 main.go:141] libmachine: (ha-329926-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:16:5f", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:32:18 +0000 UTC Type:0 Mac:52:54:00:92:16:5f Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-329926-m02 Clientid:01:52:54:00:92:16:5f}
	I0501 02:32:28.889973   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined IP address 192.168.39.79 and MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:28.890172   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHPort
	I0501 02:32:28.890351   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHKeyPath
	I0501 02:32:28.890553   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHKeyPath
	I0501 02:32:28.890699   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHUsername
	I0501 02:32:28.890856   32853 main.go:141] libmachine: Using SSH client type: native
	I0501 02:32:28.891001   32853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.79 22 <nil> <nil>}
	I0501 02:32:28.891014   32853 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0501 02:32:29.159244   32853 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0501 02:32:29.159272   32853 main.go:141] libmachine: Checking connection to Docker...
	I0501 02:32:29.159283   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetURL
	I0501 02:32:29.160474   32853 main.go:141] libmachine: (ha-329926-m02) DBG | Using libvirt version 6000000
	I0501 02:32:29.162578   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:29.163002   32853 main.go:141] libmachine: (ha-329926-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:16:5f", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:32:18 +0000 UTC Type:0 Mac:52:54:00:92:16:5f Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-329926-m02 Clientid:01:52:54:00:92:16:5f}
	I0501 02:32:29.163032   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined IP address 192.168.39.79 and MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:29.163145   32853 main.go:141] libmachine: Docker is up and running!
	I0501 02:32:29.163159   32853 main.go:141] libmachine: Reticulating splines...
	I0501 02:32:29.163167   32853 client.go:171] duration metric: took 26.549605676s to LocalClient.Create
	I0501 02:32:29.163194   32853 start.go:167] duration metric: took 26.549670109s to libmachine.API.Create "ha-329926"
	I0501 02:32:29.163208   32853 start.go:293] postStartSetup for "ha-329926-m02" (driver="kvm2")
	I0501 02:32:29.163222   32853 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0501 02:32:29.163245   32853 main.go:141] libmachine: (ha-329926-m02) Calling .DriverName
	I0501 02:32:29.163485   32853 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0501 02:32:29.163508   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHHostname
	I0501 02:32:29.165222   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:29.165624   32853 main.go:141] libmachine: (ha-329926-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:16:5f", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:32:18 +0000 UTC Type:0 Mac:52:54:00:92:16:5f Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-329926-m02 Clientid:01:52:54:00:92:16:5f}
	I0501 02:32:29.165652   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined IP address 192.168.39.79 and MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:29.165808   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHPort
	I0501 02:32:29.165987   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHKeyPath
	I0501 02:32:29.166131   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHUsername
	I0501 02:32:29.166267   32853 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m02/id_rsa Username:docker}
	I0501 02:32:29.249614   32853 ssh_runner.go:195] Run: cat /etc/os-release
	I0501 02:32:29.254833   32853 info.go:137] Remote host: Buildroot 2023.02.9
	I0501 02:32:29.254865   32853 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13391/.minikube/addons for local assets ...
	I0501 02:32:29.254942   32853 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13391/.minikube/files for local assets ...
	I0501 02:32:29.255016   32853 filesync.go:149] local asset: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem -> 207242.pem in /etc/ssl/certs
	I0501 02:32:29.255026   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem -> /etc/ssl/certs/207242.pem
	I0501 02:32:29.255104   32853 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0501 02:32:29.265848   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem --> /etc/ssl/certs/207242.pem (1708 bytes)
	I0501 02:32:29.294379   32853 start.go:296] duration metric: took 131.157143ms for postStartSetup
	I0501 02:32:29.294455   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetConfigRaw
	I0501 02:32:29.295051   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetIP
	I0501 02:32:29.297751   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:29.298110   32853 main.go:141] libmachine: (ha-329926-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:16:5f", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:32:18 +0000 UTC Type:0 Mac:52:54:00:92:16:5f Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-329926-m02 Clientid:01:52:54:00:92:16:5f}
	I0501 02:32:29.298140   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined IP address 192.168.39.79 and MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:29.298337   32853 profile.go:143] Saving config to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/config.json ...
	I0501 02:32:29.298549   32853 start.go:128] duration metric: took 26.702914692s to createHost
	I0501 02:32:29.298571   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHHostname
	I0501 02:32:29.300678   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:29.301049   32853 main.go:141] libmachine: (ha-329926-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:16:5f", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:32:18 +0000 UTC Type:0 Mac:52:54:00:92:16:5f Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-329926-m02 Clientid:01:52:54:00:92:16:5f}
	I0501 02:32:29.301087   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined IP address 192.168.39.79 and MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:29.301201   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHPort
	I0501 02:32:29.301444   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHKeyPath
	I0501 02:32:29.301631   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHKeyPath
	I0501 02:32:29.301795   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHUsername
	I0501 02:32:29.301954   32853 main.go:141] libmachine: Using SSH client type: native
	I0501 02:32:29.302115   32853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.79 22 <nil> <nil>}
	I0501 02:32:29.302125   32853 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0501 02:32:29.404161   32853 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714530749.376998038
	
	I0501 02:32:29.404185   32853 fix.go:216] guest clock: 1714530749.376998038
	I0501 02:32:29.404194   32853 fix.go:229] Guest: 2024-05-01 02:32:29.376998038 +0000 UTC Remote: 2024-05-01 02:32:29.298561287 +0000 UTC m=+87.221058556 (delta=78.436751ms)
	I0501 02:32:29.404215   32853 fix.go:200] guest clock delta is within tolerance: 78.436751ms
	I0501 02:32:29.404222   32853 start.go:83] releasing machines lock for "ha-329926-m02", held for 26.80870233s
	I0501 02:32:29.404253   32853 main.go:141] libmachine: (ha-329926-m02) Calling .DriverName
	I0501 02:32:29.404558   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetIP
	I0501 02:32:29.407060   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:29.407456   32853 main.go:141] libmachine: (ha-329926-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:16:5f", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:32:18 +0000 UTC Type:0 Mac:52:54:00:92:16:5f Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-329926-m02 Clientid:01:52:54:00:92:16:5f}
	I0501 02:32:29.407478   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined IP address 192.168.39.79 and MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:29.412905   32853 out.go:177] * Found network options:
	I0501 02:32:29.414075   32853 out.go:177]   - NO_PROXY=192.168.39.5
	W0501 02:32:29.415067   32853 proxy.go:119] fail to check proxy env: Error ip not in block
	I0501 02:32:29.415094   32853 main.go:141] libmachine: (ha-329926-m02) Calling .DriverName
	I0501 02:32:29.415626   32853 main.go:141] libmachine: (ha-329926-m02) Calling .DriverName
	I0501 02:32:29.415813   32853 main.go:141] libmachine: (ha-329926-m02) Calling .DriverName
	I0501 02:32:29.415878   32853 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0501 02:32:29.415923   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHHostname
	W0501 02:32:29.416037   32853 proxy.go:119] fail to check proxy env: Error ip not in block
	I0501 02:32:29.416100   32853 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0501 02:32:29.416118   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHHostname
	I0501 02:32:29.418446   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:29.418710   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:29.418743   32853 main.go:141] libmachine: (ha-329926-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:16:5f", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:32:18 +0000 UTC Type:0 Mac:52:54:00:92:16:5f Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-329926-m02 Clientid:01:52:54:00:92:16:5f}
	I0501 02:32:29.418764   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined IP address 192.168.39.79 and MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:29.418896   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHPort
	I0501 02:32:29.419059   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHKeyPath
	I0501 02:32:29.419137   32853 main.go:141] libmachine: (ha-329926-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:16:5f", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:32:18 +0000 UTC Type:0 Mac:52:54:00:92:16:5f Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-329926-m02 Clientid:01:52:54:00:92:16:5f}
	I0501 02:32:29.419166   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined IP address 192.168.39.79 and MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:29.419224   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHUsername
	I0501 02:32:29.419303   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHPort
	I0501 02:32:29.419384   32853 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m02/id_rsa Username:docker}
	I0501 02:32:29.419466   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHKeyPath
	I0501 02:32:29.419607   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHUsername
	I0501 02:32:29.419726   32853 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m02/id_rsa Username:docker}
	I0501 02:32:29.660914   32853 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0501 02:32:29.668297   32853 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0501 02:32:29.668376   32853 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0501 02:32:29.687850   32853 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0501 02:32:29.687883   32853 start.go:494] detecting cgroup driver to use...
	I0501 02:32:29.687972   32853 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0501 02:32:29.706565   32853 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 02:32:29.723456   32853 docker.go:217] disabling cri-docker service (if available) ...
	I0501 02:32:29.723539   32853 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0501 02:32:29.738887   32853 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0501 02:32:29.754172   32853 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0501 02:32:29.874297   32853 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0501 02:32:30.042352   32853 docker.go:233] disabling docker service ...
	I0501 02:32:30.042446   32853 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0501 02:32:30.059238   32853 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0501 02:32:30.075898   32853 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0501 02:32:30.201083   32853 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0501 02:32:30.333782   32853 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0501 02:32:30.350062   32853 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 02:32:30.371860   32853 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0501 02:32:30.371927   32853 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 02:32:30.384981   32853 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0501 02:32:30.385056   32853 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 02:32:30.398163   32853 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 02:32:30.411332   32853 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 02:32:30.426328   32853 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0501 02:32:30.441124   32853 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 02:32:30.453834   32853 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 02:32:30.474622   32853 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
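	For reference, the sed edits above amount to the following fragment of /etc/crio/crio.conf.d/02-crio.conf once they have all run (a reconstruction from the commands, not a capture from the host; the section headers are assumed from the stock drop-in layout):
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"
	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]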
	I0501 02:32:30.492765   32853 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0501 02:32:30.503973   32853 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0501 02:32:30.504044   32853 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0501 02:32:30.518436   32853 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0501 02:32:30.529512   32853 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:32:30.653918   32853 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0501 02:32:30.808199   32853 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0501 02:32:30.808267   32853 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0501 02:32:30.814260   32853 start.go:562] Will wait 60s for crictl version
	I0501 02:32:30.814333   32853 ssh_runner.go:195] Run: which crictl
	I0501 02:32:30.818797   32853 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0501 02:32:30.858905   32853 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0501 02:32:30.858991   32853 ssh_runner.go:195] Run: crio --version
	I0501 02:32:30.890383   32853 ssh_runner.go:195] Run: crio --version
	I0501 02:32:30.925385   32853 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0501 02:32:30.926898   32853 out.go:177]   - env NO_PROXY=192.168.39.5
	I0501 02:32:30.927949   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetIP
	I0501 02:32:30.930381   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:30.930728   32853 main.go:141] libmachine: (ha-329926-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:16:5f", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:32:18 +0000 UTC Type:0 Mac:52:54:00:92:16:5f Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-329926-m02 Clientid:01:52:54:00:92:16:5f}
	I0501 02:32:30.930760   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined IP address 192.168.39.79 and MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:30.930932   32853 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0501 02:32:30.935561   32853 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0501 02:32:30.949642   32853 mustload.go:65] Loading cluster: ha-329926
	I0501 02:32:30.949868   32853 config.go:182] Loaded profile config "ha-329926": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 02:32:30.950222   32853 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:32:30.950257   32853 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:32:30.964975   32853 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41109
	I0501 02:32:30.965384   32853 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:32:30.965819   32853 main.go:141] libmachine: Using API Version  1
	I0501 02:32:30.965840   32853 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:32:30.966161   32853 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:32:30.966360   32853 main.go:141] libmachine: (ha-329926) Calling .GetState
	I0501 02:32:30.967865   32853 host.go:66] Checking if "ha-329926" exists ...
	I0501 02:32:30.968220   32853 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:32:30.968247   32853 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:32:30.983656   32853 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41319
	I0501 02:32:30.984025   32853 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:32:30.984516   32853 main.go:141] libmachine: Using API Version  1
	I0501 02:32:30.984538   32853 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:32:30.984870   32853 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:32:30.985070   32853 main.go:141] libmachine: (ha-329926) Calling .DriverName
	I0501 02:32:30.985228   32853 certs.go:68] Setting up /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926 for IP: 192.168.39.79
	I0501 02:32:30.985248   32853 certs.go:194] generating shared ca certs ...
	I0501 02:32:30.985267   32853 certs.go:226] acquiring lock for ca certs: {Name:mk85a75d14c00780d4bf09cd9bdaf0f96f1b8fce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:32:30.985407   32853 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key
	I0501 02:32:30.985458   32853 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key
	I0501 02:32:30.985470   32853 certs.go:256] generating profile certs ...
	I0501 02:32:30.985562   32853 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/client.key
	I0501 02:32:30.985597   32853 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.key.d90e43f3
	I0501 02:32:30.985619   32853 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.crt.d90e43f3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.5 192.168.39.79 192.168.39.254]
	I0501 02:32:31.181206   32853 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.crt.d90e43f3 ...
	I0501 02:32:31.181238   32853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.crt.d90e43f3: {Name:mk5518d1e07d843574fb807e035ad0b363a66c7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:32:31.181440   32853 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.key.d90e43f3 ...
	I0501 02:32:31.181458   32853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.key.d90e43f3: {Name:mkb1feab49c04187ec90bd16923d434f3fa71e99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:32:31.181562   32853 certs.go:381] copying /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.crt.d90e43f3 -> /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.crt
	I0501 02:32:31.181740   32853 certs.go:385] copying /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.key.d90e43f3 -> /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.key
	I0501 02:32:31.181920   32853 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/proxy-client.key
	I0501 02:32:31.181951   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0501 02:32:31.181971   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0501 02:32:31.181989   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0501 02:32:31.182006   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0501 02:32:31.182022   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0501 02:32:31.182036   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0501 02:32:31.182055   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0501 02:32:31.182072   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0501 02:32:31.182135   32853 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem (1338 bytes)
	W0501 02:32:31.182173   32853 certs.go:480] ignoring /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724_empty.pem, impossibly tiny 0 bytes
	I0501 02:32:31.182187   32853 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem (1679 bytes)
	I0501 02:32:31.182221   32853 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem (1078 bytes)
	I0501 02:32:31.182257   32853 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem (1123 bytes)
	I0501 02:32:31.182289   32853 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem (1675 bytes)
	I0501 02:32:31.182345   32853 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem (1708 bytes)
	I0501 02:32:31.182379   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem -> /usr/share/ca-certificates/20724.pem
	I0501 02:32:31.182414   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem -> /usr/share/ca-certificates/207242.pem
	I0501 02:32:31.182433   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:32:31.182472   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHHostname
	I0501 02:32:31.185284   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:32:31.185667   32853 main.go:141] libmachine: (ha-329926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:43", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:31:18 +0000 UTC Type:0 Mac:52:54:00:ce:d8:43 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-329926 Clientid:01:52:54:00:ce:d8:43}
	I0501 02:32:31.185696   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:32:31.185859   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHPort
	I0501 02:32:31.186079   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHKeyPath
	I0501 02:32:31.186237   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHUsername
	I0501 02:32:31.186364   32853 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926/id_rsa Username:docker}
	I0501 02:32:31.262748   32853 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0501 02:32:31.269126   32853 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0501 02:32:31.283342   32853 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0501 02:32:31.288056   32853 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0501 02:32:31.301185   32853 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0501 02:32:31.305935   32853 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0501 02:32:31.318019   32853 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0501 02:32:31.322409   32853 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0501 02:32:31.334369   32853 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0501 02:32:31.339405   32853 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0501 02:32:31.351745   32853 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0501 02:32:31.356564   32853 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0501 02:32:31.368836   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0501 02:32:31.400674   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0501 02:32:31.426866   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0501 02:32:31.454204   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0501 02:32:31.481303   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0501 02:32:31.508290   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0501 02:32:31.539629   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0501 02:32:31.570094   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0501 02:32:31.600574   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem --> /usr/share/ca-certificates/20724.pem (1338 bytes)
	I0501 02:32:31.632245   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem --> /usr/share/ca-certificates/207242.pem (1708 bytes)
	I0501 02:32:31.663620   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0501 02:32:31.694867   32853 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0501 02:32:31.716003   32853 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0501 02:32:31.737736   32853 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0501 02:32:31.759385   32853 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0501 02:32:31.781053   32853 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0501 02:32:31.801226   32853 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0501 02:32:31.820680   32853 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0501 02:32:31.840418   32853 ssh_runner.go:195] Run: openssl version
	I0501 02:32:31.846834   32853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20724.pem && ln -fs /usr/share/ca-certificates/20724.pem /etc/ssl/certs/20724.pem"
	I0501 02:32:31.859445   32853 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20724.pem
	I0501 02:32:31.864815   32853 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  1 02:20 /usr/share/ca-certificates/20724.pem
	I0501 02:32:31.864868   32853 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20724.pem
	I0501 02:32:31.871245   32853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20724.pem /etc/ssl/certs/51391683.0"
	I0501 02:32:31.883690   32853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/207242.pem && ln -fs /usr/share/ca-certificates/207242.pem /etc/ssl/certs/207242.pem"
	I0501 02:32:31.896212   32853 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/207242.pem
	I0501 02:32:31.901714   32853 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  1 02:20 /usr/share/ca-certificates/207242.pem
	I0501 02:32:31.901787   32853 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/207242.pem
	I0501 02:32:31.908364   32853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/207242.pem /etc/ssl/certs/3ec20f2e.0"
	I0501 02:32:31.921148   32853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0501 02:32:31.933834   32853 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:32:31.939556   32853 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  1 02:08 /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:32:31.939610   32853 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:32:31.946280   32853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0501 02:32:31.958842   32853 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0501 02:32:31.963676   32853 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0501 02:32:31.963732   32853 kubeadm.go:928] updating node {m02 192.168.39.79 8443 v1.30.0 crio true true} ...
	I0501 02:32:31.963816   32853 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-329926-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.79
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-329926 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0501 02:32:31.963847   32853 kube-vip.go:111] generating kube-vip config ...
	I0501 02:32:31.963890   32853 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0501 02:32:31.981605   32853 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0501 02:32:31.981681   32853 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
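	This manifest is later written to /etc/kubernetes/manifests/kube-vip.yaml (see the scp at 02:32:46.588194 below), so kubelet runs kube-vip as a static pod that claims the control-plane VIP 192.168.39.254 on eth0 and load-balances API traffic on port 8443. A manual spot-check on the node, shown here only as an illustration and not part of this run, would be:
	crictl ps --name kube-vip
	ip -4 addr show eth0 | grep 192.168.39.254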
	I0501 02:32:31.981735   32853 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0501 02:32:31.992985   32853 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	
	Initiating transfer...
	I0501 02:32:31.993036   32853 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0
	I0501 02:32:32.003788   32853 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256
	I0501 02:32:32.003816   32853 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/18779-13391/.minikube/cache/linux/amd64/v1.30.0/kubeadm
	I0501 02:32:32.003819   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/linux/amd64/v1.30.0/kubectl -> /var/lib/minikube/binaries/v1.30.0/kubectl
	I0501 02:32:32.003787   32853 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/18779-13391/.minikube/cache/linux/amd64/v1.30.0/kubelet
	I0501 02:32:32.004006   32853 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubectl
	I0501 02:32:32.009102   32853 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0501 02:32:32.009132   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/cache/linux/amd64/v1.30.0/kubectl --> /var/lib/minikube/binaries/v1.30.0/kubectl (51454104 bytes)
	I0501 02:32:40.645161   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/linux/amd64/v1.30.0/kubeadm -> /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0501 02:32:40.645247   32853 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0501 02:32:40.651214   32853 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0501 02:32:40.651259   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/cache/linux/amd64/v1.30.0/kubeadm --> /var/lib/minikube/binaries/v1.30.0/kubeadm (50249880 bytes)
	I0501 02:32:46.044024   32853 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 02:32:46.061292   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/linux/amd64/v1.30.0/kubelet -> /var/lib/minikube/binaries/v1.30.0/kubelet
	I0501 02:32:46.061412   32853 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubelet
	I0501 02:32:46.066040   32853 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0501 02:32:46.066068   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/cache/linux/amd64/v1.30.0/kubelet --> /var/lib/minikube/binaries/v1.30.0/kubelet (100100024 bytes)
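	The kubectl, kubeadm and kubelet binaries are fetched from dl.k8s.io with their published .sha256 files used as checksums (see the download.go lines above). A manual equivalent of that checksum-verified fetch for kubectl, given purely as an illustration, would be:
	curl -LO https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl
	curl -LO https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256
	echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check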
	I0501 02:32:46.533245   32853 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0501 02:32:46.544899   32853 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0501 02:32:46.568504   32853 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0501 02:32:46.588194   32853 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0501 02:32:46.608341   32853 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0501 02:32:46.613073   32853 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0501 02:32:46.628506   32853 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:32:46.759692   32853 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 02:32:46.779606   32853 host.go:66] Checking if "ha-329926" exists ...
	I0501 02:32:46.779999   32853 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:32:46.780033   32853 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:32:46.795346   32853 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37047
	I0501 02:32:46.795774   32853 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:32:46.796285   32853 main.go:141] libmachine: Using API Version  1
	I0501 02:32:46.796318   32853 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:32:46.796647   32853 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:32:46.796862   32853 main.go:141] libmachine: (ha-329926) Calling .DriverName
	I0501 02:32:46.797022   32853 start.go:316] joinCluster: &{Name:ha-329926 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-329926 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.5 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.79 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 02:32:46.797115   32853 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0501 02:32:46.797131   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHHostname
	I0501 02:32:46.799894   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:32:46.800337   32853 main.go:141] libmachine: (ha-329926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:43", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:31:18 +0000 UTC Type:0 Mac:52:54:00:ce:d8:43 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-329926 Clientid:01:52:54:00:ce:d8:43}
	I0501 02:32:46.800367   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:32:46.800498   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHPort
	I0501 02:32:46.800677   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHKeyPath
	I0501 02:32:46.800834   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHUsername
	I0501 02:32:46.800981   32853 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926/id_rsa Username:docker}
	I0501 02:32:46.973707   32853 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.79 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0501 02:32:46.973760   32853 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ii3sbv.7jvk8wpzpyemm901 --discovery-token-ca-cert-hash sha256:bd94cc6acfb95fe36f70b12c6a1f2980601b8235e7c4dea65cebdf35eb514754 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-329926-m02 --control-plane --apiserver-advertise-address=192.168.39.79 --apiserver-bind-port=8443"
	I0501 02:33:10.771192   32853 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ii3sbv.7jvk8wpzpyemm901 --discovery-token-ca-cert-hash sha256:bd94cc6acfb95fe36f70b12c6a1f2980601b8235e7c4dea65cebdf35eb514754 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-329926-m02 --control-plane --apiserver-advertise-address=192.168.39.79 --apiserver-bind-port=8443": (23.797411356s)
	I0501 02:33:10.771238   32853 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0501 02:33:11.384980   32853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-329926-m02 minikube.k8s.io/updated_at=2024_05_01T02_33_11_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e minikube.k8s.io/name=ha-329926 minikube.k8s.io/primary=false
	I0501 02:33:11.526841   32853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-329926-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0501 02:33:11.662780   32853 start.go:318] duration metric: took 24.865752449s to joinCluster
	I0501 02:33:11.662858   32853 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.39.79 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0501 02:33:11.664381   32853 out.go:177] * Verifying Kubernetes components...
	I0501 02:33:11.663177   32853 config.go:182] Loaded profile config "ha-329926": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 02:33:11.665708   32853 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:33:11.967770   32853 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 02:33:11.987701   32853 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18779-13391/kubeconfig
	I0501 02:33:11.987916   32853 kapi.go:59] client config for ha-329926: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/client.crt", KeyFile:"/home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/client.key", CAFile:"/home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02700), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0501 02:33:11.987972   32853 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.5:8443
	I0501 02:33:11.988206   32853 node_ready.go:35] waiting up to 6m0s for node "ha-329926-m02" to be "Ready" ...
	I0501 02:33:11.988325   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m02
	I0501 02:33:11.988335   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:11.988342   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:11.988348   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:12.003472   32853 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0501 02:33:12.488867   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m02
	I0501 02:33:12.488893   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:12.488901   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:12.488905   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:12.494522   32853 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:33:12.989291   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m02
	I0501 02:33:12.989314   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:12.989322   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:12.989326   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:12.994548   32853 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:33:13.489445   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m02
	I0501 02:33:13.489465   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:13.489473   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:13.489477   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:13.492740   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:33:13.989302   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m02
	I0501 02:33:13.989326   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:13.989332   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:13.989336   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:13.994574   32853 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:33:13.995174   32853 node_ready.go:53] node "ha-329926-m02" has status "Ready":"False"
	I0501 02:33:14.488543   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m02
	I0501 02:33:14.488564   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:14.488571   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:14.488575   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:14.491845   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:33:14.988951   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m02
	I0501 02:33:14.988971   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:14.988977   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:14.988981   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:14.994551   32853 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:33:15.489130   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m02
	I0501 02:33:15.489151   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:15.489162   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:15.489169   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:15.493112   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:33:15.988869   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m02
	I0501 02:33:15.988893   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:15.988901   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:15.988906   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:15.992940   32853 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:33:16.488731   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m02
	I0501 02:33:16.488756   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:16.488774   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:16.488779   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:16.492717   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:33:16.493523   32853 node_ready.go:53] node "ha-329926-m02" has status "Ready":"False"
	I0501 02:33:16.989386   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m02
	I0501 02:33:16.989415   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:16.989425   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:16.989430   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:16.993462   32853 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:33:17.489274   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m02
	I0501 02:33:17.489301   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:17.489312   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:17.489317   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:17.494823   32853 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:33:17.988542   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m02
	I0501 02:33:17.988583   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:17.988592   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:17.988596   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:17.992630   32853 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:33:18.488722   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m02
	I0501 02:33:18.488745   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:18.488753   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:18.488757   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:18.492691   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:33:18.493303   32853 node_ready.go:49] node "ha-329926-m02" has status "Ready":"True"
	I0501 02:33:18.493329   32853 node_ready.go:38] duration metric: took 6.505084484s for node "ha-329926-m02" to be "Ready" ...
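The node_ready loop above is a plain poll: GET /api/v1/nodes/ha-329926-m02 roughly every 500ms until the node's Ready condition reports True. For readers who want to reproduce that check outside minikube, here is a minimal client-go sketch; the waitNodeReady helper name, the polling interval, and the kubeconfig path are assumptions of the sketch, not minikube's own code.

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady polls the node object until its Ready condition is True,
    // mirroring the node_ready loop in the log above.
    func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
    	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
    		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
    		if err != nil {
    			return false, nil // transient API errors: keep retrying until the timeout
    		}
    		for _, c := range node.Status.Conditions {
    			if c.Type == corev1.NodeReady {
    				return c.Status == corev1.ConditionTrue, nil
    			}
    		}
    		return false, nil
    	})
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	if err := waitNodeReady(cs, "ha-329926-m02", 6*time.Minute); err != nil {
    		panic(err)
    	}
    	fmt.Println("node is Ready")
    }

The same pattern (GET, inspect status conditions, retry until a timeout) underlies all of the node_ready and pod_ready waits that follow in this log.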
	I0501 02:33:18.493337   32853 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 02:33:18.493389   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods
	I0501 02:33:18.493411   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:18.493417   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:18.493421   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:18.498285   32853 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:33:18.505511   32853 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-2h8lc" in "kube-system" namespace to be "Ready" ...
	I0501 02:33:18.505611   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2h8lc
	I0501 02:33:18.505622   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:18.505633   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:18.505640   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:18.509040   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:33:18.509791   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926
	I0501 02:33:18.509807   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:18.509814   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:18.509817   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:18.513129   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:33:18.513783   32853 pod_ready.go:92] pod "coredns-7db6d8ff4d-2h8lc" in "kube-system" namespace has status "Ready":"True"
	I0501 02:33:18.513799   32853 pod_ready.go:81] duration metric: took 8.261557ms for pod "coredns-7db6d8ff4d-2h8lc" in "kube-system" namespace to be "Ready" ...
	I0501 02:33:18.513807   32853 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-cfdqc" in "kube-system" namespace to be "Ready" ...
	I0501 02:33:18.513858   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-cfdqc
	I0501 02:33:18.513866   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:18.513872   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:18.513877   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:18.517367   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:33:18.518083   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926
	I0501 02:33:18.518098   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:18.518105   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:18.518108   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:18.522084   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:33:18.522665   32853 pod_ready.go:92] pod "coredns-7db6d8ff4d-cfdqc" in "kube-system" namespace has status "Ready":"True"
	I0501 02:33:18.522682   32853 pod_ready.go:81] duration metric: took 8.866578ms for pod "coredns-7db6d8ff4d-cfdqc" in "kube-system" namespace to be "Ready" ...
	I0501 02:33:18.522690   32853 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-329926" in "kube-system" namespace to be "Ready" ...
	I0501 02:33:18.522731   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-329926
	I0501 02:33:18.522739   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:18.522745   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:18.522749   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:18.526829   32853 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:33:18.527854   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926
	I0501 02:33:18.527868   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:18.527875   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:18.527879   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:18.530937   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:33:18.531737   32853 pod_ready.go:92] pod "etcd-ha-329926" in "kube-system" namespace has status "Ready":"True"
	I0501 02:33:18.531751   32853 pod_ready.go:81] duration metric: took 9.056356ms for pod "etcd-ha-329926" in "kube-system" namespace to be "Ready" ...
	I0501 02:33:18.531759   32853 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-329926-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:33:18.531803   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-329926-m02
	I0501 02:33:18.531816   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:18.531823   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:18.531831   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:18.534721   32853 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 02:33:18.535312   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m02
	I0501 02:33:18.535325   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:18.535330   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:18.535333   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:18.539152   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:33:19.032909   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-329926-m02
	I0501 02:33:19.032936   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:19.032948   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:19.032954   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:19.036794   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:33:19.037616   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m02
	I0501 02:33:19.037636   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:19.037646   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:19.037653   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:19.040249   32853 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 02:33:19.532218   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-329926-m02
	I0501 02:33:19.532240   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:19.532248   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:19.532253   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:19.537133   32853 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:33:19.538351   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m02
	I0501 02:33:19.538367   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:19.538372   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:19.538376   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:19.541943   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:33:20.032940   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-329926-m02
	I0501 02:33:20.032959   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:20.032967   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:20.032972   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:20.038191   32853 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:33:20.039221   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m02
	I0501 02:33:20.039241   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:20.039251   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:20.039259   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:20.041847   32853 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 02:33:20.532824   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-329926-m02
	I0501 02:33:20.532845   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:20.532852   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:20.532855   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:20.536745   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:33:20.537569   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m02
	I0501 02:33:20.537588   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:20.537598   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:20.537602   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:20.540342   32853 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 02:33:20.540967   32853 pod_ready.go:102] pod "etcd-ha-329926-m02" in "kube-system" namespace has status "Ready":"False"
	I0501 02:33:21.032383   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-329926-m02
	I0501 02:33:21.032405   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:21.032412   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:21.032416   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:21.035771   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:33:21.036462   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m02
	I0501 02:33:21.036478   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:21.036486   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:21.036492   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:21.039690   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:33:21.531898   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-329926-m02
	I0501 02:33:21.531919   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:21.531925   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:21.531929   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:21.535449   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:33:21.536376   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m02
	I0501 02:33:21.536389   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:21.536395   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:21.536398   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:21.539172   32853 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 02:33:22.032935   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-329926-m02
	I0501 02:33:22.032964   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:22.032974   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:22.032981   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:22.037029   32853 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:33:22.037834   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m02
	I0501 02:33:22.037858   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:22.037874   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:22.037881   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:22.041827   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:33:22.532130   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-329926-m02
	I0501 02:33:22.532153   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:22.532161   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:22.532164   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:22.536068   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:33:22.537208   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m02
	I0501 02:33:22.537229   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:22.537248   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:22.537255   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:22.540959   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:33:22.541676   32853 pod_ready.go:102] pod "etcd-ha-329926-m02" in "kube-system" namespace has status "Ready":"False"
	I0501 02:33:23.032523   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-329926-m02
	I0501 02:33:23.032550   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:23.032558   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:23.032562   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:23.037008   32853 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:33:23.038957   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m02
	I0501 02:33:23.038978   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:23.038993   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:23.038999   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:23.041972   32853 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 02:33:23.531969   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-329926-m02
	I0501 02:33:23.531993   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:23.532003   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:23.532007   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:23.536144   32853 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:33:23.537306   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m02
	I0501 02:33:23.537338   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:23.537349   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:23.537356   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:23.541418   32853 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:33:24.032461   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-329926-m02
	I0501 02:33:24.032489   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:24.032500   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:24.032506   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:24.035563   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:33:24.036508   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m02
	I0501 02:33:24.036524   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:24.036534   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:24.036538   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:24.039302   32853 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 02:33:24.532724   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-329926-m02
	I0501 02:33:24.532758   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:24.532771   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:24.532776   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:24.536087   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:33:24.537058   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m02
	I0501 02:33:24.537070   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:24.537077   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:24.537081   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:24.539781   32853 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 02:33:25.031933   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-329926-m02
	I0501 02:33:25.031957   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:25.031965   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:25.031970   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:25.038111   32853 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:33:25.040132   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m02
	I0501 02:33:25.040150   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:25.040158   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:25.040163   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:25.044966   32853 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:33:25.046443   32853 pod_ready.go:102] pod "etcd-ha-329926-m02" in "kube-system" namespace has status "Ready":"False"
	I0501 02:33:25.532822   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-329926-m02
	I0501 02:33:25.532852   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:25.532861   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:25.532865   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:25.536720   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:33:25.537639   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m02
	I0501 02:33:25.537660   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:25.537671   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:25.537676   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:25.541010   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:33:25.541525   32853 pod_ready.go:92] pod "etcd-ha-329926-m02" in "kube-system" namespace has status "Ready":"True"
	I0501 02:33:25.541542   32853 pod_ready.go:81] duration metric: took 7.00977732s for pod "etcd-ha-329926-m02" in "kube-system" namespace to be "Ready" ...
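The pod_ready waits work the same way per pod: fetch the pod from kube-system, read its PodReady condition, and re-poll until the 6m0s budget is spent. A hedged fragment, reusing the clientset and imports from the sketch above (waitPodReady is an illustrative name, not a minikube function):

    // waitPodReady polls a pod until its Ready condition is True, the same
    // condition pod_ready.go reports as has status "Ready":"True" in the log.
    func waitPodReady(cs *kubernetes.Clientset, namespace, name string, timeout time.Duration) error {
    	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
    		pod, err := cs.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
    		if err != nil {
    			return false, nil // retry on transient API errors
    		}
    		for _, c := range pod.Status.Conditions {
    			if c.Type == corev1.PodReady {
    				return c.Status == corev1.ConditionTrue, nil
    			}
    		}
    		return false, nil
    	})
    }

    // Example: the etcd pod the log waits about 7s for.
    // err := waitPodReady(cs, "kube-system", "etcd-ha-329926-m02", 6*time.Minute)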
	I0501 02:33:25.541555   32853 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-329926" in "kube-system" namespace to be "Ready" ...
	I0501 02:33:25.541603   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-329926
	I0501 02:33:25.541611   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:25.541618   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:25.541621   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:25.544520   32853 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 02:33:25.545317   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926
	I0501 02:33:25.545332   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:25.545340   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:25.545342   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:25.547604   32853 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 02:33:25.548196   32853 pod_ready.go:92] pod "kube-apiserver-ha-329926" in "kube-system" namespace has status "Ready":"True"
	I0501 02:33:25.548211   32853 pod_ready.go:81] duration metric: took 6.649613ms for pod "kube-apiserver-ha-329926" in "kube-system" namespace to be "Ready" ...
	I0501 02:33:25.548219   32853 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-329926-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:33:25.548267   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-329926-m02
	I0501 02:33:25.548274   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:25.548281   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:25.548284   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:25.550809   32853 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 02:33:25.551391   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m02
	I0501 02:33:25.551403   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:25.551410   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:25.551414   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:25.553972   32853 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 02:33:25.554820   32853 pod_ready.go:92] pod "kube-apiserver-ha-329926-m02" in "kube-system" namespace has status "Ready":"True"
	I0501 02:33:25.554833   32853 pod_ready.go:81] duration metric: took 6.608772ms for pod "kube-apiserver-ha-329926-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:33:25.554842   32853 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-329926" in "kube-system" namespace to be "Ready" ...
	I0501 02:33:25.554885   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-329926
	I0501 02:33:25.554894   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:25.554902   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:25.554910   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:25.557096   32853 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 02:33:25.557769   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926
	I0501 02:33:25.557784   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:25.557791   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:25.557795   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:25.560089   32853 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 02:33:25.560780   32853 pod_ready.go:92] pod "kube-controller-manager-ha-329926" in "kube-system" namespace has status "Ready":"True"
	I0501 02:33:25.560799   32853 pod_ready.go:81] duration metric: took 5.951704ms for pod "kube-controller-manager-ha-329926" in "kube-system" namespace to be "Ready" ...
	I0501 02:33:25.560807   32853 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-329926-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:33:25.560852   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-329926-m02
	I0501 02:33:25.560859   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:25.560866   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:25.560872   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:25.563304   32853 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 02:33:25.689395   32853 request.go:629] Waited for 125.311047ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/nodes/ha-329926-m02
	I0501 02:33:25.689473   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m02
	I0501 02:33:25.689481   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:25.689491   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:25.689495   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:25.694568   32853 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:33:25.695613   32853 pod_ready.go:92] pod "kube-controller-manager-ha-329926-m02" in "kube-system" namespace has status "Ready":"True"
	I0501 02:33:25.695631   32853 pod_ready.go:81] duration metric: took 134.818644ms for pod "kube-controller-manager-ha-329926-m02" in "kube-system" namespace to be "Ready" ...
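The "Waited ... due to client-side throttling, not priority and fairness" lines come from client-go's client-side token-bucket rate limiter, not from the API server. When a poller is this chatty, the usual knob is to raise QPS and Burst on the rest.Config before building the clientset; the values below are illustrative, not what minikube configures:

    // Fragment, reusing the imports from the first sketch. With QPS/Burst left
    // unset, client-go falls back to low defaults, which is what produces the
    // request.go throttling messages in this log.
    cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    if err != nil {
    	panic(err)
    }
    cfg.QPS = 50    // steady-state client-side requests per second
    cfg.Burst = 100 // short-term burst allowance on top of QPS
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
    	panic(err)
    }
    _ = cs // use as in the earlier sketches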
	I0501 02:33:25.695640   32853 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-msshn" in "kube-system" namespace to be "Ready" ...
	I0501 02:33:25.888934   32853 request.go:629] Waited for 193.220812ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-msshn
	I0501 02:33:25.888991   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-msshn
	I0501 02:33:25.889000   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:25.889014   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:25.889020   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:25.891885   32853 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 02:33:26.089742   32853 request.go:629] Waited for 197.064709ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/nodes/ha-329926
	I0501 02:33:26.089823   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926
	I0501 02:33:26.089832   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:26.089840   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:26.089846   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:26.094770   32853 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:33:26.096339   32853 pod_ready.go:92] pod "kube-proxy-msshn" in "kube-system" namespace has status "Ready":"True"
	I0501 02:33:26.096359   32853 pod_ready.go:81] duration metric: took 400.712232ms for pod "kube-proxy-msshn" in "kube-system" namespace to be "Ready" ...
	I0501 02:33:26.096369   32853 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rfsm8" in "kube-system" namespace to be "Ready" ...
	I0501 02:33:26.289504   32853 request.go:629] Waited for 193.059757ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rfsm8
	I0501 02:33:26.289558   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rfsm8
	I0501 02:33:26.289563   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:26.289571   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:26.289578   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:26.292679   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:33:26.488853   32853 request.go:629] Waited for 195.296934ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/nodes/ha-329926-m02
	I0501 02:33:26.488915   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m02
	I0501 02:33:26.488929   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:26.488940   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:26.488946   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:26.492008   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:33:26.492777   32853 pod_ready.go:92] pod "kube-proxy-rfsm8" in "kube-system" namespace has status "Ready":"True"
	I0501 02:33:26.492804   32853 pod_ready.go:81] duration metric: took 396.427668ms for pod "kube-proxy-rfsm8" in "kube-system" namespace to be "Ready" ...
	I0501 02:33:26.492818   32853 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-329926" in "kube-system" namespace to be "Ready" ...
	I0501 02:33:26.688800   32853 request.go:629] Waited for 195.916931ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-329926
	I0501 02:33:26.688858   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-329926
	I0501 02:33:26.688862   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:26.688871   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:26.688877   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:26.692819   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:33:26.889221   32853 request.go:629] Waited for 195.41555ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/nodes/ha-329926
	I0501 02:33:26.889280   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926
	I0501 02:33:26.889285   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:26.889293   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:26.889297   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:26.893122   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:33:26.894152   32853 pod_ready.go:92] pod "kube-scheduler-ha-329926" in "kube-system" namespace has status "Ready":"True"
	I0501 02:33:26.894171   32853 pod_ready.go:81] duration metric: took 401.345489ms for pod "kube-scheduler-ha-329926" in "kube-system" namespace to be "Ready" ...
	I0501 02:33:26.894180   32853 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-329926-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:33:27.089385   32853 request.go:629] Waited for 195.12619ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-329926-m02
	I0501 02:33:27.089452   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-329926-m02
	I0501 02:33:27.089458   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:27.089465   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:27.089469   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:27.093418   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:33:27.289741   32853 request.go:629] Waited for 195.55559ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/nodes/ha-329926-m02
	I0501 02:33:27.289799   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m02
	I0501 02:33:27.289805   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:27.289812   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:27.289817   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:27.294038   32853 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:33:27.295226   32853 pod_ready.go:92] pod "kube-scheduler-ha-329926-m02" in "kube-system" namespace has status "Ready":"True"
	I0501 02:33:27.295243   32853 pod_ready.go:81] duration metric: took 401.057138ms for pod "kube-scheduler-ha-329926-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:33:27.295253   32853 pod_ready.go:38] duration metric: took 8.801905402s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 02:33:27.295268   32853 api_server.go:52] waiting for apiserver process to appear ...
	I0501 02:33:27.295334   32853 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 02:33:27.311865   32853 api_server.go:72] duration metric: took 15.648969816s to wait for apiserver process to appear ...
	I0501 02:33:27.311894   32853 api_server.go:88] waiting for apiserver healthz status ...
	I0501 02:33:27.311919   32853 api_server.go:253] Checking apiserver healthz at https://192.168.39.5:8443/healthz ...
	I0501 02:33:27.317230   32853 api_server.go:279] https://192.168.39.5:8443/healthz returned 200:
	ok
	I0501 02:33:27.317286   32853 round_trippers.go:463] GET https://192.168.39.5:8443/version
	I0501 02:33:27.317294   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:27.317301   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:27.317307   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:27.318471   32853 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0501 02:33:27.318732   32853 api_server.go:141] control plane version: v1.30.0
	I0501 02:33:27.318751   32853 api_server.go:131] duration metric: took 6.850306ms to wait for apiserver health ...
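The health gate above is a GET against /healthz (a 200 with body "ok" counts as healthy) followed by a GET against /version, which is where the "control plane version: v1.30.0" line comes from. A sketch of the same two probes through the clientset's discovery REST client, again reusing cs from the earlier sketch:

    // Probe /healthz the way the log does; expect the literal body "ok".
    body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.TODO())
    if err != nil {
    	panic(err)
    }
    fmt.Printf("healthz: %s\n", body)

    // Then read the control-plane version.
    ver, err := cs.Discovery().ServerVersion()
    if err != nil {
    	panic(err)
    }
    fmt.Println("control plane version:", ver.GitVersion)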
	I0501 02:33:27.318758   32853 system_pods.go:43] waiting for kube-system pods to appear ...
	I0501 02:33:27.489151   32853 request.go:629] Waited for 170.324079ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods
	I0501 02:33:27.489223   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods
	I0501 02:33:27.489229   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:27.489239   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:27.489251   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:27.496035   32853 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:33:27.501718   32853 system_pods.go:59] 17 kube-system pods found
	I0501 02:33:27.501745   32853 system_pods.go:61] "coredns-7db6d8ff4d-2h8lc" [937e09f0-6a7d-4387-aa19-ee959eb5a2a5] Running
	I0501 02:33:27.501750   32853 system_pods.go:61] "coredns-7db6d8ff4d-cfdqc" [a37e982e-9e4f-43bf-b957-0d6f082f4ec8] Running
	I0501 02:33:27.501754   32853 system_pods.go:61] "etcd-ha-329926" [f0e4ae2a-a8cc-42b2-9865-fb6ec3f41acb] Running
	I0501 02:33:27.501757   32853 system_pods.go:61] "etcd-ha-329926-m02" [4ed5b754-bb3d-46de-a5b9-ff46994f25ad] Running
	I0501 02:33:27.501760   32853 system_pods.go:61] "kindnet-9r8zn" [fc187c8a-a964-45e1-adb0-f5ce23922b66] Running
	I0501 02:33:27.501762   32853 system_pods.go:61] "kindnet-kcmp7" [8e15c166-9ba1-40c9-8f33-db7f83733932] Running
	I0501 02:33:27.501765   32853 system_pods.go:61] "kube-apiserver-ha-329926" [49c47f4f-663a-4407-9d46-94fa3afbf349] Running
	I0501 02:33:27.501769   32853 system_pods.go:61] "kube-apiserver-ha-329926-m02" [886d1acc-021c-4f8b-b477-b9760260aabb] Running
	I0501 02:33:27.501773   32853 system_pods.go:61] "kube-controller-manager-ha-329926" [332785d8-9966-4823-8828-fa5e90b4aac1] Running
	I0501 02:33:27.501779   32853 system_pods.go:61] "kube-controller-manager-ha-329926-m02" [91d97fa7-6409-4620-b569-c391d21a5915] Running
	I0501 02:33:27.501783   32853 system_pods.go:61] "kube-proxy-msshn" [7575fbfc-11ce-4223-bd99-ff9cdddd3568] Running
	I0501 02:33:27.501788   32853 system_pods.go:61] "kube-proxy-rfsm8" [f0510b55-1b59-4239-b529-b7af4d017c06] Running
	I0501 02:33:27.501796   32853 system_pods.go:61] "kube-scheduler-ha-329926" [7d45e3e9-cc7e-4b69-9219-61c3006013ea] Running
	I0501 02:33:27.501801   32853 system_pods.go:61] "kube-scheduler-ha-329926-m02" [075e127f-debf-4dd4-babd-be0930fb2ef7] Running
	I0501 02:33:27.501820   32853 system_pods.go:61] "kube-vip-ha-329926" [0fbbb815-441d-48d0-b0cf-1bb57ff6d993] Running
	I0501 02:33:27.501824   32853 system_pods.go:61] "kube-vip-ha-329926-m02" [92c115f8-bb9c-4a86-b914-984985a69916] Running
	I0501 02:33:27.501827   32853 system_pods.go:61] "storage-provisioner" [371423a6-a156-4e8d-bf66-812d606cc8d7] Running
	I0501 02:33:27.501833   32853 system_pods.go:74] duration metric: took 183.069484ms to wait for pod list to return data ...
	I0501 02:33:27.501842   32853 default_sa.go:34] waiting for default service account to be created ...
	I0501 02:33:27.689121   32853 request.go:629] Waited for 187.222295ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/namespaces/default/serviceaccounts
	I0501 02:33:27.689173   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/default/serviceaccounts
	I0501 02:33:27.689191   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:27.689217   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:27.689228   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:27.693649   32853 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:33:27.693890   32853 default_sa.go:45] found service account: "default"
	I0501 02:33:27.693908   32853 default_sa.go:55] duration metric: took 192.059311ms for default service account to be created ...
	I0501 02:33:27.693918   32853 system_pods.go:116] waiting for k8s-apps to be running ...
	I0501 02:33:27.889175   32853 request.go:629] Waited for 195.171272ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods
	I0501 02:33:27.889228   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods
	I0501 02:33:27.889239   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:27.889252   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:27.889260   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:27.895684   32853 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:33:27.900752   32853 system_pods.go:86] 17 kube-system pods found
	I0501 02:33:27.900784   32853 system_pods.go:89] "coredns-7db6d8ff4d-2h8lc" [937e09f0-6a7d-4387-aa19-ee959eb5a2a5] Running
	I0501 02:33:27.900792   32853 system_pods.go:89] "coredns-7db6d8ff4d-cfdqc" [a37e982e-9e4f-43bf-b957-0d6f082f4ec8] Running
	I0501 02:33:27.900798   32853 system_pods.go:89] "etcd-ha-329926" [f0e4ae2a-a8cc-42b2-9865-fb6ec3f41acb] Running
	I0501 02:33:27.900804   32853 system_pods.go:89] "etcd-ha-329926-m02" [4ed5b754-bb3d-46de-a5b9-ff46994f25ad] Running
	I0501 02:33:27.900810   32853 system_pods.go:89] "kindnet-9r8zn" [fc187c8a-a964-45e1-adb0-f5ce23922b66] Running
	I0501 02:33:27.900816   32853 system_pods.go:89] "kindnet-kcmp7" [8e15c166-9ba1-40c9-8f33-db7f83733932] Running
	I0501 02:33:27.900822   32853 system_pods.go:89] "kube-apiserver-ha-329926" [49c47f4f-663a-4407-9d46-94fa3afbf349] Running
	I0501 02:33:27.900829   32853 system_pods.go:89] "kube-apiserver-ha-329926-m02" [886d1acc-021c-4f8b-b477-b9760260aabb] Running
	I0501 02:33:27.900840   32853 system_pods.go:89] "kube-controller-manager-ha-329926" [332785d8-9966-4823-8828-fa5e90b4aac1] Running
	I0501 02:33:27.900847   32853 system_pods.go:89] "kube-controller-manager-ha-329926-m02" [91d97fa7-6409-4620-b569-c391d21a5915] Running
	I0501 02:33:27.900853   32853 system_pods.go:89] "kube-proxy-msshn" [7575fbfc-11ce-4223-bd99-ff9cdddd3568] Running
	I0501 02:33:27.900864   32853 system_pods.go:89] "kube-proxy-rfsm8" [f0510b55-1b59-4239-b529-b7af4d017c06] Running
	I0501 02:33:27.900871   32853 system_pods.go:89] "kube-scheduler-ha-329926" [7d45e3e9-cc7e-4b69-9219-61c3006013ea] Running
	I0501 02:33:27.900880   32853 system_pods.go:89] "kube-scheduler-ha-329926-m02" [075e127f-debf-4dd4-babd-be0930fb2ef7] Running
	I0501 02:33:27.900887   32853 system_pods.go:89] "kube-vip-ha-329926" [0fbbb815-441d-48d0-b0cf-1bb57ff6d993] Running
	I0501 02:33:27.900895   32853 system_pods.go:89] "kube-vip-ha-329926-m02" [92c115f8-bb9c-4a86-b914-984985a69916] Running
	I0501 02:33:27.900904   32853 system_pods.go:89] "storage-provisioner" [371423a6-a156-4e8d-bf66-812d606cc8d7] Running
	I0501 02:33:27.900913   32853 system_pods.go:126] duration metric: took 206.988594ms to wait for k8s-apps to be running ...
	I0501 02:33:27.900927   32853 system_svc.go:44] waiting for kubelet service to be running ....
	I0501 02:33:27.900977   32853 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 02:33:27.917057   32853 system_svc.go:56] duration metric: took 16.105865ms WaitForService to wait for kubelet
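The kubelet check above shells out over SSH and relies on systemctl's exit status: is-active --quiet exits 0 only when the unit is active. A local, non-SSH sketch of the plain systemctl equivalent (assuming os/exec and fmt are imported; this is not minikube's ssh_runner):

    // Exit code 0 from systemctl means the kubelet unit is active.
    cmd := exec.Command("systemctl", "is-active", "--quiet", "kubelet")
    if err := cmd.Run(); err != nil {
    	fmt.Println("kubelet is not active:", err)
    } else {
    	fmt.Println("kubelet is active")
    }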
	I0501 02:33:27.917082   32853 kubeadm.go:576] duration metric: took 16.254189789s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0501 02:33:27.917099   32853 node_conditions.go:102] verifying NodePressure condition ...
	I0501 02:33:28.089485   32853 request.go:629] Waited for 172.305995ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/nodes
	I0501 02:33:28.089541   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes
	I0501 02:33:28.089546   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:28.089553   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:28.089557   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:28.093499   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:33:28.094277   32853 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 02:33:28.094298   32853 node_conditions.go:123] node cpu capacity is 2
	I0501 02:33:28.094312   32853 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 02:33:28.094318   32853 node_conditions.go:123] node cpu capacity is 2
	I0501 02:33:28.094323   32853 node_conditions.go:105] duration metric: took 177.218719ms to run NodePressure ...
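The NodePressure step lists every node and reads the capacity each one reports, which is where the "storage ephemeral capacity is 17734596Ki" and "cpu capacity is 2" lines come from. A small fragment doing the same listing, reusing cs and the imports from the first sketch:

    nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
    if err != nil {
    	panic(err)
    }
    for _, n := range nodes.Items {
    	// Capacity is a map of resource name to quantity on the node status.
    	storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    	cpu := n.Status.Capacity[corev1.ResourceCPU]
    	fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
    }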
	I0501 02:33:28.094336   32853 start.go:240] waiting for startup goroutines ...
	I0501 02:33:28.094364   32853 start.go:254] writing updated cluster config ...
	I0501 02:33:28.096419   32853 out.go:177] 
	I0501 02:33:28.097791   32853 config.go:182] Loaded profile config "ha-329926": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 02:33:28.097893   32853 profile.go:143] Saving config to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/config.json ...
	I0501 02:33:28.099538   32853 out.go:177] * Starting "ha-329926-m03" control-plane node in "ha-329926" cluster
	I0501 02:33:28.100767   32853 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0501 02:33:28.100801   32853 cache.go:56] Caching tarball of preloaded images
	I0501 02:33:28.100915   32853 preload.go:173] Found /home/jenkins/minikube-integration/18779-13391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0501 02:33:28.100932   32853 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0501 02:33:28.101053   32853 profile.go:143] Saving config to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/config.json ...
	I0501 02:33:28.101262   32853 start.go:360] acquireMachinesLock for ha-329926-m03: {Name:mkc4548e5afe1d4b0a833f6c522103562b0cefff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0501 02:33:28.101311   32853 start.go:364] duration metric: took 25.235µs to acquireMachinesLock for "ha-329926-m03"
	I0501 02:33:28.101336   32853 start.go:93] Provisioning new machine with config: &{Name:ha-329926 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-329926 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.5 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.79 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0501 02:33:28.101461   32853 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0501 02:33:28.103040   32853 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0501 02:33:28.103111   32853 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:33:28.103139   32853 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:33:28.117788   32853 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37789
	I0501 02:33:28.118265   32853 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:33:28.118822   32853 main.go:141] libmachine: Using API Version  1
	I0501 02:33:28.118846   32853 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:33:28.119143   32853 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:33:28.119367   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetMachineName
	I0501 02:33:28.119501   32853 main.go:141] libmachine: (ha-329926-m03) Calling .DriverName
	I0501 02:33:28.119666   32853 start.go:159] libmachine.API.Create for "ha-329926" (driver="kvm2")
	I0501 02:33:28.119696   32853 client.go:168] LocalClient.Create starting
	I0501 02:33:28.119739   32853 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem
	I0501 02:33:28.119778   32853 main.go:141] libmachine: Decoding PEM data...
	I0501 02:33:28.119800   32853 main.go:141] libmachine: Parsing certificate...
	I0501 02:33:28.119866   32853 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem
	I0501 02:33:28.119891   32853 main.go:141] libmachine: Decoding PEM data...
	I0501 02:33:28.119910   32853 main.go:141] libmachine: Parsing certificate...
	I0501 02:33:28.119931   32853 main.go:141] libmachine: Running pre-create checks...
	I0501 02:33:28.119942   32853 main.go:141] libmachine: (ha-329926-m03) Calling .PreCreateCheck
	I0501 02:33:28.120080   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetConfigRaw
	I0501 02:33:28.120474   32853 main.go:141] libmachine: Creating machine...
	I0501 02:33:28.120492   32853 main.go:141] libmachine: (ha-329926-m03) Calling .Create
	I0501 02:33:28.120604   32853 main.go:141] libmachine: (ha-329926-m03) Creating KVM machine...
	I0501 02:33:28.122036   32853 main.go:141] libmachine: (ha-329926-m03) DBG | found existing default KVM network
	I0501 02:33:28.122204   32853 main.go:141] libmachine: (ha-329926-m03) DBG | found existing private KVM network mk-ha-329926
	I0501 02:33:28.122370   32853 main.go:141] libmachine: (ha-329926-m03) Setting up store path in /home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m03 ...
	I0501 02:33:28.122409   32853 main.go:141] libmachine: (ha-329926-m03) Building disk image from file:///home/jenkins/minikube-integration/18779-13391/.minikube/cache/iso/amd64/minikube-v1.33.0-1714498396-18779-amd64.iso
	I0501 02:33:28.122457   32853 main.go:141] libmachine: (ha-329926-m03) DBG | I0501 02:33:28.122345   33738 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18779-13391/.minikube
	I0501 02:33:28.122564   32853 main.go:141] libmachine: (ha-329926-m03) Downloading /home/jenkins/minikube-integration/18779-13391/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18779-13391/.minikube/cache/iso/amd64/minikube-v1.33.0-1714498396-18779-amd64.iso...
	I0501 02:33:28.332066   32853 main.go:141] libmachine: (ha-329926-m03) DBG | I0501 02:33:28.331943   33738 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m03/id_rsa...
	I0501 02:33:28.547024   32853 main.go:141] libmachine: (ha-329926-m03) DBG | I0501 02:33:28.546919   33738 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m03/ha-329926-m03.rawdisk...
	I0501 02:33:28.547051   32853 main.go:141] libmachine: (ha-329926-m03) DBG | Writing magic tar header
	I0501 02:33:28.547061   32853 main.go:141] libmachine: (ha-329926-m03) DBG | Writing SSH key tar header
	I0501 02:33:28.547069   32853 main.go:141] libmachine: (ha-329926-m03) DBG | I0501 02:33:28.547024   33738 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m03 ...
	I0501 02:33:28.547158   32853 main.go:141] libmachine: (ha-329926-m03) Setting executable bit set on /home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m03 (perms=drwx------)
	I0501 02:33:28.547182   32853 main.go:141] libmachine: (ha-329926-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m03
	I0501 02:33:28.547190   32853 main.go:141] libmachine: (ha-329926-m03) Setting executable bit set on /home/jenkins/minikube-integration/18779-13391/.minikube/machines (perms=drwxr-xr-x)
	I0501 02:33:28.547197   32853 main.go:141] libmachine: (ha-329926-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18779-13391/.minikube/machines
	I0501 02:33:28.547207   32853 main.go:141] libmachine: (ha-329926-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18779-13391/.minikube
	I0501 02:33:28.547214   32853 main.go:141] libmachine: (ha-329926-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18779-13391
	I0501 02:33:28.547226   32853 main.go:141] libmachine: (ha-329926-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0501 02:33:28.547232   32853 main.go:141] libmachine: (ha-329926-m03) DBG | Checking permissions on dir: /home/jenkins
	I0501 02:33:28.547238   32853 main.go:141] libmachine: (ha-329926-m03) DBG | Checking permissions on dir: /home
	I0501 02:33:28.547243   32853 main.go:141] libmachine: (ha-329926-m03) DBG | Skipping /home - not owner
	I0501 02:33:28.547257   32853 main.go:141] libmachine: (ha-329926-m03) Setting executable bit set on /home/jenkins/minikube-integration/18779-13391/.minikube (perms=drwxr-xr-x)
	I0501 02:33:28.547269   32853 main.go:141] libmachine: (ha-329926-m03) Setting executable bit set on /home/jenkins/minikube-integration/18779-13391 (perms=drwxrwxr-x)
	I0501 02:33:28.547280   32853 main.go:141] libmachine: (ha-329926-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0501 02:33:28.547285   32853 main.go:141] libmachine: (ha-329926-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0501 02:33:28.547292   32853 main.go:141] libmachine: (ha-329926-m03) Creating domain...
	I0501 02:33:28.548119   32853 main.go:141] libmachine: (ha-329926-m03) define libvirt domain using xml: 
	I0501 02:33:28.548141   32853 main.go:141] libmachine: (ha-329926-m03) <domain type='kvm'>
	I0501 02:33:28.548157   32853 main.go:141] libmachine: (ha-329926-m03)   <name>ha-329926-m03</name>
	I0501 02:33:28.548166   32853 main.go:141] libmachine: (ha-329926-m03)   <memory unit='MiB'>2200</memory>
	I0501 02:33:28.548177   32853 main.go:141] libmachine: (ha-329926-m03)   <vcpu>2</vcpu>
	I0501 02:33:28.548188   32853 main.go:141] libmachine: (ha-329926-m03)   <features>
	I0501 02:33:28.548197   32853 main.go:141] libmachine: (ha-329926-m03)     <acpi/>
	I0501 02:33:28.548210   32853 main.go:141] libmachine: (ha-329926-m03)     <apic/>
	I0501 02:33:28.548229   32853 main.go:141] libmachine: (ha-329926-m03)     <pae/>
	I0501 02:33:28.548245   32853 main.go:141] libmachine: (ha-329926-m03)     
	I0501 02:33:28.548257   32853 main.go:141] libmachine: (ha-329926-m03)   </features>
	I0501 02:33:28.548272   32853 main.go:141] libmachine: (ha-329926-m03)   <cpu mode='host-passthrough'>
	I0501 02:33:28.548298   32853 main.go:141] libmachine: (ha-329926-m03)   
	I0501 02:33:28.548321   32853 main.go:141] libmachine: (ha-329926-m03)   </cpu>
	I0501 02:33:28.548330   32853 main.go:141] libmachine: (ha-329926-m03)   <os>
	I0501 02:33:28.548343   32853 main.go:141] libmachine: (ha-329926-m03)     <type>hvm</type>
	I0501 02:33:28.548358   32853 main.go:141] libmachine: (ha-329926-m03)     <boot dev='cdrom'/>
	I0501 02:33:28.548369   32853 main.go:141] libmachine: (ha-329926-m03)     <boot dev='hd'/>
	I0501 02:33:28.548378   32853 main.go:141] libmachine: (ha-329926-m03)     <bootmenu enable='no'/>
	I0501 02:33:28.548388   32853 main.go:141] libmachine: (ha-329926-m03)   </os>
	I0501 02:33:28.548396   32853 main.go:141] libmachine: (ha-329926-m03)   <devices>
	I0501 02:33:28.548407   32853 main.go:141] libmachine: (ha-329926-m03)     <disk type='file' device='cdrom'>
	I0501 02:33:28.548425   32853 main.go:141] libmachine: (ha-329926-m03)       <source file='/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m03/boot2docker.iso'/>
	I0501 02:33:28.548436   32853 main.go:141] libmachine: (ha-329926-m03)       <target dev='hdc' bus='scsi'/>
	I0501 02:33:28.548446   32853 main.go:141] libmachine: (ha-329926-m03)       <readonly/>
	I0501 02:33:28.548455   32853 main.go:141] libmachine: (ha-329926-m03)     </disk>
	I0501 02:33:28.548465   32853 main.go:141] libmachine: (ha-329926-m03)     <disk type='file' device='disk'>
	I0501 02:33:28.548476   32853 main.go:141] libmachine: (ha-329926-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0501 02:33:28.548485   32853 main.go:141] libmachine: (ha-329926-m03)       <source file='/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m03/ha-329926-m03.rawdisk'/>
	I0501 02:33:28.548496   32853 main.go:141] libmachine: (ha-329926-m03)       <target dev='hda' bus='virtio'/>
	I0501 02:33:28.548511   32853 main.go:141] libmachine: (ha-329926-m03)     </disk>
	I0501 02:33:28.548528   32853 main.go:141] libmachine: (ha-329926-m03)     <interface type='network'>
	I0501 02:33:28.548544   32853 main.go:141] libmachine: (ha-329926-m03)       <source network='mk-ha-329926'/>
	I0501 02:33:28.548556   32853 main.go:141] libmachine: (ha-329926-m03)       <model type='virtio'/>
	I0501 02:33:28.548569   32853 main.go:141] libmachine: (ha-329926-m03)     </interface>
	I0501 02:33:28.548581   32853 main.go:141] libmachine: (ha-329926-m03)     <interface type='network'>
	I0501 02:33:28.548608   32853 main.go:141] libmachine: (ha-329926-m03)       <source network='default'/>
	I0501 02:33:28.548638   32853 main.go:141] libmachine: (ha-329926-m03)       <model type='virtio'/>
	I0501 02:33:28.548649   32853 main.go:141] libmachine: (ha-329926-m03)     </interface>
	I0501 02:33:28.548660   32853 main.go:141] libmachine: (ha-329926-m03)     <serial type='pty'>
	I0501 02:33:28.548670   32853 main.go:141] libmachine: (ha-329926-m03)       <target port='0'/>
	I0501 02:33:28.548680   32853 main.go:141] libmachine: (ha-329926-m03)     </serial>
	I0501 02:33:28.548691   32853 main.go:141] libmachine: (ha-329926-m03)     <console type='pty'>
	I0501 02:33:28.548701   32853 main.go:141] libmachine: (ha-329926-m03)       <target type='serial' port='0'/>
	I0501 02:33:28.548712   32853 main.go:141] libmachine: (ha-329926-m03)     </console>
	I0501 02:33:28.548722   32853 main.go:141] libmachine: (ha-329926-m03)     <rng model='virtio'>
	I0501 02:33:28.548736   32853 main.go:141] libmachine: (ha-329926-m03)       <backend model='random'>/dev/random</backend>
	I0501 02:33:28.548752   32853 main.go:141] libmachine: (ha-329926-m03)     </rng>
	I0501 02:33:28.548764   32853 main.go:141] libmachine: (ha-329926-m03)     
	I0501 02:33:28.548775   32853 main.go:141] libmachine: (ha-329926-m03)     
	I0501 02:33:28.548787   32853 main.go:141] libmachine: (ha-329926-m03)   </devices>
	I0501 02:33:28.548797   32853 main.go:141] libmachine: (ha-329926-m03) </domain>
	I0501 02:33:28.548809   32853 main.go:141] libmachine: (ha-329926-m03) 
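The block above is the kvm2 driver emitting a libvirt <domain> definition for ha-329926-m03: 2 vCPUs, 2200 MiB of memory, the boot2docker ISO attached as a cdrom, the raw disk as a virtio disk, and two virtio NICs (mk-ha-329926 and default). Below is a minimal sketch of producing a similar definition with Go's text/template; the template is abridged, the paths are placeholders, and the struct/field names are illustrative rather than the driver's own.

// Sketch: render a stripped-down libvirt domain XML like the one logged above.
package main

import (
	"os"
	"text/template"
)

type domainParams struct {
	Name    string
	MemMiB  int
	VCPUs   int
	ISO     string
	Disk    string
	Network string
}

const domainXML = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemMiB}}</memory>
  <vcpu>{{.VCPUs}}</vcpu>
  <os><type>hvm</type><boot dev='cdrom'/><boot dev='hd'/></os>
  <devices>
    <disk type='file' device='cdrom'><source file='{{.ISO}}'/><target dev='hdc' bus='scsi'/><readonly/></disk>
    <disk type='file' device='disk'><driver name='qemu' type='raw'/><source file='{{.Disk}}'/><target dev='hda' bus='virtio'/></disk>
    <interface type='network'><source network='{{.Network}}'/><model type='virtio'/></interface>
    <interface type='network'><source network='default'/><model type='virtio'/></interface>
  </devices>
</domain>
`

func main() {
	t := template.Must(template.New("domain").Parse(domainXML))
	// Values taken from the log; file paths shortened for readability.
	p := domainParams{Name: "ha-329926-m03", MemMiB: 2200, VCPUs: 2,
		ISO: "/path/to/boot2docker.iso", Disk: "/path/to/ha-329926-m03.rawdisk",
		Network: "mk-ha-329926"}
	if err := t.Execute(os.Stdout, p); err != nil { // rendered XML is then handed to libvirt
		panic(err)
	}
}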
	I0501 02:33:28.555383   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined MAC address 52:54:00:67:d4:d7 in network default
	I0501 02:33:28.555898   32853 main.go:141] libmachine: (ha-329926-m03) Ensuring networks are active...
	I0501 02:33:28.555917   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:28.556546   32853 main.go:141] libmachine: (ha-329926-m03) Ensuring network default is active
	I0501 02:33:28.556865   32853 main.go:141] libmachine: (ha-329926-m03) Ensuring network mk-ha-329926 is active
	I0501 02:33:28.557213   32853 main.go:141] libmachine: (ha-329926-m03) Getting domain xml...
	I0501 02:33:28.557937   32853 main.go:141] libmachine: (ha-329926-m03) Creating domain...
	I0501 02:33:29.753981   32853 main.go:141] libmachine: (ha-329926-m03) Waiting to get IP...
	I0501 02:33:29.754874   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:29.755233   32853 main.go:141] libmachine: (ha-329926-m03) DBG | unable to find current IP address of domain ha-329926-m03 in network mk-ha-329926
	I0501 02:33:29.755257   32853 main.go:141] libmachine: (ha-329926-m03) DBG | I0501 02:33:29.755213   33738 retry.go:31] will retry after 264.426048ms: waiting for machine to come up
	I0501 02:33:30.021622   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:30.022090   32853 main.go:141] libmachine: (ha-329926-m03) DBG | unable to find current IP address of domain ha-329926-m03 in network mk-ha-329926
	I0501 02:33:30.022125   32853 main.go:141] libmachine: (ha-329926-m03) DBG | I0501 02:33:30.022034   33738 retry.go:31] will retry after 236.771649ms: waiting for machine to come up
	I0501 02:33:30.260504   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:30.260950   32853 main.go:141] libmachine: (ha-329926-m03) DBG | unable to find current IP address of domain ha-329926-m03 in network mk-ha-329926
	I0501 02:33:30.260982   32853 main.go:141] libmachine: (ha-329926-m03) DBG | I0501 02:33:30.260914   33738 retry.go:31] will retry after 381.572111ms: waiting for machine to come up
	I0501 02:33:30.644643   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:30.645170   32853 main.go:141] libmachine: (ha-329926-m03) DBG | unable to find current IP address of domain ha-329926-m03 in network mk-ha-329926
	I0501 02:33:30.645211   32853 main.go:141] libmachine: (ha-329926-m03) DBG | I0501 02:33:30.645114   33738 retry.go:31] will retry after 576.635524ms: waiting for machine to come up
	I0501 02:33:31.223856   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:31.224393   32853 main.go:141] libmachine: (ha-329926-m03) DBG | unable to find current IP address of domain ha-329926-m03 in network mk-ha-329926
	I0501 02:33:31.224423   32853 main.go:141] libmachine: (ha-329926-m03) DBG | I0501 02:33:31.224340   33738 retry.go:31] will retry after 695.353018ms: waiting for machine to come up
	I0501 02:33:31.920747   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:31.921137   32853 main.go:141] libmachine: (ha-329926-m03) DBG | unable to find current IP address of domain ha-329926-m03 in network mk-ha-329926
	I0501 02:33:31.921166   32853 main.go:141] libmachine: (ha-329926-m03) DBG | I0501 02:33:31.921101   33738 retry.go:31] will retry after 744.069404ms: waiting for machine to come up
	I0501 02:33:32.666979   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:32.667389   32853 main.go:141] libmachine: (ha-329926-m03) DBG | unable to find current IP address of domain ha-329926-m03 in network mk-ha-329926
	I0501 02:33:32.667414   32853 main.go:141] libmachine: (ha-329926-m03) DBG | I0501 02:33:32.667359   33738 retry.go:31] will retry after 1.005854202s: waiting for machine to come up
	I0501 02:33:33.675019   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:33.675426   32853 main.go:141] libmachine: (ha-329926-m03) DBG | unable to find current IP address of domain ha-329926-m03 in network mk-ha-329926
	I0501 02:33:33.675449   32853 main.go:141] libmachine: (ha-329926-m03) DBG | I0501 02:33:33.675390   33738 retry.go:31] will retry after 1.01541658s: waiting for machine to come up
	I0501 02:33:34.692612   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:34.693194   32853 main.go:141] libmachine: (ha-329926-m03) DBG | unable to find current IP address of domain ha-329926-m03 in network mk-ha-329926
	I0501 02:33:34.693223   32853 main.go:141] libmachine: (ha-329926-m03) DBG | I0501 02:33:34.693141   33738 retry.go:31] will retry after 1.74258816s: waiting for machine to come up
	I0501 02:33:36.437450   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:36.437789   32853 main.go:141] libmachine: (ha-329926-m03) DBG | unable to find current IP address of domain ha-329926-m03 in network mk-ha-329926
	I0501 02:33:36.437830   32853 main.go:141] libmachine: (ha-329926-m03) DBG | I0501 02:33:36.437746   33738 retry.go:31] will retry after 1.680882888s: waiting for machine to come up
	I0501 02:33:38.120586   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:38.121045   32853 main.go:141] libmachine: (ha-329926-m03) DBG | unable to find current IP address of domain ha-329926-m03 in network mk-ha-329926
	I0501 02:33:38.121070   32853 main.go:141] libmachine: (ha-329926-m03) DBG | I0501 02:33:38.121011   33738 retry.go:31] will retry after 2.761042118s: waiting for machine to come up
	I0501 02:33:40.883703   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:40.884076   32853 main.go:141] libmachine: (ha-329926-m03) DBG | unable to find current IP address of domain ha-329926-m03 in network mk-ha-329926
	I0501 02:33:40.884117   32853 main.go:141] libmachine: (ha-329926-m03) DBG | I0501 02:33:40.884008   33738 retry.go:31] will retry after 2.930624255s: waiting for machine to come up
	I0501 02:33:43.816571   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:43.816974   32853 main.go:141] libmachine: (ha-329926-m03) DBG | unable to find current IP address of domain ha-329926-m03 in network mk-ha-329926
	I0501 02:33:43.817009   32853 main.go:141] libmachine: (ha-329926-m03) DBG | I0501 02:33:43.816935   33738 retry.go:31] will retry after 3.065921207s: waiting for machine to come up
	I0501 02:33:46.884687   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:46.885111   32853 main.go:141] libmachine: (ha-329926-m03) DBG | unable to find current IP address of domain ha-329926-m03 in network mk-ha-329926
	I0501 02:33:46.885137   32853 main.go:141] libmachine: (ha-329926-m03) DBG | I0501 02:33:46.885085   33738 retry.go:31] will retry after 3.477878953s: waiting for machine to come up
	I0501 02:33:50.365711   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:50.366223   32853 main.go:141] libmachine: (ha-329926-m03) Found IP for machine: 192.168.39.115
	I0501 02:33:50.366257   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has current primary IP address 192.168.39.115 and MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
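The "Waiting to get IP" sequence above polls the network's DHCP leases for the new domain's MAC address, retrying with a growing delay ("will retry after ...") until the lease for 52:54:00:f9:eb:7d shows 192.168.39.115. A minimal sketch of that poll-with-backoff pattern follows; lookupIP is a hypothetical stand-in for the lease query, not a real libvirt call.

// Sketch: retry with an increasing delay until an IP appears, as in the log above.
package main

import (
	"errors"
	"fmt"
	"time"
)

// lookupIP stands in for querying the libvirt network's DHCP leases by MAC.
func lookupIP(mac string) (string, error) {
	return "", errors.New("no lease yet") // pretend the lease has not appeared
}

func waitForIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(mac); err == nil && ip != "" {
			return ip, nil
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		delay += delay / 2 // grow the wait between attempts
	}
	return "", fmt.Errorf("no IP for %s within %v", mac, timeout)
}

func main() {
	if ip, err := waitForIP("52:54:00:f9:eb:7d", 2*time.Second); err != nil {
		fmt.Println(err)
	} else {
		fmt.Println("found IP:", ip)
	}
}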
	I0501 02:33:50.366266   32853 main.go:141] libmachine: (ha-329926-m03) Reserving static IP address...
	I0501 02:33:50.366601   32853 main.go:141] libmachine: (ha-329926-m03) DBG | unable to find host DHCP lease matching {name: "ha-329926-m03", mac: "52:54:00:f9:eb:7d", ip: "192.168.39.115"} in network mk-ha-329926
	I0501 02:33:50.439427   32853 main.go:141] libmachine: (ha-329926-m03) DBG | Getting to WaitForSSH function...
	I0501 02:33:50.439449   32853 main.go:141] libmachine: (ha-329926-m03) Reserved static IP address: 192.168.39.115
	I0501 02:33:50.439462   32853 main.go:141] libmachine: (ha-329926-m03) Waiting for SSH to be available...
	I0501 02:33:50.441962   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:50.442330   32853 main.go:141] libmachine: (ha-329926-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:f9:eb:7d", ip: ""} in network mk-ha-329926
	I0501 02:33:50.442357   32853 main.go:141] libmachine: (ha-329926-m03) DBG | unable to find defined IP address of network mk-ha-329926 interface with MAC address 52:54:00:f9:eb:7d
	I0501 02:33:50.442600   32853 main.go:141] libmachine: (ha-329926-m03) DBG | Using SSH client type: external
	I0501 02:33:50.442628   32853 main.go:141] libmachine: (ha-329926-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m03/id_rsa (-rw-------)
	I0501 02:33:50.442655   32853 main.go:141] libmachine: (ha-329926-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0501 02:33:50.442668   32853 main.go:141] libmachine: (ha-329926-m03) DBG | About to run SSH command:
	I0501 02:33:50.442704   32853 main.go:141] libmachine: (ha-329926-m03) DBG | exit 0
	I0501 02:33:50.446087   32853 main.go:141] libmachine: (ha-329926-m03) DBG | SSH cmd err, output: exit status 255: 
	I0501 02:33:50.446106   32853 main.go:141] libmachine: (ha-329926-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0501 02:33:50.446113   32853 main.go:141] libmachine: (ha-329926-m03) DBG | command : exit 0
	I0501 02:33:50.446121   32853 main.go:141] libmachine: (ha-329926-m03) DBG | err     : exit status 255
	I0501 02:33:50.446128   32853 main.go:141] libmachine: (ha-329926-m03) DBG | output  : 
	I0501 02:33:53.446971   32853 main.go:141] libmachine: (ha-329926-m03) DBG | Getting to WaitForSSH function...
	I0501 02:33:53.449841   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:53.450179   32853 main.go:141] libmachine: (ha-329926-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:eb:7d", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:33:44 +0000 UTC Type:0 Mac:52:54:00:f9:eb:7d Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:ha-329926-m03 Clientid:01:52:54:00:f9:eb:7d}
	I0501 02:33:53.450204   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined IP address 192.168.39.115 and MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:53.450301   32853 main.go:141] libmachine: (ha-329926-m03) DBG | Using SSH client type: external
	I0501 02:33:53.450327   32853 main.go:141] libmachine: (ha-329926-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m03/id_rsa (-rw-------)
	I0501 02:33:53.450372   32853 main.go:141] libmachine: (ha-329926-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.115 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0501 02:33:53.450389   32853 main.go:141] libmachine: (ha-329926-m03) DBG | About to run SSH command:
	I0501 02:33:53.450420   32853 main.go:141] libmachine: (ha-329926-m03) DBG | exit 0
	I0501 02:33:53.578919   32853 main.go:141] libmachine: (ha-329926-m03) DBG | SSH cmd err, output: <nil>: 
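WaitForSSH above shells out to the external ssh client with the key and options shown (StrictHostKeyChecking=no, UserKnownHostsFile=/dev/null, ConnectTimeout=10, ...) and runs `exit 0`; the first attempt fails with exit status 255 because the interface has no address yet, and the retry three seconds later succeeds. A minimal sketch of that readiness probe, assuming an `ssh` binary on PATH and placeholder key path:

// Sketch: probe SSH readiness the way the log does, by running `exit 0` remotely.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady returns true once `ssh ... exit 0` exits cleanly.
func sshReady(host, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-i", keyPath,
		"docker@"+host, "exit 0")
	return cmd.Run() == nil // a non-nil error covers exit status 255, timeouts, etc.
}

func main() {
	for i := 0; i < 5; i++ {
		if sshReady("192.168.39.115", "/path/to/id_rsa") {
			fmt.Println("ssh is available")
			return
		}
		time.Sleep(3 * time.Second) // the log also waits ~3s between attempts
	}
	fmt.Println("ssh never became available")
}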
	I0501 02:33:53.579202   32853 main.go:141] libmachine: (ha-329926-m03) KVM machine creation complete!
	I0501 02:33:53.579498   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetConfigRaw
	I0501 02:33:53.580099   32853 main.go:141] libmachine: (ha-329926-m03) Calling .DriverName
	I0501 02:33:53.580316   32853 main.go:141] libmachine: (ha-329926-m03) Calling .DriverName
	I0501 02:33:53.580468   32853 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0501 02:33:53.580481   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetState
	I0501 02:33:53.581566   32853 main.go:141] libmachine: Detecting operating system of created instance...
	I0501 02:33:53.581578   32853 main.go:141] libmachine: Waiting for SSH to be available...
	I0501 02:33:53.581586   32853 main.go:141] libmachine: Getting to WaitForSSH function...
	I0501 02:33:53.581593   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHHostname
	I0501 02:33:53.584271   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:53.584731   32853 main.go:141] libmachine: (ha-329926-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:eb:7d", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:33:44 +0000 UTC Type:0 Mac:52:54:00:f9:eb:7d Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:ha-329926-m03 Clientid:01:52:54:00:f9:eb:7d}
	I0501 02:33:53.584758   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined IP address 192.168.39.115 and MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:53.584924   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHPort
	I0501 02:33:53.585094   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHKeyPath
	I0501 02:33:53.585243   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHKeyPath
	I0501 02:33:53.585381   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHUsername
	I0501 02:33:53.585530   32853 main.go:141] libmachine: Using SSH client type: native
	I0501 02:33:53.585733   32853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.115 22 <nil> <nil>}
	I0501 02:33:53.585748   32853 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0501 02:33:53.701991   32853 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0501 02:33:53.702026   32853 main.go:141] libmachine: Detecting the provisioner...
	I0501 02:33:53.702034   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHHostname
	I0501 02:33:53.704820   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:53.705152   32853 main.go:141] libmachine: (ha-329926-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:eb:7d", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:33:44 +0000 UTC Type:0 Mac:52:54:00:f9:eb:7d Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:ha-329926-m03 Clientid:01:52:54:00:f9:eb:7d}
	I0501 02:33:53.705179   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined IP address 192.168.39.115 and MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:53.705311   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHPort
	I0501 02:33:53.705484   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHKeyPath
	I0501 02:33:53.705664   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHKeyPath
	I0501 02:33:53.705762   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHUsername
	I0501 02:33:53.705942   32853 main.go:141] libmachine: Using SSH client type: native
	I0501 02:33:53.706095   32853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.115 22 <nil> <nil>}
	I0501 02:33:53.706106   32853 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0501 02:33:53.819574   32853 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0501 02:33:53.819644   32853 main.go:141] libmachine: found compatible host: buildroot
	I0501 02:33:53.819657   32853 main.go:141] libmachine: Provisioning with buildroot...
	I0501 02:33:53.819670   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetMachineName
	I0501 02:33:53.819887   32853 buildroot.go:166] provisioning hostname "ha-329926-m03"
	I0501 02:33:53.819912   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetMachineName
	I0501 02:33:53.820059   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHHostname
	I0501 02:33:53.822803   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:53.823211   32853 main.go:141] libmachine: (ha-329926-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:eb:7d", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:33:44 +0000 UTC Type:0 Mac:52:54:00:f9:eb:7d Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:ha-329926-m03 Clientid:01:52:54:00:f9:eb:7d}
	I0501 02:33:53.823238   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined IP address 192.168.39.115 and MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:53.823413   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHPort
	I0501 02:33:53.823590   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHKeyPath
	I0501 02:33:53.823759   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHKeyPath
	I0501 02:33:53.823948   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHUsername
	I0501 02:33:53.824130   32853 main.go:141] libmachine: Using SSH client type: native
	I0501 02:33:53.824345   32853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.115 22 <nil> <nil>}
	I0501 02:33:53.824365   32853 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-329926-m03 && echo "ha-329926-m03" | sudo tee /etc/hostname
	I0501 02:33:53.958301   32853 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-329926-m03
	
	I0501 02:33:53.958334   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHHostname
	I0501 02:33:53.961097   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:53.961545   32853 main.go:141] libmachine: (ha-329926-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:eb:7d", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:33:44 +0000 UTC Type:0 Mac:52:54:00:f9:eb:7d Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:ha-329926-m03 Clientid:01:52:54:00:f9:eb:7d}
	I0501 02:33:53.961576   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined IP address 192.168.39.115 and MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:53.961774   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHPort
	I0501 02:33:53.961992   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHKeyPath
	I0501 02:33:53.962163   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHKeyPath
	I0501 02:33:53.962305   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHUsername
	I0501 02:33:53.962494   32853 main.go:141] libmachine: Using SSH client type: native
	I0501 02:33:53.962643   32853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.115 22 <nil> <nil>}
	I0501 02:33:53.962660   32853 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-329926-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-329926-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-329926-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
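The shell snippet above makes /etc/hosts agree with the new hostname: if no line already ends with ha-329926-m03, it either rewrites the 127.0.1.1 entry or appends one. A minimal sketch of the same decision applied to the file's contents in Go, for illustration only:

// Sketch: reproduce the /etc/hosts hostname logic from the shell snippet above.
package main

import (
	"fmt"
	"regexp"
	"strings"
)

func ensureHostname(hosts, hostname string) string {
	if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(hostname) + `$`).MatchString(hosts) {
		return hosts // an entry for the hostname already exists
	}
	loop := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loop.MatchString(hosts) {
		return loop.ReplaceAllString(hosts, "127.0.1.1 "+hostname)
	}
	return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + hostname + "\n"
}

func main() {
	in := "127.0.0.1 localhost\n127.0.1.1 old-name\n"
	fmt.Print(ensureHostname(in, "ha-329926-m03"))
}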
	I0501 02:33:54.089021   32853 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0501 02:33:54.089056   32853 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18779-13391/.minikube CaCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18779-13391/.minikube}
	I0501 02:33:54.089078   32853 buildroot.go:174] setting up certificates
	I0501 02:33:54.089092   32853 provision.go:84] configureAuth start
	I0501 02:33:54.089103   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetMachineName
	I0501 02:33:54.089417   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetIP
	I0501 02:33:54.091857   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:54.092181   32853 main.go:141] libmachine: (ha-329926-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:eb:7d", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:33:44 +0000 UTC Type:0 Mac:52:54:00:f9:eb:7d Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:ha-329926-m03 Clientid:01:52:54:00:f9:eb:7d}
	I0501 02:33:54.092211   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined IP address 192.168.39.115 and MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:54.092345   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHHostname
	I0501 02:33:54.094374   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:54.094820   32853 main.go:141] libmachine: (ha-329926-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:eb:7d", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:33:44 +0000 UTC Type:0 Mac:52:54:00:f9:eb:7d Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:ha-329926-m03 Clientid:01:52:54:00:f9:eb:7d}
	I0501 02:33:54.094854   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined IP address 192.168.39.115 and MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:54.095005   32853 provision.go:143] copyHostCerts
	I0501 02:33:54.095045   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem
	I0501 02:33:54.095085   32853 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem, removing ...
	I0501 02:33:54.095097   32853 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem
	I0501 02:33:54.095182   32853 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem (1078 bytes)
	I0501 02:33:54.095256   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem
	I0501 02:33:54.095276   32853 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem, removing ...
	I0501 02:33:54.095283   32853 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem
	I0501 02:33:54.095307   32853 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem (1123 bytes)
	I0501 02:33:54.095348   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem
	I0501 02:33:54.095366   32853 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem, removing ...
	I0501 02:33:54.095373   32853 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem
	I0501 02:33:54.095394   32853 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem (1675 bytes)
	I0501 02:33:54.095440   32853 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem org=jenkins.ha-329926-m03 san=[127.0.0.1 192.168.39.115 ha-329926-m03 localhost minikube]
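The provision.go line above generates a server certificate whose SANs are 127.0.0.1, 192.168.39.115, ha-329926-m03, localhost and minikube, signed against the profile's CA with org jenkins.ha-329926-m03. The sketch below builds a certificate with that SAN list using the standard library; it is self-signed for brevity (minikube signs with its CA key), and the 26280h lifetime is taken from the CertExpiration value in the config dump.

// Sketch: an x509 server cert with the SANs listed in the log above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-329926-m03"}},
		DNSNames:     []string{"ha-329926-m03", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.115")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}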
	I0501 02:33:54.224112   32853 provision.go:177] copyRemoteCerts
	I0501 02:33:54.224166   32853 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0501 02:33:54.224187   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHHostname
	I0501 02:33:54.226746   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:54.227156   32853 main.go:141] libmachine: (ha-329926-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:eb:7d", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:33:44 +0000 UTC Type:0 Mac:52:54:00:f9:eb:7d Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:ha-329926-m03 Clientid:01:52:54:00:f9:eb:7d}
	I0501 02:33:54.227183   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined IP address 192.168.39.115 and MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:54.227375   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHPort
	I0501 02:33:54.227570   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHKeyPath
	I0501 02:33:54.227725   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHUsername
	I0501 02:33:54.227861   32853 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m03/id_rsa Username:docker}
	I0501 02:33:54.314170   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0501 02:33:54.314242   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0501 02:33:54.340949   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0501 02:33:54.341014   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0501 02:33:54.367638   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0501 02:33:54.367713   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0501 02:33:54.395065   32853 provision.go:87] duration metric: took 305.962904ms to configureAuth
	I0501 02:33:54.395096   32853 buildroot.go:189] setting minikube options for container-runtime
	I0501 02:33:54.395366   32853 config.go:182] Loaded profile config "ha-329926": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 02:33:54.395472   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHHostname
	I0501 02:33:54.398240   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:54.398716   32853 main.go:141] libmachine: (ha-329926-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:eb:7d", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:33:44 +0000 UTC Type:0 Mac:52:54:00:f9:eb:7d Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:ha-329926-m03 Clientid:01:52:54:00:f9:eb:7d}
	I0501 02:33:54.398756   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined IP address 192.168.39.115 and MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:54.398961   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHPort
	I0501 02:33:54.399148   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHKeyPath
	I0501 02:33:54.399292   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHKeyPath
	I0501 02:33:54.399469   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHUsername
	I0501 02:33:54.399651   32853 main.go:141] libmachine: Using SSH client type: native
	I0501 02:33:54.399829   32853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.115 22 <nil> <nil>}
	I0501 02:33:54.399843   32853 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0501 02:33:54.690659   32853 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0501 02:33:54.690698   32853 main.go:141] libmachine: Checking connection to Docker...
	I0501 02:33:54.690706   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetURL
	I0501 02:33:54.691918   32853 main.go:141] libmachine: (ha-329926-m03) DBG | Using libvirt version 6000000
	I0501 02:33:54.694051   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:54.694359   32853 main.go:141] libmachine: (ha-329926-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:eb:7d", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:33:44 +0000 UTC Type:0 Mac:52:54:00:f9:eb:7d Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:ha-329926-m03 Clientid:01:52:54:00:f9:eb:7d}
	I0501 02:33:54.694427   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined IP address 192.168.39.115 and MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:54.694544   32853 main.go:141] libmachine: Docker is up and running!
	I0501 02:33:54.694558   32853 main.go:141] libmachine: Reticulating splines...
	I0501 02:33:54.694565   32853 client.go:171] duration metric: took 26.574862273s to LocalClient.Create
	I0501 02:33:54.694588   32853 start.go:167] duration metric: took 26.574922123s to libmachine.API.Create "ha-329926"
	I0501 02:33:54.694601   32853 start.go:293] postStartSetup for "ha-329926-m03" (driver="kvm2")
	I0501 02:33:54.694617   32853 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0501 02:33:54.694639   32853 main.go:141] libmachine: (ha-329926-m03) Calling .DriverName
	I0501 02:33:54.694843   32853 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0501 02:33:54.694865   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHHostname
	I0501 02:33:54.698015   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:54.698491   32853 main.go:141] libmachine: (ha-329926-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:eb:7d", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:33:44 +0000 UTC Type:0 Mac:52:54:00:f9:eb:7d Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:ha-329926-m03 Clientid:01:52:54:00:f9:eb:7d}
	I0501 02:33:54.698516   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined IP address 192.168.39.115 and MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:54.698686   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHPort
	I0501 02:33:54.698867   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHKeyPath
	I0501 02:33:54.699050   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHUsername
	I0501 02:33:54.699169   32853 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m03/id_rsa Username:docker}
	I0501 02:33:54.794701   32853 ssh_runner.go:195] Run: cat /etc/os-release
	I0501 02:33:54.799872   32853 info.go:137] Remote host: Buildroot 2023.02.9
	I0501 02:33:54.799897   32853 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13391/.minikube/addons for local assets ...
	I0501 02:33:54.799955   32853 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13391/.minikube/files for local assets ...
	I0501 02:33:54.800022   32853 filesync.go:149] local asset: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem -> 207242.pem in /etc/ssl/certs
	I0501 02:33:54.800032   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem -> /etc/ssl/certs/207242.pem
	I0501 02:33:54.800120   32853 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0501 02:33:54.813593   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem --> /etc/ssl/certs/207242.pem (1708 bytes)
	I0501 02:33:54.842323   32853 start.go:296] duration metric: took 147.707876ms for postStartSetup
	I0501 02:33:54.842369   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetConfigRaw
	I0501 02:33:54.843095   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetIP
	I0501 02:33:54.845640   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:54.845998   32853 main.go:141] libmachine: (ha-329926-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:eb:7d", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:33:44 +0000 UTC Type:0 Mac:52:54:00:f9:eb:7d Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:ha-329926-m03 Clientid:01:52:54:00:f9:eb:7d}
	I0501 02:33:54.846028   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined IP address 192.168.39.115 and MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:54.846276   32853 profile.go:143] Saving config to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/config.json ...
	I0501 02:33:54.846526   32853 start.go:128] duration metric: took 26.745052966s to createHost
	I0501 02:33:54.846548   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHHostname
	I0501 02:33:54.848541   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:54.848882   32853 main.go:141] libmachine: (ha-329926-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:eb:7d", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:33:44 +0000 UTC Type:0 Mac:52:54:00:f9:eb:7d Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:ha-329926-m03 Clientid:01:52:54:00:f9:eb:7d}
	I0501 02:33:54.848912   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined IP address 192.168.39.115 and MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:54.849053   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHPort
	I0501 02:33:54.849236   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHKeyPath
	I0501 02:33:54.849419   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHKeyPath
	I0501 02:33:54.849566   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHUsername
	I0501 02:33:54.849701   32853 main.go:141] libmachine: Using SSH client type: native
	I0501 02:33:54.849843   32853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.115 22 <nil> <nil>}
	I0501 02:33:54.849853   32853 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0501 02:33:54.964413   32853 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714530834.949291052
	
	I0501 02:33:54.964439   32853 fix.go:216] guest clock: 1714530834.949291052
	I0501 02:33:54.964449   32853 fix.go:229] Guest: 2024-05-01 02:33:54.949291052 +0000 UTC Remote: 2024-05-01 02:33:54.846538738 +0000 UTC m=+172.769036006 (delta=102.752314ms)
	I0501 02:33:54.964468   32853 fix.go:200] guest clock delta is within tolerance: 102.752314ms
	I0501 02:33:54.964474   32853 start.go:83] releasing machines lock for "ha-329926-m03", held for 26.863150367s
	I0501 02:33:54.964496   32853 main.go:141] libmachine: (ha-329926-m03) Calling .DriverName
	I0501 02:33:54.964764   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetIP
	I0501 02:33:54.967409   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:54.967787   32853 main.go:141] libmachine: (ha-329926-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:eb:7d", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:33:44 +0000 UTC Type:0 Mac:52:54:00:f9:eb:7d Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:ha-329926-m03 Clientid:01:52:54:00:f9:eb:7d}
	I0501 02:33:54.967819   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined IP address 192.168.39.115 and MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:54.969859   32853 out.go:177] * Found network options:
	I0501 02:33:54.971200   32853 out.go:177]   - NO_PROXY=192.168.39.5,192.168.39.79
	W0501 02:33:54.972418   32853 proxy.go:119] fail to check proxy env: Error ip not in block
	W0501 02:33:54.972447   32853 proxy.go:119] fail to check proxy env: Error ip not in block
	I0501 02:33:54.972465   32853 main.go:141] libmachine: (ha-329926-m03) Calling .DriverName
	I0501 02:33:54.972936   32853 main.go:141] libmachine: (ha-329926-m03) Calling .DriverName
	I0501 02:33:54.973099   32853 main.go:141] libmachine: (ha-329926-m03) Calling .DriverName
	I0501 02:33:54.973193   32853 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0501 02:33:54.973232   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHHostname
	W0501 02:33:54.973294   32853 proxy.go:119] fail to check proxy env: Error ip not in block
	W0501 02:33:54.973313   32853 proxy.go:119] fail to check proxy env: Error ip not in block
	I0501 02:33:54.973365   32853 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0501 02:33:54.973385   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHHostname
	I0501 02:33:54.976075   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:54.976253   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:54.976478   32853 main.go:141] libmachine: (ha-329926-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:eb:7d", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:33:44 +0000 UTC Type:0 Mac:52:54:00:f9:eb:7d Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:ha-329926-m03 Clientid:01:52:54:00:f9:eb:7d}
	I0501 02:33:54.976515   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined IP address 192.168.39.115 and MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:54.976617   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHPort
	I0501 02:33:54.976726   32853 main.go:141] libmachine: (ha-329926-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:eb:7d", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:33:44 +0000 UTC Type:0 Mac:52:54:00:f9:eb:7d Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:ha-329926-m03 Clientid:01:52:54:00:f9:eb:7d}
	I0501 02:33:54.976749   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined IP address 192.168.39.115 and MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:54.976782   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHKeyPath
	I0501 02:33:54.976915   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHPort
	I0501 02:33:54.976973   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHUsername
	I0501 02:33:54.977095   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHKeyPath
	I0501 02:33:54.977164   32853 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m03/id_rsa Username:docker}
	I0501 02:33:54.977245   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHUsername
	I0501 02:33:54.977373   32853 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m03/id_rsa Username:docker}
	I0501 02:33:55.228774   32853 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0501 02:33:55.236740   32853 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0501 02:33:55.236805   32853 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0501 02:33:55.256868   32853 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0501 02:33:55.256893   32853 start.go:494] detecting cgroup driver to use...
	I0501 02:33:55.256963   32853 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0501 02:33:55.278379   32853 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 02:33:55.295278   32853 docker.go:217] disabling cri-docker service (if available) ...
	I0501 02:33:55.295367   32853 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0501 02:33:55.310071   32853 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0501 02:33:55.324746   32853 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0501 02:33:55.450716   32853 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0501 02:33:55.601554   32853 docker.go:233] disabling docker service ...
	I0501 02:33:55.601613   32853 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0501 02:33:55.620391   32853 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0501 02:33:55.634343   32853 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0501 02:33:55.776462   32853 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0501 02:33:55.905451   32853 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0501 02:33:55.921494   32853 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 02:33:55.944306   32853 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0501 02:33:55.944374   32853 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 02:33:55.956199   32853 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0501 02:33:55.956267   32853 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 02:33:55.968518   32853 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 02:33:55.980336   32853 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 02:33:55.992865   32853 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0501 02:33:56.005561   32853 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 02:33:56.017934   32853 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 02:33:56.039177   32853 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
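The run of sed commands above edits /etc/crio/crio.conf.d/02-crio.conf in place: it pins pause_image to registry.k8s.io/pause:3.9, sets cgroup_manager to "cgroupfs", re-adds conmon_cgroup = "pod", and seeds default_sysctls with net.ipv4.ip_unprivileged_port_start=0. A minimal sketch of the first two substitutions as Go regexp replacements; the starting file content here is illustrative, not the VM's real drop-in.

// Sketch: the pause_image and cgroup_manager line edits from the log, in Go.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Illustrative starting content for 02-crio.conf.
	conf := "pause_image = \"registry.k8s.io/pause:3.8\"\ncgroup_manager = \"systemd\"\n"
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// The log additionally re-adds conmon_cgroup = "pod" and a default_sysctls
	// entry for net.ipv4.ip_unprivileged_port_start=0 before restarting crio.
	fmt.Print(conf)
}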
	I0501 02:33:56.050473   32853 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0501 02:33:56.060271   32853 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0501 02:33:56.060333   32853 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0501 02:33:56.074851   32853 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0501 02:33:56.086136   32853 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:33:56.245188   32853 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0501 02:33:56.398179   32853 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0501 02:33:56.398258   32853 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0501 02:33:56.403866   32853 start.go:562] Will wait 60s for crictl version
	I0501 02:33:56.403928   32853 ssh_runner.go:195] Run: which crictl
	I0501 02:33:56.408138   32853 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0501 02:33:56.446483   32853 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0501 02:33:56.446584   32853 ssh_runner.go:195] Run: crio --version
	I0501 02:33:56.478041   32853 ssh_runner.go:195] Run: crio --version
	I0501 02:33:56.510382   32853 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0501 02:33:56.511673   32853 out.go:177]   - env NO_PROXY=192.168.39.5
	I0501 02:33:56.512946   32853 out.go:177]   - env NO_PROXY=192.168.39.5,192.168.39.79
	I0501 02:33:56.514115   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetIP
	I0501 02:33:56.516527   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:56.516881   32853 main.go:141] libmachine: (ha-329926-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:eb:7d", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:33:44 +0000 UTC Type:0 Mac:52:54:00:f9:eb:7d Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:ha-329926-m03 Clientid:01:52:54:00:f9:eb:7d}
	I0501 02:33:56.516908   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined IP address 192.168.39.115 and MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:56.517156   32853 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0501 02:33:56.521688   32853 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0501 02:33:56.534930   32853 mustload.go:65] Loading cluster: ha-329926
	I0501 02:33:56.535180   32853 config.go:182] Loaded profile config "ha-329926": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 02:33:56.535531   32853 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:33:56.535576   32853 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:33:56.550946   32853 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34933
	I0501 02:33:56.551366   32853 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:33:56.551880   32853 main.go:141] libmachine: Using API Version  1
	I0501 02:33:56.551897   32853 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:33:56.552181   32853 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:33:56.552325   32853 main.go:141] libmachine: (ha-329926) Calling .GetState
	I0501 02:33:56.553939   32853 host.go:66] Checking if "ha-329926" exists ...
	I0501 02:33:56.554304   32853 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:33:56.554340   32853 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:33:56.568563   32853 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43319
	I0501 02:33:56.568903   32853 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:33:56.569274   32853 main.go:141] libmachine: Using API Version  1
	I0501 02:33:56.569292   32853 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:33:56.569580   32853 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:33:56.569758   32853 main.go:141] libmachine: (ha-329926) Calling .DriverName
	I0501 02:33:56.569932   32853 certs.go:68] Setting up /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926 for IP: 192.168.39.115
	I0501 02:33:56.569946   32853 certs.go:194] generating shared ca certs ...
	I0501 02:33:56.569964   32853 certs.go:226] acquiring lock for ca certs: {Name:mk85a75d14c00780d4bf09cd9bdaf0f96f1b8fce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:33:56.570109   32853 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key
	I0501 02:33:56.570162   32853 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key
	I0501 02:33:56.570175   32853 certs.go:256] generating profile certs ...
	I0501 02:33:56.570275   32853 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/client.key
	I0501 02:33:56.570309   32853 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.key.da5f17e3
	I0501 02:33:56.570329   32853 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.crt.da5f17e3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.5 192.168.39.79 192.168.39.115 192.168.39.254]
	I0501 02:33:56.836197   32853 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.crt.da5f17e3 ...
	I0501 02:33:56.836227   32853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.crt.da5f17e3: {Name:mk19e8ab336a8011f2b618a7ee80af76218cad15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:33:56.836423   32853 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.key.da5f17e3 ...
	I0501 02:33:56.836438   32853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.key.da5f17e3: {Name:mk87ba21c767b0a549751d84b1b9bc029d81cdf7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:33:56.836534   32853 certs.go:381] copying /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.crt.da5f17e3 -> /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.crt
	I0501 02:33:56.836705   32853 certs.go:385] copying /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.key.da5f17e3 -> /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.key
	I0501 02:33:56.836884   32853 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/proxy-client.key
	I0501 02:33:56.836902   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0501 02:33:56.836920   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0501 02:33:56.836939   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0501 02:33:56.836960   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0501 02:33:56.836978   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0501 02:33:56.836994   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0501 02:33:56.837011   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0501 02:33:56.837030   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0501 02:33:56.837090   32853 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem (1338 bytes)
	W0501 02:33:56.837127   32853 certs.go:480] ignoring /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724_empty.pem, impossibly tiny 0 bytes
	I0501 02:33:56.837141   32853 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem (1679 bytes)
	I0501 02:33:56.837177   32853 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem (1078 bytes)
	I0501 02:33:56.837206   32853 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem (1123 bytes)
	I0501 02:33:56.837234   32853 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem (1675 bytes)
	I0501 02:33:56.837287   32853 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem (1708 bytes)
	I0501 02:33:56.837323   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem -> /usr/share/ca-certificates/207242.pem
	I0501 02:33:56.837342   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:33:56.837363   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem -> /usr/share/ca-certificates/20724.pem
	I0501 02:33:56.837401   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHHostname
	I0501 02:33:56.840494   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:33:56.841014   32853 main.go:141] libmachine: (ha-329926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:43", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:31:18 +0000 UTC Type:0 Mac:52:54:00:ce:d8:43 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-329926 Clientid:01:52:54:00:ce:d8:43}
	I0501 02:33:56.841045   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:33:56.841252   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHPort
	I0501 02:33:56.841478   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHKeyPath
	I0501 02:33:56.841693   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHUsername
	I0501 02:33:56.841856   32853 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926/id_rsa Username:docker}
	I0501 02:33:56.914715   32853 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0501 02:33:56.920177   32853 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0501 02:33:56.939591   32853 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0501 02:33:56.946435   32853 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0501 02:33:56.959821   32853 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0501 02:33:56.964912   32853 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0501 02:33:56.977275   32853 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0501 02:33:56.982220   32853 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0501 02:33:56.995757   32853 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0501 02:33:57.007074   32853 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0501 02:33:57.020516   32853 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0501 02:33:57.026328   32853 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0501 02:33:57.041243   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0501 02:33:57.074420   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0501 02:33:57.103724   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0501 02:33:57.134910   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0501 02:33:57.165073   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0501 02:33:57.196545   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0501 02:33:57.225326   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0501 02:33:57.252084   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0501 02:33:57.286208   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem --> /usr/share/ca-certificates/207242.pem (1708 bytes)
	I0501 02:33:57.317838   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0501 02:33:57.346679   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem --> /usr/share/ca-certificates/20724.pem (1338 bytes)
	I0501 02:33:57.375393   32853 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0501 02:33:57.394979   32853 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0501 02:33:57.414354   32853 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0501 02:33:57.433558   32853 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0501 02:33:57.455755   32853 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0501 02:33:57.476512   32853 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0501 02:33:57.497673   32853 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0501 02:33:57.517886   32853 ssh_runner.go:195] Run: openssl version
	I0501 02:33:57.524690   32853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/207242.pem && ln -fs /usr/share/ca-certificates/207242.pem /etc/ssl/certs/207242.pem"
	I0501 02:33:57.538612   32853 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/207242.pem
	I0501 02:33:57.543865   32853 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  1 02:20 /usr/share/ca-certificates/207242.pem
	I0501 02:33:57.543933   32853 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/207242.pem
	I0501 02:33:57.550789   32853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/207242.pem /etc/ssl/certs/3ec20f2e.0"
	I0501 02:33:57.565578   32853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0501 02:33:57.578568   32853 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:33:57.583785   32853 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  1 02:08 /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:33:57.583839   32853 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:33:57.590572   32853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0501 02:33:57.602927   32853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20724.pem && ln -fs /usr/share/ca-certificates/20724.pem /etc/ssl/certs/20724.pem"
	I0501 02:33:57.617155   32853 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20724.pem
	I0501 02:33:57.622337   32853 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  1 02:20 /usr/share/ca-certificates/20724.pem
	I0501 02:33:57.622391   32853 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20724.pem
	I0501 02:33:57.628805   32853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20724.pem /etc/ssl/certs/51391683.0"
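The block above installs the profile and CA certificates under /var/lib/minikube/certs and /usr/share/ca-certificates, then hashes and symlinks them into /etc/ssl/certs with openssl. A minimal sketch, assuming the paths shown in the log, that checks the end result — the API server certificate chaining to minikubeCA — with Go's crypto/x509 (not part of the test suite):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

// Verify that apiserver.crt chains to the installed minikubeCA certificate.
func main() {
	caPEM, err := os.ReadFile("/usr/share/ca-certificates/minikubeCA.pem")
	if err != nil {
		panic(err)
	}
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(caPEM) {
		panic("no CA certificates parsed")
	}

	leafPEM, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(leafPEM)
	if block == nil {
		panic("no PEM block in apiserver.crt")
	}
	leaf, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	if _, err := leaf.Verify(x509.VerifyOptions{Roots: pool}); err != nil {
		fmt.Println("verification failed:", err)
		return
	}
	fmt.Println("apiserver.crt chains to minikubeCA")
}
```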
	I0501 02:33:57.641042   32853 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0501 02:33:57.645699   32853 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0501 02:33:57.645751   32853 kubeadm.go:928] updating node {m03 192.168.39.115 8443 v1.30.0 crio true true} ...
	I0501 02:33:57.645825   32853 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-329926-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.115
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-329926 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0501 02:33:57.645848   32853 kube-vip.go:111] generating kube-vip config ...
	I0501 02:33:57.645886   32853 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0501 02:33:57.664754   32853 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0501 02:33:57.664818   32853 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
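The generated kube-vip static pod advertises the control-plane VIP 192.168.39.254 on port 8443 with load-balancing enabled (lb_enable/lb_port). A minimal reachability sketch, assuming that VIP and port, which simply checks whether something answers TLS there once a kube-vip pod holds the plndr-cp-lock lease (certificate verification is skipped on purpose; this is not the test's own check):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net"
	"time"
)

// Probe the HA virtual IP from the manifest above for a TLS listener.
func main() {
	dialer := &net.Dialer{Timeout: 3 * time.Second}
	conn, err := tls.DialWithDialer(dialer, "tcp", "192.168.39.254:8443",
		&tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Println("VIP not reachable:", err)
		return
	}
	defer conn.Close()
	certs := conn.ConnectionState().PeerCertificates
	if len(certs) > 0 {
		fmt.Println("VIP answered TLS; server cert CN:", certs[0].Subject.CommonName)
	} else {
		fmt.Println("VIP answered TLS")
	}
}
```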
	I0501 02:33:57.664885   32853 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0501 02:33:57.675936   32853 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	
	Initiating transfer...
	I0501 02:33:57.676002   32853 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0
	I0501 02:33:57.686557   32853 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256
	I0501 02:33:57.686567   32853 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256
	I0501 02:33:57.686583   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/linux/amd64/v1.30.0/kubeadm -> /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0501 02:33:57.686590   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/linux/amd64/v1.30.0/kubectl -> /var/lib/minikube/binaries/v1.30.0/kubectl
	I0501 02:33:57.686652   32853 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0501 02:33:57.686658   32853 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubectl
	I0501 02:33:57.686557   32853 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256
	I0501 02:33:57.686726   32853 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 02:33:57.691485   32853 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0501 02:33:57.691516   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/cache/linux/amd64/v1.30.0/kubectl --> /var/lib/minikube/binaries/v1.30.0/kubectl (51454104 bytes)
	I0501 02:33:57.705557   32853 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0501 02:33:57.705595   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/cache/linux/amd64/v1.30.0/kubeadm --> /var/lib/minikube/binaries/v1.30.0/kubeadm (50249880 bytes)
	I0501 02:33:57.716872   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/linux/amd64/v1.30.0/kubelet -> /var/lib/minikube/binaries/v1.30.0/kubelet
	I0501 02:33:57.716967   32853 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubelet
	I0501 02:33:57.761978   32853 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0501 02:33:57.762021   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/cache/linux/amd64/v1.30.0/kubelet --> /var/lib/minikube/binaries/v1.30.0/kubelet (100100024 bytes)
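The binaries are fetched with "?checksum=file:...sha256" URLs, i.e. each download is verified against the checksum file published next to it on dl.k8s.io. A minimal sketch of that scheme for the kubeadm URL shown in the log, not minikube's actual downloader:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// Download a release binary and compare its SHA-256 against the published .sha256 file.
func main() {
	const base = "https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm"

	sumResp, err := http.Get(base + ".sha256")
	if err != nil {
		panic(err)
	}
	defer sumResp.Body.Close()
	sumBytes, err := io.ReadAll(sumResp.Body)
	if err != nil {
		panic(err)
	}
	want := strings.Fields(string(sumBytes))[0] // checksum file may also carry a filename

	binResp, err := http.Get(base)
	if err != nil {
		panic(err)
	}
	defer binResp.Body.Close()

	out, err := os.Create("kubeadm")
	if err != nil {
		panic(err)
	}
	defer out.Close()

	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(out, h), binResp.Body); err != nil {
		panic(err)
	}
	fmt.Println("checksum match:", hex.EncodeToString(h.Sum(nil)) == want)
}
```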
	I0501 02:33:58.711990   32853 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0501 02:33:58.724400   32853 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0501 02:33:58.744940   32853 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0501 02:33:58.767269   32853 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0501 02:33:58.787739   32853 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0501 02:33:58.792566   32853 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0501 02:33:58.809119   32853 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:33:58.945032   32853 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 02:33:58.967940   32853 host.go:66] Checking if "ha-329926" exists ...
	I0501 02:33:58.968406   32853 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:33:58.968467   32853 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:33:58.984998   32853 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35945
	I0501 02:33:58.985460   32853 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:33:58.985963   32853 main.go:141] libmachine: Using API Version  1
	I0501 02:33:58.986033   32853 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:33:58.986380   32853 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:33:58.986599   32853 main.go:141] libmachine: (ha-329926) Calling .DriverName
	I0501 02:33:58.986770   32853 start.go:316] joinCluster: &{Name:ha-329926 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-329926 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.5 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.79 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.115 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 02:33:58.986899   32853 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0501 02:33:58.986915   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHHostname
	I0501 02:33:58.989775   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:33:58.990227   32853 main.go:141] libmachine: (ha-329926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:43", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:31:18 +0000 UTC Type:0 Mac:52:54:00:ce:d8:43 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-329926 Clientid:01:52:54:00:ce:d8:43}
	I0501 02:33:58.990252   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:33:58.990489   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHPort
	I0501 02:33:58.990643   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHKeyPath
	I0501 02:33:58.990791   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHUsername
	I0501 02:33:58.990957   32853 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926/id_rsa Username:docker}
	I0501 02:33:59.580409   32853 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.115 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0501 02:33:59.580458   32853 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token m7l0ya.kjhwirja5kia0ep4 --discovery-token-ca-cert-hash sha256:bd94cc6acfb95fe36f70b12c6a1f2980601b8235e7c4dea65cebdf35eb514754 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-329926-m03 --control-plane --apiserver-advertise-address=192.168.39.115 --apiserver-bind-port=8443"
	I0501 02:34:25.335526   32853 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token m7l0ya.kjhwirja5kia0ep4 --discovery-token-ca-cert-hash sha256:bd94cc6acfb95fe36f70b12c6a1f2980601b8235e7c4dea65cebdf35eb514754 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-329926-m03 --control-plane --apiserver-advertise-address=192.168.39.115 --apiserver-bind-port=8443": (25.755033767s)
	I0501 02:34:25.335571   32853 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0501 02:34:25.928467   32853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-329926-m03 minikube.k8s.io/updated_at=2024_05_01T02_34_25_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e minikube.k8s.io/name=ha-329926 minikube.k8s.io/primary=false
	I0501 02:34:26.107354   32853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-329926-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0501 02:34:26.257401   32853 start.go:318] duration metric: took 27.270627658s to joinCluster
	I0501 02:34:26.257479   32853 start.go:234] Will wait 6m0s for node &{Name:m03 IP:192.168.39.115 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0501 02:34:26.259069   32853 out.go:177] * Verifying Kubernetes components...
	I0501 02:34:26.257836   32853 config.go:182] Loaded profile config "ha-329926": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 02:34:26.260445   32853 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:34:26.510765   32853 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 02:34:26.552182   32853 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18779-13391/kubeconfig
	I0501 02:34:26.552546   32853 kapi.go:59] client config for ha-329926: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/client.crt", KeyFile:"/home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/client.key", CAFile:"/home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02700), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0501 02:34:26.552636   32853 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.5:8443
	I0501 02:34:26.552909   32853 node_ready.go:35] waiting up to 6m0s for node "ha-329926-m03" to be "Ready" ...
	I0501 02:34:26.552995   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m03
	I0501 02:34:26.553008   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:26.553019   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:26.553028   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:26.559027   32853 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:34:27.053197   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m03
	I0501 02:34:27.053219   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:27.053230   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:27.053234   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:27.057217   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:34:27.553859   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m03
	I0501 02:34:27.553883   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:27.553893   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:27.553899   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:27.557231   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:34:28.053863   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m03
	I0501 02:34:28.053887   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:28.053897   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:28.053901   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:28.058194   32853 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:34:28.553549   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m03
	I0501 02:34:28.553580   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:28.553592   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:28.553599   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:28.558384   32853 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:34:28.559675   32853 node_ready.go:53] node "ha-329926-m03" has status "Ready":"False"
	I0501 02:34:29.053346   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m03
	I0501 02:34:29.053367   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:29.053375   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:29.053381   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:29.057823   32853 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:34:29.553737   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m03
	I0501 02:34:29.553765   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:29.553775   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:29.553784   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:29.557883   32853 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:34:30.053337   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m03
	I0501 02:34:30.053367   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:30.053377   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:30.053384   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:30.056878   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:34:30.553820   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m03
	I0501 02:34:30.553846   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:30.553858   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:30.553864   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:30.557495   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:34:31.053570   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m03
	I0501 02:34:31.053602   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:31.053610   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:31.053613   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:31.058281   32853 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:34:31.059035   32853 node_ready.go:53] node "ha-329926-m03" has status "Ready":"False"
	I0501 02:34:31.553683   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m03
	I0501 02:34:31.553714   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:31.553728   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:31.553732   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:31.558150   32853 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:34:32.053180   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m03
	I0501 02:34:32.053203   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:32.053210   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:32.053215   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:32.058335   32853 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:34:32.553190   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m03
	I0501 02:34:32.553211   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:32.553219   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:32.553224   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:32.556713   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:34:33.053769   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m03
	I0501 02:34:33.053793   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:33.053801   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:33.053805   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:33.058695   32853 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:34:33.059985   32853 node_ready.go:53] node "ha-329926-m03" has status "Ready":"False"
	I0501 02:34:33.553873   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m03
	I0501 02:34:33.553896   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:33.553902   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:33.553905   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:33.557865   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:34:33.558759   32853 node_ready.go:49] node "ha-329926-m03" has status "Ready":"True"
	I0501 02:34:33.558778   32853 node_ready.go:38] duration metric: took 7.005851298s for node "ha-329926-m03" to be "Ready" ...
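The node_ready wait above is implemented as raw GETs against /api/v1/nodes/ha-329926-m03 until the node's Ready condition turns True. A minimal client-go sketch of the same wait, assuming the kubeconfig path and node name shown in the log (this is not the test's own helper):

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// Poll a node's Ready condition until it is True or a timeout expires.
func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("",
		"/home/jenkins/minikube-integration/18779-13391/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-329926-m03", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node ha-329926-m03 is Ready")
					return
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for node to become Ready")
}
```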
	I0501 02:34:33.558786   32853 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 02:34:33.558841   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods
	I0501 02:34:33.558851   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:33.558858   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:33.558862   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:33.566456   32853 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0501 02:34:33.573887   32853 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-2h8lc" in "kube-system" namespace to be "Ready" ...
	I0501 02:34:33.573969   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2h8lc
	I0501 02:34:33.573977   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:33.573984   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:33.573989   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:33.580267   32853 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:34:33.581620   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926
	I0501 02:34:33.581637   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:33.581646   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:33.581651   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:33.585397   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:34:33.586157   32853 pod_ready.go:92] pod "coredns-7db6d8ff4d-2h8lc" in "kube-system" namespace has status "Ready":"True"
	I0501 02:34:33.586182   32853 pod_ready.go:81] duration metric: took 12.268357ms for pod "coredns-7db6d8ff4d-2h8lc" in "kube-system" namespace to be "Ready" ...
	I0501 02:34:33.586195   32853 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-cfdqc" in "kube-system" namespace to be "Ready" ...
	I0501 02:34:33.586262   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-cfdqc
	I0501 02:34:33.586273   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:33.586281   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:33.586290   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:33.590164   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:34:33.591076   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926
	I0501 02:34:33.591092   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:33.591099   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:33.591104   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:33.594772   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:34:33.595506   32853 pod_ready.go:92] pod "coredns-7db6d8ff4d-cfdqc" in "kube-system" namespace has status "Ready":"True"
	I0501 02:34:33.595526   32853 pod_ready.go:81] duration metric: took 9.323438ms for pod "coredns-7db6d8ff4d-cfdqc" in "kube-system" namespace to be "Ready" ...
	I0501 02:34:33.595540   32853 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-329926" in "kube-system" namespace to be "Ready" ...
	I0501 02:34:33.595609   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-329926
	I0501 02:34:33.595620   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:33.595630   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:33.595640   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:33.598884   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:34:33.599606   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926
	I0501 02:34:33.599623   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:33.599630   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:33.599635   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:33.603088   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:34:33.603694   32853 pod_ready.go:92] pod "etcd-ha-329926" in "kube-system" namespace has status "Ready":"True"
	I0501 02:34:33.603712   32853 pod_ready.go:81] duration metric: took 8.164903ms for pod "etcd-ha-329926" in "kube-system" namespace to be "Ready" ...
	I0501 02:34:33.603719   32853 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-329926-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:34:33.603788   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-329926-m02
	I0501 02:34:33.603800   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:33.603808   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:33.603811   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:33.607305   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:34:33.608451   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m02
	I0501 02:34:33.608464   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:33.608471   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:33.608474   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:33.611758   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:34:33.612507   32853 pod_ready.go:92] pod "etcd-ha-329926-m02" in "kube-system" namespace has status "Ready":"True"
	I0501 02:34:33.612529   32853 pod_ready.go:81] duration metric: took 8.802946ms for pod "etcd-ha-329926-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:34:33.612541   32853 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-329926-m03" in "kube-system" namespace to be "Ready" ...
	I0501 02:34:33.754929   32853 request.go:629] Waited for 142.321048ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-329926-m03
	I0501 02:34:33.755011   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-329926-m03
	I0501 02:34:33.755020   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:33.755028   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:33.755032   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:33.759234   32853 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:34:33.954431   32853 request.go:629] Waited for 194.370954ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/nodes/ha-329926-m03
	I0501 02:34:33.954499   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m03
	I0501 02:34:33.954506   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:33.954515   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:33.954534   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:33.958626   32853 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:34:34.154421   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-329926-m03
	I0501 02:34:34.154446   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:34.154458   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:34.154464   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:34.158327   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:34:34.354383   32853 request.go:629] Waited for 195.391735ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/nodes/ha-329926-m03
	I0501 02:34:34.354488   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m03
	I0501 02:34:34.354501   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:34.354515   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:34.354527   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:34.358227   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:34:34.613107   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-329926-m03
	I0501 02:34:34.613128   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:34.613135   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:34.613139   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:34.616497   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:34:34.754798   32853 request.go:629] Waited for 137.272351ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/nodes/ha-329926-m03
	I0501 02:34:34.754884   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m03
	I0501 02:34:34.754894   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:34.754907   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:34.754921   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:34.758624   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:34:35.113446   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-329926-m03
	I0501 02:34:35.113477   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:35.113485   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:35.113490   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:35.117286   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:34:35.154475   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m03
	I0501 02:34:35.154499   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:35.154518   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:35.154522   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:35.157638   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:34:35.613752   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-329926-m03
	I0501 02:34:35.613776   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:35.613784   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:35.613787   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:35.617712   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:34:35.618352   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m03
	I0501 02:34:35.618369   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:35.618378   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:35.618384   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:35.623517   32853 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:34:35.624078   32853 pod_ready.go:102] pod "etcd-ha-329926-m03" in "kube-system" namespace has status "Ready":"False"
	I0501 02:34:36.113120   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-329926-m03
	I0501 02:34:36.113140   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:36.113147   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:36.113151   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:36.116690   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:34:36.118173   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m03
	I0501 02:34:36.118181   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:36.118187   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:36.118192   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:36.121216   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:34:36.613137   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-329926-m03
	I0501 02:34:36.613157   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:36.613163   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:36.613169   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:36.616634   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:34:36.617476   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m03
	I0501 02:34:36.617491   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:36.617500   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:36.617509   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:36.620593   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:34:37.113710   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-329926-m03
	I0501 02:34:37.113729   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:37.113736   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:37.113741   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:37.117351   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:34:37.118184   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m03
	I0501 02:34:37.118200   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:37.118209   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:37.118217   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:37.121466   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:34:37.612955   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-329926-m03
	I0501 02:34:37.612977   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:37.612986   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:37.612990   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:37.616673   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:34:37.617588   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m03
	I0501 02:34:37.617604   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:37.617613   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:37.617619   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:37.620996   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:34:38.113342   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-329926-m03
	I0501 02:34:38.113367   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:38.113374   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:38.113377   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:38.117427   32853 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:34:38.118302   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m03
	I0501 02:34:38.118323   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:38.118331   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:38.118336   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:38.123781   32853 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:34:38.125579   32853 pod_ready.go:102] pod "etcd-ha-329926-m03" in "kube-system" namespace has status "Ready":"False"
	I0501 02:34:38.613049   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-329926-m03
	I0501 02:34:38.613070   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:38.613078   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:38.613082   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:38.616757   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:34:38.617664   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m03
	I0501 02:34:38.617687   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:38.617697   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:38.617701   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:38.620559   32853 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 02:34:39.113253   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-329926-m03
	I0501 02:34:39.113276   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:39.113286   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:39.113291   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:39.117261   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:34:39.118314   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m03
	I0501 02:34:39.118330   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:39.118339   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:39.118346   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:39.123812   32853 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:34:39.613659   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-329926-m03
	I0501 02:34:39.613681   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:39.613689   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:39.613692   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:39.617494   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:34:39.618352   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m03
	I0501 02:34:39.618371   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:39.618381   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:39.618386   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:39.621727   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:34:40.112726   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-329926-m03
	I0501 02:34:40.112749   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:40.112757   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:40.112761   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:40.116585   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:34:40.117611   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m03
	I0501 02:34:40.117626   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:40.117632   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:40.117638   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:40.125153   32853 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0501 02:34:40.125922   32853 pod_ready.go:92] pod "etcd-ha-329926-m03" in "kube-system" namespace has status "Ready":"True"
	I0501 02:34:40.125940   32853 pod_ready.go:81] duration metric: took 6.513392364s for pod "etcd-ha-329926-m03" in "kube-system" namespace to be "Ready" ...
	I0501 02:34:40.125956   32853 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-329926" in "kube-system" namespace to be "Ready" ...
	I0501 02:34:40.126001   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-329926
	I0501 02:34:40.126009   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:40.126016   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:40.126020   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:40.128839   32853 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 02:34:40.129696   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926
	I0501 02:34:40.129714   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:40.129723   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:40.129731   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:40.132582   32853 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 02:34:40.133263   32853 pod_ready.go:92] pod "kube-apiserver-ha-329926" in "kube-system" namespace has status "Ready":"True"
	I0501 02:34:40.133283   32853 pod_ready.go:81] duration metric: took 7.321354ms for pod "kube-apiserver-ha-329926" in "kube-system" namespace to be "Ready" ...
	I0501 02:34:40.133292   32853 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-329926-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:34:40.133348   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-329926-m02
	I0501 02:34:40.133356   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:40.133363   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:40.133367   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:40.135773   32853 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 02:34:40.136343   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m02
	I0501 02:34:40.136355   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:40.136361   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:40.136364   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:40.139151   32853 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 02:34:40.139786   32853 pod_ready.go:92] pod "kube-apiserver-ha-329926-m02" in "kube-system" namespace has status "Ready":"True"
	I0501 02:34:40.139806   32853 pod_ready.go:81] duration metric: took 6.506764ms for pod "kube-apiserver-ha-329926-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:34:40.139820   32853 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-329926-m03" in "kube-system" namespace to be "Ready" ...
	I0501 02:34:40.154083   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-329926-m03
	I0501 02:34:40.154097   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:40.154103   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:40.154108   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:40.156853   32853 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 02:34:40.354192   32853 request.go:629] Waited for 196.340447ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/nodes/ha-329926-m03
	I0501 02:34:40.354256   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m03
	I0501 02:34:40.354263   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:40.354272   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:40.354277   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:40.358337   32853 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:34:40.359238   32853 pod_ready.go:92] pod "kube-apiserver-ha-329926-m03" in "kube-system" namespace has status "Ready":"True"
	I0501 02:34:40.359260   32853 pod_ready.go:81] duration metric: took 219.426636ms for pod "kube-apiserver-ha-329926-m03" in "kube-system" namespace to be "Ready" ...
	I0501 02:34:40.359275   32853 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-329926" in "kube-system" namespace to be "Ready" ...
	I0501 02:34:40.554714   32853 request.go:629] Waited for 195.374385ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-329926
	I0501 02:34:40.554789   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-329926
	I0501 02:34:40.554794   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:40.554803   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:40.554807   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:40.558437   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:34:40.753934   32853 request.go:629] Waited for 194.309516ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/nodes/ha-329926
	I0501 02:34:40.753993   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926
	I0501 02:34:40.754002   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:40.754015   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:40.754028   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:40.757565   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:34:40.758538   32853 pod_ready.go:92] pod "kube-controller-manager-ha-329926" in "kube-system" namespace has status "Ready":"True"
	I0501 02:34:40.758554   32853 pod_ready.go:81] duration metric: took 399.271628ms for pod "kube-controller-manager-ha-329926" in "kube-system" namespace to be "Ready" ...
	I0501 02:34:40.758565   32853 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-329926-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:34:40.954661   32853 request.go:629] Waited for 196.021432ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-329926-m02
	I0501 02:34:40.954735   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-329926-m02
	I0501 02:34:40.954740   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:40.954747   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:40.954751   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:40.958428   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:34:41.153920   32853 request.go:629] Waited for 192.630607ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/nodes/ha-329926-m02
	I0501 02:34:41.153984   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m02
	I0501 02:34:41.153991   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:41.154007   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:41.154016   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:41.157816   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:34:41.158520   32853 pod_ready.go:92] pod "kube-controller-manager-ha-329926-m02" in "kube-system" namespace has status "Ready":"True"
	I0501 02:34:41.158537   32853 pod_ready.go:81] duration metric: took 399.964614ms for pod "kube-controller-manager-ha-329926-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:34:41.158548   32853 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-329926-m03" in "kube-system" namespace to be "Ready" ...
	I0501 02:34:41.354666   32853 request.go:629] Waited for 196.037746ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-329926-m03
	I0501 02:34:41.354720   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-329926-m03
	I0501 02:34:41.354727   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:41.354736   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:41.354742   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:41.360533   32853 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:34:41.554443   32853 request.go:629] Waited for 193.057741ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/nodes/ha-329926-m03
	I0501 02:34:41.554511   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m03
	I0501 02:34:41.554518   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:41.554529   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:41.554539   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:41.558653   32853 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:34:41.559781   32853 pod_ready.go:92] pod "kube-controller-manager-ha-329926-m03" in "kube-system" namespace has status "Ready":"True"
	I0501 02:34:41.559799   32853 pod_ready.go:81] duration metric: took 401.243411ms for pod "kube-controller-manager-ha-329926-m03" in "kube-system" namespace to be "Ready" ...
	I0501 02:34:41.559813   32853 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jfnk9" in "kube-system" namespace to be "Ready" ...
	I0501 02:34:41.754453   32853 request.go:629] Waited for 194.556038ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jfnk9
	I0501 02:34:41.754506   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jfnk9
	I0501 02:34:41.754513   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:41.754523   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:41.754531   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:41.757858   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:34:41.954922   32853 request.go:629] Waited for 196.354627ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/nodes/ha-329926-m03
	I0501 02:34:41.954987   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m03
	I0501 02:34:41.954993   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:41.955001   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:41.955005   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:41.958705   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:34:41.959342   32853 pod_ready.go:92] pod "kube-proxy-jfnk9" in "kube-system" namespace has status "Ready":"True"
	I0501 02:34:41.959368   32853 pod_ready.go:81] duration metric: took 399.547594ms for pod "kube-proxy-jfnk9" in "kube-system" namespace to be "Ready" ...
	I0501 02:34:41.959382   32853 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-msshn" in "kube-system" namespace to be "Ready" ...
	I0501 02:34:42.154797   32853 request.go:629] Waited for 195.330667ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-msshn
	I0501 02:34:42.154856   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-msshn
	I0501 02:34:42.154864   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:42.154873   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:42.154877   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:42.159411   32853 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:34:42.354538   32853 request.go:629] Waited for 194.330503ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/nodes/ha-329926
	I0501 02:34:42.354609   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926
	I0501 02:34:42.354617   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:42.354628   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:42.354648   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:42.358369   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:34:42.359153   32853 pod_ready.go:92] pod "kube-proxy-msshn" in "kube-system" namespace has status "Ready":"True"
	I0501 02:34:42.359173   32853 pod_ready.go:81] duration metric: took 399.782461ms for pod "kube-proxy-msshn" in "kube-system" namespace to be "Ready" ...
	I0501 02:34:42.359193   32853 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rfsm8" in "kube-system" namespace to be "Ready" ...
	I0501 02:34:42.554323   32853 request.go:629] Waited for 195.038073ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rfsm8
	I0501 02:34:42.554442   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rfsm8
	I0501 02:34:42.554454   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:42.554464   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:42.554473   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:42.558176   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:34:42.754211   32853 request.go:629] Waited for 195.287871ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/nodes/ha-329926-m02
	I0501 02:34:42.754282   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m02
	I0501 02:34:42.754289   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:42.754301   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:42.754321   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:42.757652   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:34:42.758429   32853 pod_ready.go:92] pod "kube-proxy-rfsm8" in "kube-system" namespace has status "Ready":"True"
	I0501 02:34:42.758447   32853 pod_ready.go:81] duration metric: took 399.247286ms for pod "kube-proxy-rfsm8" in "kube-system" namespace to be "Ready" ...
	I0501 02:34:42.758457   32853 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-329926" in "kube-system" namespace to be "Ready" ...
	I0501 02:34:42.954494   32853 request.go:629] Waited for 195.971378ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-329926
	I0501 02:34:42.954582   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-329926
	I0501 02:34:42.954590   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:42.954600   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:42.954607   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:42.958480   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:34:43.154915   32853 request.go:629] Waited for 195.448033ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/nodes/ha-329926
	I0501 02:34:43.154966   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926
	I0501 02:34:43.154971   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:43.154979   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:43.154987   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:43.158195   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:34:43.159035   32853 pod_ready.go:92] pod "kube-scheduler-ha-329926" in "kube-system" namespace has status "Ready":"True"
	I0501 02:34:43.159055   32853 pod_ready.go:81] duration metric: took 400.59166ms for pod "kube-scheduler-ha-329926" in "kube-system" namespace to be "Ready" ...
	I0501 02:34:43.159065   32853 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-329926-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:34:43.354202   32853 request.go:629] Waited for 195.054236ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-329926-m02
	I0501 02:34:43.354272   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-329926-m02
	I0501 02:34:43.354279   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:43.354296   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:43.354303   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:43.358424   32853 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:34:43.554487   32853 request.go:629] Waited for 195.193446ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/nodes/ha-329926-m02
	I0501 02:34:43.554584   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m02
	I0501 02:34:43.554595   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:43.554606   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:43.554617   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:43.558333   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:34:43.559150   32853 pod_ready.go:92] pod "kube-scheduler-ha-329926-m02" in "kube-system" namespace has status "Ready":"True"
	I0501 02:34:43.559173   32853 pod_ready.go:81] duration metric: took 400.101615ms for pod "kube-scheduler-ha-329926-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:34:43.559184   32853 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-329926-m03" in "kube-system" namespace to be "Ready" ...
	I0501 02:34:43.754323   32853 request.go:629] Waited for 195.055548ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-329926-m03
	I0501 02:34:43.754413   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-329926-m03
	I0501 02:34:43.754422   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:43.754435   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:43.754443   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:43.758790   32853 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:34:43.954606   32853 request.go:629] Waited for 195.158979ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/nodes/ha-329926-m03
	I0501 02:34:43.954659   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m03
	I0501 02:34:43.954664   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:43.954673   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:43.954678   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:43.958522   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:34:43.959350   32853 pod_ready.go:92] pod "kube-scheduler-ha-329926-m03" in "kube-system" namespace has status "Ready":"True"
	I0501 02:34:43.959375   32853 pod_ready.go:81] duration metric: took 400.183308ms for pod "kube-scheduler-ha-329926-m03" in "kube-system" namespace to be "Ready" ...
	I0501 02:34:43.959388   32853 pod_ready.go:38] duration metric: took 10.400590521s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 02:34:43.959406   32853 api_server.go:52] waiting for apiserver process to appear ...
	I0501 02:34:43.959461   32853 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 02:34:43.977544   32853 api_server.go:72] duration metric: took 17.720028948s to wait for apiserver process to appear ...
	I0501 02:34:43.977569   32853 api_server.go:88] waiting for apiserver healthz status ...
	I0501 02:34:43.977591   32853 api_server.go:253] Checking apiserver healthz at https://192.168.39.5:8443/healthz ...
	I0501 02:34:43.983701   32853 api_server.go:279] https://192.168.39.5:8443/healthz returned 200:
	ok
	I0501 02:34:43.983758   32853 round_trippers.go:463] GET https://192.168.39.5:8443/version
	I0501 02:34:43.983766   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:43.983774   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:43.983777   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:43.984732   32853 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0501 02:34:43.984783   32853 api_server.go:141] control plane version: v1.30.0
	I0501 02:34:43.984795   32853 api_server.go:131] duration metric: took 7.220912ms to wait for apiserver health ...
	I0501 02:34:43.984802   32853 system_pods.go:43] waiting for kube-system pods to appear ...
	I0501 02:34:44.154473   32853 request.go:629] Waited for 169.608236ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods
	I0501 02:34:44.154531   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods
	I0501 02:34:44.154537   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:44.154544   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:44.154549   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:44.164116   32853 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0501 02:34:44.171344   32853 system_pods.go:59] 24 kube-system pods found
	I0501 02:34:44.171371   32853 system_pods.go:61] "coredns-7db6d8ff4d-2h8lc" [937e09f0-6a7d-4387-aa19-ee959eb5a2a5] Running
	I0501 02:34:44.171376   32853 system_pods.go:61] "coredns-7db6d8ff4d-cfdqc" [a37e982e-9e4f-43bf-b957-0d6f082f4ec8] Running
	I0501 02:34:44.171380   32853 system_pods.go:61] "etcd-ha-329926" [f0e4ae2a-a8cc-42b2-9865-fb6ec3f41acb] Running
	I0501 02:34:44.171384   32853 system_pods.go:61] "etcd-ha-329926-m02" [4ed5b754-bb3d-46de-a5b9-ff46994f25ad] Running
	I0501 02:34:44.171389   32853 system_pods.go:61] "etcd-ha-329926-m03" [5b1f17c9-f09d-4e25-9069-125ca6756bb9] Running
	I0501 02:34:44.171392   32853 system_pods.go:61] "kindnet-7gr9n" [acd0ac11-9caa-47ae-b1f9-40dbb9f25b9c] Running
	I0501 02:34:44.171395   32853 system_pods.go:61] "kindnet-9r8zn" [fc187c8a-a964-45e1-adb0-f5ce23922b66] Running
	I0501 02:34:44.171399   32853 system_pods.go:61] "kindnet-kcmp7" [8e15c166-9ba1-40c9-8f33-db7f83733932] Running
	I0501 02:34:44.171404   32853 system_pods.go:61] "kube-apiserver-ha-329926" [49c47f4f-663a-4407-9d46-94fa3afbf349] Running
	I0501 02:34:44.171409   32853 system_pods.go:61] "kube-apiserver-ha-329926-m02" [886d1acc-021c-4f8b-b477-b9760260aabb] Running
	I0501 02:34:44.171414   32853 system_pods.go:61] "kube-apiserver-ha-329926-m03" [1d9a8819-b7a1-4b6d-b633-912974f051ce] Running
	I0501 02:34:44.171419   32853 system_pods.go:61] "kube-controller-manager-ha-329926" [332785d8-9966-4823-8828-fa5e90b4aac1] Running
	I0501 02:34:44.171425   32853 system_pods.go:61] "kube-controller-manager-ha-329926-m02" [91d97fa7-6409-4620-b569-c391d21a5915] Running
	I0501 02:34:44.171431   32853 system_pods.go:61] "kube-controller-manager-ha-329926-m03" [623b64bf-d9cc-44fd-91d4-ab8296a2d0a8] Running
	I0501 02:34:44.171441   32853 system_pods.go:61] "kube-proxy-jfnk9" [a0d4b9ce-a0b5-4810-b2ea-34b1ad295e88] Running
	I0501 02:34:44.171445   32853 system_pods.go:61] "kube-proxy-msshn" [7575fbfc-11ce-4223-bd99-ff9cdddd3568] Running
	I0501 02:34:44.171448   32853 system_pods.go:61] "kube-proxy-rfsm8" [f0510b55-1b59-4239-b529-b7af4d017c06] Running
	I0501 02:34:44.171452   32853 system_pods.go:61] "kube-scheduler-ha-329926" [7d45e3e9-cc7e-4b69-9219-61c3006013ea] Running
	I0501 02:34:44.171455   32853 system_pods.go:61] "kube-scheduler-ha-329926-m02" [075e127f-debf-4dd4-babd-be0930fb2ef7] Running
	I0501 02:34:44.171461   32853 system_pods.go:61] "kube-scheduler-ha-329926-m03" [057d5d0d-b546-4007-b922-4e4db5232918] Running
	I0501 02:34:44.171464   32853 system_pods.go:61] "kube-vip-ha-329926" [0fbbb815-441d-48d0-b0cf-1bb57ff6d993] Running
	I0501 02:34:44.171467   32853 system_pods.go:61] "kube-vip-ha-329926-m02" [92c115f8-bb9c-4a86-b914-984985a69916] Running
	I0501 02:34:44.171470   32853 system_pods.go:61] "kube-vip-ha-329926-m03" [a66ba3bd-e5c6-4e6c-9f95-bac5a111bc0e] Running
	I0501 02:34:44.171473   32853 system_pods.go:61] "storage-provisioner" [371423a6-a156-4e8d-bf66-812d606cc8d7] Running
	I0501 02:34:44.171479   32853 system_pods.go:74] duration metric: took 186.669098ms to wait for pod list to return data ...
	I0501 02:34:44.171489   32853 default_sa.go:34] waiting for default service account to be created ...
	I0501 02:34:44.354917   32853 request.go:629] Waited for 183.348562ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/namespaces/default/serviceaccounts
	I0501 02:34:44.354981   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/default/serviceaccounts
	I0501 02:34:44.354986   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:44.354993   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:44.354997   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:44.357877   32853 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 02:34:44.358002   32853 default_sa.go:45] found service account: "default"
	I0501 02:34:44.358022   32853 default_sa.go:55] duration metric: took 186.526043ms for default service account to be created ...
	I0501 02:34:44.358032   32853 system_pods.go:116] waiting for k8s-apps to be running ...
	I0501 02:34:44.554486   32853 request.go:629] Waited for 196.380023ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods
	I0501 02:34:44.554547   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods
	I0501 02:34:44.554552   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:44.554560   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:44.554567   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:44.562004   32853 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0501 02:34:44.568771   32853 system_pods.go:86] 24 kube-system pods found
	I0501 02:34:44.568796   32853 system_pods.go:89] "coredns-7db6d8ff4d-2h8lc" [937e09f0-6a7d-4387-aa19-ee959eb5a2a5] Running
	I0501 02:34:44.568801   32853 system_pods.go:89] "coredns-7db6d8ff4d-cfdqc" [a37e982e-9e4f-43bf-b957-0d6f082f4ec8] Running
	I0501 02:34:44.568806   32853 system_pods.go:89] "etcd-ha-329926" [f0e4ae2a-a8cc-42b2-9865-fb6ec3f41acb] Running
	I0501 02:34:44.568810   32853 system_pods.go:89] "etcd-ha-329926-m02" [4ed5b754-bb3d-46de-a5b9-ff46994f25ad] Running
	I0501 02:34:44.568813   32853 system_pods.go:89] "etcd-ha-329926-m03" [5b1f17c9-f09d-4e25-9069-125ca6756bb9] Running
	I0501 02:34:44.568817   32853 system_pods.go:89] "kindnet-7gr9n" [acd0ac11-9caa-47ae-b1f9-40dbb9f25b9c] Running
	I0501 02:34:44.568821   32853 system_pods.go:89] "kindnet-9r8zn" [fc187c8a-a964-45e1-adb0-f5ce23922b66] Running
	I0501 02:34:44.568824   32853 system_pods.go:89] "kindnet-kcmp7" [8e15c166-9ba1-40c9-8f33-db7f83733932] Running
	I0501 02:34:44.568828   32853 system_pods.go:89] "kube-apiserver-ha-329926" [49c47f4f-663a-4407-9d46-94fa3afbf349] Running
	I0501 02:34:44.568834   32853 system_pods.go:89] "kube-apiserver-ha-329926-m02" [886d1acc-021c-4f8b-b477-b9760260aabb] Running
	I0501 02:34:44.568838   32853 system_pods.go:89] "kube-apiserver-ha-329926-m03" [1d9a8819-b7a1-4b6d-b633-912974f051ce] Running
	I0501 02:34:44.568843   32853 system_pods.go:89] "kube-controller-manager-ha-329926" [332785d8-9966-4823-8828-fa5e90b4aac1] Running
	I0501 02:34:44.568847   32853 system_pods.go:89] "kube-controller-manager-ha-329926-m02" [91d97fa7-6409-4620-b569-c391d21a5915] Running
	I0501 02:34:44.568854   32853 system_pods.go:89] "kube-controller-manager-ha-329926-m03" [623b64bf-d9cc-44fd-91d4-ab8296a2d0a8] Running
	I0501 02:34:44.568857   32853 system_pods.go:89] "kube-proxy-jfnk9" [a0d4b9ce-a0b5-4810-b2ea-34b1ad295e88] Running
	I0501 02:34:44.568863   32853 system_pods.go:89] "kube-proxy-msshn" [7575fbfc-11ce-4223-bd99-ff9cdddd3568] Running
	I0501 02:34:44.568867   32853 system_pods.go:89] "kube-proxy-rfsm8" [f0510b55-1b59-4239-b529-b7af4d017c06] Running
	I0501 02:34:44.568871   32853 system_pods.go:89] "kube-scheduler-ha-329926" [7d45e3e9-cc7e-4b69-9219-61c3006013ea] Running
	I0501 02:34:44.568874   32853 system_pods.go:89] "kube-scheduler-ha-329926-m02" [075e127f-debf-4dd4-babd-be0930fb2ef7] Running
	I0501 02:34:44.568878   32853 system_pods.go:89] "kube-scheduler-ha-329926-m03" [057d5d0d-b546-4007-b922-4e4db5232918] Running
	I0501 02:34:44.568884   32853 system_pods.go:89] "kube-vip-ha-329926" [0fbbb815-441d-48d0-b0cf-1bb57ff6d993] Running
	I0501 02:34:44.568887   32853 system_pods.go:89] "kube-vip-ha-329926-m02" [92c115f8-bb9c-4a86-b914-984985a69916] Running
	I0501 02:34:44.568891   32853 system_pods.go:89] "kube-vip-ha-329926-m03" [a66ba3bd-e5c6-4e6c-9f95-bac5a111bc0e] Running
	I0501 02:34:44.568894   32853 system_pods.go:89] "storage-provisioner" [371423a6-a156-4e8d-bf66-812d606cc8d7] Running
	I0501 02:34:44.568902   32853 system_pods.go:126] duration metric: took 210.864899ms to wait for k8s-apps to be running ...
	I0501 02:34:44.568911   32853 system_svc.go:44] waiting for kubelet service to be running ....
	I0501 02:34:44.568950   32853 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 02:34:44.586737   32853 system_svc.go:56] duration metric: took 17.813264ms WaitForService to wait for kubelet
	I0501 02:34:44.586769   32853 kubeadm.go:576] duration metric: took 18.329255466s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0501 02:34:44.586793   32853 node_conditions.go:102] verifying NodePressure condition ...
	I0501 02:34:44.754243   32853 request.go:629] Waited for 167.353029ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/nodes
	I0501 02:34:44.754293   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes
	I0501 02:34:44.754298   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:44.754306   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:44.754317   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:44.757904   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:34:44.759059   32853 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 02:34:44.759079   32853 node_conditions.go:123] node cpu capacity is 2
	I0501 02:34:44.759088   32853 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 02:34:44.759092   32853 node_conditions.go:123] node cpu capacity is 2
	I0501 02:34:44.759098   32853 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 02:34:44.759103   32853 node_conditions.go:123] node cpu capacity is 2
	I0501 02:34:44.759111   32853 node_conditions.go:105] duration metric: took 172.311739ms to run NodePressure ...
	I0501 02:34:44.759125   32853 start.go:240] waiting for startup goroutines ...
	I0501 02:34:44.759151   32853 start.go:254] writing updated cluster config ...
	I0501 02:34:44.759456   32853 ssh_runner.go:195] Run: rm -f paused
	I0501 02:34:44.808612   32853 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0501 02:34:44.811518   32853 out.go:177] * Done! kubectl is now configured to use "ha-329926" cluster and "default" namespace by default
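	The trace above is the readiness wait in action: for each control-plane pod the client repeatedly GETs the pod object (and its node) until the pod reports the Ready condition, pausing when client-side throttling kicks in, then moves on to the next component. A minimal client-go sketch of that polling pattern is shown below for reference — it is illustrative only, not minikube's pod_ready.go, and the kubeconfig path, namespace, and pod name are placeholder assumptions.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumption: a kubeconfig at the default location ($HOME/.kube/config).
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// Poll every 500ms, give up after 6 minutes — mirroring the
		// "waiting up to 6m0s for pod ... to be Ready" lines in the log.
		err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				// Placeholder pod name taken from the log for illustration.
				pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "etcd-ha-329926-m03", metav1.GetOptions{})
				if err != nil {
					return false, err
				}
				// A pod counts as ready once its Ready condition is True.
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
		if err != nil {
			panic(err)
		}
		fmt.Println("pod is Ready")
	}

	The "Waited for ... due to client-side throttling" lines come from client-go's default request rate limiter rather than API-server priority and fairness; a sketch like the one above would show the same messages if run with verbose client-go logging.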
	
	
	==> CRI-O <==
	May 01 02:38:17 ha-329926 crio[686]: time="2024-05-01 02:38:17.789028577Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=08595afa-2cc7-4c87-895c-17fdbf411333 name=/runtime.v1.RuntimeService/Version
	May 01 02:38:17 ha-329926 crio[686]: time="2024-05-01 02:38:17.790212492Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b1e52758-04c2-4c3c-ae89-0b3f4dcc1ecd name=/runtime.v1.ImageService/ImageFsInfo
	May 01 02:38:17 ha-329926 crio[686]: time="2024-05-01 02:38:17.791840302Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714531097791567097,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b1e52758-04c2-4c3c-ae89-0b3f4dcc1ecd name=/runtime.v1.ImageService/ImageFsInfo
	May 01 02:38:17 ha-329926 crio[686]: time="2024-05-01 02:38:17.799210518Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4332ef53-6177-4d71-a522-2be407c5be2f name=/runtime.v1.RuntimeService/ListContainers
	May 01 02:38:17 ha-329926 crio[686]: time="2024-05-01 02:38:17.799419622Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4332ef53-6177-4d71-a522-2be407c5be2f name=/runtime.v1.RuntimeService/ListContainers
	May 01 02:38:17 ha-329926 crio[686]: time="2024-05-01 02:38:17.799795525Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4d8c54a9eb6fd7f6b4514ddfa0201da9a28692806db71b7d277493e9f9b90233,PodSandboxId:abf4acd7dd09ff0cf728fe61b1bf3c73291eacc97a98f9f2d79b9fe49a629266,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714530889047890387,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-nwj5x,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0cfb5bda-fca7-479f-98d3-6be9bddf0e1c,},Annotations:map[string]string{io.kubernetes.container.hash: b50f62c1,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:619f66869569c782fd59143653e79694cc74f9ae89927d4f16dfcb10f47d0e03,PodSandboxId:0fe93b95f6356e43c599c74942976a3c1c2025ad4e52377711767c38c88d3d63,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714530725115289651,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cfdqc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a37e982e-9e4f-43bf-b957-0d6f082f4ec8,},Annotations:map[string]string{io.kubernetes.container.hash: d10dbf58,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:693a12cd2b2c6de7a7c03e4d290e69cdcef9f4f17b75ff84f84fb81b8297cd63,PodSandboxId:1771f42c6abecb905d783dc2c43071807fc9138d9eae9432b2b132602a450cb2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714530725082746654,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2h8lc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
937e09f0-6a7d-4387-aa19-ee959eb5a2a5,},Annotations:map[string]string{io.kubernetes.container.hash: 64cd0f7d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbc7b6bc224b5b53e156316187f05c941fd17da22bca2cc7fecf5071d8eb4d38,PodSandboxId:05fed297415fe992b6ceac2c7aef1f62bcd2e60cf49b1d9d743697eee2cb3af3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1714530724054226796,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 371423a6-a156-4e8d-bf66-812d606cc8d7,},Annotations:map[string]string{io.kubernetes.container.hash: 79218c39,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:257c4b72e49ea613701bb138700cc82cde325fb0c005942fc50bd070378cf0eb,PodSandboxId:ad0b43789b437ced381dd7eb2d9868a7746a793b32c75f341a8f9efae3a1de24,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:17145307
22097649549,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-kcmp7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e15c166-9ba1-40c9-8f33-db7f83733932,},Annotations:map[string]string{io.kubernetes.container.hash: fdceac74,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ab64850e34b66cbe91e099c91014051472b87888130297cd7c47a4a78992140,PodSandboxId:f6611da96d51a594f93865322c664539feca749ea5a59ee33b637aae006635ba,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714530722007148476,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-msshn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7575fbfc-11ce-4223-bd99-ff9cdddd3568,},Annotations:map[string]string{io.kubernetes.container.hash: 725b7ad5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9563ee09b7dc14582bda46368040d65e26370cf354a48e6db28fb4d5169a41db,PodSandboxId:8e4b8a65b029e97b7caac8a0741c84135d0828b6c08c910ffe39c62fad15b348,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1714530704705366179,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-329926,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c3b226201c27ab5f848e6c44c130330,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3ffc6d046e214333ed85219819ad54bfa5c3ac0b10167beacf72edbd6035736,PodSandboxId:170d412885089b502fe3701809ec5f3271a9fd4f9181bf1c293cd527d144907b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714530701588736902,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-329926,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50544e7f95cae164184f9b27f78747c6,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d24a4adfe9096e0063099c3390b72f12094c22465e8b666eb999e30740b77ea3,PodSandboxId:f0b4ec2fbb3da1f22c55229886d7442b77bfddb7283930fbd8a5792aab374edd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714530701591213003,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-329926,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d6c0ce9d370e02811c06c5c50fb7da1,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f36a128ab65a069e1cafa5abf1b7774272b73428ea3826f595758d5e99b4e93,PodSandboxId:0c17dc8e917b3fee067acb3ff1b63beb8742f444a12d58ee96183b021052a9ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714530701461255731,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name:
etcd-ha-329926,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 684463257510837a1c150a7df713bf62,},Annotations:map[string]string{io.kubernetes.container.hash: d6da2a03,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:347407ef9dd66d0f2a44d6bc871649c2f38c1263ef6f3a33d6574f0e149ab701,PodSandboxId:65643d458b7e95f734a62743c303ec72adbb23f0caf328e66b40f003fc10141e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714530701541408038,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-329926,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c603c91fe09a36a9d3862475188142a,},Annotations:map[string]string{io.kubernetes.container.hash: 740b8b39,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4332ef53-6177-4d71-a522-2be407c5be2f name=/runtime.v1.RuntimeService/ListContainers
	May 01 02:38:17 ha-329926 crio[686]: time="2024-05-01 02:38:17.841728519Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dccb5aa0-d120-447a-a5ca-50a3b588e124 name=/runtime.v1.RuntimeService/Version
	May 01 02:38:17 ha-329926 crio[686]: time="2024-05-01 02:38:17.841809106Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dccb5aa0-d120-447a-a5ca-50a3b588e124 name=/runtime.v1.RuntimeService/Version
	May 01 02:38:17 ha-329926 crio[686]: time="2024-05-01 02:38:17.843456948Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=160cb32f-a9d9-4cc2-b959-3677aae42d69 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 02:38:17 ha-329926 crio[686]: time="2024-05-01 02:38:17.844408036Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714531097844380376,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=160cb32f-a9d9-4cc2-b959-3677aae42d69 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 02:38:17 ha-329926 crio[686]: time="2024-05-01 02:38:17.845239377Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e0027c90-8e81-4a60-b84e-f218d94ba89e name=/runtime.v1.RuntimeService/ListContainers
	May 01 02:38:17 ha-329926 crio[686]: time="2024-05-01 02:38:17.845329701Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e0027c90-8e81-4a60-b84e-f218d94ba89e name=/runtime.v1.RuntimeService/ListContainers
	May 01 02:38:17 ha-329926 crio[686]: time="2024-05-01 02:38:17.845601387Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4d8c54a9eb6fd7f6b4514ddfa0201da9a28692806db71b7d277493e9f9b90233,PodSandboxId:abf4acd7dd09ff0cf728fe61b1bf3c73291eacc97a98f9f2d79b9fe49a629266,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714530889047890387,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-nwj5x,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0cfb5bda-fca7-479f-98d3-6be9bddf0e1c,},Annotations:map[string]string{io.kubernetes.container.hash: b50f62c1,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:619f66869569c782fd59143653e79694cc74f9ae89927d4f16dfcb10f47d0e03,PodSandboxId:0fe93b95f6356e43c599c74942976a3c1c2025ad4e52377711767c38c88d3d63,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714530725115289651,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cfdqc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a37e982e-9e4f-43bf-b957-0d6f082f4ec8,},Annotations:map[string]string{io.kubernetes.container.hash: d10dbf58,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:693a12cd2b2c6de7a7c03e4d290e69cdcef9f4f17b75ff84f84fb81b8297cd63,PodSandboxId:1771f42c6abecb905d783dc2c43071807fc9138d9eae9432b2b132602a450cb2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714530725082746654,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2h8lc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
937e09f0-6a7d-4387-aa19-ee959eb5a2a5,},Annotations:map[string]string{io.kubernetes.container.hash: 64cd0f7d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbc7b6bc224b5b53e156316187f05c941fd17da22bca2cc7fecf5071d8eb4d38,PodSandboxId:05fed297415fe992b6ceac2c7aef1f62bcd2e60cf49b1d9d743697eee2cb3af3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1714530724054226796,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 371423a6-a156-4e8d-bf66-812d606cc8d7,},Annotations:map[string]string{io.kubernetes.container.hash: 79218c39,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:257c4b72e49ea613701bb138700cc82cde325fb0c005942fc50bd070378cf0eb,PodSandboxId:ad0b43789b437ced381dd7eb2d9868a7746a793b32c75f341a8f9efae3a1de24,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:17145307
22097649549,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-kcmp7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e15c166-9ba1-40c9-8f33-db7f83733932,},Annotations:map[string]string{io.kubernetes.container.hash: fdceac74,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ab64850e34b66cbe91e099c91014051472b87888130297cd7c47a4a78992140,PodSandboxId:f6611da96d51a594f93865322c664539feca749ea5a59ee33b637aae006635ba,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714530722007148476,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-msshn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7575fbfc-11ce-4223-bd99-ff9cdddd3568,},Annotations:map[string]string{io.kubernetes.container.hash: 725b7ad5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9563ee09b7dc14582bda46368040d65e26370cf354a48e6db28fb4d5169a41db,PodSandboxId:8e4b8a65b029e97b7caac8a0741c84135d0828b6c08c910ffe39c62fad15b348,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1714530704705366179,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-329926,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c3b226201c27ab5f848e6c44c130330,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3ffc6d046e214333ed85219819ad54bfa5c3ac0b10167beacf72edbd6035736,PodSandboxId:170d412885089b502fe3701809ec5f3271a9fd4f9181bf1c293cd527d144907b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714530701588736902,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-329926,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50544e7f95cae164184f9b27f78747c6,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d24a4adfe9096e0063099c3390b72f12094c22465e8b666eb999e30740b77ea3,PodSandboxId:f0b4ec2fbb3da1f22c55229886d7442b77bfddb7283930fbd8a5792aab374edd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714530701591213003,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-329926,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d6c0ce9d370e02811c06c5c50fb7da1,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f36a128ab65a069e1cafa5abf1b7774272b73428ea3826f595758d5e99b4e93,PodSandboxId:0c17dc8e917b3fee067acb3ff1b63beb8742f444a12d58ee96183b021052a9ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714530701461255731,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name:
etcd-ha-329926,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 684463257510837a1c150a7df713bf62,},Annotations:map[string]string{io.kubernetes.container.hash: d6da2a03,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:347407ef9dd66d0f2a44d6bc871649c2f38c1263ef6f3a33d6574f0e149ab701,PodSandboxId:65643d458b7e95f734a62743c303ec72adbb23f0caf328e66b40f003fc10141e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714530701541408038,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-329926,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c603c91fe09a36a9d3862475188142a,},Annotations:map[string]string{io.kubernetes.container.hash: 740b8b39,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e0027c90-8e81-4a60-b84e-f218d94ba89e name=/runtime.v1.RuntimeService/ListContainers
	May 01 02:38:17 ha-329926 crio[686]: time="2024-05-01 02:38:17.863075914Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=60d32d8f-64b1-4fc9-a3e0-e5b3498c4e60 name=/runtime.v1.RuntimeService/ListPodSandbox
	May 01 02:38:17 ha-329926 crio[686]: time="2024-05-01 02:38:17.863357910Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:abf4acd7dd09ff0cf728fe61b1bf3c73291eacc97a98f9f2d79b9fe49a629266,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-nwj5x,Uid:0cfb5bda-fca7-479f-98d3-6be9bddf0e1c,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714530886161507213,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-nwj5x,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0cfb5bda-fca7-479f-98d3-6be9bddf0e1c,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-01T02:34:45.838459920Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0fe93b95f6356e43c599c74942976a3c1c2025ad4e52377711767c38c88d3d63,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-cfdqc,Uid:a37e982e-9e4f-43bf-b957-0d6f082f4ec8,Namespace:kube-system,Attempt:0,},State:S
ANDBOX_READY,CreatedAt:1714530724864321813,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-cfdqc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a37e982e-9e4f-43bf-b957-0d6f082f4ec8,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-01T02:32:03.652882941Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1771f42c6abecb905d783dc2c43071807fc9138d9eae9432b2b132602a450cb2,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-2h8lc,Uid:937e09f0-6a7d-4387-aa19-ee959eb5a2a5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714530724862793403,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-2h8lc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 937e09f0-6a7d-4387-aa19-ee959eb5a2a5,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2
024-05-01T02:32:03.650626133Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:05fed297415fe992b6ceac2c7aef1f62bcd2e60cf49b1d9d743697eee2cb3af3,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:371423a6-a156-4e8d-bf66-812d606cc8d7,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714530723957087357,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 371423a6-a156-4e8d-bf66-812d606cc8d7,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"im
age\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-05-01T02:32:03.646226094Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f6611da96d51a594f93865322c664539feca749ea5a59ee33b637aae006635ba,Metadata:&PodSandboxMetadata{Name:kube-proxy-msshn,Uid:7575fbfc-11ce-4223-bd99-ff9cdddd3568,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714530721645073802,Labels:map[string]string{controller-revision-hash: 79cf874c65,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-msshn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7575fbfc-11ce-4223-bd99-ff9cdddd3568,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]
string{kubernetes.io/config.seen: 2024-05-01T02:32:01.299744099Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ad0b43789b437ced381dd7eb2d9868a7746a793b32c75f341a8f9efae3a1de24,Metadata:&PodSandboxMetadata{Name:kindnet-kcmp7,Uid:8e15c166-9ba1-40c9-8f33-db7f83733932,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714530721605509846,Labels:map[string]string{app: kindnet,controller-revision-hash: 64fdfd5c6d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-kcmp7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e15c166-9ba1-40c9-8f33-db7f83733932,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-01T02:32:01.283615769Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:65643d458b7e95f734a62743c303ec72adbb23f0caf328e66b40f003fc10141e,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-329926,Uid:4c603c91fe09a36a9d3862475188142a,Namespace:kube-system,Attempt:0
,},State:SANDBOX_READY,CreatedAt:1714530701336984750,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-329926,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c603c91fe09a36a9d3862475188142a,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.5:8443,kubernetes.io/config.hash: 4c603c91fe09a36a9d3862475188142a,kubernetes.io/config.seen: 2024-05-01T02:31:40.836102623Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f0b4ec2fbb3da1f22c55229886d7442b77bfddb7283930fbd8a5792aab374edd,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-329926,Uid:3d6c0ce9d370e02811c06c5c50fb7da1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714530701322269105,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-329926,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d6c0ce9d370e02811c06c5c50fb7da1,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 3d6c0ce9d370e02811c06c5c50fb7da1,kubernetes.io/config.seen: 2024-05-01T02:31:40.836103355Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0c17dc8e917b3fee067acb3ff1b63beb8742f444a12d58ee96183b021052a9ed,Metadata:&PodSandboxMetadata{Name:etcd-ha-329926,Uid:684463257510837a1c150a7df713bf62,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714530701294808017,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-329926,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 684463257510837a1c150a7df713bf62,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.5:2379,kubernetes.io/config.hash: 684463257510837a1c150a7df713bf62,kubernetes.io/config.seen: 2024-05-01T02:31:40.836101509Z,kubernet
es.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:170d412885089b502fe3701809ec5f3271a9fd4f9181bf1c293cd527d144907b,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-329926,Uid:50544e7f95cae164184f9b27f78747c6,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714530701289929837,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-329926,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50544e7f95cae164184f9b27f78747c6,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 50544e7f95cae164184f9b27f78747c6,kubernetes.io/config.seen: 2024-05-01T02:31:40.836097452Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:8e4b8a65b029e97b7caac8a0741c84135d0828b6c08c910ffe39c62fad15b348,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-329926,Uid:9c3b226201c27ab5f848e6c44c130330,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714530701284189414,Labels:ma
p[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-329926,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c3b226201c27ab5f848e6c44c130330,},Annotations:map[string]string{kubernetes.io/config.hash: 9c3b226201c27ab5f848e6c44c130330,kubernetes.io/config.seen: 2024-05-01T02:31:40.836100540Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=60d32d8f-64b1-4fc9-a3e0-e5b3498c4e60 name=/runtime.v1.RuntimeService/ListPodSandbox
	May 01 02:38:17 ha-329926 crio[686]: time="2024-05-01 02:38:17.864107828Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b14da220-073c-481f-b95f-464d6f5c76c0 name=/runtime.v1.RuntimeService/ListContainers
	May 01 02:38:17 ha-329926 crio[686]: time="2024-05-01 02:38:17.864194423Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b14da220-073c-481f-b95f-464d6f5c76c0 name=/runtime.v1.RuntimeService/ListContainers
	May 01 02:38:17 ha-329926 crio[686]: time="2024-05-01 02:38:17.864445430Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4d8c54a9eb6fd7f6b4514ddfa0201da9a28692806db71b7d277493e9f9b90233,PodSandboxId:abf4acd7dd09ff0cf728fe61b1bf3c73291eacc97a98f9f2d79b9fe49a629266,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714530889047890387,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-nwj5x,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0cfb5bda-fca7-479f-98d3-6be9bddf0e1c,},Annotations:map[string]string{io.kubernetes.container.hash: b50f62c1,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:619f66869569c782fd59143653e79694cc74f9ae89927d4f16dfcb10f47d0e03,PodSandboxId:0fe93b95f6356e43c599c74942976a3c1c2025ad4e52377711767c38c88d3d63,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714530725115289651,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cfdqc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a37e982e-9e4f-43bf-b957-0d6f082f4ec8,},Annotations:map[string]string{io.kubernetes.container.hash: d10dbf58,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:693a12cd2b2c6de7a7c03e4d290e69cdcef9f4f17b75ff84f84fb81b8297cd63,PodSandboxId:1771f42c6abecb905d783dc2c43071807fc9138d9eae9432b2b132602a450cb2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714530725082746654,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2h8lc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
937e09f0-6a7d-4387-aa19-ee959eb5a2a5,},Annotations:map[string]string{io.kubernetes.container.hash: 64cd0f7d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbc7b6bc224b5b53e156316187f05c941fd17da22bca2cc7fecf5071d8eb4d38,PodSandboxId:05fed297415fe992b6ceac2c7aef1f62bcd2e60cf49b1d9d743697eee2cb3af3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1714530724054226796,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 371423a6-a156-4e8d-bf66-812d606cc8d7,},Annotations:map[string]string{io.kubernetes.container.hash: 79218c39,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:257c4b72e49ea613701bb138700cc82cde325fb0c005942fc50bd070378cf0eb,PodSandboxId:ad0b43789b437ced381dd7eb2d9868a7746a793b32c75f341a8f9efae3a1de24,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:17145307
22097649549,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-kcmp7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e15c166-9ba1-40c9-8f33-db7f83733932,},Annotations:map[string]string{io.kubernetes.container.hash: fdceac74,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ab64850e34b66cbe91e099c91014051472b87888130297cd7c47a4a78992140,PodSandboxId:f6611da96d51a594f93865322c664539feca749ea5a59ee33b637aae006635ba,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714530722007148476,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-msshn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7575fbfc-11ce-4223-bd99-ff9cdddd3568,},Annotations:map[string]string{io.kubernetes.container.hash: 725b7ad5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9563ee09b7dc14582bda46368040d65e26370cf354a48e6db28fb4d5169a41db,PodSandboxId:8e4b8a65b029e97b7caac8a0741c84135d0828b6c08c910ffe39c62fad15b348,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1714530704705366179,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-329926,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c3b226201c27ab5f848e6c44c130330,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3ffc6d046e214333ed85219819ad54bfa5c3ac0b10167beacf72edbd6035736,PodSandboxId:170d412885089b502fe3701809ec5f3271a9fd4f9181bf1c293cd527d144907b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714530701588736902,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-329926,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50544e7f95cae164184f9b27f78747c6,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d24a4adfe9096e0063099c3390b72f12094c22465e8b666eb999e30740b77ea3,PodSandboxId:f0b4ec2fbb3da1f22c55229886d7442b77bfddb7283930fbd8a5792aab374edd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714530701591213003,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-329926,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d6c0ce9d370e02811c06c5c50fb7da1,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f36a128ab65a069e1cafa5abf1b7774272b73428ea3826f595758d5e99b4e93,PodSandboxId:0c17dc8e917b3fee067acb3ff1b63beb8742f444a12d58ee96183b021052a9ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714530701461255731,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name:
etcd-ha-329926,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 684463257510837a1c150a7df713bf62,},Annotations:map[string]string{io.kubernetes.container.hash: d6da2a03,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:347407ef9dd66d0f2a44d6bc871649c2f38c1263ef6f3a33d6574f0e149ab701,PodSandboxId:65643d458b7e95f734a62743c303ec72adbb23f0caf328e66b40f003fc10141e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714530701541408038,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-329926,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c603c91fe09a36a9d3862475188142a,},Annotations:map[string]string{io.kubernetes.container.hash: 740b8b39,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b14da220-073c-481f-b95f-464d6f5c76c0 name=/runtime.v1.RuntimeService/ListContainers
	May 01 02:38:17 ha-329926 crio[686]: time="2024-05-01 02:38:17.889263613Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c26d74a4-304b-446d-9164-aa38a867e3b7 name=/runtime.v1.RuntimeService/Version
	May 01 02:38:17 ha-329926 crio[686]: time="2024-05-01 02:38:17.889335500Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c26d74a4-304b-446d-9164-aa38a867e3b7 name=/runtime.v1.RuntimeService/Version
	May 01 02:38:17 ha-329926 crio[686]: time="2024-05-01 02:38:17.890616414Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=93124cd0-9e4f-44a4-9354-5eec44a6d896 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 02:38:17 ha-329926 crio[686]: time="2024-05-01 02:38:17.891151997Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714531097891128741,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=93124cd0-9e4f-44a4-9354-5eec44a6d896 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 02:38:17 ha-329926 crio[686]: time="2024-05-01 02:38:17.891793029Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e0260121-167f-4af1-9f3c-7a20bfa44c5e name=/runtime.v1.RuntimeService/ListContainers
	May 01 02:38:17 ha-329926 crio[686]: time="2024-05-01 02:38:17.891884588Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e0260121-167f-4af1-9f3c-7a20bfa44c5e name=/runtime.v1.RuntimeService/ListContainers
	May 01 02:38:17 ha-329926 crio[686]: time="2024-05-01 02:38:17.892178655Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4d8c54a9eb6fd7f6b4514ddfa0201da9a28692806db71b7d277493e9f9b90233,PodSandboxId:abf4acd7dd09ff0cf728fe61b1bf3c73291eacc97a98f9f2d79b9fe49a629266,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714530889047890387,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-nwj5x,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0cfb5bda-fca7-479f-98d3-6be9bddf0e1c,},Annotations:map[string]string{io.kubernetes.container.hash: b50f62c1,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:619f66869569c782fd59143653e79694cc74f9ae89927d4f16dfcb10f47d0e03,PodSandboxId:0fe93b95f6356e43c599c74942976a3c1c2025ad4e52377711767c38c88d3d63,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714530725115289651,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cfdqc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a37e982e-9e4f-43bf-b957-0d6f082f4ec8,},Annotations:map[string]string{io.kubernetes.container.hash: d10dbf58,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:693a12cd2b2c6de7a7c03e4d290e69cdcef9f4f17b75ff84f84fb81b8297cd63,PodSandboxId:1771f42c6abecb905d783dc2c43071807fc9138d9eae9432b2b132602a450cb2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714530725082746654,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2h8lc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
937e09f0-6a7d-4387-aa19-ee959eb5a2a5,},Annotations:map[string]string{io.kubernetes.container.hash: 64cd0f7d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbc7b6bc224b5b53e156316187f05c941fd17da22bca2cc7fecf5071d8eb4d38,PodSandboxId:05fed297415fe992b6ceac2c7aef1f62bcd2e60cf49b1d9d743697eee2cb3af3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1714530724054226796,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 371423a6-a156-4e8d-bf66-812d606cc8d7,},Annotations:map[string]string{io.kubernetes.container.hash: 79218c39,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:257c4b72e49ea613701bb138700cc82cde325fb0c005942fc50bd070378cf0eb,PodSandboxId:ad0b43789b437ced381dd7eb2d9868a7746a793b32c75f341a8f9efae3a1de24,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:17145307
22097649549,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-kcmp7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e15c166-9ba1-40c9-8f33-db7f83733932,},Annotations:map[string]string{io.kubernetes.container.hash: fdceac74,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ab64850e34b66cbe91e099c91014051472b87888130297cd7c47a4a78992140,PodSandboxId:f6611da96d51a594f93865322c664539feca749ea5a59ee33b637aae006635ba,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714530722007148476,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-msshn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7575fbfc-11ce-4223-bd99-ff9cdddd3568,},Annotations:map[string]string{io.kubernetes.container.hash: 725b7ad5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9563ee09b7dc14582bda46368040d65e26370cf354a48e6db28fb4d5169a41db,PodSandboxId:8e4b8a65b029e97b7caac8a0741c84135d0828b6c08c910ffe39c62fad15b348,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1714530704705366179,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-329926,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c3b226201c27ab5f848e6c44c130330,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3ffc6d046e214333ed85219819ad54bfa5c3ac0b10167beacf72edbd6035736,PodSandboxId:170d412885089b502fe3701809ec5f3271a9fd4f9181bf1c293cd527d144907b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714530701588736902,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-329926,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50544e7f95cae164184f9b27f78747c6,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d24a4adfe9096e0063099c3390b72f12094c22465e8b666eb999e30740b77ea3,PodSandboxId:f0b4ec2fbb3da1f22c55229886d7442b77bfddb7283930fbd8a5792aab374edd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714530701591213003,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-329926,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d6c0ce9d370e02811c06c5c50fb7da1,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f36a128ab65a069e1cafa5abf1b7774272b73428ea3826f595758d5e99b4e93,PodSandboxId:0c17dc8e917b3fee067acb3ff1b63beb8742f444a12d58ee96183b021052a9ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714530701461255731,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name:
etcd-ha-329926,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 684463257510837a1c150a7df713bf62,},Annotations:map[string]string{io.kubernetes.container.hash: d6da2a03,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:347407ef9dd66d0f2a44d6bc871649c2f38c1263ef6f3a33d6574f0e149ab701,PodSandboxId:65643d458b7e95f734a62743c303ec72adbb23f0caf328e66b40f003fc10141e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714530701541408038,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-329926,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c603c91fe09a36a9d3862475188142a,},Annotations:map[string]string{io.kubernetes.container.hash: 740b8b39,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e0260121-167f-4af1-9f3c-7a20bfa44c5e name=/runtime.v1.RuntimeService/ListContainers
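
	The ListContainers, ListPodSandbox, Version and ImageFsInfo entries above are CRI-O answering routine debug-level CRI polls from the kubelet and the log collector; within the quoted span every container is CONTAINER_RUNNING and every sandbox SANDBOX_READY. A minimal sketch of re-running the same queries by hand (the binary path and the ha-329926 profile are taken from this run; crictl and journalctl are assumed to be present in the node image, as they normally are in the minikube ISO):
	
	  $ out/minikube-linux-amd64 -p ha-329926 ssh
	  $ sudo journalctl -u crio --no-pager | tail -n 100   # the debug stream quoted above
	  $ sudo crictl ps -a                                  # containers, i.e. the ListContainers responses
	  $ sudo crictl pods                                   # sandboxes, i.e. the ListPodSandbox response
	  $ sudo crictl imagefsinfo                            # the ImageFsInfo response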
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4d8c54a9eb6fd       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   abf4acd7dd09f       busybox-fc5497c4f-nwj5x
	619f66869569c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   0fe93b95f6356       coredns-7db6d8ff4d-cfdqc
	693a12cd2b2c6       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   1771f42c6abec       coredns-7db6d8ff4d-2h8lc
	fbc7b6bc224b5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   05fed297415fe       storage-provisioner
	257c4b72e49ea       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      6 minutes ago       Running             kindnet-cni               0                   ad0b43789b437       kindnet-kcmp7
	2ab64850e34b6       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      6 minutes ago       Running             kube-proxy                0                   f6611da96d51a       kube-proxy-msshn
	9563ee09b7dc1       ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a     6 minutes ago       Running             kube-vip                  0                   8e4b8a65b029e       kube-vip-ha-329926
	d24a4adfe9096       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      6 minutes ago       Running             kube-controller-manager   0                   f0b4ec2fbb3da       kube-controller-manager-ha-329926
	e3ffc6d046e21       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      6 minutes ago       Running             kube-scheduler            0                   170d412885089       kube-scheduler-ha-329926
	347407ef9dd66       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      6 minutes ago       Running             kube-apiserver            0                   65643d458b7e9       kube-apiserver-ha-329926
	9f36a128ab65a       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      6 minutes ago       Running             etcd                      0                   0c17dc8e917b3       etcd-ha-329926
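
	The container status table is the same data as the ListContainers responses, rendered one row per container; every control-plane and workload container on ha-329926 is Running with 0 restarts. A roughly equivalent listing can be produced directly with crictl (a sketch assuming the ha-329926 profile from this run; not necessarily the exact command the log collector itself uses):
	
	  $ out/minikube-linux-amd64 -p ha-329926 ssh -- sudo crictl ps -a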
	
	
	==> coredns [619f66869569c782fd59143653e79694cc74f9ae89927d4f16dfcb10f47d0e03] <==
	[INFO] 10.244.1.2:53229 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000157631s
	[INFO] 10.244.1.2:58661 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.013593602s
	[INFO] 10.244.1.2:38209 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000174169s
	[INFO] 10.244.1.2:49411 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000226927s
	[INFO] 10.244.0.4:36823 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000251251s
	[INFO] 10.244.0.4:50159 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001217267s
	[INFO] 10.244.0.4:40861 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000095644s
	[INFO] 10.244.0.4:39347 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000037736s
	[INFO] 10.244.2.2:41105 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000265426s
	[INFO] 10.244.2.2:60245 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000092358s
	[INFO] 10.244.2.2:33866 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00027339s
	[INFO] 10.244.2.2:40430 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000118178s
	[INFO] 10.244.2.2:34835 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000101675s
	[INFO] 10.244.1.2:50970 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000173405s
	[INFO] 10.244.1.2:45808 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000138806s
	[INFO] 10.244.0.4:35255 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000156547s
	[INFO] 10.244.0.4:41916 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000142712s
	[INFO] 10.244.0.4:47485 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000089433s
	[INFO] 10.244.2.2:53686 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000133335s
	[INFO] 10.244.2.2:36841 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000214942s
	[INFO] 10.244.2.2:60707 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000154s
	[INFO] 10.244.1.2:56577 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000484498s
	[INFO] 10.244.0.4:54313 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000184738s
	[INFO] 10.244.0.4:52463 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000369344s
	[INFO] 10.244.2.2:41039 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000224698s
	
	
	==> coredns [693a12cd2b2c6de7a7c03e4d290e69cdcef9f4f17b75ff84f84fb81b8297cd63] <==
	[INFO] 10.244.1.2:53262 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.005416004s
	[INFO] 10.244.0.4:55487 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000178658s
	[INFO] 10.244.1.2:56056 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000290654s
	[INFO] 10.244.1.2:49988 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000174539s
	[INFO] 10.244.1.2:51093 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.019474136s
	[INFO] 10.244.1.2:60518 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00017936s
	[INFO] 10.244.0.4:49957 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000203599s
	[INFO] 10.244.0.4:42538 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001710693s
	[INFO] 10.244.0.4:56099 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000083655s
	[INFO] 10.244.0.4:32984 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000156518s
	[INFO] 10.244.2.2:55668 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001793326s
	[INFO] 10.244.2.2:50808 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001174633s
	[INFO] 10.244.2.2:44291 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000119382s
	[INFO] 10.244.1.2:38278 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000204436s
	[INFO] 10.244.1.2:59141 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000117309s
	[INFO] 10.244.0.4:37516 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00005532s
	[INFO] 10.244.2.2:57332 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000189855s
	[INFO] 10.244.1.2:34171 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00024042s
	[INFO] 10.244.1.2:37491 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000234774s
	[INFO] 10.244.1.2:47588 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000815872s
	[INFO] 10.244.0.4:38552 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000135078s
	[INFO] 10.244.0.4:37827 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000154857s
	[INFO] 10.244.2.2:47767 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000154967s
	[INFO] 10.244.2.2:56393 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000156764s
	[INFO] 10.244.2.2:38616 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000127045s
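
	The CoreDNS query streams above come from the bundled log capture; against a live cluster the same stream can usually be pulled directly with kubectl. The pod name below is taken from the ha-329926 node description later in this section, and the kubectl context is assumed to match the ha-329926 profile:

	  kubectl --context ha-329926 -n kube-system logs coredns-7db6d8ff4d-2h8lc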
	
	
	==> describe nodes <==
	Name:               ha-329926
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-329926
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e
	                    minikube.k8s.io/name=ha-329926
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_01T02_31_49_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 May 2024 02:31:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-329926
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 May 2024 02:38:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 May 2024 02:35:22 +0000   Wed, 01 May 2024 02:31:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 May 2024 02:35:22 +0000   Wed, 01 May 2024 02:31:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 May 2024 02:35:22 +0000   Wed, 01 May 2024 02:31:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 May 2024 02:35:22 +0000   Wed, 01 May 2024 02:32:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.5
	  Hostname:    ha-329926
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f2958e1e59474320901fe20ba723db00
	  System UUID:                f2958e1e-5947-4320-901f-e20ba723db00
	  Boot ID:                    29fc4c0c-83d6-4af9-8767-4e1b7b7102d4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-nwj5x              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m33s
	  kube-system                 coredns-7db6d8ff4d-2h8lc             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m17s
	  kube-system                 coredns-7db6d8ff4d-cfdqc             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m17s
	  kube-system                 etcd-ha-329926                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m30s
	  kube-system                 kindnet-kcmp7                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m17s
	  kube-system                 kube-apiserver-ha-329926             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m30s
	  kube-system                 kube-controller-manager-ha-329926    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m32s
	  kube-system                 kube-proxy-msshn                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m17s
	  kube-system                 kube-scheduler-ha-329926             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m30s
	  kube-system                 kube-vip-ha-329926                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m33s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m15s  kube-proxy       
	  Normal  Starting                 6m30s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m30s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m30s  kubelet          Node ha-329926 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m30s  kubelet          Node ha-329926 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m30s  kubelet          Node ha-329926 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m18s  node-controller  Node ha-329926 event: Registered Node ha-329926 in Controller
	  Normal  NodeReady                6m15s  kubelet          Node ha-329926 status is now: NodeReady
	  Normal  RegisteredNode           4m53s  node-controller  Node ha-329926 event: Registered Node ha-329926 in Controller
	  Normal  RegisteredNode           3m38s  node-controller  Node ha-329926 event: Registered Node ha-329926 in Controller
	
	
	Name:               ha-329926-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-329926-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e
	                    minikube.k8s.io/name=ha-329926
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_01T02_33_11_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 May 2024 02:33:07 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-329926-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 May 2024 02:35:52 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 01 May 2024 02:35:09 +0000   Wed, 01 May 2024 02:36:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 01 May 2024 02:35:09 +0000   Wed, 01 May 2024 02:36:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 01 May 2024 02:35:09 +0000   Wed, 01 May 2024 02:36:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 01 May 2024 02:35:09 +0000   Wed, 01 May 2024 02:36:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.79
	  Hostname:    ha-329926-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 135aac161d694487846d436743753149
	  System UUID:                135aac16-1d69-4487-846d-436743753149
	  Boot ID:                    34317182-9a7b-42af-9a1d-807830167258
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-h8dxv                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m33s
	  kube-system                 etcd-ha-329926-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m9s
	  kube-system                 kindnet-9r8zn                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m11s
	  kube-system                 kube-apiserver-ha-329926-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m10s
	  kube-system                 kube-controller-manager-ha-329926-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m9s
	  kube-system                 kube-proxy-rfsm8                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m11s
	  kube-system                 kube-scheduler-ha-329926-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m10s
	  kube-system                 kube-vip-ha-329926-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m6s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  5m11s (x8 over 5m11s)  kubelet          Node ha-329926-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m11s (x8 over 5m11s)  kubelet          Node ha-329926-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m11s (x7 over 5m11s)  kubelet          Node ha-329926-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m11s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m8s                   node-controller  Node ha-329926-m02 event: Registered Node ha-329926-m02 in Controller
	  Normal  RegisteredNode           4m53s                  node-controller  Node ha-329926-m02 event: Registered Node ha-329926-m02 in Controller
	  Normal  RegisteredNode           3m38s                  node-controller  Node ha-329926-m02 event: Registered Node ha-329926-m02 in Controller
	  Normal  NodeNotReady             103s                   node-controller  Node ha-329926-m02 status is now: NodeNotReady
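
	ha-329926-m02 is the node of interest here: it carries node.kubernetes.io/unreachable NoSchedule/NoExecute taints, all of its conditions are Unknown ("Kubelet stopped posting node status"), and the node-controller marked it NodeNotReady 103s before this snapshot. A typical follow-up, assuming the profile and node names above and that the m02 VM is still running, is to inspect the kubelet on that machine directly:

	  minikube -p ha-329926 ssh -n ha-329926-m02
	  sudo systemctl status kubelet
	  sudo journalctl -u kubelet --no-pager | tail -n 50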
	
	
	Name:               ha-329926-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-329926-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e
	                    minikube.k8s.io/name=ha-329926
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_01T02_34_25_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 May 2024 02:34:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-329926-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 May 2024 02:38:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 May 2024 02:34:51 +0000   Wed, 01 May 2024 02:34:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 May 2024 02:34:51 +0000   Wed, 01 May 2024 02:34:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 May 2024 02:34:51 +0000   Wed, 01 May 2024 02:34:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 May 2024 02:34:51 +0000   Wed, 01 May 2024 02:34:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.115
	  Hostname:    ha-329926-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1767eff05cce4be88efdc97aef5d41f4
	  System UUID:                1767eff0-5cce-4be8-8efd-c97aef5d41f4
	  Boot ID:                    cb6b191b-c518-4f73-b29a-a16a5fcd9713
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-s528n                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m33s
	  kube-system                 etcd-ha-329926-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         3m55s
	  kube-system                 kindnet-7gr9n                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m57s
	  kube-system                 kube-apiserver-ha-329926-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m55s
	  kube-system                 kube-controller-manager-ha-329926-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m55s
	  kube-system                 kube-proxy-jfnk9                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m57s
	  kube-system                 kube-scheduler-ha-329926-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m55s
	  kube-system                 kube-vip-ha-329926-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m51s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m57s (x8 over 3m57s)  kubelet          Node ha-329926-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m57s (x8 over 3m57s)  kubelet          Node ha-329926-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m57s (x7 over 3m57s)  kubelet          Node ha-329926-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m57s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m53s                  node-controller  Node ha-329926-m03 event: Registered Node ha-329926-m03 in Controller
	  Normal  RegisteredNode           3m53s                  node-controller  Node ha-329926-m03 event: Registered Node ha-329926-m03 in Controller
	  Normal  RegisteredNode           3m38s                  node-controller  Node ha-329926-m03 event: Registered Node ha-329926-m03 in Controller
	
	
	Name:               ha-329926-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-329926-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e
	                    minikube.k8s.io/name=ha-329926
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_01T02_35_25_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 May 2024 02:35:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-329926-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 May 2024 02:38:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 May 2024 02:35:55 +0000   Wed, 01 May 2024 02:35:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 May 2024 02:35:55 +0000   Wed, 01 May 2024 02:35:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 May 2024 02:35:55 +0000   Wed, 01 May 2024 02:35:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 May 2024 02:35:55 +0000   Wed, 01 May 2024 02:35:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.84
	  Hostname:    ha-329926-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b19ce422aa224cda91e88f6cd8b003f9
	  System UUID:                b19ce422-aa22-4cda-91e8-8f6cd8b003f9
	  Boot ID:                    8e829b97-ffa9-4d75-abf6-2a174d768e30
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-86ngt       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m54s
	  kube-system                 kube-proxy-9492r    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m48s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m54s (x2 over 2m54s)  kubelet          Node ha-329926-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m54s (x2 over 2m54s)  kubelet          Node ha-329926-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m54s (x2 over 2m54s)  kubelet          Node ha-329926-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m54s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m53s                  node-controller  Node ha-329926-m04 event: Registered Node ha-329926-m04 in Controller
	  Normal  RegisteredNode           2m53s                  node-controller  Node ha-329926-m04 event: Registered Node ha-329926-m04 in Controller
	  Normal  RegisteredNode           2m53s                  node-controller  Node ha-329926-m04 event: Registered Node ha-329926-m04 in Controller
	  Normal  NodeReady                2m43s                  kubelet          Node ha-329926-m04 status is now: NodeReady
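
	The per-node detail above is kubectl's node description for all four machines. Assuming the kubectl context matches the ha-329926 profile, it can be regenerated (and the NotReady state of m02 confirmed at a glance) with:

	  kubectl --context ha-329926 describe nodes
	  kubectl --context ha-329926 get nodes -o wide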
	
	
	==> dmesg <==
	[May 1 02:31] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052256] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043894] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.694169] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.641579] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.672174] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.723194] systemd-fstab-generator[602]: Ignoring "noauto" option for root device
	[  +0.059078] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.050190] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[  +0.172804] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.147592] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.297725] systemd-fstab-generator[671]: Ignoring "noauto" option for root device
	[  +4.784571] systemd-fstab-generator[771]: Ignoring "noauto" option for root device
	[  +0.063787] kauditd_printk_skb: 130 callbacks suppressed
	[  +5.533501] systemd-fstab-generator[961]: Ignoring "noauto" option for root device
	[  +0.060916] kauditd_printk_skb: 18 callbacks suppressed
	[  +7.479829] systemd-fstab-generator[1381]: Ignoring "noauto" option for root device
	[  +0.092024] kauditd_printk_skb: 79 callbacks suppressed
	[May 1 02:32] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.650154] kauditd_printk_skb: 74 callbacks suppressed
	
	
	==> etcd [9f36a128ab65a069e1cafa5abf1b7774272b73428ea3826f595758d5e99b4e93] <==
	{"level":"warn","ts":"2024-05-01T02:38:18.085283Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"64dbb1bdcfddc92c","rtt":"1.1965ms","error":"dial tcp 192.168.39.79:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-05-01T02:38:18.170811Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"64dbb1bdcfddc92c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-01T02:38:18.187034Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"64dbb1bdcfddc92c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-01T02:38:18.19527Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"64dbb1bdcfddc92c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-01T02:38:18.209604Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"64dbb1bdcfddc92c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-01T02:38:18.227314Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"64dbb1bdcfddc92c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-01T02:38:18.236601Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"64dbb1bdcfddc92c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-01T02:38:18.250754Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"64dbb1bdcfddc92c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-01T02:38:18.254738Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"64dbb1bdcfddc92c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-01T02:38:18.269491Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"64dbb1bdcfddc92c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-01T02:38:18.27133Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"64dbb1bdcfddc92c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-01T02:38:18.291419Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"64dbb1bdcfddc92c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-01T02:38:18.301233Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"64dbb1bdcfddc92c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-01T02:38:18.309449Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"64dbb1bdcfddc92c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-01T02:38:18.314459Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"64dbb1bdcfddc92c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-01T02:38:18.318938Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"64dbb1bdcfddc92c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-01T02:38:18.330285Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"64dbb1bdcfddc92c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-01T02:38:18.336938Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"64dbb1bdcfddc92c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-01T02:38:18.344155Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"64dbb1bdcfddc92c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-01T02:38:18.349453Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"64dbb1bdcfddc92c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-01T02:38:18.353213Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"64dbb1bdcfddc92c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-01T02:38:18.360393Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"64dbb1bdcfddc92c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-01T02:38:18.367003Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"64dbb1bdcfddc92c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-01T02:38:18.37109Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"64dbb1bdcfddc92c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-01T02:38:18.375199Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"64dbb1bdcfddc92c","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 02:38:18 up 7 min,  0 users,  load average: 0.25, 0.42, 0.25
	Linux ha-329926 5.10.207 #1 SMP Tue Apr 30 22:38:43 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [257c4b72e49ea613701bb138700cc82cde325fb0c005942fc50bd070378cf0eb] <==
	I0501 02:37:43.757904       1 main.go:250] Node ha-329926-m04 has CIDR [10.244.3.0/24] 
	I0501 02:37:53.770086       1 main.go:223] Handling node with IPs: map[192.168.39.5:{}]
	I0501 02:37:53.770150       1 main.go:227] handling current node
	I0501 02:37:53.770172       1 main.go:223] Handling node with IPs: map[192.168.39.79:{}]
	I0501 02:37:53.770184       1 main.go:250] Node ha-329926-m02 has CIDR [10.244.1.0/24] 
	I0501 02:37:53.770839       1 main.go:223] Handling node with IPs: map[192.168.39.115:{}]
	I0501 02:37:53.770894       1 main.go:250] Node ha-329926-m03 has CIDR [10.244.2.0/24] 
	I0501 02:37:53.771016       1 main.go:223] Handling node with IPs: map[192.168.39.84:{}]
	I0501 02:37:53.771066       1 main.go:250] Node ha-329926-m04 has CIDR [10.244.3.0/24] 
	I0501 02:38:03.779056       1 main.go:223] Handling node with IPs: map[192.168.39.5:{}]
	I0501 02:38:03.779196       1 main.go:227] handling current node
	I0501 02:38:03.779228       1 main.go:223] Handling node with IPs: map[192.168.39.79:{}]
	I0501 02:38:03.779250       1 main.go:250] Node ha-329926-m02 has CIDR [10.244.1.0/24] 
	I0501 02:38:03.779384       1 main.go:223] Handling node with IPs: map[192.168.39.115:{}]
	I0501 02:38:03.779405       1 main.go:250] Node ha-329926-m03 has CIDR [10.244.2.0/24] 
	I0501 02:38:03.779462       1 main.go:223] Handling node with IPs: map[192.168.39.84:{}]
	I0501 02:38:03.779480       1 main.go:250] Node ha-329926-m04 has CIDR [10.244.3.0/24] 
	I0501 02:38:13.793127       1 main.go:223] Handling node with IPs: map[192.168.39.5:{}]
	I0501 02:38:13.793486       1 main.go:227] handling current node
	I0501 02:38:13.793616       1 main.go:223] Handling node with IPs: map[192.168.39.79:{}]
	I0501 02:38:13.793625       1 main.go:250] Node ha-329926-m02 has CIDR [10.244.1.0/24] 
	I0501 02:38:13.794364       1 main.go:223] Handling node with IPs: map[192.168.39.115:{}]
	I0501 02:38:13.794473       1 main.go:250] Node ha-329926-m03 has CIDR [10.244.2.0/24] 
	I0501 02:38:13.794758       1 main.go:223] Handling node with IPs: map[192.168.39.84:{}]
	I0501 02:38:13.794894       1 main.go:250] Node ha-329926-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [347407ef9dd66d0f2a44d6bc871649c2f38c1263ef6f3a33d6574f0e149ab701] <==
	I0501 02:31:48.173859       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0501 02:31:48.202132       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0501 02:31:48.233574       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0501 02:32:01.259025       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0501 02:32:01.406512       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0501 02:34:22.090167       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0501 02:34:22.090264       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0501 02:34:22.090350       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 56.4µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0501 02:34:22.091592       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0501 02:34:22.091760       1 timeout.go:142] post-timeout activity - time-elapsed: 1.723373ms, POST "/api/v1/namespaces/kube-system/events" result: <nil>
	E0501 02:34:51.977591       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54016: use of closed network connection
	E0501 02:34:52.184940       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54034: use of closed network connection
	E0501 02:34:52.404232       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54048: use of closed network connection
	E0501 02:34:52.645445       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54076: use of closed network connection
	E0501 02:34:52.872942       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54094: use of closed network connection
	E0501 02:34:53.072199       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54102: use of closed network connection
	E0501 02:34:53.269091       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54128: use of closed network connection
	E0501 02:34:53.473976       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54142: use of closed network connection
	E0501 02:34:53.675277       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54160: use of closed network connection
	E0501 02:34:53.992299       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54184: use of closed network connection
	E0501 02:34:54.200055       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54200: use of closed network connection
	E0501 02:34:54.405926       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54218: use of closed network connection
	E0501 02:34:54.607335       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54244: use of closed network connection
	E0501 02:34:55.030335       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54268: use of closed network connection
	W0501 02:36:17.051220       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.115 192.168.39.5]
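
	The final warning shows the apiserver resetting the endpoints of the "kubernetes" service to [192.168.39.115 192.168.39.5], a list that no longer includes 192.168.39.79 (ha-329926-m02). Assuming the same ha-329926 context, the surviving endpoints can be confirmed with:

	  kubectl --context ha-329926 get endpoints kubernetes -o yaml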
	
	
	==> kube-controller-manager [d24a4adfe9096e0063099c3390b72f12094c22465e8b666eb999e30740b77ea3] <==
	I0501 02:34:46.423487       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="62.197µs"
	I0501 02:34:47.410301       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="111.689µs"
	I0501 02:34:47.424534       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="120.812µs"
	I0501 02:34:47.436181       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.729µs"
	I0501 02:34:47.460575       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="82.202µs"
	I0501 02:34:47.464058       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="62.501µs"
	I0501 02:34:47.481353       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="78.292µs"
	I0501 02:34:47.589609       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="85.126µs"
	I0501 02:34:48.132319       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="69.791µs"
	I0501 02:34:50.025583       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.52337ms"
	I0501 02:34:50.027423       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="114.319µs"
	I0501 02:34:50.080334       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.407484ms"
	I0501 02:34:50.080442       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.624µs"
	I0501 02:34:51.488074       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.558313ms"
	I0501 02:34:51.488940       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="606.851µs"
	E0501 02:35:24.461939       1 certificate_controller.go:146] Sync csr-kt9bz failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-kt9bz": the object has been modified; please apply your changes to the latest version and try again
	I0501 02:35:24.761549       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-329926-m04\" does not exist"
	I0501 02:35:24.778000       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-329926-m04" podCIDRs=["10.244.3.0/24"]
	E0501 02:35:24.943411       1 daemon_controller.go:324] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"089e39dc-d22b-4162-b254-170ffb790464", ResourceVersion:"932", Generation:1, CreationTimestamp:time.Date(2024, time.May, 1, 2, 31, 48, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001b14bc0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0
, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolume
Source)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc0020c2240), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001ad8480), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolum
eSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.
VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001ad8498), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersist
entDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"registry.k8s.io/kube-proxy:v1.30.0", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001b14c00)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil), Claims:[]v1.ResourceClaim(nil)}, ResizePolicy:[]v1.ContainerResizePolicy(nil), RestartPolicy:(*v1.ContainerRestartPolicy)(nil), VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-
proxy", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc001f7f080), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContai
ner(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00206b8d8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001babd00), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPoli
cy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil), HostUsers:(*bool)(nil), SchedulingGates:[]v1.PodSchedulingGate(nil), ResourceClaims:[]v1.PodResourceClaim(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc002214250)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc00206ba10)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:3, NumberMisscheduled:0, DesiredNumberScheduled:3, NumberReady:3, ObservedGeneration:1, UpdatedNumberScheduled:3, NumberAvailable:3, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	I0501 02:35:25.657560       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-329926-m04"
	I0501 02:35:35.822369       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-329926-m04"
	I0501 02:36:35.074777       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-329926-m04"
	I0501 02:36:35.260779       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.655882ms"
	I0501 02:36:35.262620       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="57.334µs"
	
	
	==> kube-proxy [2ab64850e34b66cbe91e099c91014051472b87888130297cd7c47a4a78992140] <==
	I0501 02:32:02.374716       1 server_linux.go:69] "Using iptables proxy"
	I0501 02:32:02.384514       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.5"]
	I0501 02:32:02.544454       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0501 02:32:02.544529       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0501 02:32:02.544548       1 server_linux.go:165] "Using iptables Proxier"
	I0501 02:32:02.550009       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0501 02:32:02.550292       1 server.go:872] "Version info" version="v1.30.0"
	I0501 02:32:02.550331       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 02:32:02.560773       1 config.go:192] "Starting service config controller"
	I0501 02:32:02.560817       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0501 02:32:02.560846       1 config.go:101] "Starting endpoint slice config controller"
	I0501 02:32:02.560850       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0501 02:32:02.568529       1 config.go:319] "Starting node config controller"
	I0501 02:32:02.568571       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0501 02:32:02.660905       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0501 02:32:02.660950       1 shared_informer.go:320] Caches are synced for service config
	I0501 02:32:02.669133       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [e3ffc6d046e214333ed85219819ad54bfa5c3ac0b10167beacf72edbd6035736] <==
	I0501 02:34:45.753527       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="6a244374-9326-48de-9c65-1f46061e6e1c" pod="default/busybox-fc5497c4f-h8dxv" assumedNode="ha-329926-m02" currentNode="ha-329926-m03"
	E0501 02:34:45.779238       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-h8dxv\": pod busybox-fc5497c4f-h8dxv is already assigned to node \"ha-329926-m02\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-h8dxv" node="ha-329926-m03"
	E0501 02:34:45.781832       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 6a244374-9326-48de-9c65-1f46061e6e1c(default/busybox-fc5497c4f-h8dxv) was assumed on ha-329926-m03 but assigned to ha-329926-m02" pod="default/busybox-fc5497c4f-h8dxv"
	E0501 02:34:45.781931       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-h8dxv\": pod busybox-fc5497c4f-h8dxv is already assigned to node \"ha-329926-m02\"" pod="default/busybox-fc5497c4f-h8dxv"
	I0501 02:34:45.782004       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-h8dxv" node="ha-329926-m02"
	E0501 02:35:24.875138       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-86ngt\": pod kindnet-86ngt is already assigned to node \"ha-329926-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-86ngt" node="ha-329926-m04"
	E0501 02:35:24.875288       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 64f4f56d-5f20-47a6-8cdb-bb56d4515758(kube-system/kindnet-86ngt) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-86ngt"
	E0501 02:35:24.875317       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-86ngt\": pod kindnet-86ngt is already assigned to node \"ha-329926-m04\"" pod="kube-system/kindnet-86ngt"
	I0501 02:35:24.875337       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-86ngt" node="ha-329926-m04"
	E0501 02:35:24.884208       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-j728v\": pod kube-proxy-j728v is already assigned to node \"ha-329926-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-j728v" node="ha-329926-m04"
	E0501 02:35:24.884314       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 3af4ad58-4beb-45c6-9152-4549816009a5(kube-system/kube-proxy-j728v) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-j728v"
	E0501 02:35:24.884350       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-j728v\": pod kube-proxy-j728v is already assigned to node \"ha-329926-m04\"" pod="kube-system/kube-proxy-j728v"
	I0501 02:35:24.884486       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-j728v" node="ha-329926-m04"
	E0501 02:35:24.886508       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-cc2wd\": pod kindnet-cc2wd is already assigned to node \"ha-329926-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-cc2wd" node="ha-329926-m04"
	E0501 02:35:24.886591       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 9cf82faf-4728-47f7-83e4-36b674b85759(kube-system/kindnet-cc2wd) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-cc2wd"
	E0501 02:35:24.886629       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-cc2wd\": pod kindnet-cc2wd is already assigned to node \"ha-329926-m04\"" pod="kube-system/kindnet-cc2wd"
	I0501 02:35:24.886734       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-cc2wd" node="ha-329926-m04"
	E0501 02:35:25.032187       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-zvz47\": pod kindnet-zvz47 is already assigned to node \"ha-329926-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-zvz47" node="ha-329926-m04"
	E0501 02:35:25.032285       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 1de4bf64-3ff4-42ee-afb5-fe7629e1e992(kube-system/kindnet-zvz47) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-zvz47"
	E0501 02:35:25.032343       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-zvz47\": pod kindnet-zvz47 is already assigned to node \"ha-329926-m04\"" pod="kube-system/kindnet-zvz47"
	I0501 02:35:25.032475       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-zvz47" node="ha-329926-m04"
	E0501 02:35:25.040119       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-77fqn\": pod kube-proxy-77fqn is already assigned to node \"ha-329926-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-77fqn" node="ha-329926-m04"
	E0501 02:35:25.040231       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 9a3678b0-5806-435a-ad11-9368201f3377(kube-system/kube-proxy-77fqn) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-77fqn"
	E0501 02:35:25.040255       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-77fqn\": pod kube-proxy-77fqn is already assigned to node \"ha-329926-m04\"" pod="kube-system/kube-proxy-77fqn"
	I0501 02:35:25.040287       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-77fqn" node="ha-329926-m04"
	
	
	==> kubelet <==
	May 01 02:33:48 ha-329926 kubelet[1388]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 01 02:33:48 ha-329926 kubelet[1388]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 01 02:33:48 ha-329926 kubelet[1388]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 01 02:34:45 ha-329926 kubelet[1388]: I0501 02:34:45.838977    1388 topology_manager.go:215] "Topology Admit Handler" podUID="0cfb5bda-fca7-479f-98d3-6be9bddf0e1c" podNamespace="default" podName="busybox-fc5497c4f-nwj5x"
	May 01 02:34:45 ha-329926 kubelet[1388]: I0501 02:34:45.885246    1388 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zl49r\" (UniqueName: \"kubernetes.io/projected/0cfb5bda-fca7-479f-98d3-6be9bddf0e1c-kube-api-access-zl49r\") pod \"busybox-fc5497c4f-nwj5x\" (UID: \"0cfb5bda-fca7-479f-98d3-6be9bddf0e1c\") " pod="default/busybox-fc5497c4f-nwj5x"
	May 01 02:34:48 ha-329926 kubelet[1388]: E0501 02:34:48.137367    1388 iptables.go:577] "Could not set up iptables canary" err=<
	May 01 02:34:48 ha-329926 kubelet[1388]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 01 02:34:48 ha-329926 kubelet[1388]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 01 02:34:48 ha-329926 kubelet[1388]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 01 02:34:48 ha-329926 kubelet[1388]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 01 02:35:48 ha-329926 kubelet[1388]: E0501 02:35:48.136754    1388 iptables.go:577] "Could not set up iptables canary" err=<
	May 01 02:35:48 ha-329926 kubelet[1388]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 01 02:35:48 ha-329926 kubelet[1388]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 01 02:35:48 ha-329926 kubelet[1388]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 01 02:35:48 ha-329926 kubelet[1388]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 01 02:36:48 ha-329926 kubelet[1388]: E0501 02:36:48.135875    1388 iptables.go:577] "Could not set up iptables canary" err=<
	May 01 02:36:48 ha-329926 kubelet[1388]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 01 02:36:48 ha-329926 kubelet[1388]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 01 02:36:48 ha-329926 kubelet[1388]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 01 02:36:48 ha-329926 kubelet[1388]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 01 02:37:48 ha-329926 kubelet[1388]: E0501 02:37:48.134896    1388 iptables.go:577] "Could not set up iptables canary" err=<
	May 01 02:37:48 ha-329926 kubelet[1388]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 01 02:37:48 ha-329926 kubelet[1388]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 01 02:37:48 ha-329926 kubelet[1388]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 01 02:37:48 ha-329926 kubelet[1388]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-329926 -n ha-329926
helpers_test.go:261: (dbg) Run:  kubectl --context ha-329926 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (142.13s)
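
The controller-manager entry at the top of the post-mortem above ends with an optimistic-concurrency failure on the kube-proxy DaemonSet ("the object has been modified; please apply your changes to the latest version and try again"). For reference only, the sketch below shows the standard client-go pattern for absorbing such 409 conflicts with retry.RetryOnConflict; it is an assumed illustration, not code from minikube or from this test run, and the clientset variable and the annotation mutation are hypothetical.

// Illustrative sketch (assumed, not from this report): the usual client-go
// retry pattern for "the object has been modified" conflicts like the one
// logged above for the kube-proxy DaemonSet.
package dsretry

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

func touchKubeProxyDaemonSet(ctx context.Context, clientset kubernetes.Interface) error {
	// RetryOnConflict re-runs the closure whenever the update fails with a
	// 409 Conflict, so each attempt re-reads the latest object before mutating it.
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		ds, err := clientset.AppsV1().DaemonSets("kube-system").Get(ctx, "kube-proxy", metav1.GetOptions{})
		if err != nil {
			return err
		}
		if ds.Annotations == nil {
			ds.Annotations = map[string]string{}
		}
		ds.Annotations["example.invalid/touched"] = "true" // hypothetical mutation
		_, err = clientset.AppsV1().DaemonSets("kube-system").Update(ctx, ds, metav1.UpdateOptions{})
		return err
	})
}

A conflict like this is expected noise when several writers update the same DaemonSet concurrently; it only becomes a problem if the caller gives up instead of retrying against the latest revision.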

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (59.68s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-329926 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-329926 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-329926 status -v=7 --alsologtostderr: exit status 3 (3.2040498s)

                                                
                                                
-- stdout --
	ha-329926
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-329926-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-329926-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-329926-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0501 02:38:23.086262   37727 out.go:291] Setting OutFile to fd 1 ...
	I0501 02:38:23.086382   37727 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:38:23.086392   37727 out.go:304] Setting ErrFile to fd 2...
	I0501 02:38:23.086416   37727 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:38:23.086594   37727 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18779-13391/.minikube/bin
	I0501 02:38:23.086756   37727 out.go:298] Setting JSON to false
	I0501 02:38:23.086781   37727 mustload.go:65] Loading cluster: ha-329926
	I0501 02:38:23.086913   37727 notify.go:220] Checking for updates...
	I0501 02:38:23.087311   37727 config.go:182] Loaded profile config "ha-329926": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 02:38:23.087333   37727 status.go:255] checking status of ha-329926 ...
	I0501 02:38:23.087863   37727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:38:23.087912   37727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:38:23.103787   37727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34285
	I0501 02:38:23.104237   37727 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:38:23.104761   37727 main.go:141] libmachine: Using API Version  1
	I0501 02:38:23.104796   37727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:38:23.105103   37727 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:38:23.105262   37727 main.go:141] libmachine: (ha-329926) Calling .GetState
	I0501 02:38:23.106899   37727 status.go:330] ha-329926 host status = "Running" (err=<nil>)
	I0501 02:38:23.106919   37727 host.go:66] Checking if "ha-329926" exists ...
	I0501 02:38:23.107193   37727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:38:23.107227   37727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:38:23.122226   37727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38153
	I0501 02:38:23.122674   37727 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:38:23.123199   37727 main.go:141] libmachine: Using API Version  1
	I0501 02:38:23.123224   37727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:38:23.123540   37727 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:38:23.123716   37727 main.go:141] libmachine: (ha-329926) Calling .GetIP
	I0501 02:38:23.126595   37727 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:38:23.126992   37727 main.go:141] libmachine: (ha-329926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:43", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:31:18 +0000 UTC Type:0 Mac:52:54:00:ce:d8:43 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-329926 Clientid:01:52:54:00:ce:d8:43}
	I0501 02:38:23.127028   37727 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:38:23.127151   37727 host.go:66] Checking if "ha-329926" exists ...
	I0501 02:38:23.127434   37727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:38:23.127468   37727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:38:23.142859   37727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37917
	I0501 02:38:23.143335   37727 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:38:23.143800   37727 main.go:141] libmachine: Using API Version  1
	I0501 02:38:23.143825   37727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:38:23.144160   37727 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:38:23.144366   37727 main.go:141] libmachine: (ha-329926) Calling .DriverName
	I0501 02:38:23.144596   37727 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0501 02:38:23.144626   37727 main.go:141] libmachine: (ha-329926) Calling .GetSSHHostname
	I0501 02:38:23.147500   37727 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:38:23.147955   37727 main.go:141] libmachine: (ha-329926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:43", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:31:18 +0000 UTC Type:0 Mac:52:54:00:ce:d8:43 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-329926 Clientid:01:52:54:00:ce:d8:43}
	I0501 02:38:23.147987   37727 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:38:23.148109   37727 main.go:141] libmachine: (ha-329926) Calling .GetSSHPort
	I0501 02:38:23.148299   37727 main.go:141] libmachine: (ha-329926) Calling .GetSSHKeyPath
	I0501 02:38:23.148442   37727 main.go:141] libmachine: (ha-329926) Calling .GetSSHUsername
	I0501 02:38:23.148590   37727 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926/id_rsa Username:docker}
	I0501 02:38:23.231701   37727 ssh_runner.go:195] Run: systemctl --version
	I0501 02:38:23.239539   37727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 02:38:23.265690   37727 kubeconfig.go:125] found "ha-329926" server: "https://192.168.39.254:8443"
	I0501 02:38:23.265724   37727 api_server.go:166] Checking apiserver status ...
	I0501 02:38:23.265787   37727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 02:38:23.283351   37727 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1174/cgroup
	W0501 02:38:23.297517   37727 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1174/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0501 02:38:23.297574   37727 ssh_runner.go:195] Run: ls
	I0501 02:38:23.303912   37727 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0501 02:38:23.312830   37727 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0501 02:38:23.312865   37727 status.go:422] ha-329926 apiserver status = Running (err=<nil>)
	I0501 02:38:23.312875   37727 status.go:257] ha-329926 status: &{Name:ha-329926 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0501 02:38:23.312891   37727 status.go:255] checking status of ha-329926-m02 ...
	I0501 02:38:23.313259   37727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:38:23.313303   37727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:38:23.328500   37727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36015
	I0501 02:38:23.328910   37727 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:38:23.329364   37727 main.go:141] libmachine: Using API Version  1
	I0501 02:38:23.329391   37727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:38:23.329722   37727 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:38:23.329908   37727 main.go:141] libmachine: (ha-329926-m02) Calling .GetState
	I0501 02:38:23.331691   37727 status.go:330] ha-329926-m02 host status = "Running" (err=<nil>)
	I0501 02:38:23.331712   37727 host.go:66] Checking if "ha-329926-m02" exists ...
	I0501 02:38:23.331995   37727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:38:23.332057   37727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:38:23.347527   37727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38655
	I0501 02:38:23.348008   37727 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:38:23.348517   37727 main.go:141] libmachine: Using API Version  1
	I0501 02:38:23.348556   37727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:38:23.348863   37727 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:38:23.349027   37727 main.go:141] libmachine: (ha-329926-m02) Calling .GetIP
	I0501 02:38:23.351835   37727 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:38:23.352271   37727 main.go:141] libmachine: (ha-329926-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:16:5f", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:32:18 +0000 UTC Type:0 Mac:52:54:00:92:16:5f Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-329926-m02 Clientid:01:52:54:00:92:16:5f}
	I0501 02:38:23.352305   37727 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined IP address 192.168.39.79 and MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:38:23.352501   37727 host.go:66] Checking if "ha-329926-m02" exists ...
	I0501 02:38:23.352826   37727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:38:23.352872   37727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:38:23.369035   37727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46697
	I0501 02:38:23.369480   37727 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:38:23.369955   37727 main.go:141] libmachine: Using API Version  1
	I0501 02:38:23.369979   37727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:38:23.370337   37727 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:38:23.370585   37727 main.go:141] libmachine: (ha-329926-m02) Calling .DriverName
	I0501 02:38:23.370784   37727 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0501 02:38:23.370806   37727 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHHostname
	I0501 02:38:23.373772   37727 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:38:23.374195   37727 main.go:141] libmachine: (ha-329926-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:16:5f", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:32:18 +0000 UTC Type:0 Mac:52:54:00:92:16:5f Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-329926-m02 Clientid:01:52:54:00:92:16:5f}
	I0501 02:38:23.374220   37727 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined IP address 192.168.39.79 and MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:38:23.374416   37727 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHPort
	I0501 02:38:23.374598   37727 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHKeyPath
	I0501 02:38:23.374789   37727 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHUsername
	I0501 02:38:23.374934   37727 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m02/id_rsa Username:docker}
	W0501 02:38:25.862798   37727 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.79:22: connect: no route to host
	W0501 02:38:25.862898   37727 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.79:22: connect: no route to host
	E0501 02:38:25.862957   37727 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.79:22: connect: no route to host
	I0501 02:38:25.862968   37727 status.go:257] ha-329926-m02 status: &{Name:ha-329926-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0501 02:38:25.862988   37727 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.79:22: connect: no route to host
	I0501 02:38:25.862997   37727 status.go:255] checking status of ha-329926-m03 ...
	I0501 02:38:25.863434   37727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:38:25.863486   37727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:38:25.879255   37727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45649
	I0501 02:38:25.879673   37727 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:38:25.880192   37727 main.go:141] libmachine: Using API Version  1
	I0501 02:38:25.880214   37727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:38:25.880575   37727 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:38:25.880800   37727 main.go:141] libmachine: (ha-329926-m03) Calling .GetState
	I0501 02:38:25.882490   37727 status.go:330] ha-329926-m03 host status = "Running" (err=<nil>)
	I0501 02:38:25.882509   37727 host.go:66] Checking if "ha-329926-m03" exists ...
	I0501 02:38:25.882838   37727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:38:25.882879   37727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:38:25.898264   37727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37769
	I0501 02:38:25.898702   37727 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:38:25.899212   37727 main.go:141] libmachine: Using API Version  1
	I0501 02:38:25.899235   37727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:38:25.899586   37727 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:38:25.899799   37727 main.go:141] libmachine: (ha-329926-m03) Calling .GetIP
	I0501 02:38:25.902589   37727 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:38:25.903016   37727 main.go:141] libmachine: (ha-329926-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:eb:7d", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:33:44 +0000 UTC Type:0 Mac:52:54:00:f9:eb:7d Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:ha-329926-m03 Clientid:01:52:54:00:f9:eb:7d}
	I0501 02:38:25.903045   37727 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined IP address 192.168.39.115 and MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:38:25.903184   37727 host.go:66] Checking if "ha-329926-m03" exists ...
	I0501 02:38:25.903459   37727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:38:25.903495   37727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:38:25.918982   37727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42301
	I0501 02:38:25.919425   37727 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:38:25.919874   37727 main.go:141] libmachine: Using API Version  1
	I0501 02:38:25.919901   37727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:38:25.920225   37727 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:38:25.920405   37727 main.go:141] libmachine: (ha-329926-m03) Calling .DriverName
	I0501 02:38:25.920583   37727 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0501 02:38:25.920602   37727 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHHostname
	I0501 02:38:25.923415   37727 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:38:25.923817   37727 main.go:141] libmachine: (ha-329926-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:eb:7d", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:33:44 +0000 UTC Type:0 Mac:52:54:00:f9:eb:7d Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:ha-329926-m03 Clientid:01:52:54:00:f9:eb:7d}
	I0501 02:38:25.923853   37727 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined IP address 192.168.39.115 and MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:38:25.924097   37727 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHPort
	I0501 02:38:25.924272   37727 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHKeyPath
	I0501 02:38:25.924412   37727 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHUsername
	I0501 02:38:25.924542   37727 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m03/id_rsa Username:docker}
	I0501 02:38:26.011512   37727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 02:38:26.029421   37727 kubeconfig.go:125] found "ha-329926" server: "https://192.168.39.254:8443"
	I0501 02:38:26.029447   37727 api_server.go:166] Checking apiserver status ...
	I0501 02:38:26.029476   37727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 02:38:26.045582   37727 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1592/cgroup
	W0501 02:38:26.057956   37727 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1592/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0501 02:38:26.058010   37727 ssh_runner.go:195] Run: ls
	I0501 02:38:26.062964   37727 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0501 02:38:26.069975   37727 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0501 02:38:26.070001   37727 status.go:422] ha-329926-m03 apiserver status = Running (err=<nil>)
	I0501 02:38:26.070012   37727 status.go:257] ha-329926-m03 status: &{Name:ha-329926-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0501 02:38:26.070032   37727 status.go:255] checking status of ha-329926-m04 ...
	I0501 02:38:26.070448   37727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:38:26.070487   37727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:38:26.085095   37727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39739
	I0501 02:38:26.085480   37727 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:38:26.085922   37727 main.go:141] libmachine: Using API Version  1
	I0501 02:38:26.085943   37727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:38:26.086237   37727 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:38:26.086437   37727 main.go:141] libmachine: (ha-329926-m04) Calling .GetState
	I0501 02:38:26.087781   37727 status.go:330] ha-329926-m04 host status = "Running" (err=<nil>)
	I0501 02:38:26.087799   37727 host.go:66] Checking if "ha-329926-m04" exists ...
	I0501 02:38:26.088065   37727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:38:26.088095   37727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:38:26.102129   37727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36675
	I0501 02:38:26.102475   37727 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:38:26.102935   37727 main.go:141] libmachine: Using API Version  1
	I0501 02:38:26.102952   37727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:38:26.103222   37727 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:38:26.103419   37727 main.go:141] libmachine: (ha-329926-m04) Calling .GetIP
	I0501 02:38:26.106149   37727 main.go:141] libmachine: (ha-329926-m04) DBG | domain ha-329926-m04 has defined MAC address 52:54:00:95:f4:8b in network mk-ha-329926
	I0501 02:38:26.106555   37727 main.go:141] libmachine: (ha-329926-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:f4:8b", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:35:11 +0000 UTC Type:0 Mac:52:54:00:95:f4:8b Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-329926-m04 Clientid:01:52:54:00:95:f4:8b}
	I0501 02:38:26.106594   37727 main.go:141] libmachine: (ha-329926-m04) DBG | domain ha-329926-m04 has defined IP address 192.168.39.84 and MAC address 52:54:00:95:f4:8b in network mk-ha-329926
	I0501 02:38:26.106706   37727 host.go:66] Checking if "ha-329926-m04" exists ...
	I0501 02:38:26.106981   37727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:38:26.107013   37727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:38:26.121987   37727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40257
	I0501 02:38:26.122386   37727 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:38:26.122778   37727 main.go:141] libmachine: Using API Version  1
	I0501 02:38:26.122801   37727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:38:26.123057   37727 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:38:26.123213   37727 main.go:141] libmachine: (ha-329926-m04) Calling .DriverName
	I0501 02:38:26.123347   37727 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0501 02:38:26.123383   37727 main.go:141] libmachine: (ha-329926-m04) Calling .GetSSHHostname
	I0501 02:38:26.125956   37727 main.go:141] libmachine: (ha-329926-m04) DBG | domain ha-329926-m04 has defined MAC address 52:54:00:95:f4:8b in network mk-ha-329926
	I0501 02:38:26.126358   37727 main.go:141] libmachine: (ha-329926-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:f4:8b", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:35:11 +0000 UTC Type:0 Mac:52:54:00:95:f4:8b Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-329926-m04 Clientid:01:52:54:00:95:f4:8b}
	I0501 02:38:26.126380   37727 main.go:141] libmachine: (ha-329926-m04) DBG | domain ha-329926-m04 has defined IP address 192.168.39.84 and MAC address 52:54:00:95:f4:8b in network mk-ha-329926
	I0501 02:38:26.126526   37727 main.go:141] libmachine: (ha-329926-m04) Calling .GetSSHPort
	I0501 02:38:26.126710   37727 main.go:141] libmachine: (ha-329926-m04) Calling .GetSSHKeyPath
	I0501 02:38:26.126864   37727 main.go:141] libmachine: (ha-329926-m04) Calling .GetSSHUsername
	I0501 02:38:26.127013   37727 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m04/id_rsa Username:docker}
	I0501 02:38:26.218856   37727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 02:38:26.235854   37727 status.go:257] ha-329926-m04 status: &{Name:ha-329926-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
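
The stderr above reports "dial tcp 192.168.39.79:22: connect: no route to host" while probing ha-329926-m02, which is why that node is summarized as host: Error / kubelet: Nonexistent even though the other members still answer. As a point of reference (an assumed sketch, not minikube's implementation), a bounded TCP probe that surfaces this kind of error looks roughly like the following; the sshReachable helper name is hypothetical and the address is copied from the log:

// Minimal sketch (assumption, not minikube code): probe a node's SSH port
// with a deadline and surface errors such as "connect: no route to host".
package main

import (
	"fmt"
	"net"
	"time"
)

// sshReachable reports whether a TCP connection to addr can be opened
// within timeout; it returns the dial error otherwise.
func sshReachable(addr string, timeout time.Duration) error {
	conn, err := net.DialTimeout("tcp", addr, timeout)
	if err != nil {
		return fmt.Errorf("dial %s: %w", addr, err)
	}
	return conn.Close()
}

func main() {
	// 192.168.39.79:22 is the ha-329926-m02 SSH endpoint from the log above.
	if err := sshReachable("192.168.39.79:22", 5*time.Second); err != nil {
		fmt.Println("unreachable:", err)
		return
	}
	fmt.Println("reachable")
}

An immediate "no route to host" (rather than a slow timeout) usually means the guest is down or its address is no longer in the host's routing/ARP tables, which is consistent with m02 having just been stopped and only now being restarted in this test.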
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-329926 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-329926 status -v=7 --alsologtostderr: exit status 3 (5.188633022s)

                                                
                                                
-- stdout --
	ha-329926
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-329926-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-329926-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-329926-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0501 02:38:27.248030   37827 out.go:291] Setting OutFile to fd 1 ...
	I0501 02:38:27.248175   37827 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:38:27.248185   37827 out.go:304] Setting ErrFile to fd 2...
	I0501 02:38:27.248191   37827 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:38:27.248407   37827 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18779-13391/.minikube/bin
	I0501 02:38:27.248597   37827 out.go:298] Setting JSON to false
	I0501 02:38:27.248624   37827 mustload.go:65] Loading cluster: ha-329926
	I0501 02:38:27.248729   37827 notify.go:220] Checking for updates...
	I0501 02:38:27.249023   37827 config.go:182] Loaded profile config "ha-329926": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 02:38:27.249039   37827 status.go:255] checking status of ha-329926 ...
	I0501 02:38:27.249480   37827 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:38:27.249555   37827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:38:27.265316   37827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38991
	I0501 02:38:27.265835   37827 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:38:27.266451   37827 main.go:141] libmachine: Using API Version  1
	I0501 02:38:27.266483   37827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:38:27.266779   37827 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:38:27.266954   37827 main.go:141] libmachine: (ha-329926) Calling .GetState
	I0501 02:38:27.268379   37827 status.go:330] ha-329926 host status = "Running" (err=<nil>)
	I0501 02:38:27.268396   37827 host.go:66] Checking if "ha-329926" exists ...
	I0501 02:38:27.268677   37827 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:38:27.268726   37827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:38:27.283714   37827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35605
	I0501 02:38:27.284130   37827 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:38:27.284646   37827 main.go:141] libmachine: Using API Version  1
	I0501 02:38:27.284669   37827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:38:27.284966   37827 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:38:27.285141   37827 main.go:141] libmachine: (ha-329926) Calling .GetIP
	I0501 02:38:27.288354   37827 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:38:27.288840   37827 main.go:141] libmachine: (ha-329926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:43", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:31:18 +0000 UTC Type:0 Mac:52:54:00:ce:d8:43 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-329926 Clientid:01:52:54:00:ce:d8:43}
	I0501 02:38:27.288865   37827 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:38:27.288995   37827 host.go:66] Checking if "ha-329926" exists ...
	I0501 02:38:27.289289   37827 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:38:27.289325   37827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:38:27.304956   37827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34779
	I0501 02:38:27.305353   37827 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:38:27.305795   37827 main.go:141] libmachine: Using API Version  1
	I0501 02:38:27.305816   37827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:38:27.306117   37827 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:38:27.306296   37827 main.go:141] libmachine: (ha-329926) Calling .DriverName
	I0501 02:38:27.306541   37827 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0501 02:38:27.306576   37827 main.go:141] libmachine: (ha-329926) Calling .GetSSHHostname
	I0501 02:38:27.309273   37827 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:38:27.309674   37827 main.go:141] libmachine: (ha-329926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:43", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:31:18 +0000 UTC Type:0 Mac:52:54:00:ce:d8:43 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-329926 Clientid:01:52:54:00:ce:d8:43}
	I0501 02:38:27.309708   37827 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:38:27.309778   37827 main.go:141] libmachine: (ha-329926) Calling .GetSSHPort
	I0501 02:38:27.309965   37827 main.go:141] libmachine: (ha-329926) Calling .GetSSHKeyPath
	I0501 02:38:27.310107   37827 main.go:141] libmachine: (ha-329926) Calling .GetSSHUsername
	I0501 02:38:27.310219   37827 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926/id_rsa Username:docker}
	I0501 02:38:27.393744   37827 ssh_runner.go:195] Run: systemctl --version
	I0501 02:38:27.400828   37827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 02:38:27.418097   37827 kubeconfig.go:125] found "ha-329926" server: "https://192.168.39.254:8443"
	I0501 02:38:27.418126   37827 api_server.go:166] Checking apiserver status ...
	I0501 02:38:27.418171   37827 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 02:38:27.433888   37827 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1174/cgroup
	W0501 02:38:27.451274   37827 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1174/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0501 02:38:27.451350   37827 ssh_runner.go:195] Run: ls
	I0501 02:38:27.457007   37827 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0501 02:38:27.463246   37827 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0501 02:38:27.463266   37827 status.go:422] ha-329926 apiserver status = Running (err=<nil>)
	I0501 02:38:27.463276   37827 status.go:257] ha-329926 status: &{Name:ha-329926 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0501 02:38:27.463297   37827 status.go:255] checking status of ha-329926-m02 ...
	I0501 02:38:27.463555   37827 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:38:27.463586   37827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:38:27.479297   37827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37997
	I0501 02:38:27.479652   37827 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:38:27.480120   37827 main.go:141] libmachine: Using API Version  1
	I0501 02:38:27.480143   37827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:38:27.480501   37827 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:38:27.480721   37827 main.go:141] libmachine: (ha-329926-m02) Calling .GetState
	I0501 02:38:27.482209   37827 status.go:330] ha-329926-m02 host status = "Running" (err=<nil>)
	I0501 02:38:27.482220   37827 host.go:66] Checking if "ha-329926-m02" exists ...
	I0501 02:38:27.482516   37827 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:38:27.482548   37827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:38:27.497349   37827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32819
	I0501 02:38:27.497755   37827 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:38:27.498215   37827 main.go:141] libmachine: Using API Version  1
	I0501 02:38:27.498239   37827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:38:27.498581   37827 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:38:27.498789   37827 main.go:141] libmachine: (ha-329926-m02) Calling .GetIP
	I0501 02:38:27.501505   37827 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:38:27.501886   37827 main.go:141] libmachine: (ha-329926-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:16:5f", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:32:18 +0000 UTC Type:0 Mac:52:54:00:92:16:5f Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-329926-m02 Clientid:01:52:54:00:92:16:5f}
	I0501 02:38:27.501931   37827 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined IP address 192.168.39.79 and MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:38:27.502086   37827 host.go:66] Checking if "ha-329926-m02" exists ...
	I0501 02:38:27.502430   37827 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:38:27.502471   37827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:38:27.517561   37827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36405
	I0501 02:38:27.517999   37827 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:38:27.518459   37827 main.go:141] libmachine: Using API Version  1
	I0501 02:38:27.518482   37827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:38:27.518755   37827 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:38:27.518922   37827 main.go:141] libmachine: (ha-329926-m02) Calling .DriverName
	I0501 02:38:27.519115   37827 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0501 02:38:27.519133   37827 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHHostname
	I0501 02:38:27.521749   37827 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:38:27.522204   37827 main.go:141] libmachine: (ha-329926-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:16:5f", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:32:18 +0000 UTC Type:0 Mac:52:54:00:92:16:5f Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-329926-m02 Clientid:01:52:54:00:92:16:5f}
	I0501 02:38:27.522235   37827 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined IP address 192.168.39.79 and MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:38:27.522364   37827 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHPort
	I0501 02:38:27.522558   37827 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHKeyPath
	I0501 02:38:27.522700   37827 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHUsername
	I0501 02:38:27.522849   37827 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m02/id_rsa Username:docker}
	W0501 02:38:28.934729   37827 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.79:22: connect: no route to host
	I0501 02:38:28.934787   37827 retry.go:31] will retry after 271.345998ms: dial tcp 192.168.39.79:22: connect: no route to host
	W0501 02:38:32.006689   37827 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.79:22: connect: no route to host
	W0501 02:38:32.006783   37827 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.79:22: connect: no route to host
	E0501 02:38:32.006828   37827 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.79:22: connect: no route to host
	I0501 02:38:32.006843   37827 status.go:257] ha-329926-m02 status: &{Name:ha-329926-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0501 02:38:32.006869   37827 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.79:22: connect: no route to host
	I0501 02:38:32.006881   37827 status.go:255] checking status of ha-329926-m03 ...
	I0501 02:38:32.007199   37827 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:38:32.007260   37827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:38:32.022158   37827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44643
	I0501 02:38:32.022649   37827 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:38:32.023130   37827 main.go:141] libmachine: Using API Version  1
	I0501 02:38:32.023153   37827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:38:32.023470   37827 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:38:32.023642   37827 main.go:141] libmachine: (ha-329926-m03) Calling .GetState
	I0501 02:38:32.025031   37827 status.go:330] ha-329926-m03 host status = "Running" (err=<nil>)
	I0501 02:38:32.025049   37827 host.go:66] Checking if "ha-329926-m03" exists ...
	I0501 02:38:32.025378   37827 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:38:32.025423   37827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:38:32.042092   37827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40657
	I0501 02:38:32.042485   37827 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:38:32.042985   37827 main.go:141] libmachine: Using API Version  1
	I0501 02:38:32.043005   37827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:38:32.043378   37827 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:38:32.043595   37827 main.go:141] libmachine: (ha-329926-m03) Calling .GetIP
	I0501 02:38:32.046470   37827 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:38:32.046946   37827 main.go:141] libmachine: (ha-329926-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:eb:7d", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:33:44 +0000 UTC Type:0 Mac:52:54:00:f9:eb:7d Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:ha-329926-m03 Clientid:01:52:54:00:f9:eb:7d}
	I0501 02:38:32.046988   37827 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined IP address 192.168.39.115 and MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:38:32.047107   37827 host.go:66] Checking if "ha-329926-m03" exists ...
	I0501 02:38:32.047403   37827 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:38:32.047435   37827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:38:32.064271   37827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36427
	I0501 02:38:32.064671   37827 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:38:32.065161   37827 main.go:141] libmachine: Using API Version  1
	I0501 02:38:32.065182   37827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:38:32.065512   37827 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:38:32.065685   37827 main.go:141] libmachine: (ha-329926-m03) Calling .DriverName
	I0501 02:38:32.065857   37827 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0501 02:38:32.065878   37827 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHHostname
	I0501 02:38:32.068315   37827 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:38:32.068711   37827 main.go:141] libmachine: (ha-329926-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:eb:7d", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:33:44 +0000 UTC Type:0 Mac:52:54:00:f9:eb:7d Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:ha-329926-m03 Clientid:01:52:54:00:f9:eb:7d}
	I0501 02:38:32.068749   37827 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined IP address 192.168.39.115 and MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:38:32.068877   37827 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHPort
	I0501 02:38:32.069059   37827 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHKeyPath
	I0501 02:38:32.069212   37827 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHUsername
	I0501 02:38:32.069361   37827 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m03/id_rsa Username:docker}
	I0501 02:38:32.155954   37827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 02:38:32.172698   37827 kubeconfig.go:125] found "ha-329926" server: "https://192.168.39.254:8443"
	I0501 02:38:32.172729   37827 api_server.go:166] Checking apiserver status ...
	I0501 02:38:32.172770   37827 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 02:38:32.188272   37827 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1592/cgroup
	W0501 02:38:32.199329   37827 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1592/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0501 02:38:32.199396   37827 ssh_runner.go:195] Run: ls
	I0501 02:38:32.204882   37827 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0501 02:38:32.210172   37827 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0501 02:38:32.210201   37827 status.go:422] ha-329926-m03 apiserver status = Running (err=<nil>)
	I0501 02:38:32.210212   37827 status.go:257] ha-329926-m03 status: &{Name:ha-329926-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0501 02:38:32.210226   37827 status.go:255] checking status of ha-329926-m04 ...
	I0501 02:38:32.210631   37827 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:38:32.210686   37827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:38:32.226121   37827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36837
	I0501 02:38:32.226536   37827 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:38:32.227116   37827 main.go:141] libmachine: Using API Version  1
	I0501 02:38:32.227135   37827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:38:32.227433   37827 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:38:32.227640   37827 main.go:141] libmachine: (ha-329926-m04) Calling .GetState
	I0501 02:38:32.229184   37827 status.go:330] ha-329926-m04 host status = "Running" (err=<nil>)
	I0501 02:38:32.229201   37827 host.go:66] Checking if "ha-329926-m04" exists ...
	I0501 02:38:32.229479   37827 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:38:32.229512   37827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:38:32.244856   37827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32841
	I0501 02:38:32.245249   37827 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:38:32.245736   37827 main.go:141] libmachine: Using API Version  1
	I0501 02:38:32.245761   37827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:38:32.246051   37827 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:38:32.246264   37827 main.go:141] libmachine: (ha-329926-m04) Calling .GetIP
	I0501 02:38:32.249000   37827 main.go:141] libmachine: (ha-329926-m04) DBG | domain ha-329926-m04 has defined MAC address 52:54:00:95:f4:8b in network mk-ha-329926
	I0501 02:38:32.249565   37827 host.go:66] Checking if "ha-329926-m04" exists ...
	I0501 02:38:32.250562   37827 main.go:141] libmachine: (ha-329926-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:f4:8b", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:35:11 +0000 UTC Type:0 Mac:52:54:00:95:f4:8b Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-329926-m04 Clientid:01:52:54:00:95:f4:8b}
	I0501 02:38:32.250599   37827 main.go:141] libmachine: (ha-329926-m04) DBG | domain ha-329926-m04 has defined IP address 192.168.39.84 and MAC address 52:54:00:95:f4:8b in network mk-ha-329926
	I0501 02:38:32.250840   37827 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:38:32.250881   37827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:38:32.267116   37827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38603
	I0501 02:38:32.267706   37827 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:38:32.268158   37827 main.go:141] libmachine: Using API Version  1
	I0501 02:38:32.268182   37827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:38:32.268459   37827 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:38:32.268638   37827 main.go:141] libmachine: (ha-329926-m04) Calling .DriverName
	I0501 02:38:32.268817   37827 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0501 02:38:32.268839   37827 main.go:141] libmachine: (ha-329926-m04) Calling .GetSSHHostname
	I0501 02:38:32.271439   37827 main.go:141] libmachine: (ha-329926-m04) DBG | domain ha-329926-m04 has defined MAC address 52:54:00:95:f4:8b in network mk-ha-329926
	I0501 02:38:32.271836   37827 main.go:141] libmachine: (ha-329926-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:f4:8b", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:35:11 +0000 UTC Type:0 Mac:52:54:00:95:f4:8b Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-329926-m04 Clientid:01:52:54:00:95:f4:8b}
	I0501 02:38:32.271865   37827 main.go:141] libmachine: (ha-329926-m04) DBG | domain ha-329926-m04 has defined IP address 192.168.39.84 and MAC address 52:54:00:95:f4:8b in network mk-ha-329926
	I0501 02:38:32.271967   37827 main.go:141] libmachine: (ha-329926-m04) Calling .GetSSHPort
	I0501 02:38:32.272128   37827 main.go:141] libmachine: (ha-329926-m04) Calling .GetSSHKeyPath
	I0501 02:38:32.272254   37827 main.go:141] libmachine: (ha-329926-m04) Calling .GetSSHUsername
	I0501 02:38:32.272403   37827 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m04/id_rsa Username:docker}
	I0501 02:38:32.362265   37827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 02:38:32.379949   37827 status.go:257] ha-329926-m04 status: &{Name:ha-329926-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
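
Note: the stderr block above shows the per-node probe sequence that the status command runs: a storage check (df -h /var), a kubelet check (systemctl is-active), and, for control-plane nodes, an HTTPS request to the apiserver /healthz endpoint on the load-balancer VIP (https://192.168.39.254:8443 in this log). The following is a minimal standalone sketch of that last probe, for illustration only; it assumes the VIP and port taken from the log lines above and skips TLS verification the way an ad-hoc manual check would.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// Probe the apiserver health endpoint seen in the log above.
		// Assumption: 192.168.39.254:8443 is the cluster's load-balancer VIP.
		client := &http.Client{
			Timeout: 5 * time.Second,
			// TLS verification is skipped purely for this ad-hoc check.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.39.254:8443/healthz")
		if err != nil {
			fmt.Println("healthz check failed:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("healthz returned:", resp.StatusCode) // the log above shows 200 / "ok"
	}
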
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-329926 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-329926 status -v=7 --alsologtostderr: exit status 3 (5.164963383s)

                                                
                                                
-- stdout --
	ha-329926
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-329926-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-329926-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-329926-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0501 02:38:33.417589   37927 out.go:291] Setting OutFile to fd 1 ...
	I0501 02:38:33.417844   37927 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:38:33.417853   37927 out.go:304] Setting ErrFile to fd 2...
	I0501 02:38:33.417857   37927 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:38:33.418049   37927 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18779-13391/.minikube/bin
	I0501 02:38:33.418249   37927 out.go:298] Setting JSON to false
	I0501 02:38:33.418276   37927 mustload.go:65] Loading cluster: ha-329926
	I0501 02:38:33.418332   37927 notify.go:220] Checking for updates...
	I0501 02:38:33.418756   37927 config.go:182] Loaded profile config "ha-329926": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 02:38:33.418774   37927 status.go:255] checking status of ha-329926 ...
	I0501 02:38:33.419179   37927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:38:33.419230   37927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:38:33.437030   37927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42397
	I0501 02:38:33.437431   37927 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:38:33.438041   37927 main.go:141] libmachine: Using API Version  1
	I0501 02:38:33.438063   37927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:38:33.438539   37927 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:38:33.438784   37927 main.go:141] libmachine: (ha-329926) Calling .GetState
	I0501 02:38:33.440607   37927 status.go:330] ha-329926 host status = "Running" (err=<nil>)
	I0501 02:38:33.440632   37927 host.go:66] Checking if "ha-329926" exists ...
	I0501 02:38:33.441054   37927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:38:33.441099   37927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:38:33.455848   37927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43615
	I0501 02:38:33.456258   37927 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:38:33.456916   37927 main.go:141] libmachine: Using API Version  1
	I0501 02:38:33.456948   37927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:38:33.457299   37927 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:38:33.457502   37927 main.go:141] libmachine: (ha-329926) Calling .GetIP
	I0501 02:38:33.459984   37927 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:38:33.460370   37927 main.go:141] libmachine: (ha-329926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:43", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:31:18 +0000 UTC Type:0 Mac:52:54:00:ce:d8:43 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-329926 Clientid:01:52:54:00:ce:d8:43}
	I0501 02:38:33.460395   37927 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:38:33.460560   37927 host.go:66] Checking if "ha-329926" exists ...
	I0501 02:38:33.460970   37927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:38:33.461015   37927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:38:33.475842   37927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44977
	I0501 02:38:33.476187   37927 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:38:33.476619   37927 main.go:141] libmachine: Using API Version  1
	I0501 02:38:33.476640   37927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:38:33.476964   37927 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:38:33.477147   37927 main.go:141] libmachine: (ha-329926) Calling .DriverName
	I0501 02:38:33.477331   37927 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0501 02:38:33.477352   37927 main.go:141] libmachine: (ha-329926) Calling .GetSSHHostname
	I0501 02:38:33.479643   37927 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:38:33.480019   37927 main.go:141] libmachine: (ha-329926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:43", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:31:18 +0000 UTC Type:0 Mac:52:54:00:ce:d8:43 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-329926 Clientid:01:52:54:00:ce:d8:43}
	I0501 02:38:33.480040   37927 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:38:33.480159   37927 main.go:141] libmachine: (ha-329926) Calling .GetSSHPort
	I0501 02:38:33.480330   37927 main.go:141] libmachine: (ha-329926) Calling .GetSSHKeyPath
	I0501 02:38:33.480433   37927 main.go:141] libmachine: (ha-329926) Calling .GetSSHUsername
	I0501 02:38:33.480529   37927 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926/id_rsa Username:docker}
	I0501 02:38:33.559195   37927 ssh_runner.go:195] Run: systemctl --version
	I0501 02:38:33.565807   37927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 02:38:33.582152   37927 kubeconfig.go:125] found "ha-329926" server: "https://192.168.39.254:8443"
	I0501 02:38:33.582179   37927 api_server.go:166] Checking apiserver status ...
	I0501 02:38:33.582209   37927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 02:38:33.597377   37927 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1174/cgroup
	W0501 02:38:33.613248   37927 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1174/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0501 02:38:33.613324   37927 ssh_runner.go:195] Run: ls
	I0501 02:38:33.619634   37927 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0501 02:38:33.624310   37927 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0501 02:38:33.624333   37927 status.go:422] ha-329926 apiserver status = Running (err=<nil>)
	I0501 02:38:33.624342   37927 status.go:257] ha-329926 status: &{Name:ha-329926 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0501 02:38:33.624357   37927 status.go:255] checking status of ha-329926-m02 ...
	I0501 02:38:33.624641   37927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:38:33.624685   37927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:38:33.639430   37927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39535
	I0501 02:38:33.639917   37927 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:38:33.640375   37927 main.go:141] libmachine: Using API Version  1
	I0501 02:38:33.640401   37927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:38:33.640776   37927 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:38:33.640961   37927 main.go:141] libmachine: (ha-329926-m02) Calling .GetState
	I0501 02:38:33.642341   37927 status.go:330] ha-329926-m02 host status = "Running" (err=<nil>)
	I0501 02:38:33.642356   37927 host.go:66] Checking if "ha-329926-m02" exists ...
	I0501 02:38:33.642700   37927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:38:33.642743   37927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:38:33.656958   37927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41631
	I0501 02:38:33.657444   37927 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:38:33.657913   37927 main.go:141] libmachine: Using API Version  1
	I0501 02:38:33.657937   37927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:38:33.658308   37927 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:38:33.658472   37927 main.go:141] libmachine: (ha-329926-m02) Calling .GetIP
	I0501 02:38:33.660891   37927 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:38:33.661241   37927 main.go:141] libmachine: (ha-329926-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:16:5f", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:32:18 +0000 UTC Type:0 Mac:52:54:00:92:16:5f Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-329926-m02 Clientid:01:52:54:00:92:16:5f}
	I0501 02:38:33.661263   37927 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined IP address 192.168.39.79 and MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:38:33.661402   37927 host.go:66] Checking if "ha-329926-m02" exists ...
	I0501 02:38:33.661670   37927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:38:33.661706   37927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:38:33.676208   37927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32879
	I0501 02:38:33.676617   37927 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:38:33.677070   37927 main.go:141] libmachine: Using API Version  1
	I0501 02:38:33.677095   37927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:38:33.677399   37927 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:38:33.677564   37927 main.go:141] libmachine: (ha-329926-m02) Calling .DriverName
	I0501 02:38:33.677735   37927 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0501 02:38:33.677754   37927 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHHostname
	I0501 02:38:33.680099   37927 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:38:33.680485   37927 main.go:141] libmachine: (ha-329926-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:16:5f", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:32:18 +0000 UTC Type:0 Mac:52:54:00:92:16:5f Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-329926-m02 Clientid:01:52:54:00:92:16:5f}
	I0501 02:38:33.680524   37927 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined IP address 192.168.39.79 and MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:38:33.680649   37927 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHPort
	I0501 02:38:33.680819   37927 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHKeyPath
	I0501 02:38:33.680953   37927 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHUsername
	I0501 02:38:33.681097   37927 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m02/id_rsa Username:docker}
	W0501 02:38:35.078667   37927 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.79:22: connect: no route to host
	I0501 02:38:35.078725   37927 retry.go:31] will retry after 194.07635ms: dial tcp 192.168.39.79:22: connect: no route to host
	W0501 02:38:38.150722   37927 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.79:22: connect: no route to host
	W0501 02:38:38.150799   37927 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.79:22: connect: no route to host
	E0501 02:38:38.150819   37927 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.79:22: connect: no route to host
	I0501 02:38:38.150832   37927 status.go:257] ha-329926-m02 status: &{Name:ha-329926-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0501 02:38:38.150853   37927 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.79:22: connect: no route to host
	I0501 02:38:38.150861   37927 status.go:255] checking status of ha-329926-m03 ...
	I0501 02:38:38.151196   37927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:38:38.151250   37927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:38:38.166782   37927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33221
	I0501 02:38:38.167204   37927 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:38:38.167671   37927 main.go:141] libmachine: Using API Version  1
	I0501 02:38:38.167693   37927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:38:38.168044   37927 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:38:38.168241   37927 main.go:141] libmachine: (ha-329926-m03) Calling .GetState
	I0501 02:38:38.169722   37927 status.go:330] ha-329926-m03 host status = "Running" (err=<nil>)
	I0501 02:38:38.169735   37927 host.go:66] Checking if "ha-329926-m03" exists ...
	I0501 02:38:38.170028   37927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:38:38.170070   37927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:38:38.185132   37927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41503
	I0501 02:38:38.185573   37927 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:38:38.186134   37927 main.go:141] libmachine: Using API Version  1
	I0501 02:38:38.186158   37927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:38:38.186491   37927 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:38:38.186679   37927 main.go:141] libmachine: (ha-329926-m03) Calling .GetIP
	I0501 02:38:38.189173   37927 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:38:38.189537   37927 main.go:141] libmachine: (ha-329926-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:eb:7d", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:33:44 +0000 UTC Type:0 Mac:52:54:00:f9:eb:7d Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:ha-329926-m03 Clientid:01:52:54:00:f9:eb:7d}
	I0501 02:38:38.189562   37927 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined IP address 192.168.39.115 and MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:38:38.189697   37927 host.go:66] Checking if "ha-329926-m03" exists ...
	I0501 02:38:38.189966   37927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:38:38.190000   37927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:38:38.205239   37927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44245
	I0501 02:38:38.205620   37927 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:38:38.206105   37927 main.go:141] libmachine: Using API Version  1
	I0501 02:38:38.206127   37927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:38:38.206426   37927 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:38:38.206607   37927 main.go:141] libmachine: (ha-329926-m03) Calling .DriverName
	I0501 02:38:38.206794   37927 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0501 02:38:38.206819   37927 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHHostname
	I0501 02:38:38.209085   37927 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:38:38.209529   37927 main.go:141] libmachine: (ha-329926-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:eb:7d", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:33:44 +0000 UTC Type:0 Mac:52:54:00:f9:eb:7d Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:ha-329926-m03 Clientid:01:52:54:00:f9:eb:7d}
	I0501 02:38:38.209559   37927 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined IP address 192.168.39.115 and MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:38:38.209665   37927 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHPort
	I0501 02:38:38.209826   37927 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHKeyPath
	I0501 02:38:38.209961   37927 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHUsername
	I0501 02:38:38.210126   37927 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m03/id_rsa Username:docker}
	I0501 02:38:38.296037   37927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 02:38:38.315769   37927 kubeconfig.go:125] found "ha-329926" server: "https://192.168.39.254:8443"
	I0501 02:38:38.315793   37927 api_server.go:166] Checking apiserver status ...
	I0501 02:38:38.315827   37927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 02:38:38.331869   37927 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1592/cgroup
	W0501 02:38:38.346013   37927 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1592/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0501 02:38:38.346085   37927 ssh_runner.go:195] Run: ls
	I0501 02:38:38.351426   37927 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0501 02:38:38.355862   37927 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0501 02:38:38.355885   37927 status.go:422] ha-329926-m03 apiserver status = Running (err=<nil>)
	I0501 02:38:38.355895   37927 status.go:257] ha-329926-m03 status: &{Name:ha-329926-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0501 02:38:38.355916   37927 status.go:255] checking status of ha-329926-m04 ...
	I0501 02:38:38.356197   37927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:38:38.356239   37927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:38:38.371212   37927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35259
	I0501 02:38:38.371683   37927 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:38:38.372168   37927 main.go:141] libmachine: Using API Version  1
	I0501 02:38:38.372190   37927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:38:38.372519   37927 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:38:38.372732   37927 main.go:141] libmachine: (ha-329926-m04) Calling .GetState
	I0501 02:38:38.374301   37927 status.go:330] ha-329926-m04 host status = "Running" (err=<nil>)
	I0501 02:38:38.374314   37927 host.go:66] Checking if "ha-329926-m04" exists ...
	I0501 02:38:38.374617   37927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:38:38.374657   37927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:38:38.389398   37927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40915
	I0501 02:38:38.389773   37927 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:38:38.390280   37927 main.go:141] libmachine: Using API Version  1
	I0501 02:38:38.390304   37927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:38:38.390622   37927 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:38:38.390792   37927 main.go:141] libmachine: (ha-329926-m04) Calling .GetIP
	I0501 02:38:38.393702   37927 main.go:141] libmachine: (ha-329926-m04) DBG | domain ha-329926-m04 has defined MAC address 52:54:00:95:f4:8b in network mk-ha-329926
	I0501 02:38:38.394158   37927 main.go:141] libmachine: (ha-329926-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:f4:8b", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:35:11 +0000 UTC Type:0 Mac:52:54:00:95:f4:8b Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-329926-m04 Clientid:01:52:54:00:95:f4:8b}
	I0501 02:38:38.394194   37927 main.go:141] libmachine: (ha-329926-m04) DBG | domain ha-329926-m04 has defined IP address 192.168.39.84 and MAC address 52:54:00:95:f4:8b in network mk-ha-329926
	I0501 02:38:38.394320   37927 host.go:66] Checking if "ha-329926-m04" exists ...
	I0501 02:38:38.394638   37927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:38:38.394676   37927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:38:38.409256   37927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42815
	I0501 02:38:38.409661   37927 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:38:38.410098   37927 main.go:141] libmachine: Using API Version  1
	I0501 02:38:38.410126   37927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:38:38.410422   37927 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:38:38.410586   37927 main.go:141] libmachine: (ha-329926-m04) Calling .DriverName
	I0501 02:38:38.410773   37927 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0501 02:38:38.410791   37927 main.go:141] libmachine: (ha-329926-m04) Calling .GetSSHHostname
	I0501 02:38:38.413108   37927 main.go:141] libmachine: (ha-329926-m04) DBG | domain ha-329926-m04 has defined MAC address 52:54:00:95:f4:8b in network mk-ha-329926
	I0501 02:38:38.413432   37927 main.go:141] libmachine: (ha-329926-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:f4:8b", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:35:11 +0000 UTC Type:0 Mac:52:54:00:95:f4:8b Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-329926-m04 Clientid:01:52:54:00:95:f4:8b}
	I0501 02:38:38.413456   37927 main.go:141] libmachine: (ha-329926-m04) DBG | domain ha-329926-m04 has defined IP address 192.168.39.84 and MAC address 52:54:00:95:f4:8b in network mk-ha-329926
	I0501 02:38:38.413634   37927 main.go:141] libmachine: (ha-329926-m04) Calling .GetSSHPort
	I0501 02:38:38.413789   37927 main.go:141] libmachine: (ha-329926-m04) Calling .GetSSHKeyPath
	I0501 02:38:38.413927   37927 main.go:141] libmachine: (ha-329926-m04) Calling .GetSSHUsername
	I0501 02:38:38.414063   37927 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m04/id_rsa Username:docker}
	I0501 02:38:38.505630   37927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 02:38:38.523261   37927 status.go:257] ha-329926-m04 status: &{Name:ha-329926-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
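
Note: the repeated "dial tcp 192.168.39.79:22: connect: no route to host" errors above are what drive the Host:Error / Kubelet:Nonexistent report for ha-329926-m02; the SSH probe never reaches the node, so its remaining checks are skipped. Below is a minimal reachability sketch against that node's SSH port, assuming 192.168.39.79 is ha-329926-m02 as shown in the DHCP lease lines of the log; it only checks TCP connectivity, not SSH authentication.

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Reachability probe for the node that keeps failing above.
		// Assumption: 192.168.39.79 is ha-329926-m02, per the DHCP lease in the log.
		conn, err := net.DialTimeout("tcp", "192.168.39.79:22", 5*time.Second)
		if err != nil {
			// A stopped or unreachable VM surfaces here as "connect: no route to host".
			fmt.Println("ssh port unreachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("ssh port reachable")
	}
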
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-329926 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-329926 status -v=7 --alsologtostderr: exit status 3 (3.790003809s)

                                                
                                                
-- stdout --
	ha-329926
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-329926-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-329926-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-329926-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0501 02:38:41.262211   38044 out.go:291] Setting OutFile to fd 1 ...
	I0501 02:38:41.262696   38044 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:38:41.262715   38044 out.go:304] Setting ErrFile to fd 2...
	I0501 02:38:41.262722   38044 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:38:41.263155   38044 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18779-13391/.minikube/bin
	I0501 02:38:41.263492   38044 out.go:298] Setting JSON to false
	I0501 02:38:41.263531   38044 mustload.go:65] Loading cluster: ha-329926
	I0501 02:38:41.263649   38044 notify.go:220] Checking for updates...
	I0501 02:38:41.264193   38044 config.go:182] Loaded profile config "ha-329926": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 02:38:41.264213   38044 status.go:255] checking status of ha-329926 ...
	I0501 02:38:41.264600   38044 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:38:41.264699   38044 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:38:41.280698   38044 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32867
	I0501 02:38:41.281147   38044 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:38:41.281752   38044 main.go:141] libmachine: Using API Version  1
	I0501 02:38:41.281777   38044 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:38:41.282079   38044 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:38:41.282293   38044 main.go:141] libmachine: (ha-329926) Calling .GetState
	I0501 02:38:41.284120   38044 status.go:330] ha-329926 host status = "Running" (err=<nil>)
	I0501 02:38:41.284135   38044 host.go:66] Checking if "ha-329926" exists ...
	I0501 02:38:41.284467   38044 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:38:41.284510   38044 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:38:41.299648   38044 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42047
	I0501 02:38:41.300031   38044 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:38:41.300479   38044 main.go:141] libmachine: Using API Version  1
	I0501 02:38:41.300501   38044 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:38:41.300804   38044 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:38:41.300992   38044 main.go:141] libmachine: (ha-329926) Calling .GetIP
	I0501 02:38:41.303833   38044 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:38:41.304228   38044 main.go:141] libmachine: (ha-329926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:43", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:31:18 +0000 UTC Type:0 Mac:52:54:00:ce:d8:43 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-329926 Clientid:01:52:54:00:ce:d8:43}
	I0501 02:38:41.304267   38044 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:38:41.304422   38044 host.go:66] Checking if "ha-329926" exists ...
	I0501 02:38:41.304708   38044 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:38:41.304748   38044 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:38:41.319481   38044 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45753
	I0501 02:38:41.319848   38044 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:38:41.320246   38044 main.go:141] libmachine: Using API Version  1
	I0501 02:38:41.320271   38044 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:38:41.320562   38044 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:38:41.320727   38044 main.go:141] libmachine: (ha-329926) Calling .DriverName
	I0501 02:38:41.320894   38044 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0501 02:38:41.320924   38044 main.go:141] libmachine: (ha-329926) Calling .GetSSHHostname
	I0501 02:38:41.323649   38044 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:38:41.324087   38044 main.go:141] libmachine: (ha-329926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:43", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:31:18 +0000 UTC Type:0 Mac:52:54:00:ce:d8:43 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-329926 Clientid:01:52:54:00:ce:d8:43}
	I0501 02:38:41.324116   38044 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:38:41.324240   38044 main.go:141] libmachine: (ha-329926) Calling .GetSSHPort
	I0501 02:38:41.324392   38044 main.go:141] libmachine: (ha-329926) Calling .GetSSHKeyPath
	I0501 02:38:41.324499   38044 main.go:141] libmachine: (ha-329926) Calling .GetSSHUsername
	I0501 02:38:41.324670   38044 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926/id_rsa Username:docker}
	I0501 02:38:41.410891   38044 ssh_runner.go:195] Run: systemctl --version
	I0501 02:38:41.417894   38044 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 02:38:41.437188   38044 kubeconfig.go:125] found "ha-329926" server: "https://192.168.39.254:8443"
	I0501 02:38:41.437213   38044 api_server.go:166] Checking apiserver status ...
	I0501 02:38:41.437246   38044 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 02:38:41.455894   38044 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1174/cgroup
	W0501 02:38:41.468896   38044 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1174/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0501 02:38:41.468957   38044 ssh_runner.go:195] Run: ls
	I0501 02:38:41.474286   38044 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0501 02:38:41.478766   38044 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0501 02:38:41.478802   38044 status.go:422] ha-329926 apiserver status = Running (err=<nil>)
	I0501 02:38:41.478824   38044 status.go:257] ha-329926 status: &{Name:ha-329926 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0501 02:38:41.478849   38044 status.go:255] checking status of ha-329926-m02 ...
	I0501 02:38:41.479205   38044 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:38:41.479248   38044 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:38:41.494582   38044 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42633
	I0501 02:38:41.495049   38044 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:38:41.495608   38044 main.go:141] libmachine: Using API Version  1
	I0501 02:38:41.495638   38044 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:38:41.495947   38044 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:38:41.496149   38044 main.go:141] libmachine: (ha-329926-m02) Calling .GetState
	I0501 02:38:41.497725   38044 status.go:330] ha-329926-m02 host status = "Running" (err=<nil>)
	I0501 02:38:41.497740   38044 host.go:66] Checking if "ha-329926-m02" exists ...
	I0501 02:38:41.498050   38044 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:38:41.498090   38044 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:38:41.514154   38044 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41251
	I0501 02:38:41.514658   38044 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:38:41.515202   38044 main.go:141] libmachine: Using API Version  1
	I0501 02:38:41.515234   38044 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:38:41.515572   38044 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:38:41.515749   38044 main.go:141] libmachine: (ha-329926-m02) Calling .GetIP
	I0501 02:38:41.518842   38044 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:38:41.519229   38044 main.go:141] libmachine: (ha-329926-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:16:5f", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:32:18 +0000 UTC Type:0 Mac:52:54:00:92:16:5f Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-329926-m02 Clientid:01:52:54:00:92:16:5f}
	I0501 02:38:41.519256   38044 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined IP address 192.168.39.79 and MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:38:41.519399   38044 host.go:66] Checking if "ha-329926-m02" exists ...
	I0501 02:38:41.519707   38044 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:38:41.519744   38044 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:38:41.535269   38044 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40913
	I0501 02:38:41.535661   38044 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:38:41.536080   38044 main.go:141] libmachine: Using API Version  1
	I0501 02:38:41.536101   38044 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:38:41.536382   38044 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:38:41.536569   38044 main.go:141] libmachine: (ha-329926-m02) Calling .DriverName
	I0501 02:38:41.536779   38044 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0501 02:38:41.536800   38044 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHHostname
	I0501 02:38:41.539303   38044 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:38:41.539792   38044 main.go:141] libmachine: (ha-329926-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:16:5f", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:32:18 +0000 UTC Type:0 Mac:52:54:00:92:16:5f Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-329926-m02 Clientid:01:52:54:00:92:16:5f}
	I0501 02:38:41.539820   38044 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined IP address 192.168.39.79 and MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:38:41.539969   38044 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHPort
	I0501 02:38:41.540119   38044 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHKeyPath
	I0501 02:38:41.540255   38044 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHUsername
	I0501 02:38:41.540397   38044 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m02/id_rsa Username:docker}
	W0501 02:38:44.614652   38044 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.79:22: connect: no route to host
	W0501 02:38:44.614726   38044 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.79:22: connect: no route to host
	E0501 02:38:44.614740   38044 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.79:22: connect: no route to host
	I0501 02:38:44.614749   38044 status.go:257] ha-329926-m02 status: &{Name:ha-329926-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0501 02:38:44.614786   38044 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.79:22: connect: no route to host
	I0501 02:38:44.614793   38044 status.go:255] checking status of ha-329926-m03 ...
	I0501 02:38:44.615078   38044 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:38:44.615123   38044 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:38:44.631580   38044 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40233
	I0501 02:38:44.631975   38044 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:38:44.632523   38044 main.go:141] libmachine: Using API Version  1
	I0501 02:38:44.632549   38044 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:38:44.632895   38044 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:38:44.633113   38044 main.go:141] libmachine: (ha-329926-m03) Calling .GetState
	I0501 02:38:44.634875   38044 status.go:330] ha-329926-m03 host status = "Running" (err=<nil>)
	I0501 02:38:44.634895   38044 host.go:66] Checking if "ha-329926-m03" exists ...
	I0501 02:38:44.635206   38044 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:38:44.635249   38044 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:38:44.651395   38044 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44641
	I0501 02:38:44.651807   38044 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:38:44.652263   38044 main.go:141] libmachine: Using API Version  1
	I0501 02:38:44.652284   38044 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:38:44.652558   38044 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:38:44.652708   38044 main.go:141] libmachine: (ha-329926-m03) Calling .GetIP
	I0501 02:38:44.655532   38044 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:38:44.655945   38044 main.go:141] libmachine: (ha-329926-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:eb:7d", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:33:44 +0000 UTC Type:0 Mac:52:54:00:f9:eb:7d Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:ha-329926-m03 Clientid:01:52:54:00:f9:eb:7d}
	I0501 02:38:44.655977   38044 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined IP address 192.168.39.115 and MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:38:44.656074   38044 host.go:66] Checking if "ha-329926-m03" exists ...
	I0501 02:38:44.656418   38044 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:38:44.656467   38044 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:38:44.672271   38044 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39303
	I0501 02:38:44.672713   38044 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:38:44.673173   38044 main.go:141] libmachine: Using API Version  1
	I0501 02:38:44.673195   38044 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:38:44.673495   38044 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:38:44.673711   38044 main.go:141] libmachine: (ha-329926-m03) Calling .DriverName
	I0501 02:38:44.673890   38044 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0501 02:38:44.673911   38044 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHHostname
	I0501 02:38:44.676393   38044 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:38:44.676750   38044 main.go:141] libmachine: (ha-329926-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:eb:7d", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:33:44 +0000 UTC Type:0 Mac:52:54:00:f9:eb:7d Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:ha-329926-m03 Clientid:01:52:54:00:f9:eb:7d}
	I0501 02:38:44.676786   38044 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined IP address 192.168.39.115 and MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:38:44.676879   38044 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHPort
	I0501 02:38:44.677024   38044 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHKeyPath
	I0501 02:38:44.677182   38044 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHUsername
	I0501 02:38:44.677316   38044 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m03/id_rsa Username:docker}
	I0501 02:38:44.763141   38044 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 02:38:44.786370   38044 kubeconfig.go:125] found "ha-329926" server: "https://192.168.39.254:8443"
	I0501 02:38:44.786419   38044 api_server.go:166] Checking apiserver status ...
	I0501 02:38:44.786460   38044 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 02:38:44.803657   38044 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1592/cgroup
	W0501 02:38:44.815947   38044 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1592/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0501 02:38:44.816016   38044 ssh_runner.go:195] Run: ls
	I0501 02:38:44.821850   38044 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0501 02:38:44.826282   38044 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0501 02:38:44.826311   38044 status.go:422] ha-329926-m03 apiserver status = Running (err=<nil>)
	I0501 02:38:44.826320   38044 status.go:257] ha-329926-m03 status: &{Name:ha-329926-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0501 02:38:44.826337   38044 status.go:255] checking status of ha-329926-m04 ...
	I0501 02:38:44.826754   38044 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:38:44.826796   38044 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:38:44.841651   38044 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43597
	I0501 02:38:44.842040   38044 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:38:44.842601   38044 main.go:141] libmachine: Using API Version  1
	I0501 02:38:44.842625   38044 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:38:44.842937   38044 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:38:44.843125   38044 main.go:141] libmachine: (ha-329926-m04) Calling .GetState
	I0501 02:38:44.844643   38044 status.go:330] ha-329926-m04 host status = "Running" (err=<nil>)
	I0501 02:38:44.844657   38044 host.go:66] Checking if "ha-329926-m04" exists ...
	I0501 02:38:44.844937   38044 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:38:44.844986   38044 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:38:44.859814   38044 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46265
	I0501 02:38:44.860192   38044 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:38:44.860650   38044 main.go:141] libmachine: Using API Version  1
	I0501 02:38:44.860671   38044 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:38:44.860971   38044 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:38:44.861146   38044 main.go:141] libmachine: (ha-329926-m04) Calling .GetIP
	I0501 02:38:44.863760   38044 main.go:141] libmachine: (ha-329926-m04) DBG | domain ha-329926-m04 has defined MAC address 52:54:00:95:f4:8b in network mk-ha-329926
	I0501 02:38:44.864170   38044 main.go:141] libmachine: (ha-329926-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:f4:8b", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:35:11 +0000 UTC Type:0 Mac:52:54:00:95:f4:8b Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-329926-m04 Clientid:01:52:54:00:95:f4:8b}
	I0501 02:38:44.864201   38044 main.go:141] libmachine: (ha-329926-m04) DBG | domain ha-329926-m04 has defined IP address 192.168.39.84 and MAC address 52:54:00:95:f4:8b in network mk-ha-329926
	I0501 02:38:44.864377   38044 host.go:66] Checking if "ha-329926-m04" exists ...
	I0501 02:38:44.864713   38044 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:38:44.864748   38044 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:38:44.879820   38044 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40493
	I0501 02:38:44.880242   38044 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:38:44.880741   38044 main.go:141] libmachine: Using API Version  1
	I0501 02:38:44.880764   38044 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:38:44.881047   38044 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:38:44.881249   38044 main.go:141] libmachine: (ha-329926-m04) Calling .DriverName
	I0501 02:38:44.881428   38044 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0501 02:38:44.881447   38044 main.go:141] libmachine: (ha-329926-m04) Calling .GetSSHHostname
	I0501 02:38:44.884017   38044 main.go:141] libmachine: (ha-329926-m04) DBG | domain ha-329926-m04 has defined MAC address 52:54:00:95:f4:8b in network mk-ha-329926
	I0501 02:38:44.884382   38044 main.go:141] libmachine: (ha-329926-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:f4:8b", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:35:11 +0000 UTC Type:0 Mac:52:54:00:95:f4:8b Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-329926-m04 Clientid:01:52:54:00:95:f4:8b}
	I0501 02:38:44.884409   38044 main.go:141] libmachine: (ha-329926-m04) DBG | domain ha-329926-m04 has defined IP address 192.168.39.84 and MAC address 52:54:00:95:f4:8b in network mk-ha-329926
	I0501 02:38:44.884577   38044 main.go:141] libmachine: (ha-329926-m04) Calling .GetSSHPort
	I0501 02:38:44.884790   38044 main.go:141] libmachine: (ha-329926-m04) Calling .GetSSHKeyPath
	I0501 02:38:44.884965   38044 main.go:141] libmachine: (ha-329926-m04) Calling .GetSSHUsername
	I0501 02:38:44.885157   38044 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m04/id_rsa Username:docker}
	I0501 02:38:44.979292   38044 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 02:38:44.996171   38044 status.go:257] ha-329926-m04 status: &{Name:ha-329926-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-329926 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-329926 status -v=7 --alsologtostderr: exit status 3 (3.782121161s)

                                                
                                                
-- stdout --
	ha-329926
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-329926-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-329926-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-329926-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0501 02:38:48.508660   38145 out.go:291] Setting OutFile to fd 1 ...
	I0501 02:38:48.508915   38145 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:38:48.508926   38145 out.go:304] Setting ErrFile to fd 2...
	I0501 02:38:48.508930   38145 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:38:48.509103   38145 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18779-13391/.minikube/bin
	I0501 02:38:48.509271   38145 out.go:298] Setting JSON to false
	I0501 02:38:48.509313   38145 mustload.go:65] Loading cluster: ha-329926
	I0501 02:38:48.509434   38145 notify.go:220] Checking for updates...
	I0501 02:38:48.509724   38145 config.go:182] Loaded profile config "ha-329926": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 02:38:48.509742   38145 status.go:255] checking status of ha-329926 ...
	I0501 02:38:48.510100   38145 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:38:48.510150   38145 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:38:48.526589   38145 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43285
	I0501 02:38:48.526967   38145 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:38:48.527539   38145 main.go:141] libmachine: Using API Version  1
	I0501 02:38:48.527561   38145 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:38:48.527967   38145 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:38:48.528182   38145 main.go:141] libmachine: (ha-329926) Calling .GetState
	I0501 02:38:48.529699   38145 status.go:330] ha-329926 host status = "Running" (err=<nil>)
	I0501 02:38:48.529714   38145 host.go:66] Checking if "ha-329926" exists ...
	I0501 02:38:48.530115   38145 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:38:48.530168   38145 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:38:48.545894   38145 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43539
	I0501 02:38:48.546300   38145 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:38:48.546727   38145 main.go:141] libmachine: Using API Version  1
	I0501 02:38:48.546746   38145 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:38:48.547036   38145 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:38:48.547188   38145 main.go:141] libmachine: (ha-329926) Calling .GetIP
	I0501 02:38:48.549763   38145 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:38:48.550152   38145 main.go:141] libmachine: (ha-329926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:43", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:31:18 +0000 UTC Type:0 Mac:52:54:00:ce:d8:43 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-329926 Clientid:01:52:54:00:ce:d8:43}
	I0501 02:38:48.550177   38145 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:38:48.550292   38145 host.go:66] Checking if "ha-329926" exists ...
	I0501 02:38:48.550681   38145 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:38:48.550714   38145 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:38:48.565476   38145 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43015
	I0501 02:38:48.565853   38145 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:38:48.566301   38145 main.go:141] libmachine: Using API Version  1
	I0501 02:38:48.566322   38145 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:38:48.566759   38145 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:38:48.566967   38145 main.go:141] libmachine: (ha-329926) Calling .DriverName
	I0501 02:38:48.567137   38145 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0501 02:38:48.567171   38145 main.go:141] libmachine: (ha-329926) Calling .GetSSHHostname
	I0501 02:38:48.569682   38145 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:38:48.570060   38145 main.go:141] libmachine: (ha-329926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:43", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:31:18 +0000 UTC Type:0 Mac:52:54:00:ce:d8:43 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-329926 Clientid:01:52:54:00:ce:d8:43}
	I0501 02:38:48.570086   38145 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:38:48.570206   38145 main.go:141] libmachine: (ha-329926) Calling .GetSSHPort
	I0501 02:38:48.570394   38145 main.go:141] libmachine: (ha-329926) Calling .GetSSHKeyPath
	I0501 02:38:48.570545   38145 main.go:141] libmachine: (ha-329926) Calling .GetSSHUsername
	I0501 02:38:48.570663   38145 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926/id_rsa Username:docker}
	I0501 02:38:48.650575   38145 ssh_runner.go:195] Run: systemctl --version
	I0501 02:38:48.657094   38145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 02:38:48.674729   38145 kubeconfig.go:125] found "ha-329926" server: "https://192.168.39.254:8443"
	I0501 02:38:48.674839   38145 api_server.go:166] Checking apiserver status ...
	I0501 02:38:48.674902   38145 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 02:38:48.693500   38145 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1174/cgroup
	W0501 02:38:48.706912   38145 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1174/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0501 02:38:48.706974   38145 ssh_runner.go:195] Run: ls
	I0501 02:38:48.712568   38145 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0501 02:38:48.720426   38145 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0501 02:38:48.720451   38145 status.go:422] ha-329926 apiserver status = Running (err=<nil>)
	I0501 02:38:48.720462   38145 status.go:257] ha-329926 status: &{Name:ha-329926 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0501 02:38:48.720482   38145 status.go:255] checking status of ha-329926-m02 ...
	I0501 02:38:48.720781   38145 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:38:48.720820   38145 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:38:48.736062   38145 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34501
	I0501 02:38:48.736528   38145 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:38:48.736985   38145 main.go:141] libmachine: Using API Version  1
	I0501 02:38:48.737005   38145 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:38:48.737310   38145 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:38:48.737511   38145 main.go:141] libmachine: (ha-329926-m02) Calling .GetState
	I0501 02:38:48.739009   38145 status.go:330] ha-329926-m02 host status = "Running" (err=<nil>)
	I0501 02:38:48.739023   38145 host.go:66] Checking if "ha-329926-m02" exists ...
	I0501 02:38:48.739342   38145 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:38:48.739378   38145 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:38:48.753826   38145 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43569
	I0501 02:38:48.754204   38145 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:38:48.754661   38145 main.go:141] libmachine: Using API Version  1
	I0501 02:38:48.754682   38145 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:38:48.754986   38145 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:38:48.755177   38145 main.go:141] libmachine: (ha-329926-m02) Calling .GetIP
	I0501 02:38:48.757628   38145 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:38:48.758032   38145 main.go:141] libmachine: (ha-329926-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:16:5f", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:32:18 +0000 UTC Type:0 Mac:52:54:00:92:16:5f Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-329926-m02 Clientid:01:52:54:00:92:16:5f}
	I0501 02:38:48.758065   38145 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined IP address 192.168.39.79 and MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:38:48.758145   38145 host.go:66] Checking if "ha-329926-m02" exists ...
	I0501 02:38:48.758622   38145 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:38:48.758670   38145 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:38:48.773293   38145 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40991
	I0501 02:38:48.773774   38145 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:38:48.774281   38145 main.go:141] libmachine: Using API Version  1
	I0501 02:38:48.774303   38145 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:38:48.774629   38145 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:38:48.774815   38145 main.go:141] libmachine: (ha-329926-m02) Calling .DriverName
	I0501 02:38:48.774997   38145 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0501 02:38:48.775021   38145 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHHostname
	I0501 02:38:48.777779   38145 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:38:48.778336   38145 main.go:141] libmachine: (ha-329926-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:16:5f", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:32:18 +0000 UTC Type:0 Mac:52:54:00:92:16:5f Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-329926-m02 Clientid:01:52:54:00:92:16:5f}
	I0501 02:38:48.778368   38145 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined IP address 192.168.39.79 and MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:38:48.778467   38145 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHPort
	I0501 02:38:48.778675   38145 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHKeyPath
	I0501 02:38:48.778868   38145 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHUsername
	I0501 02:38:48.778996   38145 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m02/id_rsa Username:docker}
	W0501 02:38:51.846630   38145 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.79:22: connect: no route to host
	W0501 02:38:51.846757   38145 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.79:22: connect: no route to host
	E0501 02:38:51.846785   38145 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.79:22: connect: no route to host
	I0501 02:38:51.846794   38145 status.go:257] ha-329926-m02 status: &{Name:ha-329926-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0501 02:38:51.846811   38145 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.79:22: connect: no route to host
	I0501 02:38:51.846829   38145 status.go:255] checking status of ha-329926-m03 ...
	I0501 02:38:51.847122   38145 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:38:51.847170   38145 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:38:51.861672   38145 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46121
	I0501 02:38:51.862118   38145 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:38:51.862599   38145 main.go:141] libmachine: Using API Version  1
	I0501 02:38:51.862621   38145 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:38:51.862982   38145 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:38:51.863208   38145 main.go:141] libmachine: (ha-329926-m03) Calling .GetState
	I0501 02:38:51.864760   38145 status.go:330] ha-329926-m03 host status = "Running" (err=<nil>)
	I0501 02:38:51.864774   38145 host.go:66] Checking if "ha-329926-m03" exists ...
	I0501 02:38:51.865041   38145 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:38:51.865092   38145 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:38:51.879213   38145 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38983
	I0501 02:38:51.879594   38145 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:38:51.880030   38145 main.go:141] libmachine: Using API Version  1
	I0501 02:38:51.880051   38145 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:38:51.880333   38145 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:38:51.880543   38145 main.go:141] libmachine: (ha-329926-m03) Calling .GetIP
	I0501 02:38:51.883256   38145 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:38:51.883620   38145 main.go:141] libmachine: (ha-329926-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:eb:7d", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:33:44 +0000 UTC Type:0 Mac:52:54:00:f9:eb:7d Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:ha-329926-m03 Clientid:01:52:54:00:f9:eb:7d}
	I0501 02:38:51.883651   38145 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined IP address 192.168.39.115 and MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:38:51.883786   38145 host.go:66] Checking if "ha-329926-m03" exists ...
	I0501 02:38:51.884148   38145 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:38:51.884185   38145 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:38:51.898150   38145 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42435
	I0501 02:38:51.898525   38145 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:38:51.898898   38145 main.go:141] libmachine: Using API Version  1
	I0501 02:38:51.898919   38145 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:38:51.899181   38145 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:38:51.899357   38145 main.go:141] libmachine: (ha-329926-m03) Calling .DriverName
	I0501 02:38:51.899542   38145 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0501 02:38:51.899559   38145 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHHostname
	I0501 02:38:51.902311   38145 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:38:51.902772   38145 main.go:141] libmachine: (ha-329926-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:eb:7d", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:33:44 +0000 UTC Type:0 Mac:52:54:00:f9:eb:7d Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:ha-329926-m03 Clientid:01:52:54:00:f9:eb:7d}
	I0501 02:38:51.902800   38145 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined IP address 192.168.39.115 and MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:38:51.903013   38145 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHPort
	I0501 02:38:51.903196   38145 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHKeyPath
	I0501 02:38:51.903352   38145 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHUsername
	I0501 02:38:51.903516   38145 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m03/id_rsa Username:docker}
	I0501 02:38:52.007967   38145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 02:38:52.024068   38145 kubeconfig.go:125] found "ha-329926" server: "https://192.168.39.254:8443"
	I0501 02:38:52.024095   38145 api_server.go:166] Checking apiserver status ...
	I0501 02:38:52.024125   38145 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 02:38:52.041968   38145 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1592/cgroup
	W0501 02:38:52.052237   38145 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1592/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0501 02:38:52.052294   38145 ssh_runner.go:195] Run: ls
	I0501 02:38:52.057388   38145 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0501 02:38:52.068522   38145 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0501 02:38:52.068547   38145 status.go:422] ha-329926-m03 apiserver status = Running (err=<nil>)
	I0501 02:38:52.068556   38145 status.go:257] ha-329926-m03 status: &{Name:ha-329926-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0501 02:38:52.068570   38145 status.go:255] checking status of ha-329926-m04 ...
	I0501 02:38:52.068874   38145 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:38:52.068917   38145 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:38:52.083679   38145 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37147
	I0501 02:38:52.084107   38145 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:38:52.084597   38145 main.go:141] libmachine: Using API Version  1
	I0501 02:38:52.084619   38145 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:38:52.084895   38145 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:38:52.085052   38145 main.go:141] libmachine: (ha-329926-m04) Calling .GetState
	I0501 02:38:52.086622   38145 status.go:330] ha-329926-m04 host status = "Running" (err=<nil>)
	I0501 02:38:52.086635   38145 host.go:66] Checking if "ha-329926-m04" exists ...
	I0501 02:38:52.086903   38145 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:38:52.086936   38145 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:38:52.103229   38145 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34195
	I0501 02:38:52.103673   38145 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:38:52.104157   38145 main.go:141] libmachine: Using API Version  1
	I0501 02:38:52.104179   38145 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:38:52.104498   38145 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:38:52.104699   38145 main.go:141] libmachine: (ha-329926-m04) Calling .GetIP
	I0501 02:38:52.107599   38145 main.go:141] libmachine: (ha-329926-m04) DBG | domain ha-329926-m04 has defined MAC address 52:54:00:95:f4:8b in network mk-ha-329926
	I0501 02:38:52.108012   38145 main.go:141] libmachine: (ha-329926-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:f4:8b", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:35:11 +0000 UTC Type:0 Mac:52:54:00:95:f4:8b Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-329926-m04 Clientid:01:52:54:00:95:f4:8b}
	I0501 02:38:52.108032   38145 main.go:141] libmachine: (ha-329926-m04) DBG | domain ha-329926-m04 has defined IP address 192.168.39.84 and MAC address 52:54:00:95:f4:8b in network mk-ha-329926
	I0501 02:38:52.108148   38145 host.go:66] Checking if "ha-329926-m04" exists ...
	I0501 02:38:52.108566   38145 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:38:52.108611   38145 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:38:52.123631   38145 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44375
	I0501 02:38:52.123989   38145 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:38:52.124425   38145 main.go:141] libmachine: Using API Version  1
	I0501 02:38:52.124448   38145 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:38:52.124713   38145 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:38:52.124922   38145 main.go:141] libmachine: (ha-329926-m04) Calling .DriverName
	I0501 02:38:52.125123   38145 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0501 02:38:52.125145   38145 main.go:141] libmachine: (ha-329926-m04) Calling .GetSSHHostname
	I0501 02:38:52.127832   38145 main.go:141] libmachine: (ha-329926-m04) DBG | domain ha-329926-m04 has defined MAC address 52:54:00:95:f4:8b in network mk-ha-329926
	I0501 02:38:52.128188   38145 main.go:141] libmachine: (ha-329926-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:f4:8b", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:35:11 +0000 UTC Type:0 Mac:52:54:00:95:f4:8b Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-329926-m04 Clientid:01:52:54:00:95:f4:8b}
	I0501 02:38:52.128215   38145 main.go:141] libmachine: (ha-329926-m04) DBG | domain ha-329926-m04 has defined IP address 192.168.39.84 and MAC address 52:54:00:95:f4:8b in network mk-ha-329926
	I0501 02:38:52.128312   38145 main.go:141] libmachine: (ha-329926-m04) Calling .GetSSHPort
	I0501 02:38:52.128467   38145 main.go:141] libmachine: (ha-329926-m04) Calling .GetSSHKeyPath
	I0501 02:38:52.128608   38145 main.go:141] libmachine: (ha-329926-m04) Calling .GetSSHUsername
	I0501 02:38:52.128718   38145 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m04/id_rsa Username:docker}
	I0501 02:38:52.219211   38145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 02:38:52.235961   38145 status.go:257] ha-329926-m04 status: &{Name:ha-329926-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
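Both runs fail the same way: the SSH dial to ha-329926-m02 (192.168.39.79:22) returns "connect: no route to host", so the node is reported as Host:Error / Kubelet:Nonexistent and the command exits with status 3. A quick way to confirm that symptom outside the test is a plain TCP dial with a timeout; the sketch below is a minimal check, assuming the address taken from the log, and it only tests TCP reachability, not SSH authentication:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// reachable reports whether a TCP connection to addr can be opened within timeout.
// It checks reachability only; it does not perform an SSH handshake.
func reachable(addr string, timeout time.Duration) bool {
	conn, err := net.DialTimeout("tcp", addr, timeout)
	if err != nil {
		fmt.Println("dial failed:", err) // e.g. "connect: no route to host"
		return false
	}
	conn.Close()
	return true
}

func main() {
	// Address taken from the log above (ha-329926-m02's SSH endpoint).
	fmt.Println("ssh port reachable:", reachable("192.168.39.79:22", 3*time.Second))
}
```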
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-329926 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-329926 status -v=7 --alsologtostderr: exit status 3 (3.778630826s)

                                                
                                                
-- stdout --
	ha-329926
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-329926-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-329926-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-329926-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0501 02:38:55.992367   38262 out.go:291] Setting OutFile to fd 1 ...
	I0501 02:38:55.992743   38262 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:38:55.992754   38262 out.go:304] Setting ErrFile to fd 2...
	I0501 02:38:55.992759   38262 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:38:55.992931   38262 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18779-13391/.minikube/bin
	I0501 02:38:55.993190   38262 out.go:298] Setting JSON to false
	I0501 02:38:55.993223   38262 mustload.go:65] Loading cluster: ha-329926
	I0501 02:38:55.993276   38262 notify.go:220] Checking for updates...
	I0501 02:38:55.993798   38262 config.go:182] Loaded profile config "ha-329926": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 02:38:55.993820   38262 status.go:255] checking status of ha-329926 ...
	I0501 02:38:55.994276   38262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:38:55.994328   38262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:38:56.009538   38262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36913
	I0501 02:38:56.009983   38262 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:38:56.010570   38262 main.go:141] libmachine: Using API Version  1
	I0501 02:38:56.010590   38262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:38:56.010942   38262 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:38:56.011180   38262 main.go:141] libmachine: (ha-329926) Calling .GetState
	I0501 02:38:56.013068   38262 status.go:330] ha-329926 host status = "Running" (err=<nil>)
	I0501 02:38:56.013084   38262 host.go:66] Checking if "ha-329926" exists ...
	I0501 02:38:56.013385   38262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:38:56.013437   38262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:38:56.029699   38262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43319
	I0501 02:38:56.030129   38262 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:38:56.030705   38262 main.go:141] libmachine: Using API Version  1
	I0501 02:38:56.030731   38262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:38:56.031055   38262 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:38:56.031228   38262 main.go:141] libmachine: (ha-329926) Calling .GetIP
	I0501 02:38:56.033875   38262 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:38:56.034342   38262 main.go:141] libmachine: (ha-329926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:43", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:31:18 +0000 UTC Type:0 Mac:52:54:00:ce:d8:43 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-329926 Clientid:01:52:54:00:ce:d8:43}
	I0501 02:38:56.034376   38262 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:38:56.034538   38262 host.go:66] Checking if "ha-329926" exists ...
	I0501 02:38:56.034940   38262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:38:56.034987   38262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:38:56.049860   38262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33563
	I0501 02:38:56.050329   38262 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:38:56.050835   38262 main.go:141] libmachine: Using API Version  1
	I0501 02:38:56.050857   38262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:38:56.051161   38262 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:38:56.051343   38262 main.go:141] libmachine: (ha-329926) Calling .DriverName
	I0501 02:38:56.051521   38262 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0501 02:38:56.051552   38262 main.go:141] libmachine: (ha-329926) Calling .GetSSHHostname
	I0501 02:38:56.054569   38262 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:38:56.055010   38262 main.go:141] libmachine: (ha-329926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:43", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:31:18 +0000 UTC Type:0 Mac:52:54:00:ce:d8:43 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-329926 Clientid:01:52:54:00:ce:d8:43}
	I0501 02:38:56.055046   38262 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:38:56.055205   38262 main.go:141] libmachine: (ha-329926) Calling .GetSSHPort
	I0501 02:38:56.055395   38262 main.go:141] libmachine: (ha-329926) Calling .GetSSHKeyPath
	I0501 02:38:56.055574   38262 main.go:141] libmachine: (ha-329926) Calling .GetSSHUsername
	I0501 02:38:56.055730   38262 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926/id_rsa Username:docker}
	I0501 02:38:56.146072   38262 ssh_runner.go:195] Run: systemctl --version
	I0501 02:38:56.153684   38262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 02:38:56.173340   38262 kubeconfig.go:125] found "ha-329926" server: "https://192.168.39.254:8443"
	I0501 02:38:56.173367   38262 api_server.go:166] Checking apiserver status ...
	I0501 02:38:56.173395   38262 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 02:38:56.190352   38262 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1174/cgroup
	W0501 02:38:56.204109   38262 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1174/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0501 02:38:56.204184   38262 ssh_runner.go:195] Run: ls
	I0501 02:38:56.209735   38262 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0501 02:38:56.216707   38262 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0501 02:38:56.216734   38262 status.go:422] ha-329926 apiserver status = Running (err=<nil>)
	I0501 02:38:56.216746   38262 status.go:257] ha-329926 status: &{Name:ha-329926 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0501 02:38:56.216761   38262 status.go:255] checking status of ha-329926-m02 ...
	I0501 02:38:56.217048   38262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:38:56.217081   38262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:38:56.231653   38262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35097
	I0501 02:38:56.232139   38262 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:38:56.232643   38262 main.go:141] libmachine: Using API Version  1
	I0501 02:38:56.232665   38262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:38:56.232948   38262 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:38:56.233134   38262 main.go:141] libmachine: (ha-329926-m02) Calling .GetState
	I0501 02:38:56.234761   38262 status.go:330] ha-329926-m02 host status = "Running" (err=<nil>)
	I0501 02:38:56.234778   38262 host.go:66] Checking if "ha-329926-m02" exists ...
	I0501 02:38:56.235138   38262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:38:56.235176   38262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:38:56.250479   38262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37985
	I0501 02:38:56.250927   38262 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:38:56.251381   38262 main.go:141] libmachine: Using API Version  1
	I0501 02:38:56.251402   38262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:38:56.251700   38262 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:38:56.251881   38262 main.go:141] libmachine: (ha-329926-m02) Calling .GetIP
	I0501 02:38:56.254764   38262 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:38:56.255200   38262 main.go:141] libmachine: (ha-329926-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:16:5f", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:32:18 +0000 UTC Type:0 Mac:52:54:00:92:16:5f Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-329926-m02 Clientid:01:52:54:00:92:16:5f}
	I0501 02:38:56.255227   38262 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined IP address 192.168.39.79 and MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:38:56.255397   38262 host.go:66] Checking if "ha-329926-m02" exists ...
	I0501 02:38:56.255726   38262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:38:56.255769   38262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:38:56.271237   38262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41031
	I0501 02:38:56.271637   38262 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:38:56.272097   38262 main.go:141] libmachine: Using API Version  1
	I0501 02:38:56.272118   38262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:38:56.272429   38262 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:38:56.272626   38262 main.go:141] libmachine: (ha-329926-m02) Calling .DriverName
	I0501 02:38:56.272821   38262 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0501 02:38:56.272842   38262 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHHostname
	I0501 02:38:56.275616   38262 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:38:56.276041   38262 main.go:141] libmachine: (ha-329926-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:16:5f", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:32:18 +0000 UTC Type:0 Mac:52:54:00:92:16:5f Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-329926-m02 Clientid:01:52:54:00:92:16:5f}
	I0501 02:38:56.276068   38262 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined IP address 192.168.39.79 and MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:38:56.276191   38262 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHPort
	I0501 02:38:56.276356   38262 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHKeyPath
	I0501 02:38:56.276489   38262 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHUsername
	I0501 02:38:56.276606   38262 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m02/id_rsa Username:docker}
	W0501 02:38:59.334643   38262 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.79:22: connect: no route to host
	W0501 02:38:59.334760   38262 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.79:22: connect: no route to host
	E0501 02:38:59.334784   38262 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.79:22: connect: no route to host
	I0501 02:38:59.334793   38262 status.go:257] ha-329926-m02 status: &{Name:ha-329926-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0501 02:38:59.334818   38262 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.79:22: connect: no route to host
	I0501 02:38:59.334828   38262 status.go:255] checking status of ha-329926-m03 ...
	I0501 02:38:59.335270   38262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:38:59.335360   38262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:38:59.351931   38262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41785
	I0501 02:38:59.352373   38262 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:38:59.352906   38262 main.go:141] libmachine: Using API Version  1
	I0501 02:38:59.352932   38262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:38:59.353276   38262 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:38:59.353458   38262 main.go:141] libmachine: (ha-329926-m03) Calling .GetState
	I0501 02:38:59.355312   38262 status.go:330] ha-329926-m03 host status = "Running" (err=<nil>)
	I0501 02:38:59.355330   38262 host.go:66] Checking if "ha-329926-m03" exists ...
	I0501 02:38:59.355746   38262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:38:59.355794   38262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:38:59.373013   38262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42041
	I0501 02:38:59.373472   38262 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:38:59.373985   38262 main.go:141] libmachine: Using API Version  1
	I0501 02:38:59.374013   38262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:38:59.374314   38262 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:38:59.374477   38262 main.go:141] libmachine: (ha-329926-m03) Calling .GetIP
	I0501 02:38:59.377457   38262 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:38:59.377695   38262 main.go:141] libmachine: (ha-329926-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:eb:7d", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:33:44 +0000 UTC Type:0 Mac:52:54:00:f9:eb:7d Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:ha-329926-m03 Clientid:01:52:54:00:f9:eb:7d}
	I0501 02:38:59.377727   38262 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined IP address 192.168.39.115 and MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:38:59.377843   38262 host.go:66] Checking if "ha-329926-m03" exists ...
	I0501 02:38:59.378156   38262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:38:59.378208   38262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:38:59.396172   38262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38909
	I0501 02:38:59.396598   38262 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:38:59.397150   38262 main.go:141] libmachine: Using API Version  1
	I0501 02:38:59.397177   38262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:38:59.398158   38262 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:38:59.398348   38262 main.go:141] libmachine: (ha-329926-m03) Calling .DriverName
	I0501 02:38:59.398541   38262 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0501 02:38:59.398572   38262 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHHostname
	I0501 02:38:59.401076   38262 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:38:59.401574   38262 main.go:141] libmachine: (ha-329926-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:eb:7d", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:33:44 +0000 UTC Type:0 Mac:52:54:00:f9:eb:7d Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:ha-329926-m03 Clientid:01:52:54:00:f9:eb:7d}
	I0501 02:38:59.401599   38262 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined IP address 192.168.39.115 and MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:38:59.401742   38262 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHPort
	I0501 02:38:59.401935   38262 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHKeyPath
	I0501 02:38:59.402101   38262 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHUsername
	I0501 02:38:59.402261   38262 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m03/id_rsa Username:docker}
	I0501 02:38:59.494813   38262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 02:38:59.510595   38262 kubeconfig.go:125] found "ha-329926" server: "https://192.168.39.254:8443"
	I0501 02:38:59.510620   38262 api_server.go:166] Checking apiserver status ...
	I0501 02:38:59.510648   38262 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 02:38:59.525172   38262 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1592/cgroup
	W0501 02:38:59.535061   38262 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1592/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0501 02:38:59.535135   38262 ssh_runner.go:195] Run: ls
	I0501 02:38:59.540460   38262 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0501 02:38:59.545200   38262 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0501 02:38:59.545219   38262 status.go:422] ha-329926-m03 apiserver status = Running (err=<nil>)
	I0501 02:38:59.545228   38262 status.go:257] ha-329926-m03 status: &{Name:ha-329926-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0501 02:38:59.545245   38262 status.go:255] checking status of ha-329926-m04 ...
	I0501 02:38:59.545519   38262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:38:59.545550   38262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:38:59.560147   38262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35409
	I0501 02:38:59.560572   38262 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:38:59.561061   38262 main.go:141] libmachine: Using API Version  1
	I0501 02:38:59.561080   38262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:38:59.561390   38262 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:38:59.561567   38262 main.go:141] libmachine: (ha-329926-m04) Calling .GetState
	I0501 02:38:59.562903   38262 status.go:330] ha-329926-m04 host status = "Running" (err=<nil>)
	I0501 02:38:59.562920   38262 host.go:66] Checking if "ha-329926-m04" exists ...
	I0501 02:38:59.563208   38262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:38:59.563254   38262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:38:59.578865   38262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41379
	I0501 02:38:59.579385   38262 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:38:59.579842   38262 main.go:141] libmachine: Using API Version  1
	I0501 02:38:59.579864   38262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:38:59.580137   38262 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:38:59.580311   38262 main.go:141] libmachine: (ha-329926-m04) Calling .GetIP
	I0501 02:38:59.583041   38262 main.go:141] libmachine: (ha-329926-m04) DBG | domain ha-329926-m04 has defined MAC address 52:54:00:95:f4:8b in network mk-ha-329926
	I0501 02:38:59.583415   38262 main.go:141] libmachine: (ha-329926-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:f4:8b", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:35:11 +0000 UTC Type:0 Mac:52:54:00:95:f4:8b Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-329926-m04 Clientid:01:52:54:00:95:f4:8b}
	I0501 02:38:59.583458   38262 main.go:141] libmachine: (ha-329926-m04) DBG | domain ha-329926-m04 has defined IP address 192.168.39.84 and MAC address 52:54:00:95:f4:8b in network mk-ha-329926
	I0501 02:38:59.584777   38262 host.go:66] Checking if "ha-329926-m04" exists ...
	I0501 02:38:59.585165   38262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:38:59.585241   38262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:38:59.600030   38262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46759
	I0501 02:38:59.600441   38262 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:38:59.600994   38262 main.go:141] libmachine: Using API Version  1
	I0501 02:38:59.601023   38262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:38:59.601371   38262 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:38:59.601604   38262 main.go:141] libmachine: (ha-329926-m04) Calling .DriverName
	I0501 02:38:59.601825   38262 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0501 02:38:59.601851   38262 main.go:141] libmachine: (ha-329926-m04) Calling .GetSSHHostname
	I0501 02:38:59.604757   38262 main.go:141] libmachine: (ha-329926-m04) DBG | domain ha-329926-m04 has defined MAC address 52:54:00:95:f4:8b in network mk-ha-329926
	I0501 02:38:59.605206   38262 main.go:141] libmachine: (ha-329926-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:f4:8b", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:35:11 +0000 UTC Type:0 Mac:52:54:00:95:f4:8b Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-329926-m04 Clientid:01:52:54:00:95:f4:8b}
	I0501 02:38:59.605234   38262 main.go:141] libmachine: (ha-329926-m04) DBG | domain ha-329926-m04 has defined IP address 192.168.39.84 and MAC address 52:54:00:95:f4:8b in network mk-ha-329926
	I0501 02:38:59.605317   38262 main.go:141] libmachine: (ha-329926-m04) Calling .GetSSHPort
	I0501 02:38:59.605498   38262 main.go:141] libmachine: (ha-329926-m04) Calling .GetSSHKeyPath
	I0501 02:38:59.605623   38262 main.go:141] libmachine: (ha-329926-m04) Calling .GetSSHUsername
	I0501 02:38:59.605781   38262 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m04/id_rsa Username:docker}
	I0501 02:38:59.695323   38262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 02:38:59.713423   38262 status.go:257] ha-329926-m04 status: &{Name:ha-329926-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
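In the next run the picture changes: the GetState call now succeeds and ha-329926-m02 reports Host:Stopped rather than an SSH dial failure, and the exit code moves from 3 to 7. A caller that needs to wait for the secondary node to come back could poll the same status command; the sketch below is only a guess at such a wait loop, assuming the binary path and profile name from the logs, and it is not the code behind ha_test.go:428:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForNodeRunning polls "minikube status" for the given profile until the
// output after the node's name no longer contains "host: Error" or
// "host: Stopped", or the deadline expires. Non-zero exits are expected while
// the node is down, so the command error is ignored.
func waitForNodeRunning(binary, profile, node string, deadline time.Duration) error {
	end := time.Now().Add(deadline)
	for time.Now().Before(end) {
		out, _ := exec.Command(binary, "-p", profile, "status", "--alsologtostderr").CombinedOutput()
		if i := strings.Index(string(out), node); i >= 0 {
			// The slice also covers later nodes' blocks, which is acceptable
			// for this rough sketch.
			section := string(out)[i:]
			if !strings.Contains(section, "host: Error") && !strings.Contains(section, "host: Stopped") {
				return nil
			}
		}
		time.Sleep(5 * time.Second)
	}
	return fmt.Errorf("node %s did not become Running within %s", node, deadline)
}

func main() {
	if err := waitForNodeRunning("out/minikube-linux-amd64", "ha-329926", "ha-329926-m02", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```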
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-329926 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-329926 status -v=7 --alsologtostderr: exit status 7 (652.676893ms)

                                                
                                                
-- stdout --
	ha-329926
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-329926-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-329926-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-329926-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0501 02:39:06.114030   38399 out.go:291] Setting OutFile to fd 1 ...
	I0501 02:39:06.114176   38399 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:39:06.114187   38399 out.go:304] Setting ErrFile to fd 2...
	I0501 02:39:06.114191   38399 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:39:06.114448   38399 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18779-13391/.minikube/bin
	I0501 02:39:06.114641   38399 out.go:298] Setting JSON to false
	I0501 02:39:06.114668   38399 mustload.go:65] Loading cluster: ha-329926
	I0501 02:39:06.114769   38399 notify.go:220] Checking for updates...
	I0501 02:39:06.115028   38399 config.go:182] Loaded profile config "ha-329926": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 02:39:06.115043   38399 status.go:255] checking status of ha-329926 ...
	I0501 02:39:06.115461   38399 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:39:06.115523   38399 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:39:06.131303   38399 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37521
	I0501 02:39:06.131710   38399 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:39:06.132354   38399 main.go:141] libmachine: Using API Version  1
	I0501 02:39:06.132395   38399 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:39:06.132699   38399 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:39:06.132886   38399 main.go:141] libmachine: (ha-329926) Calling .GetState
	I0501 02:39:06.134634   38399 status.go:330] ha-329926 host status = "Running" (err=<nil>)
	I0501 02:39:06.134654   38399 host.go:66] Checking if "ha-329926" exists ...
	I0501 02:39:06.134949   38399 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:39:06.134987   38399 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:39:06.150679   38399 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43579
	I0501 02:39:06.151073   38399 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:39:06.151577   38399 main.go:141] libmachine: Using API Version  1
	I0501 02:39:06.151603   38399 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:39:06.151932   38399 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:39:06.152158   38399 main.go:141] libmachine: (ha-329926) Calling .GetIP
	I0501 02:39:06.155507   38399 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:39:06.156041   38399 main.go:141] libmachine: (ha-329926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:43", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:31:18 +0000 UTC Type:0 Mac:52:54:00:ce:d8:43 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-329926 Clientid:01:52:54:00:ce:d8:43}
	I0501 02:39:06.156068   38399 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:39:06.156204   38399 host.go:66] Checking if "ha-329926" exists ...
	I0501 02:39:06.156606   38399 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:39:06.156661   38399 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:39:06.172731   38399 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33899
	I0501 02:39:06.173176   38399 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:39:06.173617   38399 main.go:141] libmachine: Using API Version  1
	I0501 02:39:06.173638   38399 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:39:06.173991   38399 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:39:06.174162   38399 main.go:141] libmachine: (ha-329926) Calling .DriverName
	I0501 02:39:06.174431   38399 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0501 02:39:06.174459   38399 main.go:141] libmachine: (ha-329926) Calling .GetSSHHostname
	I0501 02:39:06.177013   38399 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:39:06.177423   38399 main.go:141] libmachine: (ha-329926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:43", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:31:18 +0000 UTC Type:0 Mac:52:54:00:ce:d8:43 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-329926 Clientid:01:52:54:00:ce:d8:43}
	I0501 02:39:06.177443   38399 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:39:06.177616   38399 main.go:141] libmachine: (ha-329926) Calling .GetSSHPort
	I0501 02:39:06.177788   38399 main.go:141] libmachine: (ha-329926) Calling .GetSSHKeyPath
	I0501 02:39:06.177921   38399 main.go:141] libmachine: (ha-329926) Calling .GetSSHUsername
	I0501 02:39:06.178055   38399 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926/id_rsa Username:docker}
	I0501 02:39:06.263149   38399 ssh_runner.go:195] Run: systemctl --version
	I0501 02:39:06.270167   38399 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 02:39:06.287192   38399 kubeconfig.go:125] found "ha-329926" server: "https://192.168.39.254:8443"
	I0501 02:39:06.287217   38399 api_server.go:166] Checking apiserver status ...
	I0501 02:39:06.287247   38399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 02:39:06.303815   38399 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1174/cgroup
	W0501 02:39:06.316091   38399 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1174/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0501 02:39:06.316154   38399 ssh_runner.go:195] Run: ls
	I0501 02:39:06.321531   38399 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0501 02:39:06.327158   38399 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0501 02:39:06.327183   38399 status.go:422] ha-329926 apiserver status = Running (err=<nil>)
	I0501 02:39:06.327193   38399 status.go:257] ha-329926 status: &{Name:ha-329926 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0501 02:39:06.327211   38399 status.go:255] checking status of ha-329926-m02 ...
	I0501 02:39:06.327551   38399 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:39:06.327597   38399 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:39:06.342235   38399 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39715
	I0501 02:39:06.342662   38399 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:39:06.343172   38399 main.go:141] libmachine: Using API Version  1
	I0501 02:39:06.343197   38399 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:39:06.343469   38399 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:39:06.343609   38399 main.go:141] libmachine: (ha-329926-m02) Calling .GetState
	I0501 02:39:06.345152   38399 status.go:330] ha-329926-m02 host status = "Stopped" (err=<nil>)
	I0501 02:39:06.345166   38399 status.go:343] host is not running, skipping remaining checks
	I0501 02:39:06.345172   38399 status.go:257] ha-329926-m02 status: &{Name:ha-329926-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0501 02:39:06.345186   38399 status.go:255] checking status of ha-329926-m03 ...
	I0501 02:39:06.345540   38399 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:39:06.345588   38399 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:39:06.359960   38399 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36699
	I0501 02:39:06.360380   38399 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:39:06.360849   38399 main.go:141] libmachine: Using API Version  1
	I0501 02:39:06.360879   38399 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:39:06.361254   38399 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:39:06.361435   38399 main.go:141] libmachine: (ha-329926-m03) Calling .GetState
	I0501 02:39:06.363012   38399 status.go:330] ha-329926-m03 host status = "Running" (err=<nil>)
	I0501 02:39:06.363029   38399 host.go:66] Checking if "ha-329926-m03" exists ...
	I0501 02:39:06.363414   38399 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:39:06.363465   38399 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:39:06.377390   38399 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46359
	I0501 02:39:06.377825   38399 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:39:06.378315   38399 main.go:141] libmachine: Using API Version  1
	I0501 02:39:06.378334   38399 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:39:06.378635   38399 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:39:06.378826   38399 main.go:141] libmachine: (ha-329926-m03) Calling .GetIP
	I0501 02:39:06.381393   38399 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:39:06.381750   38399 main.go:141] libmachine: (ha-329926-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:eb:7d", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:33:44 +0000 UTC Type:0 Mac:52:54:00:f9:eb:7d Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:ha-329926-m03 Clientid:01:52:54:00:f9:eb:7d}
	I0501 02:39:06.381773   38399 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined IP address 192.168.39.115 and MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:39:06.381939   38399 host.go:66] Checking if "ha-329926-m03" exists ...
	I0501 02:39:06.382229   38399 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:39:06.382262   38399 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:39:06.396213   38399 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32889
	I0501 02:39:06.396694   38399 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:39:06.397102   38399 main.go:141] libmachine: Using API Version  1
	I0501 02:39:06.397121   38399 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:39:06.397430   38399 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:39:06.397574   38399 main.go:141] libmachine: (ha-329926-m03) Calling .DriverName
	I0501 02:39:06.397742   38399 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0501 02:39:06.397761   38399 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHHostname
	I0501 02:39:06.400444   38399 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:39:06.400874   38399 main.go:141] libmachine: (ha-329926-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:eb:7d", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:33:44 +0000 UTC Type:0 Mac:52:54:00:f9:eb:7d Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:ha-329926-m03 Clientid:01:52:54:00:f9:eb:7d}
	I0501 02:39:06.400900   38399 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined IP address 192.168.39.115 and MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:39:06.401024   38399 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHPort
	I0501 02:39:06.401164   38399 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHKeyPath
	I0501 02:39:06.401306   38399 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHUsername
	I0501 02:39:06.401450   38399 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m03/id_rsa Username:docker}
	I0501 02:39:06.490610   38399 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 02:39:06.508773   38399 kubeconfig.go:125] found "ha-329926" server: "https://192.168.39.254:8443"
	I0501 02:39:06.508798   38399 api_server.go:166] Checking apiserver status ...
	I0501 02:39:06.508830   38399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 02:39:06.524545   38399 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1592/cgroup
	W0501 02:39:06.535591   38399 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1592/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0501 02:39:06.535640   38399 ssh_runner.go:195] Run: ls
	I0501 02:39:06.543991   38399 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0501 02:39:06.548767   38399 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0501 02:39:06.548794   38399 status.go:422] ha-329926-m03 apiserver status = Running (err=<nil>)
	I0501 02:39:06.548805   38399 status.go:257] ha-329926-m03 status: &{Name:ha-329926-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0501 02:39:06.548830   38399 status.go:255] checking status of ha-329926-m04 ...
	I0501 02:39:06.549212   38399 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:39:06.549261   38399 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:39:06.563703   38399 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46339
	I0501 02:39:06.564051   38399 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:39:06.564491   38399 main.go:141] libmachine: Using API Version  1
	I0501 02:39:06.564510   38399 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:39:06.564780   38399 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:39:06.564955   38399 main.go:141] libmachine: (ha-329926-m04) Calling .GetState
	I0501 02:39:06.566344   38399 status.go:330] ha-329926-m04 host status = "Running" (err=<nil>)
	I0501 02:39:06.566359   38399 host.go:66] Checking if "ha-329926-m04" exists ...
	I0501 02:39:06.566674   38399 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:39:06.566719   38399 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:39:06.580354   38399 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42547
	I0501 02:39:06.580758   38399 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:39:06.581159   38399 main.go:141] libmachine: Using API Version  1
	I0501 02:39:06.581178   38399 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:39:06.581448   38399 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:39:06.581604   38399 main.go:141] libmachine: (ha-329926-m04) Calling .GetIP
	I0501 02:39:06.584065   38399 main.go:141] libmachine: (ha-329926-m04) DBG | domain ha-329926-m04 has defined MAC address 52:54:00:95:f4:8b in network mk-ha-329926
	I0501 02:39:06.584457   38399 main.go:141] libmachine: (ha-329926-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:f4:8b", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:35:11 +0000 UTC Type:0 Mac:52:54:00:95:f4:8b Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-329926-m04 Clientid:01:52:54:00:95:f4:8b}
	I0501 02:39:06.584478   38399 main.go:141] libmachine: (ha-329926-m04) DBG | domain ha-329926-m04 has defined IP address 192.168.39.84 and MAC address 52:54:00:95:f4:8b in network mk-ha-329926
	I0501 02:39:06.584640   38399 host.go:66] Checking if "ha-329926-m04" exists ...
	I0501 02:39:06.584908   38399 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:39:06.584938   38399 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:39:06.599055   38399 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36391
	I0501 02:39:06.599381   38399 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:39:06.599743   38399 main.go:141] libmachine: Using API Version  1
	I0501 02:39:06.599755   38399 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:39:06.600007   38399 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:39:06.600188   38399 main.go:141] libmachine: (ha-329926-m04) Calling .DriverName
	I0501 02:39:06.600352   38399 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0501 02:39:06.600375   38399 main.go:141] libmachine: (ha-329926-m04) Calling .GetSSHHostname
	I0501 02:39:06.602733   38399 main.go:141] libmachine: (ha-329926-m04) DBG | domain ha-329926-m04 has defined MAC address 52:54:00:95:f4:8b in network mk-ha-329926
	I0501 02:39:06.603123   38399 main.go:141] libmachine: (ha-329926-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:f4:8b", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:35:11 +0000 UTC Type:0 Mac:52:54:00:95:f4:8b Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-329926-m04 Clientid:01:52:54:00:95:f4:8b}
	I0501 02:39:06.603148   38399 main.go:141] libmachine: (ha-329926-m04) DBG | domain ha-329926-m04 has defined IP address 192.168.39.84 and MAC address 52:54:00:95:f4:8b in network mk-ha-329926
	I0501 02:39:06.603292   38399 main.go:141] libmachine: (ha-329926-m04) Calling .GetSSHPort
	I0501 02:39:06.603463   38399 main.go:141] libmachine: (ha-329926-m04) Calling .GetSSHKeyPath
	I0501 02:39:06.603616   38399 main.go:141] libmachine: (ha-329926-m04) Calling .GetSSHUsername
	I0501 02:39:06.603757   38399 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m04/id_rsa Username:docker}
	I0501 02:39:06.694195   38399 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 02:39:06.711163   38399 status.go:257] ha-329926-m04 status: &{Name:ha-329926-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
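The stderr trace above shows what the status probe does for each node: launch the kvm2 plugin server, read the libvirt host state, SSH in to check kubelet, then hit the apiserver's /healthz on the HA VIP. The following is a minimal, hypothetical sketch of that last probe only (not minikube's own code); it assumes the VIP and port reported in the log and skips TLS verification purely to stay self-contained.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
)

func main() {
	// Probe the same endpoint the log reports: https://192.168.39.254:8443/healthz
	// InsecureSkipVerify is an assumption of this sketch, not a statement about how
	// minikube itself authenticates to the apiserver.
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.254:8443/healthz")
	if err != nil {
		fmt.Println("healthz check failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("healthz returned:", resp.Status) // "200 OK" in the run above
}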
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-329926 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-329926 status -v=7 --alsologtostderr: exit status 7 (666.228007ms)

                                                
                                                
-- stdout --
	ha-329926
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-329926-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-329926-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-329926-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0501 02:39:19.550265   38504 out.go:291] Setting OutFile to fd 1 ...
	I0501 02:39:19.550440   38504 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:39:19.550452   38504 out.go:304] Setting ErrFile to fd 2...
	I0501 02:39:19.550457   38504 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:39:19.550664   38504 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18779-13391/.minikube/bin
	I0501 02:39:19.550841   38504 out.go:298] Setting JSON to false
	I0501 02:39:19.550869   38504 mustload.go:65] Loading cluster: ha-329926
	I0501 02:39:19.550973   38504 notify.go:220] Checking for updates...
	I0501 02:39:19.551334   38504 config.go:182] Loaded profile config "ha-329926": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 02:39:19.551354   38504 status.go:255] checking status of ha-329926 ...
	I0501 02:39:19.551816   38504 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:39:19.551858   38504 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:39:19.572543   38504 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37841
	I0501 02:39:19.573049   38504 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:39:19.573649   38504 main.go:141] libmachine: Using API Version  1
	I0501 02:39:19.573691   38504 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:39:19.574013   38504 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:39:19.574210   38504 main.go:141] libmachine: (ha-329926) Calling .GetState
	I0501 02:39:19.575964   38504 status.go:330] ha-329926 host status = "Running" (err=<nil>)
	I0501 02:39:19.575984   38504 host.go:66] Checking if "ha-329926" exists ...
	I0501 02:39:19.576314   38504 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:39:19.576361   38504 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:39:19.591211   38504 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32959
	I0501 02:39:19.591580   38504 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:39:19.592006   38504 main.go:141] libmachine: Using API Version  1
	I0501 02:39:19.592034   38504 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:39:19.592326   38504 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:39:19.592543   38504 main.go:141] libmachine: (ha-329926) Calling .GetIP
	I0501 02:39:19.595292   38504 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:39:19.595777   38504 main.go:141] libmachine: (ha-329926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:43", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:31:18 +0000 UTC Type:0 Mac:52:54:00:ce:d8:43 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-329926 Clientid:01:52:54:00:ce:d8:43}
	I0501 02:39:19.595802   38504 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:39:19.595999   38504 host.go:66] Checking if "ha-329926" exists ...
	I0501 02:39:19.596332   38504 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:39:19.596371   38504 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:39:19.612135   38504 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46733
	I0501 02:39:19.612547   38504 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:39:19.612992   38504 main.go:141] libmachine: Using API Version  1
	I0501 02:39:19.613013   38504 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:39:19.613274   38504 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:39:19.613464   38504 main.go:141] libmachine: (ha-329926) Calling .DriverName
	I0501 02:39:19.613647   38504 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0501 02:39:19.613682   38504 main.go:141] libmachine: (ha-329926) Calling .GetSSHHostname
	I0501 02:39:19.616500   38504 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:39:19.616932   38504 main.go:141] libmachine: (ha-329926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:43", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:31:18 +0000 UTC Type:0 Mac:52:54:00:ce:d8:43 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-329926 Clientid:01:52:54:00:ce:d8:43}
	I0501 02:39:19.616952   38504 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:39:19.617112   38504 main.go:141] libmachine: (ha-329926) Calling .GetSSHPort
	I0501 02:39:19.617290   38504 main.go:141] libmachine: (ha-329926) Calling .GetSSHKeyPath
	I0501 02:39:19.617460   38504 main.go:141] libmachine: (ha-329926) Calling .GetSSHUsername
	I0501 02:39:19.617596   38504 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926/id_rsa Username:docker}
	I0501 02:39:19.698918   38504 ssh_runner.go:195] Run: systemctl --version
	I0501 02:39:19.706338   38504 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 02:39:19.725163   38504 kubeconfig.go:125] found "ha-329926" server: "https://192.168.39.254:8443"
	I0501 02:39:19.725193   38504 api_server.go:166] Checking apiserver status ...
	I0501 02:39:19.725230   38504 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 02:39:19.743514   38504 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1174/cgroup
	W0501 02:39:19.756640   38504 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1174/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0501 02:39:19.756695   38504 ssh_runner.go:195] Run: ls
	I0501 02:39:19.762558   38504 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0501 02:39:19.768704   38504 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0501 02:39:19.768723   38504 status.go:422] ha-329926 apiserver status = Running (err=<nil>)
	I0501 02:39:19.768742   38504 status.go:257] ha-329926 status: &{Name:ha-329926 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0501 02:39:19.768759   38504 status.go:255] checking status of ha-329926-m02 ...
	I0501 02:39:19.769038   38504 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:39:19.769075   38504 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:39:19.783470   38504 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39199
	I0501 02:39:19.783879   38504 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:39:19.784353   38504 main.go:141] libmachine: Using API Version  1
	I0501 02:39:19.784373   38504 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:39:19.784642   38504 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:39:19.784807   38504 main.go:141] libmachine: (ha-329926-m02) Calling .GetState
	I0501 02:39:19.786171   38504 status.go:330] ha-329926-m02 host status = "Stopped" (err=<nil>)
	I0501 02:39:19.786185   38504 status.go:343] host is not running, skipping remaining checks
	I0501 02:39:19.786194   38504 status.go:257] ha-329926-m02 status: &{Name:ha-329926-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0501 02:39:19.786212   38504 status.go:255] checking status of ha-329926-m03 ...
	I0501 02:39:19.786574   38504 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:39:19.786610   38504 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:39:19.800787   38504 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38585
	I0501 02:39:19.801487   38504 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:39:19.802892   38504 main.go:141] libmachine: Using API Version  1
	I0501 02:39:19.802918   38504 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:39:19.803282   38504 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:39:19.803468   38504 main.go:141] libmachine: (ha-329926-m03) Calling .GetState
	I0501 02:39:19.805125   38504 status.go:330] ha-329926-m03 host status = "Running" (err=<nil>)
	I0501 02:39:19.805139   38504 host.go:66] Checking if "ha-329926-m03" exists ...
	I0501 02:39:19.805420   38504 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:39:19.805457   38504 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:39:19.819889   38504 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33779
	I0501 02:39:19.820280   38504 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:39:19.820805   38504 main.go:141] libmachine: Using API Version  1
	I0501 02:39:19.820829   38504 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:39:19.821109   38504 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:39:19.821321   38504 main.go:141] libmachine: (ha-329926-m03) Calling .GetIP
	I0501 02:39:19.824215   38504 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:39:19.824630   38504 main.go:141] libmachine: (ha-329926-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:eb:7d", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:33:44 +0000 UTC Type:0 Mac:52:54:00:f9:eb:7d Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:ha-329926-m03 Clientid:01:52:54:00:f9:eb:7d}
	I0501 02:39:19.824666   38504 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined IP address 192.168.39.115 and MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:39:19.824819   38504 host.go:66] Checking if "ha-329926-m03" exists ...
	I0501 02:39:19.825094   38504 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:39:19.825129   38504 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:39:19.840495   38504 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42199
	I0501 02:39:19.840881   38504 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:39:19.841298   38504 main.go:141] libmachine: Using API Version  1
	I0501 02:39:19.841324   38504 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:39:19.841567   38504 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:39:19.841734   38504 main.go:141] libmachine: (ha-329926-m03) Calling .DriverName
	I0501 02:39:19.841873   38504 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0501 02:39:19.841891   38504 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHHostname
	I0501 02:39:19.844495   38504 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:39:19.844904   38504 main.go:141] libmachine: (ha-329926-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:eb:7d", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:33:44 +0000 UTC Type:0 Mac:52:54:00:f9:eb:7d Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:ha-329926-m03 Clientid:01:52:54:00:f9:eb:7d}
	I0501 02:39:19.844933   38504 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined IP address 192.168.39.115 and MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:39:19.845041   38504 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHPort
	I0501 02:39:19.845184   38504 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHKeyPath
	I0501 02:39:19.845335   38504 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHUsername
	I0501 02:39:19.845477   38504 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m03/id_rsa Username:docker}
	I0501 02:39:19.936100   38504 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 02:39:19.954055   38504 kubeconfig.go:125] found "ha-329926" server: "https://192.168.39.254:8443"
	I0501 02:39:19.954090   38504 api_server.go:166] Checking apiserver status ...
	I0501 02:39:19.954132   38504 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 02:39:19.969952   38504 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1592/cgroup
	W0501 02:39:19.981439   38504 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1592/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0501 02:39:19.981501   38504 ssh_runner.go:195] Run: ls
	I0501 02:39:19.986873   38504 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0501 02:39:19.991477   38504 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0501 02:39:19.991500   38504 status.go:422] ha-329926-m03 apiserver status = Running (err=<nil>)
	I0501 02:39:19.991509   38504 status.go:257] ha-329926-m03 status: &{Name:ha-329926-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0501 02:39:19.991525   38504 status.go:255] checking status of ha-329926-m04 ...
	I0501 02:39:19.991872   38504 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:39:19.991913   38504 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:39:20.007135   38504 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41681
	I0501 02:39:20.007524   38504 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:39:20.007949   38504 main.go:141] libmachine: Using API Version  1
	I0501 02:39:20.007968   38504 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:39:20.008261   38504 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:39:20.008449   38504 main.go:141] libmachine: (ha-329926-m04) Calling .GetState
	I0501 02:39:20.009892   38504 status.go:330] ha-329926-m04 host status = "Running" (err=<nil>)
	I0501 02:39:20.009910   38504 host.go:66] Checking if "ha-329926-m04" exists ...
	I0501 02:39:20.010229   38504 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:39:20.010268   38504 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:39:20.028860   38504 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41213
	I0501 02:39:20.029359   38504 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:39:20.029797   38504 main.go:141] libmachine: Using API Version  1
	I0501 02:39:20.029820   38504 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:39:20.030089   38504 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:39:20.030285   38504 main.go:141] libmachine: (ha-329926-m04) Calling .GetIP
	I0501 02:39:20.033053   38504 main.go:141] libmachine: (ha-329926-m04) DBG | domain ha-329926-m04 has defined MAC address 52:54:00:95:f4:8b in network mk-ha-329926
	I0501 02:39:20.033410   38504 main.go:141] libmachine: (ha-329926-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:f4:8b", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:35:11 +0000 UTC Type:0 Mac:52:54:00:95:f4:8b Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-329926-m04 Clientid:01:52:54:00:95:f4:8b}
	I0501 02:39:20.033443   38504 main.go:141] libmachine: (ha-329926-m04) DBG | domain ha-329926-m04 has defined IP address 192.168.39.84 and MAC address 52:54:00:95:f4:8b in network mk-ha-329926
	I0501 02:39:20.033601   38504 host.go:66] Checking if "ha-329926-m04" exists ...
	I0501 02:39:20.033919   38504 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:39:20.033954   38504 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:39:20.047780   38504 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34053
	I0501 02:39:20.048155   38504 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:39:20.048581   38504 main.go:141] libmachine: Using API Version  1
	I0501 02:39:20.048602   38504 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:39:20.048959   38504 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:39:20.049131   38504 main.go:141] libmachine: (ha-329926-m04) Calling .DriverName
	I0501 02:39:20.049326   38504 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0501 02:39:20.049343   38504 main.go:141] libmachine: (ha-329926-m04) Calling .GetSSHHostname
	I0501 02:39:20.051931   38504 main.go:141] libmachine: (ha-329926-m04) DBG | domain ha-329926-m04 has defined MAC address 52:54:00:95:f4:8b in network mk-ha-329926
	I0501 02:39:20.052292   38504 main.go:141] libmachine: (ha-329926-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:f4:8b", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:35:11 +0000 UTC Type:0 Mac:52:54:00:95:f4:8b Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-329926-m04 Clientid:01:52:54:00:95:f4:8b}
	I0501 02:39:20.052344   38504 main.go:141] libmachine: (ha-329926-m04) DBG | domain ha-329926-m04 has defined IP address 192.168.39.84 and MAC address 52:54:00:95:f4:8b in network mk-ha-329926
	I0501 02:39:20.052480   38504 main.go:141] libmachine: (ha-329926-m04) Calling .GetSSHPort
	I0501 02:39:20.052663   38504 main.go:141] libmachine: (ha-329926-m04) Calling .GetSSHKeyPath
	I0501 02:39:20.052808   38504 main.go:141] libmachine: (ha-329926-m04) Calling .GetSSHUsername
	I0501 02:39:20.052926   38504 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m04/id_rsa Username:docker}
	I0501 02:39:20.138368   38504 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 02:39:20.155205   38504 status.go:257] ha-329926-m04 status: &{Name:ha-329926-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-329926 status -v=7 --alsologtostderr" : exit status 7
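For reference, the non-zero exit asserted here can be reproduced outside the test harness with a small driver. This is a hypothetical sketch, not the ha_test.go helper; it assumes the same binary path, profile name, and flags that appear in the log above.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Same invocation as ha_test.go:428 above.
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "ha-329926", "status", "-v=7", "--alsologtostderr")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// The run above exits with status 7 while ha-329926-m02 is reported Stopped.
		fmt.Println("exit code:", exitErr.ExitCode())
	}
}

The test treats any non-zero exit from this command as a failure, which is why the stopped secondary node fails RestartSecondaryNode at this point.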
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-329926 -n ha-329926
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-329926 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-329926 logs -n 25: (1.596889182s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-329926 ssh -n                                                                | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:35 UTC | 01 May 24 02:35 UTC |
	|         | ha-329926-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-329926 cp ha-329926-m03:/home/docker/cp-test.txt                             | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:35 UTC | 01 May 24 02:35 UTC |
	|         | ha-329926:/home/docker/cp-test_ha-329926-m03_ha-329926.txt                      |           |         |         |                     |                     |
	| ssh     | ha-329926 ssh -n                                                                | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:35 UTC | 01 May 24 02:35 UTC |
	|         | ha-329926-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-329926 ssh -n ha-329926 sudo cat                                             | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:35 UTC | 01 May 24 02:35 UTC |
	|         | /home/docker/cp-test_ha-329926-m03_ha-329926.txt                                |           |         |         |                     |                     |
	| cp      | ha-329926 cp ha-329926-m03:/home/docker/cp-test.txt                             | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:35 UTC | 01 May 24 02:35 UTC |
	|         | ha-329926-m02:/home/docker/cp-test_ha-329926-m03_ha-329926-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-329926 ssh -n                                                                | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:35 UTC | 01 May 24 02:35 UTC |
	|         | ha-329926-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-329926 ssh -n ha-329926-m02 sudo cat                                         | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:35 UTC | 01 May 24 02:35 UTC |
	|         | /home/docker/cp-test_ha-329926-m03_ha-329926-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-329926 cp ha-329926-m03:/home/docker/cp-test.txt                             | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:35 UTC | 01 May 24 02:35 UTC |
	|         | ha-329926-m04:/home/docker/cp-test_ha-329926-m03_ha-329926-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-329926 ssh -n                                                                | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:35 UTC | 01 May 24 02:35 UTC |
	|         | ha-329926-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-329926 ssh -n ha-329926-m04 sudo cat                                         | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:35 UTC | 01 May 24 02:35 UTC |
	|         | /home/docker/cp-test_ha-329926-m03_ha-329926-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-329926 cp testdata/cp-test.txt                                               | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:35 UTC | 01 May 24 02:35 UTC |
	|         | ha-329926-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-329926 ssh -n                                                                | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:35 UTC | 01 May 24 02:35 UTC |
	|         | ha-329926-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-329926 cp ha-329926-m04:/home/docker/cp-test.txt                             | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:35 UTC | 01 May 24 02:35 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile895580191/001/cp-test_ha-329926-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-329926 ssh -n                                                                | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:35 UTC | 01 May 24 02:35 UTC |
	|         | ha-329926-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-329926 cp ha-329926-m04:/home/docker/cp-test.txt                             | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:35 UTC | 01 May 24 02:35 UTC |
	|         | ha-329926:/home/docker/cp-test_ha-329926-m04_ha-329926.txt                      |           |         |         |                     |                     |
	| ssh     | ha-329926 ssh -n                                                                | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:35 UTC | 01 May 24 02:35 UTC |
	|         | ha-329926-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-329926 ssh -n ha-329926 sudo cat                                             | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:35 UTC | 01 May 24 02:35 UTC |
	|         | /home/docker/cp-test_ha-329926-m04_ha-329926.txt                                |           |         |         |                     |                     |
	| cp      | ha-329926 cp ha-329926-m04:/home/docker/cp-test.txt                             | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:35 UTC | 01 May 24 02:35 UTC |
	|         | ha-329926-m02:/home/docker/cp-test_ha-329926-m04_ha-329926-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-329926 ssh -n                                                                | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:35 UTC | 01 May 24 02:35 UTC |
	|         | ha-329926-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-329926 ssh -n ha-329926-m02 sudo cat                                         | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:35 UTC | 01 May 24 02:35 UTC |
	|         | /home/docker/cp-test_ha-329926-m04_ha-329926-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-329926 cp ha-329926-m04:/home/docker/cp-test.txt                             | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:35 UTC | 01 May 24 02:35 UTC |
	|         | ha-329926-m03:/home/docker/cp-test_ha-329926-m04_ha-329926-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-329926 ssh -n                                                                | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:35 UTC | 01 May 24 02:35 UTC |
	|         | ha-329926-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-329926 ssh -n ha-329926-m03 sudo cat                                         | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:35 UTC | 01 May 24 02:35 UTC |
	|         | /home/docker/cp-test_ha-329926-m04_ha-329926-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-329926 node stop m02 -v=7                                                    | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:35 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | ha-329926 node start m02 -v=7                                                   | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:38 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/01 02:31:02
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0501 02:31:02.127151   32853 out.go:291] Setting OutFile to fd 1 ...
	I0501 02:31:02.127254   32853 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:31:02.127264   32853 out.go:304] Setting ErrFile to fd 2...
	I0501 02:31:02.127268   32853 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:31:02.127458   32853 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18779-13391/.minikube/bin
	I0501 02:31:02.128001   32853 out.go:298] Setting JSON to false
	I0501 02:31:02.128797   32853 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4405,"bootTime":1714526257,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0501 02:31:02.128859   32853 start.go:139] virtualization: kvm guest
	I0501 02:31:02.130891   32853 out.go:177] * [ha-329926] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0501 02:31:02.132216   32853 out.go:177]   - MINIKUBE_LOCATION=18779
	I0501 02:31:02.133332   32853 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0501 02:31:02.132243   32853 notify.go:220] Checking for updates...
	I0501 02:31:02.135670   32853 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18779-13391/kubeconfig
	I0501 02:31:02.137084   32853 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18779-13391/.minikube
	I0501 02:31:02.138504   32853 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0501 02:31:02.139897   32853 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0501 02:31:02.141367   32853 driver.go:392] Setting default libvirt URI to qemu:///system
	I0501 02:31:02.174964   32853 out.go:177] * Using the kvm2 driver based on user configuration
	I0501 02:31:02.176378   32853 start.go:297] selected driver: kvm2
	I0501 02:31:02.176396   32853 start.go:901] validating driver "kvm2" against <nil>
	I0501 02:31:02.176406   32853 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0501 02:31:02.177100   32853 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0501 02:31:02.177168   32853 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18779-13391/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0501 02:31:02.191961   32853 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0501 02:31:02.192043   32853 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0501 02:31:02.192259   32853 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0501 02:31:02.192310   32853 cni.go:84] Creating CNI manager for ""
	I0501 02:31:02.192331   32853 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0501 02:31:02.192341   32853 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0501 02:31:02.192386   32853 start.go:340] cluster config:
	{Name:ha-329926 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-329926 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 02:31:02.192467   32853 iso.go:125] acquiring lock: {Name:mkcd2496aadb29931e179193d707c731bfda98b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0501 02:31:02.194294   32853 out.go:177] * Starting "ha-329926" primary control-plane node in "ha-329926" cluster
	I0501 02:31:02.195474   32853 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0501 02:31:02.195504   32853 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0501 02:31:02.195513   32853 cache.go:56] Caching tarball of preloaded images
	I0501 02:31:02.195589   32853 preload.go:173] Found /home/jenkins/minikube-integration/18779-13391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0501 02:31:02.195609   32853 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0501 02:31:02.195892   32853 profile.go:143] Saving config to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/config.json ...
	I0501 02:31:02.195913   32853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/config.json: {Name:mkac9273eac834ed61b43bee84b2def140a2e5fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:31:02.196029   32853 start.go:360] acquireMachinesLock for ha-329926: {Name:mkc4548e5afe1d4b0a833f6c522103562b0cefff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0501 02:31:02.196056   32853 start.go:364] duration metric: took 15.002µs to acquireMachinesLock for "ha-329926"
	I0501 02:31:02.196073   32853 start.go:93] Provisioning new machine with config: &{Name:ha-329926 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-329926 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0501 02:31:02.196129   32853 start.go:125] createHost starting for "" (driver="kvm2")
	I0501 02:31:02.197767   32853 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0501 02:31:02.197867   32853 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:31:02.197898   32853 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:31:02.211916   32853 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39605
	I0501 02:31:02.212295   32853 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:31:02.212848   32853 main.go:141] libmachine: Using API Version  1
	I0501 02:31:02.212868   32853 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:31:02.213166   32853 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:31:02.213347   32853 main.go:141] libmachine: (ha-329926) Calling .GetMachineName
	I0501 02:31:02.213482   32853 main.go:141] libmachine: (ha-329926) Calling .DriverName
	I0501 02:31:02.213609   32853 start.go:159] libmachine.API.Create for "ha-329926" (driver="kvm2")
	I0501 02:31:02.213645   32853 client.go:168] LocalClient.Create starting
	I0501 02:31:02.213678   32853 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem
	I0501 02:31:02.213717   32853 main.go:141] libmachine: Decoding PEM data...
	I0501 02:31:02.213747   32853 main.go:141] libmachine: Parsing certificate...
	I0501 02:31:02.213833   32853 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem
	I0501 02:31:02.213874   32853 main.go:141] libmachine: Decoding PEM data...
	I0501 02:31:02.213896   32853 main.go:141] libmachine: Parsing certificate...
	I0501 02:31:02.213928   32853 main.go:141] libmachine: Running pre-create checks...
	I0501 02:31:02.213941   32853 main.go:141] libmachine: (ha-329926) Calling .PreCreateCheck
	I0501 02:31:02.214241   32853 main.go:141] libmachine: (ha-329926) Calling .GetConfigRaw
	I0501 02:31:02.214579   32853 main.go:141] libmachine: Creating machine...
	I0501 02:31:02.214605   32853 main.go:141] libmachine: (ha-329926) Calling .Create
	I0501 02:31:02.214738   32853 main.go:141] libmachine: (ha-329926) Creating KVM machine...
	I0501 02:31:02.216059   32853 main.go:141] libmachine: (ha-329926) DBG | found existing default KVM network
	I0501 02:31:02.216643   32853 main.go:141] libmachine: (ha-329926) DBG | I0501 02:31:02.216531   32876 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012d980}
	I0501 02:31:02.216682   32853 main.go:141] libmachine: (ha-329926) DBG | created network xml: 
	I0501 02:31:02.216705   32853 main.go:141] libmachine: (ha-329926) DBG | <network>
	I0501 02:31:02.216715   32853 main.go:141] libmachine: (ha-329926) DBG |   <name>mk-ha-329926</name>
	I0501 02:31:02.216726   32853 main.go:141] libmachine: (ha-329926) DBG |   <dns enable='no'/>
	I0501 02:31:02.216735   32853 main.go:141] libmachine: (ha-329926) DBG |   
	I0501 02:31:02.216747   32853 main.go:141] libmachine: (ha-329926) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0501 02:31:02.216756   32853 main.go:141] libmachine: (ha-329926) DBG |     <dhcp>
	I0501 02:31:02.216763   32853 main.go:141] libmachine: (ha-329926) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0501 02:31:02.216775   32853 main.go:141] libmachine: (ha-329926) DBG |     </dhcp>
	I0501 02:31:02.216787   32853 main.go:141] libmachine: (ha-329926) DBG |   </ip>
	I0501 02:31:02.216799   32853 main.go:141] libmachine: (ha-329926) DBG |   
	I0501 02:31:02.216819   32853 main.go:141] libmachine: (ha-329926) DBG | </network>
	I0501 02:31:02.216849   32853 main.go:141] libmachine: (ha-329926) DBG | 
	I0501 02:31:02.221819   32853 main.go:141] libmachine: (ha-329926) DBG | trying to create private KVM network mk-ha-329926 192.168.39.0/24...
	I0501 02:31:02.283186   32853 main.go:141] libmachine: (ha-329926) DBG | private KVM network mk-ha-329926 192.168.39.0/24 created
	I0501 02:31:02.283216   32853 main.go:141] libmachine: (ha-329926) Setting up store path in /home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926 ...
	I0501 02:31:02.283228   32853 main.go:141] libmachine: (ha-329926) DBG | I0501 02:31:02.283155   32876 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18779-13391/.minikube
	I0501 02:31:02.283265   32853 main.go:141] libmachine: (ha-329926) Building disk image from file:///home/jenkins/minikube-integration/18779-13391/.minikube/cache/iso/amd64/minikube-v1.33.0-1714498396-18779-amd64.iso
	I0501 02:31:02.283290   32853 main.go:141] libmachine: (ha-329926) Downloading /home/jenkins/minikube-integration/18779-13391/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18779-13391/.minikube/cache/iso/amd64/minikube-v1.33.0-1714498396-18779-amd64.iso...
	I0501 02:31:02.508576   32853 main.go:141] libmachine: (ha-329926) DBG | I0501 02:31:02.508477   32876 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926/id_rsa...
	I0501 02:31:02.768972   32853 main.go:141] libmachine: (ha-329926) DBG | I0501 02:31:02.768811   32876 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926/ha-329926.rawdisk...
	I0501 02:31:02.769012   32853 main.go:141] libmachine: (ha-329926) DBG | Writing magic tar header
	I0501 02:31:02.769028   32853 main.go:141] libmachine: (ha-329926) DBG | Writing SSH key tar header
	I0501 02:31:02.769049   32853 main.go:141] libmachine: (ha-329926) DBG | I0501 02:31:02.768957   32876 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926 ...
	I0501 02:31:02.769112   32853 main.go:141] libmachine: (ha-329926) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926
	I0501 02:31:02.769152   32853 main.go:141] libmachine: (ha-329926) Setting executable bit set on /home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926 (perms=drwx------)
	I0501 02:31:02.769164   32853 main.go:141] libmachine: (ha-329926) Setting executable bit set on /home/jenkins/minikube-integration/18779-13391/.minikube/machines (perms=drwxr-xr-x)
	I0501 02:31:02.769176   32853 main.go:141] libmachine: (ha-329926) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18779-13391/.minikube/machines
	I0501 02:31:02.769188   32853 main.go:141] libmachine: (ha-329926) Setting executable bit set on /home/jenkins/minikube-integration/18779-13391/.minikube (perms=drwxr-xr-x)
	I0501 02:31:02.769198   32853 main.go:141] libmachine: (ha-329926) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18779-13391/.minikube
	I0501 02:31:02.769211   32853 main.go:141] libmachine: (ha-329926) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18779-13391
	I0501 02:31:02.769218   32853 main.go:141] libmachine: (ha-329926) Setting executable bit set on /home/jenkins/minikube-integration/18779-13391 (perms=drwxrwxr-x)
	I0501 02:31:02.769224   32853 main.go:141] libmachine: (ha-329926) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0501 02:31:02.769253   32853 main.go:141] libmachine: (ha-329926) DBG | Checking permissions on dir: /home/jenkins
	I0501 02:31:02.769266   32853 main.go:141] libmachine: (ha-329926) DBG | Checking permissions on dir: /home
	I0501 02:31:02.769278   32853 main.go:141] libmachine: (ha-329926) DBG | Skipping /home - not owner
	I0501 02:31:02.769298   32853 main.go:141] libmachine: (ha-329926) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0501 02:31:02.769312   32853 main.go:141] libmachine: (ha-329926) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0501 02:31:02.769318   32853 main.go:141] libmachine: (ha-329926) Creating domain...
	I0501 02:31:02.770295   32853 main.go:141] libmachine: (ha-329926) define libvirt domain using xml: 
	I0501 02:31:02.770324   32853 main.go:141] libmachine: (ha-329926) <domain type='kvm'>
	I0501 02:31:02.770331   32853 main.go:141] libmachine: (ha-329926)   <name>ha-329926</name>
	I0501 02:31:02.770336   32853 main.go:141] libmachine: (ha-329926)   <memory unit='MiB'>2200</memory>
	I0501 02:31:02.770342   32853 main.go:141] libmachine: (ha-329926)   <vcpu>2</vcpu>
	I0501 02:31:02.770348   32853 main.go:141] libmachine: (ha-329926)   <features>
	I0501 02:31:02.770359   32853 main.go:141] libmachine: (ha-329926)     <acpi/>
	I0501 02:31:02.770366   32853 main.go:141] libmachine: (ha-329926)     <apic/>
	I0501 02:31:02.770394   32853 main.go:141] libmachine: (ha-329926)     <pae/>
	I0501 02:31:02.770427   32853 main.go:141] libmachine: (ha-329926)     
	I0501 02:31:02.770434   32853 main.go:141] libmachine: (ha-329926)   </features>
	I0501 02:31:02.770444   32853 main.go:141] libmachine: (ha-329926)   <cpu mode='host-passthrough'>
	I0501 02:31:02.770450   32853 main.go:141] libmachine: (ha-329926)   
	I0501 02:31:02.770457   32853 main.go:141] libmachine: (ha-329926)   </cpu>
	I0501 02:31:02.770465   32853 main.go:141] libmachine: (ha-329926)   <os>
	I0501 02:31:02.770470   32853 main.go:141] libmachine: (ha-329926)     <type>hvm</type>
	I0501 02:31:02.770474   32853 main.go:141] libmachine: (ha-329926)     <boot dev='cdrom'/>
	I0501 02:31:02.770481   32853 main.go:141] libmachine: (ha-329926)     <boot dev='hd'/>
	I0501 02:31:02.770486   32853 main.go:141] libmachine: (ha-329926)     <bootmenu enable='no'/>
	I0501 02:31:02.770490   32853 main.go:141] libmachine: (ha-329926)   </os>
	I0501 02:31:02.770495   32853 main.go:141] libmachine: (ha-329926)   <devices>
	I0501 02:31:02.770501   32853 main.go:141] libmachine: (ha-329926)     <disk type='file' device='cdrom'>
	I0501 02:31:02.770511   32853 main.go:141] libmachine: (ha-329926)       <source file='/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926/boot2docker.iso'/>
	I0501 02:31:02.770516   32853 main.go:141] libmachine: (ha-329926)       <target dev='hdc' bus='scsi'/>
	I0501 02:31:02.770521   32853 main.go:141] libmachine: (ha-329926)       <readonly/>
	I0501 02:31:02.770525   32853 main.go:141] libmachine: (ha-329926)     </disk>
	I0501 02:31:02.770533   32853 main.go:141] libmachine: (ha-329926)     <disk type='file' device='disk'>
	I0501 02:31:02.770539   32853 main.go:141] libmachine: (ha-329926)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0501 02:31:02.770549   32853 main.go:141] libmachine: (ha-329926)       <source file='/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926/ha-329926.rawdisk'/>
	I0501 02:31:02.770558   32853 main.go:141] libmachine: (ha-329926)       <target dev='hda' bus='virtio'/>
	I0501 02:31:02.770562   32853 main.go:141] libmachine: (ha-329926)     </disk>
	I0501 02:31:02.770568   32853 main.go:141] libmachine: (ha-329926)     <interface type='network'>
	I0501 02:31:02.770575   32853 main.go:141] libmachine: (ha-329926)       <source network='mk-ha-329926'/>
	I0501 02:31:02.770580   32853 main.go:141] libmachine: (ha-329926)       <model type='virtio'/>
	I0501 02:31:02.770585   32853 main.go:141] libmachine: (ha-329926)     </interface>
	I0501 02:31:02.770590   32853 main.go:141] libmachine: (ha-329926)     <interface type='network'>
	I0501 02:31:02.770597   32853 main.go:141] libmachine: (ha-329926)       <source network='default'/>
	I0501 02:31:02.770602   32853 main.go:141] libmachine: (ha-329926)       <model type='virtio'/>
	I0501 02:31:02.770609   32853 main.go:141] libmachine: (ha-329926)     </interface>
	I0501 02:31:02.770613   32853 main.go:141] libmachine: (ha-329926)     <serial type='pty'>
	I0501 02:31:02.770620   32853 main.go:141] libmachine: (ha-329926)       <target port='0'/>
	I0501 02:31:02.770625   32853 main.go:141] libmachine: (ha-329926)     </serial>
	I0501 02:31:02.770630   32853 main.go:141] libmachine: (ha-329926)     <console type='pty'>
	I0501 02:31:02.770636   32853 main.go:141] libmachine: (ha-329926)       <target type='serial' port='0'/>
	I0501 02:31:02.770652   32853 main.go:141] libmachine: (ha-329926)     </console>
	I0501 02:31:02.770660   32853 main.go:141] libmachine: (ha-329926)     <rng model='virtio'>
	I0501 02:31:02.770665   32853 main.go:141] libmachine: (ha-329926)       <backend model='random'>/dev/random</backend>
	I0501 02:31:02.770673   32853 main.go:141] libmachine: (ha-329926)     </rng>
	I0501 02:31:02.770677   32853 main.go:141] libmachine: (ha-329926)     
	I0501 02:31:02.770688   32853 main.go:141] libmachine: (ha-329926)     
	I0501 02:31:02.770695   32853 main.go:141] libmachine: (ha-329926)   </devices>
	I0501 02:31:02.770699   32853 main.go:141] libmachine: (ha-329926) </domain>
	I0501 02:31:02.770705   32853 main.go:141] libmachine: (ha-329926) 
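If a run needs debugging at this stage, the network and domain that were just defined can be inspected on the Jenkins host with standard virsh commands; a minimal sketch, assuming the qemu:///system connection the kvm2 driver uses:

    virsh -c qemu:///system net-dumpxml mk-ha-329926       # the private network defined above
    virsh -c qemu:///system dumpxml ha-329926               # the domain XML as libvirt stored it
    virsh -c qemu:///system net-dhcp-leases mk-ha-329926    # the lease the IP-wait loop below keeps polling for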
	I0501 02:31:02.775111   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:b8:ba:6a in network default
	I0501 02:31:02.775622   32853 main.go:141] libmachine: (ha-329926) Ensuring networks are active...
	I0501 02:31:02.775642   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:02.776188   32853 main.go:141] libmachine: (ha-329926) Ensuring network default is active
	I0501 02:31:02.776475   32853 main.go:141] libmachine: (ha-329926) Ensuring network mk-ha-329926 is active
	I0501 02:31:02.776962   32853 main.go:141] libmachine: (ha-329926) Getting domain xml...
	I0501 02:31:02.777590   32853 main.go:141] libmachine: (ha-329926) Creating domain...
	I0501 02:31:03.958358   32853 main.go:141] libmachine: (ha-329926) Waiting to get IP...
	I0501 02:31:03.959186   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:03.959545   32853 main.go:141] libmachine: (ha-329926) DBG | unable to find current IP address of domain ha-329926 in network mk-ha-329926
	I0501 02:31:03.959566   32853 main.go:141] libmachine: (ha-329926) DBG | I0501 02:31:03.959529   32876 retry.go:31] will retry after 238.732907ms: waiting for machine to come up
	I0501 02:31:04.200166   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:04.200557   32853 main.go:141] libmachine: (ha-329926) DBG | unable to find current IP address of domain ha-329926 in network mk-ha-329926
	I0501 02:31:04.200587   32853 main.go:141] libmachine: (ha-329926) DBG | I0501 02:31:04.200531   32876 retry.go:31] will retry after 374.829741ms: waiting for machine to come up
	I0501 02:31:04.576992   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:04.577416   32853 main.go:141] libmachine: (ha-329926) DBG | unable to find current IP address of domain ha-329926 in network mk-ha-329926
	I0501 02:31:04.577449   32853 main.go:141] libmachine: (ha-329926) DBG | I0501 02:31:04.577372   32876 retry.go:31] will retry after 309.413827ms: waiting for machine to come up
	I0501 02:31:04.888766   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:04.889189   32853 main.go:141] libmachine: (ha-329926) DBG | unable to find current IP address of domain ha-329926 in network mk-ha-329926
	I0501 02:31:04.889238   32853 main.go:141] libmachine: (ha-329926) DBG | I0501 02:31:04.889142   32876 retry.go:31] will retry after 366.291711ms: waiting for machine to come up
	I0501 02:31:05.256536   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:05.256930   32853 main.go:141] libmachine: (ha-329926) DBG | unable to find current IP address of domain ha-329926 in network mk-ha-329926
	I0501 02:31:05.256960   32853 main.go:141] libmachine: (ha-329926) DBG | I0501 02:31:05.256882   32876 retry.go:31] will retry after 711.660535ms: waiting for machine to come up
	I0501 02:31:05.969606   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:05.969985   32853 main.go:141] libmachine: (ha-329926) DBG | unable to find current IP address of domain ha-329926 in network mk-ha-329926
	I0501 02:31:05.970044   32853 main.go:141] libmachine: (ha-329926) DBG | I0501 02:31:05.969964   32876 retry.go:31] will retry after 826.819518ms: waiting for machine to come up
	I0501 02:31:06.797981   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:06.798491   32853 main.go:141] libmachine: (ha-329926) DBG | unable to find current IP address of domain ha-329926 in network mk-ha-329926
	I0501 02:31:06.798551   32853 main.go:141] libmachine: (ha-329926) DBG | I0501 02:31:06.798455   32876 retry.go:31] will retry after 766.952141ms: waiting for machine to come up
	I0501 02:31:07.566945   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:07.567298   32853 main.go:141] libmachine: (ha-329926) DBG | unable to find current IP address of domain ha-329926 in network mk-ha-329926
	I0501 02:31:07.567328   32853 main.go:141] libmachine: (ha-329926) DBG | I0501 02:31:07.567254   32876 retry.go:31] will retry after 1.148906462s: waiting for machine to come up
	I0501 02:31:08.717544   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:08.717895   32853 main.go:141] libmachine: (ha-329926) DBG | unable to find current IP address of domain ha-329926 in network mk-ha-329926
	I0501 02:31:08.717921   32853 main.go:141] libmachine: (ha-329926) DBG | I0501 02:31:08.717850   32876 retry.go:31] will retry after 1.572762289s: waiting for machine to come up
	I0501 02:31:10.292539   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:10.292913   32853 main.go:141] libmachine: (ha-329926) DBG | unable to find current IP address of domain ha-329926 in network mk-ha-329926
	I0501 02:31:10.292941   32853 main.go:141] libmachine: (ha-329926) DBG | I0501 02:31:10.292867   32876 retry.go:31] will retry after 2.066139393s: waiting for machine to come up
	I0501 02:31:12.360803   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:12.361151   32853 main.go:141] libmachine: (ha-329926) DBG | unable to find current IP address of domain ha-329926 in network mk-ha-329926
	I0501 02:31:12.361176   32853 main.go:141] libmachine: (ha-329926) DBG | I0501 02:31:12.361095   32876 retry.go:31] will retry after 2.871501826s: waiting for machine to come up
	I0501 02:31:15.236013   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:15.236432   32853 main.go:141] libmachine: (ha-329926) DBG | unable to find current IP address of domain ha-329926 in network mk-ha-329926
	I0501 02:31:15.236459   32853 main.go:141] libmachine: (ha-329926) DBG | I0501 02:31:15.236387   32876 retry.go:31] will retry after 3.153540987s: waiting for machine to come up
	I0501 02:31:18.391419   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:18.391858   32853 main.go:141] libmachine: (ha-329926) DBG | unable to find current IP address of domain ha-329926 in network mk-ha-329926
	I0501 02:31:18.391886   32853 main.go:141] libmachine: (ha-329926) DBG | I0501 02:31:18.391808   32876 retry.go:31] will retry after 4.132363881s: waiting for machine to come up
	I0501 02:31:22.525823   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:22.526223   32853 main.go:141] libmachine: (ha-329926) DBG | unable to find current IP address of domain ha-329926 in network mk-ha-329926
	I0501 02:31:22.526247   32853 main.go:141] libmachine: (ha-329926) DBG | I0501 02:31:22.526172   32876 retry.go:31] will retry after 4.703892793s: waiting for machine to come up
	I0501 02:31:27.231444   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:27.231840   32853 main.go:141] libmachine: (ha-329926) Found IP for machine: 192.168.39.5
	I0501 02:31:27.231868   32853 main.go:141] libmachine: (ha-329926) Reserving static IP address...
	I0501 02:31:27.231882   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has current primary IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:27.232282   32853 main.go:141] libmachine: (ha-329926) DBG | unable to find host DHCP lease matching {name: "ha-329926", mac: "52:54:00:ce:d8:43", ip: "192.168.39.5"} in network mk-ha-329926
	I0501 02:31:27.306230   32853 main.go:141] libmachine: (ha-329926) DBG | Getting to WaitForSSH function...
	I0501 02:31:27.306257   32853 main.go:141] libmachine: (ha-329926) Reserved static IP address: 192.168.39.5
	I0501 02:31:27.306296   32853 main.go:141] libmachine: (ha-329926) Waiting for SSH to be available...
	I0501 02:31:27.308886   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:27.309237   32853 main.go:141] libmachine: (ha-329926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:43", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:31:18 +0000 UTC Type:0 Mac:52:54:00:ce:d8:43 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ce:d8:43}
	I0501 02:31:27.309262   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:27.309426   32853 main.go:141] libmachine: (ha-329926) DBG | Using SSH client type: external
	I0501 02:31:27.309451   32853 main.go:141] libmachine: (ha-329926) DBG | Using SSH private key: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926/id_rsa (-rw-------)
	I0501 02:31:27.309482   32853 main.go:141] libmachine: (ha-329926) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.5 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0501 02:31:27.309495   32853 main.go:141] libmachine: (ha-329926) DBG | About to run SSH command:
	I0501 02:31:27.309507   32853 main.go:141] libmachine: (ha-329926) DBG | exit 0
	I0501 02:31:27.434752   32853 main.go:141] libmachine: (ha-329926) DBG | SSH cmd err, output: <nil>: 
	I0501 02:31:27.435030   32853 main.go:141] libmachine: (ha-329926) KVM machine creation complete!
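The external SSH probe logged above can be reproduced by hand when WaitForSSH stalls; the following is the same invocation reassembled from the logged argument list (same key, options and target):

    ssh -F /dev/null \
      -o ConnectionAttempts=3 -o ConnectTimeout=10 \
      -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet \
      -o PasswordAuthentication=no -o ServerAliveInterval=60 \
      -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
      -o IdentitiesOnly=yes \
      -i /home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926/id_rsa \
      -p 22 docker@192.168.39.5 'exit 0'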
	I0501 02:31:27.435317   32853 main.go:141] libmachine: (ha-329926) Calling .GetConfigRaw
	I0501 02:31:27.435956   32853 main.go:141] libmachine: (ha-329926) Calling .DriverName
	I0501 02:31:27.436206   32853 main.go:141] libmachine: (ha-329926) Calling .DriverName
	I0501 02:31:27.436384   32853 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0501 02:31:27.436396   32853 main.go:141] libmachine: (ha-329926) Calling .GetState
	I0501 02:31:27.437585   32853 main.go:141] libmachine: Detecting operating system of created instance...
	I0501 02:31:27.437597   32853 main.go:141] libmachine: Waiting for SSH to be available...
	I0501 02:31:27.437603   32853 main.go:141] libmachine: Getting to WaitForSSH function...
	I0501 02:31:27.437609   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHHostname
	I0501 02:31:27.439934   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:27.440337   32853 main.go:141] libmachine: (ha-329926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:43", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:31:18 +0000 UTC Type:0 Mac:52:54:00:ce:d8:43 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-329926 Clientid:01:52:54:00:ce:d8:43}
	I0501 02:31:27.440369   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:27.440519   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHPort
	I0501 02:31:27.440713   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHKeyPath
	I0501 02:31:27.440852   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHKeyPath
	I0501 02:31:27.440949   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHUsername
	I0501 02:31:27.441092   32853 main.go:141] libmachine: Using SSH client type: native
	I0501 02:31:27.441261   32853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I0501 02:31:27.441271   32853 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0501 02:31:27.542047   32853 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0501 02:31:27.542073   32853 main.go:141] libmachine: Detecting the provisioner...
	I0501 02:31:27.542084   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHHostname
	I0501 02:31:27.544546   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:27.544801   32853 main.go:141] libmachine: (ha-329926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:43", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:31:18 +0000 UTC Type:0 Mac:52:54:00:ce:d8:43 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-329926 Clientid:01:52:54:00:ce:d8:43}
	I0501 02:31:27.544823   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:27.544948   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHPort
	I0501 02:31:27.545142   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHKeyPath
	I0501 02:31:27.545293   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHKeyPath
	I0501 02:31:27.545418   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHUsername
	I0501 02:31:27.545555   32853 main.go:141] libmachine: Using SSH client type: native
	I0501 02:31:27.545774   32853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I0501 02:31:27.545791   32853 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0501 02:31:27.651855   32853 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0501 02:31:27.651922   32853 main.go:141] libmachine: found compatible host: buildroot
	I0501 02:31:27.651933   32853 main.go:141] libmachine: Provisioning with buildroot...
	I0501 02:31:27.651942   32853 main.go:141] libmachine: (ha-329926) Calling .GetMachineName
	I0501 02:31:27.652222   32853 buildroot.go:166] provisioning hostname "ha-329926"
	I0501 02:31:27.652254   32853 main.go:141] libmachine: (ha-329926) Calling .GetMachineName
	I0501 02:31:27.652482   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHHostname
	I0501 02:31:27.654880   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:27.655220   32853 main.go:141] libmachine: (ha-329926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:43", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:31:18 +0000 UTC Type:0 Mac:52:54:00:ce:d8:43 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-329926 Clientid:01:52:54:00:ce:d8:43}
	I0501 02:31:27.655237   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:27.655371   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHPort
	I0501 02:31:27.655541   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHKeyPath
	I0501 02:31:27.655687   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHKeyPath
	I0501 02:31:27.655837   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHUsername
	I0501 02:31:27.655996   32853 main.go:141] libmachine: Using SSH client type: native
	I0501 02:31:27.656194   32853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I0501 02:31:27.656209   32853 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-329926 && echo "ha-329926" | sudo tee /etc/hostname
	I0501 02:31:27.775558   32853 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-329926
	
	I0501 02:31:27.775590   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHHostname
	I0501 02:31:27.778154   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:27.778534   32853 main.go:141] libmachine: (ha-329926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:43", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:31:18 +0000 UTC Type:0 Mac:52:54:00:ce:d8:43 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-329926 Clientid:01:52:54:00:ce:d8:43}
	I0501 02:31:27.778586   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:27.778713   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHPort
	I0501 02:31:27.778940   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHKeyPath
	I0501 02:31:27.779113   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHKeyPath
	I0501 02:31:27.779293   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHUsername
	I0501 02:31:27.779460   32853 main.go:141] libmachine: Using SSH client type: native
	I0501 02:31:27.779694   32853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I0501 02:31:27.779714   32853 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-329926' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-329926/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-329926' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0501 02:31:27.893285   32853 main.go:141] libmachine: SSH cmd err, output: <nil>: 
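The snippet above only makes sure the new hostname resolves locally, either by rewriting an existing 127.0.1.1 entry or appending one. A quick check over the same SSH session, sketch only:

    grep -n 'ha-329926' /etc/hostname /etc/hosts    # expect ha-329926 in /etc/hostname and a 127.0.1.1 entry in /etc/hosts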
	I0501 02:31:27.893325   32853 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18779-13391/.minikube CaCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18779-13391/.minikube}
	I0501 02:31:27.893378   32853 buildroot.go:174] setting up certificates
	I0501 02:31:27.893397   32853 provision.go:84] configureAuth start
	I0501 02:31:27.893416   32853 main.go:141] libmachine: (ha-329926) Calling .GetMachineName
	I0501 02:31:27.893706   32853 main.go:141] libmachine: (ha-329926) Calling .GetIP
	I0501 02:31:27.896155   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:27.896491   32853 main.go:141] libmachine: (ha-329926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:43", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:31:18 +0000 UTC Type:0 Mac:52:54:00:ce:d8:43 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-329926 Clientid:01:52:54:00:ce:d8:43}
	I0501 02:31:27.896510   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:27.896597   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHHostname
	I0501 02:31:27.898661   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:27.898974   32853 main.go:141] libmachine: (ha-329926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:43", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:31:18 +0000 UTC Type:0 Mac:52:54:00:ce:d8:43 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-329926 Clientid:01:52:54:00:ce:d8:43}
	I0501 02:31:27.899000   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:27.899156   32853 provision.go:143] copyHostCerts
	I0501 02:31:27.899193   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem
	I0501 02:31:27.899220   32853 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem, removing ...
	I0501 02:31:27.899228   32853 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem
	I0501 02:31:27.899302   32853 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem (1123 bytes)
	I0501 02:31:27.899395   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem
	I0501 02:31:27.899415   32853 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem, removing ...
	I0501 02:31:27.899419   32853 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem
	I0501 02:31:27.899442   32853 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem (1675 bytes)
	I0501 02:31:27.899495   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem
	I0501 02:31:27.899510   32853 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem, removing ...
	I0501 02:31:27.899514   32853 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem
	I0501 02:31:27.899547   32853 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem (1078 bytes)
	I0501 02:31:27.899606   32853 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem org=jenkins.ha-329926 san=[127.0.0.1 192.168.39.5 ha-329926 localhost minikube]
	I0501 02:31:28.044485   32853 provision.go:177] copyRemoteCerts
	I0501 02:31:28.044535   32853 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0501 02:31:28.044556   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHHostname
	I0501 02:31:28.047199   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:28.047648   32853 main.go:141] libmachine: (ha-329926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:43", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:31:18 +0000 UTC Type:0 Mac:52:54:00:ce:d8:43 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-329926 Clientid:01:52:54:00:ce:d8:43}
	I0501 02:31:28.047686   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:28.047841   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHPort
	I0501 02:31:28.048023   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHKeyPath
	I0501 02:31:28.048183   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHUsername
	I0501 02:31:28.048316   32853 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926/id_rsa Username:docker}
	I0501 02:31:28.131981   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0501 02:31:28.132055   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0501 02:31:28.161023   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0501 02:31:28.161097   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0501 02:31:28.190060   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0501 02:31:28.190132   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0501 02:31:28.218808   32853 provision.go:87] duration metric: took 325.394032ms to configureAuth
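To confirm that the server certificate just copied carries the SANs it was generated with (the log shows san=[127.0.0.1 192.168.39.5 ha-329926 localhost minikube]), a quick check on the guest; sketch only:

    sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'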
	I0501 02:31:28.218836   32853 buildroot.go:189] setting minikube options for container-runtime
	I0501 02:31:28.219004   32853 config.go:182] Loaded profile config "ha-329926": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 02:31:28.219110   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHHostname
	I0501 02:31:28.221523   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:28.221859   32853 main.go:141] libmachine: (ha-329926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:43", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:31:18 +0000 UTC Type:0 Mac:52:54:00:ce:d8:43 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-329926 Clientid:01:52:54:00:ce:d8:43}
	I0501 02:31:28.221888   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:28.222053   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHPort
	I0501 02:31:28.222248   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHKeyPath
	I0501 02:31:28.222434   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHKeyPath
	I0501 02:31:28.222567   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHUsername
	I0501 02:31:28.222683   32853 main.go:141] libmachine: Using SSH client type: native
	I0501 02:31:28.222846   32853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I0501 02:31:28.222860   32853 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0501 02:31:28.506794   32853 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0501 02:31:28.506824   32853 main.go:141] libmachine: Checking connection to Docker...
	I0501 02:31:28.506834   32853 main.go:141] libmachine: (ha-329926) Calling .GetURL
	I0501 02:31:28.508069   32853 main.go:141] libmachine: (ha-329926) DBG | Using libvirt version 6000000
	I0501 02:31:28.510048   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:28.510322   32853 main.go:141] libmachine: (ha-329926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:43", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:31:18 +0000 UTC Type:0 Mac:52:54:00:ce:d8:43 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-329926 Clientid:01:52:54:00:ce:d8:43}
	I0501 02:31:28.510343   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:28.510543   32853 main.go:141] libmachine: Docker is up and running!
	I0501 02:31:28.510569   32853 main.go:141] libmachine: Reticulating splines...
	I0501 02:31:28.510575   32853 client.go:171] duration metric: took 26.296922163s to LocalClient.Create
	I0501 02:31:28.510597   32853 start.go:167] duration metric: took 26.296986611s to libmachine.API.Create "ha-329926"
	I0501 02:31:28.510609   32853 start.go:293] postStartSetup for "ha-329926" (driver="kvm2")
	I0501 02:31:28.510624   32853 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0501 02:31:28.510639   32853 main.go:141] libmachine: (ha-329926) Calling .DriverName
	I0501 02:31:28.510865   32853 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0501 02:31:28.510895   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHHostname
	I0501 02:31:28.512814   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:28.513130   32853 main.go:141] libmachine: (ha-329926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:43", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:31:18 +0000 UTC Type:0 Mac:52:54:00:ce:d8:43 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-329926 Clientid:01:52:54:00:ce:d8:43}
	I0501 02:31:28.513152   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:28.513256   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHPort
	I0501 02:31:28.513422   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHKeyPath
	I0501 02:31:28.513566   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHUsername
	I0501 02:31:28.513673   32853 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926/id_rsa Username:docker}
	I0501 02:31:28.593262   32853 ssh_runner.go:195] Run: cat /etc/os-release
	I0501 02:31:28.598118   32853 info.go:137] Remote host: Buildroot 2023.02.9
	I0501 02:31:28.598146   32853 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13391/.minikube/addons for local assets ...
	I0501 02:31:28.598226   32853 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13391/.minikube/files for local assets ...
	I0501 02:31:28.598317   32853 filesync.go:149] local asset: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem -> 207242.pem in /etc/ssl/certs
	I0501 02:31:28.598329   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem -> /etc/ssl/certs/207242.pem
	I0501 02:31:28.598460   32853 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0501 02:31:28.608303   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem --> /etc/ssl/certs/207242.pem (1708 bytes)
	I0501 02:31:28.634374   32853 start.go:296] duration metric: took 123.748542ms for postStartSetup
	I0501 02:31:28.634435   32853 main.go:141] libmachine: (ha-329926) Calling .GetConfigRaw
	I0501 02:31:28.635011   32853 main.go:141] libmachine: (ha-329926) Calling .GetIP
	I0501 02:31:28.637415   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:28.637744   32853 main.go:141] libmachine: (ha-329926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:43", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:31:18 +0000 UTC Type:0 Mac:52:54:00:ce:d8:43 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-329926 Clientid:01:52:54:00:ce:d8:43}
	I0501 02:31:28.637772   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:28.638014   32853 profile.go:143] Saving config to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/config.json ...
	I0501 02:31:28.638164   32853 start.go:128] duration metric: took 26.442026735s to createHost
	I0501 02:31:28.638184   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHHostname
	I0501 02:31:28.640154   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:28.640404   32853 main.go:141] libmachine: (ha-329926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:43", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:31:18 +0000 UTC Type:0 Mac:52:54:00:ce:d8:43 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-329926 Clientid:01:52:54:00:ce:d8:43}
	I0501 02:31:28.640430   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:28.640526   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHPort
	I0501 02:31:28.640720   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHKeyPath
	I0501 02:31:28.640860   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHKeyPath
	I0501 02:31:28.640990   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHUsername
	I0501 02:31:28.641115   32853 main.go:141] libmachine: Using SSH client type: native
	I0501 02:31:28.641289   32853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I0501 02:31:28.641312   32853 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0501 02:31:28.743716   32853 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714530688.716538813
	
	I0501 02:31:28.743739   32853 fix.go:216] guest clock: 1714530688.716538813
	I0501 02:31:28.743746   32853 fix.go:229] Guest: 2024-05-01 02:31:28.716538813 +0000 UTC Remote: 2024-05-01 02:31:28.638174692 +0000 UTC m=+26.560671961 (delta=78.364121ms)
	I0501 02:31:28.743771   32853 fix.go:200] guest clock delta is within tolerance: 78.364121ms
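A note on the %!s(MISSING)/%!N(MISSING) above (and in the earlier printf and find commands): that is the logger mangling literal % verbs in the command string, not what ran on the guest. The guest most likely executed the equivalent of:

    date +%s.%N    # seconds.nanoseconds since the epoch, i.e. the 1714530688.716538813 seen above

minikube compares that against the local clock to compute the host/guest skew; the delta of 78.364121ms is within tolerance, so provisioning continues.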
	I0501 02:31:28.743777   32853 start.go:83] releasing machines lock for "ha-329926", held for 26.547711947s
	I0501 02:31:28.743799   32853 main.go:141] libmachine: (ha-329926) Calling .DriverName
	I0501 02:31:28.744031   32853 main.go:141] libmachine: (ha-329926) Calling .GetIP
	I0501 02:31:28.746551   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:28.746896   32853 main.go:141] libmachine: (ha-329926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:43", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:31:18 +0000 UTC Type:0 Mac:52:54:00:ce:d8:43 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-329926 Clientid:01:52:54:00:ce:d8:43}
	I0501 02:31:28.746920   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:28.747070   32853 main.go:141] libmachine: (ha-329926) Calling .DriverName
	I0501 02:31:28.747674   32853 main.go:141] libmachine: (ha-329926) Calling .DriverName
	I0501 02:31:28.747860   32853 main.go:141] libmachine: (ha-329926) Calling .DriverName
	I0501 02:31:28.747973   32853 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0501 02:31:28.748005   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHHostname
	I0501 02:31:28.748095   32853 ssh_runner.go:195] Run: cat /version.json
	I0501 02:31:28.748117   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHHostname
	I0501 02:31:28.750298   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:28.750669   32853 main.go:141] libmachine: (ha-329926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:43", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:31:18 +0000 UTC Type:0 Mac:52:54:00:ce:d8:43 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-329926 Clientid:01:52:54:00:ce:d8:43}
	I0501 02:31:28.750693   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:28.750711   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:28.750864   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHPort
	I0501 02:31:28.751018   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHKeyPath
	I0501 02:31:28.751129   32853 main.go:141] libmachine: (ha-329926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:43", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:31:18 +0000 UTC Type:0 Mac:52:54:00:ce:d8:43 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-329926 Clientid:01:52:54:00:ce:d8:43}
	I0501 02:31:28.751150   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:28.751166   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHUsername
	I0501 02:31:28.751306   32853 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926/id_rsa Username:docker}
	I0501 02:31:28.751389   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHPort
	I0501 02:31:28.751533   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHKeyPath
	I0501 02:31:28.751656   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHUsername
	I0501 02:31:28.751809   32853 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926/id_rsa Username:docker}
	I0501 02:31:28.848869   32853 ssh_runner.go:195] Run: systemctl --version
	I0501 02:31:28.855210   32853 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0501 02:31:29.016256   32853 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0501 02:31:29.023608   32853 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0501 02:31:29.023691   32853 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0501 02:31:29.042085   32853 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0501 02:31:29.042115   32853 start.go:494] detecting cgroup driver to use...
	I0501 02:31:29.042178   32853 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0501 02:31:29.059776   32853 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 02:31:29.075189   32853 docker.go:217] disabling cri-docker service (if available) ...
	I0501 02:31:29.075262   32853 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0501 02:31:29.090216   32853 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0501 02:31:29.105523   32853 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0501 02:31:29.221270   32853 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0501 02:31:29.352751   32853 docker.go:233] disabling docker service ...
	I0501 02:31:29.352848   32853 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0501 02:31:29.369405   32853 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0501 02:31:29.383459   32853 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0501 02:31:29.520606   32853 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0501 02:31:29.660010   32853 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
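At this point docker and cri-docker are expected to be stopped, their sockets disabled and their services masked, leaving CRI-O as the only runtime. If a run fails around here, a quick status check over SSH (sketch):

    sudo systemctl is-enabled docker.service cri-docker.service    # expect: masked
    sudo systemctl is-enabled docker.socket cri-docker.socket      # expect: disabled
    sudo systemctl is-active docker                                # expect: inactive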
	I0501 02:31:29.675021   32853 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 02:31:29.695267   32853 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0501 02:31:29.695336   32853 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 02:31:29.707073   32853 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0501 02:31:29.707136   32853 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 02:31:29.718755   32853 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 02:31:29.730541   32853 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 02:31:29.743583   32853 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0501 02:31:29.756320   32853 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 02:31:29.768711   32853 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 02:31:29.788302   32853 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
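After the sed edits above, /etc/crio/crio.conf.d/02-crio.conf should contain roughly the following keys; this is the expected result sketched from the commands, not a dump taken from the machine:

    pause_image = "registry.k8s.io/pause:3.9"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]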
	I0501 02:31:29.800367   32853 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0501 02:31:29.811307   32853 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0501 02:31:29.811373   32853 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0501 02:31:29.825777   32853 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0501 02:31:29.837371   32853 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:31:29.952518   32853 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0501 02:31:30.093573   32853 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0501 02:31:30.093652   32853 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0501 02:31:30.098671   32853 start.go:562] Will wait 60s for crictl version
	I0501 02:31:30.098708   32853 ssh_runner.go:195] Run: which crictl
	I0501 02:31:30.103137   32853 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0501 02:31:30.139019   32853 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0501 02:31:30.139117   32853 ssh_runner.go:195] Run: crio --version
	I0501 02:31:30.168469   32853 ssh_runner.go:195] Run: crio --version
	I0501 02:31:30.203703   32853 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0501 02:31:30.205011   32853 main.go:141] libmachine: (ha-329926) Calling .GetIP
	I0501 02:31:30.207922   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:30.208309   32853 main.go:141] libmachine: (ha-329926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:43", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:31:18 +0000 UTC Type:0 Mac:52:54:00:ce:d8:43 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-329926 Clientid:01:52:54:00:ce:d8:43}
	I0501 02:31:30.208340   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:31:30.208519   32853 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0501 02:31:30.213134   32853 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0501 02:31:30.227739   32853 kubeadm.go:877] updating cluster {Name:ha-329926 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 Cl
usterName:ha-329926 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.5 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0501 02:31:30.227847   32853 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0501 02:31:30.227895   32853 ssh_runner.go:195] Run: sudo crictl images --output json
	I0501 02:31:30.278071   32853 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0501 02:31:30.278133   32853 ssh_runner.go:195] Run: which lz4
	I0501 02:31:30.282738   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0501 02:31:30.282841   32853 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0501 02:31:30.287593   32853 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0501 02:31:30.287625   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0501 02:31:31.899532   32853 crio.go:462] duration metric: took 1.616715499s to copy over tarball
	I0501 02:31:31.899619   32853 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0501 02:31:34.331728   32853 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.432078045s)
	I0501 02:31:34.331754   32853 crio.go:469] duration metric: took 2.432192448s to extract the tarball
	I0501 02:31:34.331761   32853 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0501 02:31:34.372975   32853 ssh_runner.go:195] Run: sudo crictl images --output json
	I0501 02:31:34.421556   32853 crio.go:514] all images are preloaded for cri-o runtime.
	I0501 02:31:34.421580   32853 cache_images.go:84] Images are preloaded, skipping loading
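The preload flow above is: run crictl images, and if the expected control-plane images are missing, scp the cached lz4 tarball into the VM, extract it under /var, and re-check. A minimal hand check of the same state, assuming SSH access to the node (illustrative, not part of the test output):

    sudo crictl images --output json | grep -c registry.k8s.io/kube-apiserver   # non-zero once the preload is in place
    sudo crictl images | grep v1.30.0                                           # human-readable listing of the preloaded tags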
	I0501 02:31:34.421589   32853 kubeadm.go:928] updating node { 192.168.39.5 8443 v1.30.0 crio true true} ...
	I0501 02:31:34.421690   32853 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-329926 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-329926 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
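The kubelet unit shown above clears ExecStart and re-sets it with the node-specific flags; it is written as a systemd drop-in (10-kubeadm.conf, scp'd a few lines below). To see what a node is actually running, assuming systemd inside the guest (illustrative only):

    systemctl cat kubelet            # kubelet.service plus the 10-kubeadm.conf drop-in
    systemctl status kubelet --no-pager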
	I0501 02:31:34.421758   32853 ssh_runner.go:195] Run: crio config
	I0501 02:31:34.470851   32853 cni.go:84] Creating CNI manager for ""
	I0501 02:31:34.470875   32853 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0501 02:31:34.470887   32853 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0501 02:31:34.470908   32853 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.5 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-329926 NodeName:ha-329926 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0501 02:31:34.471082   32853 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-329926"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
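The kubeadm config above is staged as /var/tmp/minikube/kubeadm.yaml.new and promoted to kubeadm.yaml before init. Recent kubeadm releases can sanity-check such a file directly; a sketch using the binary path from this run (not something the test itself executes):

    sudo /var/lib/minikube/binaries/v1.30.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
    sudo /var/lib/minikube/binaries/v1.30.0/kubeadm config print init-defaults    # compare against upstream defaults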
	
	I0501 02:31:34.471110   32853 kube-vip.go:111] generating kube-vip config ...
	I0501 02:31:34.471157   32853 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0501 02:31:34.494493   32853 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0501 02:31:34.494609   32853 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
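kube-vip runs as a static pod on each control-plane node; with cp_enable and lb_enable set it holds the HA VIP (192.168.39.254) on eth0 via ARP leader election and load-balances port 8443. Two quick checks once the node is up, assuming the interface and VIP from the manifest above (illustrative):

    ip addr show eth0 | grep 192.168.39.254    # the VIP appears on the current leader
    sudo crictl ps --name kube-vip             # the static pod's container is running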
	I0501 02:31:34.494670   32853 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0501 02:31:34.506544   32853 binaries.go:44] Found k8s binaries, skipping transfer
	I0501 02:31:34.506641   32853 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0501 02:31:34.518288   32853 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0501 02:31:34.537345   32853 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0501 02:31:34.556679   32853 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2147 bytes)
	I0501 02:31:34.575628   32853 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1352 bytes)
	I0501 02:31:34.594823   32853 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0501 02:31:34.599305   32853 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
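The bash one-liner above is an idempotent /etc/hosts update: strip any existing control-plane.minikube.internal entry, append the VIP mapping, and copy the result back over /etc/hosts. Verifying the record afterwards (illustrative):

    getent hosts control-plane.minikube.internal    # should resolve to 192.168.39.254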
	I0501 02:31:34.613451   32853 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:31:34.737037   32853 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 02:31:34.757717   32853 certs.go:68] Setting up /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926 for IP: 192.168.39.5
	I0501 02:31:34.757740   32853 certs.go:194] generating shared ca certs ...
	I0501 02:31:34.757759   32853 certs.go:226] acquiring lock for ca certs: {Name:mk85a75d14c00780d4bf09cd9bdaf0f96f1b8fce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:31:34.757924   32853 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key
	I0501 02:31:34.757995   32853 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key
	I0501 02:31:34.758010   32853 certs.go:256] generating profile certs ...
	I0501 02:31:34.758085   32853 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/client.key
	I0501 02:31:34.758102   32853 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/client.crt with IP's: []
	I0501 02:31:35.184404   32853 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/client.crt ...
	I0501 02:31:35.184439   32853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/client.crt: {Name:mk7262274ab19f428bd917a3a08a2ab22cf28192 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:31:35.184627   32853 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/client.key ...
	I0501 02:31:35.184641   32853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/client.key: {Name:mk6a4a995038232669fc0f6a17d68762f3b81c49 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:31:35.184741   32853 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.key.a34e53e1
	I0501 02:31:35.184761   32853 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.crt.a34e53e1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.5 192.168.39.254]
	I0501 02:31:35.280636   32853 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.crt.a34e53e1 ...
	I0501 02:31:35.280666   32853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.crt.a34e53e1: {Name:mk4e096a3a58435245d20a768dcb5062bf6dfa7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:31:35.280838   32853 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.key.a34e53e1 ...
	I0501 02:31:35.280854   32853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.key.a34e53e1: {Name:mk47468eb32dd383aceebd71d208491de3b69700 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:31:35.280943   32853 certs.go:381] copying /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.crt.a34e53e1 -> /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.crt
	I0501 02:31:35.281017   32853 certs.go:385] copying /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.key.a34e53e1 -> /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.key
	I0501 02:31:35.281066   32853 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/proxy-client.key
	I0501 02:31:35.281080   32853 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/proxy-client.crt with IP's: []
	I0501 02:31:35.610854   32853 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/proxy-client.crt ...
	I0501 02:31:35.610887   32853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/proxy-client.crt: {Name:mk234aa7e8d9b93676c6aac1337f4aea75086303 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:31:35.611073   32853 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/proxy-client.key ...
	I0501 02:31:35.611088   32853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/proxy-client.key: {Name:mk9b5c622227e136431a0d879f84ae5015bc057c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:31:35.611187   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0501 02:31:35.611205   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0501 02:31:35.611215   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0501 02:31:35.611228   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0501 02:31:35.611241   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0501 02:31:35.611254   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0501 02:31:35.611266   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0501 02:31:35.611283   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0501 02:31:35.611330   32853 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem (1338 bytes)
	W0501 02:31:35.611367   32853 certs.go:480] ignoring /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724_empty.pem, impossibly tiny 0 bytes
	I0501 02:31:35.611377   32853 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem (1679 bytes)
	I0501 02:31:35.611397   32853 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem (1078 bytes)
	I0501 02:31:35.611420   32853 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem (1123 bytes)
	I0501 02:31:35.611442   32853 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem (1675 bytes)
	I0501 02:31:35.611480   32853 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem (1708 bytes)
	I0501 02:31:35.611507   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:31:35.611521   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem -> /usr/share/ca-certificates/20724.pem
	I0501 02:31:35.611533   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem -> /usr/share/ca-certificates/207242.pem
	I0501 02:31:35.612028   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0501 02:31:35.653543   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0501 02:31:35.682002   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0501 02:31:35.728901   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0501 02:31:35.756532   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0501 02:31:35.783290   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0501 02:31:35.810383   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0501 02:31:35.839806   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0501 02:31:35.867844   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0501 02:31:35.895750   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem --> /usr/share/ca-certificates/20724.pem (1338 bytes)
	I0501 02:31:35.921788   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem --> /usr/share/ca-certificates/207242.pem (1708 bytes)
	I0501 02:31:35.947447   32853 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0501 02:31:35.966086   32853 ssh_runner.go:195] Run: openssl version
	I0501 02:31:35.973809   32853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0501 02:31:35.986988   32853 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:31:35.992144   32853 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  1 02:08 /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:31:35.992201   32853 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:31:35.998663   32853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0501 02:31:36.011667   32853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20724.pem && ln -fs /usr/share/ca-certificates/20724.pem /etc/ssl/certs/20724.pem"
	I0501 02:31:36.025015   32853 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20724.pem
	I0501 02:31:36.030258   32853 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  1 02:20 /usr/share/ca-certificates/20724.pem
	I0501 02:31:36.030315   32853 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20724.pem
	I0501 02:31:36.036690   32853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20724.pem /etc/ssl/certs/51391683.0"
	I0501 02:31:36.050321   32853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/207242.pem && ln -fs /usr/share/ca-certificates/207242.pem /etc/ssl/certs/207242.pem"
	I0501 02:31:36.063932   32853 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/207242.pem
	I0501 02:31:36.069293   32853 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  1 02:20 /usr/share/ca-certificates/207242.pem
	I0501 02:31:36.069351   32853 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/207242.pem
	I0501 02:31:36.075994   32853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/207242.pem /etc/ssl/certs/3ec20f2e.0"
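The openssl/ln pairs above follow the OpenSSL hashed-directory convention: each CA in /etc/ssl/certs gets a symlink named <subject-hash>.0 so TLS clients can locate it. The same pattern for an arbitrary extra CA (extraCA.pem is a hypothetical file name, shown only to illustrate):

    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/extraCA.pem)
    sudo ln -fs /usr/share/ca-certificates/extraCA.pem /etc/ssl/certs/${HASH}.0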
	I0501 02:31:36.089356   32853 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0501 02:31:36.094230   32853 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0501 02:31:36.094289   32853 kubeadm.go:391] StartCluster: {Name:ha-329926 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 Clust
erName:ha-329926 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.5 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 02:31:36.094363   32853 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0501 02:31:36.094444   32853 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0501 02:31:36.134470   32853 cri.go:89] found id: ""
	I0501 02:31:36.134545   32853 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0501 02:31:36.146249   32853 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0501 02:31:36.157221   32853 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0501 02:31:36.167977   32853 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0501 02:31:36.167994   32853 kubeadm.go:156] found existing configuration files:
	
	I0501 02:31:36.168026   32853 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0501 02:31:36.178309   32853 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0501 02:31:36.178363   32853 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0501 02:31:36.189152   32853 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0501 02:31:36.199616   32853 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0501 02:31:36.199667   32853 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0501 02:31:36.210379   32853 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0501 02:31:36.220906   32853 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0501 02:31:36.220954   32853 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0501 02:31:36.232914   32853 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0501 02:31:36.244646   32853 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0501 02:31:36.244693   32853 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
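Each grep/rm pair above checks whether an existing kubeconfig already points at https://control-plane.minikube.internal:8443 and deletes it otherwise, so kubeadm init starts from a clean slate. Condensed to one line for reference (illustrative; same file names as in the log):

    sudo grep -q 'https://control-plane.minikube.internal:8443' /etc/kubernetes/admin.conf || sudo rm -f /etc/kubernetes/admin.conf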
	I0501 02:31:36.255737   32853 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0501 02:31:36.512862   32853 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0501 02:31:48.755507   32853 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0501 02:31:48.755566   32853 kubeadm.go:309] [preflight] Running pre-flight checks
	I0501 02:31:48.755657   32853 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0501 02:31:48.755766   32853 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0501 02:31:48.755902   32853 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0501 02:31:48.756000   32853 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0501 02:31:48.757306   32853 out.go:204]   - Generating certificates and keys ...
	I0501 02:31:48.757389   32853 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0501 02:31:48.757467   32853 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0501 02:31:48.757562   32853 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0501 02:31:48.757643   32853 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0501 02:31:48.757721   32853 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0501 02:31:48.757797   32853 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0501 02:31:48.757875   32853 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0501 02:31:48.758036   32853 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-329926 localhost] and IPs [192.168.39.5 127.0.0.1 ::1]
	I0501 02:31:48.758119   32853 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0501 02:31:48.758222   32853 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-329926 localhost] and IPs [192.168.39.5 127.0.0.1 ::1]
	I0501 02:31:48.758282   32853 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0501 02:31:48.758355   32853 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0501 02:31:48.758433   32853 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0501 02:31:48.758499   32853 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0501 02:31:48.758570   32853 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0501 02:31:48.758615   32853 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0501 02:31:48.758656   32853 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0501 02:31:48.758708   32853 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0501 02:31:48.758772   32853 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0501 02:31:48.758885   32853 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0501 02:31:48.758938   32853 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0501 02:31:48.760267   32853 out.go:204]   - Booting up control plane ...
	I0501 02:31:48.760382   32853 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0501 02:31:48.760490   32853 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0501 02:31:48.760544   32853 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0501 02:31:48.760626   32853 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0501 02:31:48.760693   32853 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0501 02:31:48.760758   32853 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0501 02:31:48.760935   32853 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0501 02:31:48.761027   32853 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0501 02:31:48.761120   32853 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.001253757s
	I0501 02:31:48.761214   32853 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0501 02:31:48.761289   32853 kubeadm.go:309] [api-check] The API server is healthy after 6.003159502s
	I0501 02:31:48.761425   32853 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0501 02:31:48.761542   32853 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0501 02:31:48.761627   32853 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0501 02:31:48.761821   32853 kubeadm.go:309] [mark-control-plane] Marking the node ha-329926 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0501 02:31:48.761882   32853 kubeadm.go:309] [bootstrap-token] Using token: ig5cw9.dz3x2efs4246n26l
	I0501 02:31:48.763213   32853 out.go:204]   - Configuring RBAC rules ...
	I0501 02:31:48.763314   32853 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0501 02:31:48.763416   32853 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0501 02:31:48.763542   32853 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0501 02:31:48.763649   32853 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0501 02:31:48.763771   32853 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0501 02:31:48.763903   32853 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0501 02:31:48.764014   32853 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0501 02:31:48.764060   32853 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0501 02:31:48.764132   32853 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0501 02:31:48.764140   32853 kubeadm.go:309] 
	I0501 02:31:48.764226   32853 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0501 02:31:48.764248   32853 kubeadm.go:309] 
	I0501 02:31:48.764346   32853 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0501 02:31:48.764356   32853 kubeadm.go:309] 
	I0501 02:31:48.764401   32853 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0501 02:31:48.764479   32853 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0501 02:31:48.764532   32853 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0501 02:31:48.764538   32853 kubeadm.go:309] 
	I0501 02:31:48.764582   32853 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0501 02:31:48.764588   32853 kubeadm.go:309] 
	I0501 02:31:48.764636   32853 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0501 02:31:48.764644   32853 kubeadm.go:309] 
	I0501 02:31:48.764724   32853 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0501 02:31:48.764814   32853 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0501 02:31:48.764876   32853 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0501 02:31:48.764883   32853 kubeadm.go:309] 
	I0501 02:31:48.764950   32853 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0501 02:31:48.765012   32853 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0501 02:31:48.765019   32853 kubeadm.go:309] 
	I0501 02:31:48.765089   32853 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token ig5cw9.dz3x2efs4246n26l \
	I0501 02:31:48.765173   32853 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:bd94cc6acfb95fe36f70b12c6a1f2980601b8235e7c4dea65cebdf35eb514754 \
	I0501 02:31:48.765194   32853 kubeadm.go:309] 	--control-plane 
	I0501 02:31:48.765200   32853 kubeadm.go:309] 
	I0501 02:31:48.765270   32853 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0501 02:31:48.765278   32853 kubeadm.go:309] 
	I0501 02:31:48.765343   32853 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token ig5cw9.dz3x2efs4246n26l \
	I0501 02:31:48.765445   32853 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:bd94cc6acfb95fe36f70b12c6a1f2980601b8235e7c4dea65cebdf35eb514754 
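The join commands printed by init embed a bootstrap token with a 24h TTL (ttl: 24h0m0s in the config above). If a node joins after the token expires, a fresh command can be generated on the control plane; a sketch using this run's binary path (illustrative, not part of the test output):

    sudo /var/lib/minikube/binaries/v1.30.0/kubeadm token create --print-join-command
    sudo /var/lib/minikube/binaries/v1.30.0/kubeadm init phase upload-certs --upload-certs    # prints the certificate key needed for --control-plane joins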
	I0501 02:31:48.765465   32853 cni.go:84] Creating CNI manager for ""
	I0501 02:31:48.765471   32853 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0501 02:31:48.766782   32853 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0501 02:31:48.767793   32853 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0501 02:31:48.773813   32853 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0501 02:31:48.773830   32853 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0501 02:31:48.796832   32853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0501 02:31:49.171787   32853 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0501 02:31:49.171873   32853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:31:49.171885   32853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-329926 minikube.k8s.io/updated_at=2024_05_01T02_31_49_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e minikube.k8s.io/name=ha-329926 minikube.k8s.io/primary=true
	I0501 02:31:49.199739   32853 ops.go:34] apiserver oom_adj: -16
	I0501 02:31:49.397251   32853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:31:49.898132   32853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:31:50.398273   32853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:31:50.897522   32853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:31:51.397933   32853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:31:51.897587   32853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:31:52.398178   32853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:31:52.898135   32853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:31:53.397977   32853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:31:53.897989   32853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:31:54.398231   32853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:31:54.897911   32853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:31:55.398096   32853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:31:55.897405   32853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:31:56.397928   32853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:31:56.897483   32853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:31:57.397649   32853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:31:57.897882   32853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:31:58.398240   32853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:31:58.897674   32853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:31:59.397723   32853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:31:59.898128   32853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:32:00.397427   32853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:32:00.897398   32853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:32:01.398293   32853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:32:01.500100   32853 kubeadm.go:1107] duration metric: took 12.328290279s to wait for elevateKubeSystemPrivileges
	W0501 02:32:01.500160   32853 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0501 02:32:01.500170   32853 kubeadm.go:393] duration metric: took 25.405886252s to StartCluster
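The repeated 'kubectl get sa default' calls above are minikube polling roughly every 500ms until the default ServiceAccount exists, the signal that the controller-manager has finished bootstrapping the namespace; here that took about 12.3s of the 25.4s StartCluster total. The equivalent standalone loop, assuming the in-VM kubeconfig (illustrative):

    until sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get sa default >/dev/null 2>&1; do
      sleep 0.5
    done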
	I0501 02:32:01.500193   32853 settings.go:142] acquiring lock: {Name:mkcfe08bcadd45d99c00ed1eaf4e9c226a489fe2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:32:01.500290   32853 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18779-13391/kubeconfig
	I0501 02:32:01.500970   32853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/kubeconfig: {Name:mk5d1131557f885800877622151c0915337cb23a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:32:01.501171   32853 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.5 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0501 02:32:01.501187   32853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0501 02:32:01.501201   32853 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0501 02:32:01.501279   32853 addons.go:69] Setting storage-provisioner=true in profile "ha-329926"
	I0501 02:32:01.501194   32853 start.go:240] waiting for startup goroutines ...
	I0501 02:32:01.501302   32853 addons.go:69] Setting default-storageclass=true in profile "ha-329926"
	I0501 02:32:01.501314   32853 addons.go:234] Setting addon storage-provisioner=true in "ha-329926"
	I0501 02:32:01.501331   32853 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-329926"
	I0501 02:32:01.501347   32853 host.go:66] Checking if "ha-329926" exists ...
	I0501 02:32:01.501398   32853 config.go:182] Loaded profile config "ha-329926": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 02:32:01.501782   32853 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:32:01.501804   32853 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:32:01.501785   32853 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:32:01.501919   32853 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:32:01.517256   32853 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38955
	I0501 02:32:01.517710   32853 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:32:01.518224   32853 main.go:141] libmachine: Using API Version  1
	I0501 02:32:01.518244   32853 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:32:01.518254   32853 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39407
	I0501 02:32:01.518608   32853 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:32:01.518693   32853 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:32:01.518781   32853 main.go:141] libmachine: (ha-329926) Calling .GetState
	I0501 02:32:01.519264   32853 main.go:141] libmachine: Using API Version  1
	I0501 02:32:01.519290   32853 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:32:01.519642   32853 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:32:01.520167   32853 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:32:01.520195   32853 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:32:01.520971   32853 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18779-13391/kubeconfig
	I0501 02:32:01.521213   32853 kapi.go:59] client config for ha-329926: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/client.crt", KeyFile:"/home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/client.key", CAFile:"/home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02700), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0501 02:32:01.521662   32853 cert_rotation.go:137] Starting client certificate rotation controller
	I0501 02:32:01.521833   32853 addons.go:234] Setting addon default-storageclass=true in "ha-329926"
	I0501 02:32:01.521864   32853 host.go:66] Checking if "ha-329926" exists ...
	I0501 02:32:01.522122   32853 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:32:01.522169   32853 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:32:01.535997   32853 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44213
	I0501 02:32:01.536512   32853 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:32:01.536995   32853 main.go:141] libmachine: Using API Version  1
	I0501 02:32:01.537021   32853 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:32:01.537377   32853 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:32:01.537390   32853 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45103
	I0501 02:32:01.537589   32853 main.go:141] libmachine: (ha-329926) Calling .GetState
	I0501 02:32:01.537774   32853 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:32:01.538266   32853 main.go:141] libmachine: Using API Version  1
	I0501 02:32:01.538289   32853 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:32:01.538674   32853 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:32:01.539243   32853 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:32:01.539274   32853 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:32:01.539513   32853 main.go:141] libmachine: (ha-329926) Calling .DriverName
	I0501 02:32:01.541192   32853 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 02:32:01.542333   32853 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0501 02:32:01.542350   32853 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0501 02:32:01.542363   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHHostname
	I0501 02:32:01.545583   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:32:01.546102   32853 main.go:141] libmachine: (ha-329926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:43", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:31:18 +0000 UTC Type:0 Mac:52:54:00:ce:d8:43 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-329926 Clientid:01:52:54:00:ce:d8:43}
	I0501 02:32:01.546127   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:32:01.546289   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHPort
	I0501 02:32:01.546480   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHKeyPath
	I0501 02:32:01.546654   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHUsername
	I0501 02:32:01.546813   32853 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926/id_rsa Username:docker}
	I0501 02:32:01.555702   32853 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42311
	I0501 02:32:01.556148   32853 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:32:01.556629   32853 main.go:141] libmachine: Using API Version  1
	I0501 02:32:01.556650   32853 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:32:01.556959   32853 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:32:01.557174   32853 main.go:141] libmachine: (ha-329926) Calling .GetState
	I0501 02:32:01.558844   32853 main.go:141] libmachine: (ha-329926) Calling .DriverName
	I0501 02:32:01.559088   32853 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0501 02:32:01.559107   32853 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0501 02:32:01.559125   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHHostname
	I0501 02:32:01.561835   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:32:01.562222   32853 main.go:141] libmachine: (ha-329926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:43", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:31:18 +0000 UTC Type:0 Mac:52:54:00:ce:d8:43 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-329926 Clientid:01:52:54:00:ce:d8:43}
	I0501 02:32:01.562250   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:32:01.562441   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHPort
	I0501 02:32:01.562623   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHKeyPath
	I0501 02:32:01.562773   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHUsername
	I0501 02:32:01.562925   32853 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926/id_rsa Username:docker}
	I0501 02:32:01.751537   32853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0501 02:32:01.785259   32853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0501 02:32:01.792036   32853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0501 02:32:02.564393   32853 main.go:141] libmachine: Making call to close driver server
	I0501 02:32:02.564419   32853 main.go:141] libmachine: (ha-329926) Calling .Close
	I0501 02:32:02.564434   32853 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0501 02:32:02.564490   32853 main.go:141] libmachine: Making call to close driver server
	I0501 02:32:02.564503   32853 main.go:141] libmachine: (ha-329926) Calling .Close
	I0501 02:32:02.564715   32853 main.go:141] libmachine: Successfully made call to close driver server
	I0501 02:32:02.564733   32853 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 02:32:02.564741   32853 main.go:141] libmachine: Making call to close driver server
	I0501 02:32:02.564748   32853 main.go:141] libmachine: (ha-329926) Calling .Close
	I0501 02:32:02.564850   32853 main.go:141] libmachine: Successfully made call to close driver server
	I0501 02:32:02.564860   32853 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 02:32:02.564868   32853 main.go:141] libmachine: Making call to close driver server
	I0501 02:32:02.564876   32853 main.go:141] libmachine: (ha-329926) Calling .Close
	I0501 02:32:02.564993   32853 main.go:141] libmachine: Successfully made call to close driver server
	I0501 02:32:02.565008   32853 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 02:32:02.565110   32853 main.go:141] libmachine: (ha-329926) DBG | Closing plugin on server side
	I0501 02:32:02.565109   32853 main.go:141] libmachine: Successfully made call to close driver server
	I0501 02:32:02.565142   32853 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 02:32:02.565144   32853 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0501 02:32:02.565153   32853 round_trippers.go:469] Request Headers:
	I0501 02:32:02.565164   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:32:02.565169   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:32:02.581629   32853 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0501 02:32:02.582188   32853 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0501 02:32:02.582205   32853 round_trippers.go:469] Request Headers:
	I0501 02:32:02.582212   32853 round_trippers.go:473]     Content-Type: application/json
	I0501 02:32:02.582215   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:32:02.582218   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:32:02.586925   32853 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:32:02.587164   32853 main.go:141] libmachine: Making call to close driver server
	I0501 02:32:02.587180   32853 main.go:141] libmachine: (ha-329926) Calling .Close
	I0501 02:32:02.587421   32853 main.go:141] libmachine: Successfully made call to close driver server
	I0501 02:32:02.587439   32853 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 02:32:02.588760   32853 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0501 02:32:02.589714   32853 addons.go:505] duration metric: took 1.088515167s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0501 02:32:02.589748   32853 start.go:245] waiting for cluster config update ...
	I0501 02:32:02.589759   32853 start.go:254] writing updated cluster config ...
	I0501 02:32:02.591174   32853 out.go:177] 
	I0501 02:32:02.592511   32853 config.go:182] Loaded profile config "ha-329926": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 02:32:02.592585   32853 profile.go:143] Saving config to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/config.json ...
	I0501 02:32:02.594055   32853 out.go:177] * Starting "ha-329926-m02" control-plane node in "ha-329926" cluster
	I0501 02:32:02.595029   32853 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0501 02:32:02.595055   32853 cache.go:56] Caching tarball of preloaded images
	I0501 02:32:02.595143   32853 preload.go:173] Found /home/jenkins/minikube-integration/18779-13391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0501 02:32:02.595159   32853 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
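The preload step simply checks whether the versioned tarball already exists under the cache directory and skips the download if it does. A rough sketch of that lookup, with the filename pattern inferred from the path in the log rather than taken from minikube's source:

// preloadInCache reports whether a preloaded-images tarball for the given
// Kubernetes version, container runtime and architecture is already cached.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func preloadInCache(minikubeHome, k8sVersion, runtime, arch string) (string, bool) {
	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay-%s.tar.lz4", k8sVersion, runtime, arch)
	path := filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
	_, err := os.Stat(path)
	return path, err == nil
}

func main() {
	home, _ := os.UserHomeDir()
	if p, ok := preloadInCache(filepath.Join(home, ".minikube"), "v1.30.0", "cri-o", "amd64"); ok {
		fmt.Println("found", p, "in cache, skipping download")
	}
}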
	I0501 02:32:02.595239   32853 profile.go:143] Saving config to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/config.json ...
	I0501 02:32:02.595457   32853 start.go:360] acquireMachinesLock for ha-329926-m02: {Name:mkc4548e5afe1d4b0a833f6c522103562b0cefff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0501 02:32:02.595506   32853 start.go:364] duration metric: took 27.6µs to acquireMachinesLock for "ha-329926-m02"
	I0501 02:32:02.595540   32853 start.go:93] Provisioning new machine with config: &{Name:ha-329926 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-329926 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.5 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0501 02:32:02.595624   32853 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0501 02:32:02.597000   32853 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0501 02:32:02.597096   32853 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:32:02.597126   32853 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:32:02.611846   32853 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43957
	I0501 02:32:02.612237   32853 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:32:02.612731   32853 main.go:141] libmachine: Using API Version  1
	I0501 02:32:02.612754   32853 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:32:02.613047   32853 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:32:02.613230   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetMachineName
	I0501 02:32:02.613367   32853 main.go:141] libmachine: (ha-329926-m02) Calling .DriverName
	I0501 02:32:02.613524   32853 start.go:159] libmachine.API.Create for "ha-329926" (driver="kvm2")
	I0501 02:32:02.613551   32853 client.go:168] LocalClient.Create starting
	I0501 02:32:02.613580   32853 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem
	I0501 02:32:02.613611   32853 main.go:141] libmachine: Decoding PEM data...
	I0501 02:32:02.613625   32853 main.go:141] libmachine: Parsing certificate...
	I0501 02:32:02.613671   32853 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem
	I0501 02:32:02.613688   32853 main.go:141] libmachine: Decoding PEM data...
	I0501 02:32:02.613698   32853 main.go:141] libmachine: Parsing certificate...
	I0501 02:32:02.613716   32853 main.go:141] libmachine: Running pre-create checks...
	I0501 02:32:02.613724   32853 main.go:141] libmachine: (ha-329926-m02) Calling .PreCreateCheck
	I0501 02:32:02.613900   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetConfigRaw
	I0501 02:32:02.614249   32853 main.go:141] libmachine: Creating machine...
	I0501 02:32:02.614262   32853 main.go:141] libmachine: (ha-329926-m02) Calling .Create
	I0501 02:32:02.614381   32853 main.go:141] libmachine: (ha-329926-m02) Creating KVM machine...
	I0501 02:32:02.615568   32853 main.go:141] libmachine: (ha-329926-m02) DBG | found existing default KVM network
	I0501 02:32:02.615712   32853 main.go:141] libmachine: (ha-329926-m02) DBG | found existing private KVM network mk-ha-329926
	I0501 02:32:02.615805   32853 main.go:141] libmachine: (ha-329926-m02) Setting up store path in /home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m02 ...
	I0501 02:32:02.615836   32853 main.go:141] libmachine: (ha-329926-m02) Building disk image from file:///home/jenkins/minikube-integration/18779-13391/.minikube/cache/iso/amd64/minikube-v1.33.0-1714498396-18779-amd64.iso
	I0501 02:32:02.615905   32853 main.go:141] libmachine: (ha-329926-m02) DBG | I0501 02:32:02.615811   33274 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18779-13391/.minikube
	I0501 02:32:02.615996   32853 main.go:141] libmachine: (ha-329926-m02) Downloading /home/jenkins/minikube-integration/18779-13391/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18779-13391/.minikube/cache/iso/amd64/minikube-v1.33.0-1714498396-18779-amd64.iso...
	I0501 02:32:02.826831   32853 main.go:141] libmachine: (ha-329926-m02) DBG | I0501 02:32:02.826712   33274 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m02/id_rsa...
	I0501 02:32:02.959121   32853 main.go:141] libmachine: (ha-329926-m02) DBG | I0501 02:32:02.958954   33274 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m02/ha-329926-m02.rawdisk...
	I0501 02:32:02.959153   32853 main.go:141] libmachine: (ha-329926-m02) DBG | Writing magic tar header
	I0501 02:32:02.959179   32853 main.go:141] libmachine: (ha-329926-m02) DBG | Writing SSH key tar header
	I0501 02:32:02.959194   32853 main.go:141] libmachine: (ha-329926-m02) DBG | I0501 02:32:02.959067   33274 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m02 ...
	I0501 02:32:02.959239   32853 main.go:141] libmachine: (ha-329926-m02) Setting executable bit set on /home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m02 (perms=drwx------)
	I0501 02:32:02.959258   32853 main.go:141] libmachine: (ha-329926-m02) Setting executable bit set on /home/jenkins/minikube-integration/18779-13391/.minikube/machines (perms=drwxr-xr-x)
	I0501 02:32:02.959266   32853 main.go:141] libmachine: (ha-329926-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m02
	I0501 02:32:02.959279   32853 main.go:141] libmachine: (ha-329926-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18779-13391/.minikube/machines
	I0501 02:32:02.959288   32853 main.go:141] libmachine: (ha-329926-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18779-13391/.minikube
	I0501 02:32:02.959303   32853 main.go:141] libmachine: (ha-329926-m02) Setting executable bit set on /home/jenkins/minikube-integration/18779-13391/.minikube (perms=drwxr-xr-x)
	I0501 02:32:02.959315   32853 main.go:141] libmachine: (ha-329926-m02) Setting executable bit set on /home/jenkins/minikube-integration/18779-13391 (perms=drwxrwxr-x)
	I0501 02:32:02.959325   32853 main.go:141] libmachine: (ha-329926-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0501 02:32:02.959336   32853 main.go:141] libmachine: (ha-329926-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0501 02:32:02.959341   32853 main.go:141] libmachine: (ha-329926-m02) Creating domain...
	I0501 02:32:02.959353   32853 main.go:141] libmachine: (ha-329926-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18779-13391
	I0501 02:32:02.959361   32853 main.go:141] libmachine: (ha-329926-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0501 02:32:02.959372   32853 main.go:141] libmachine: (ha-329926-m02) DBG | Checking permissions on dir: /home/jenkins
	I0501 02:32:02.959402   32853 main.go:141] libmachine: (ha-329926-m02) DBG | Checking permissions on dir: /home
	I0501 02:32:02.959419   32853 main.go:141] libmachine: (ha-329926-m02) DBG | Skipping /home - not owner
	I0501 02:32:02.960337   32853 main.go:141] libmachine: (ha-329926-m02) define libvirt domain using xml: 
	I0501 02:32:02.960360   32853 main.go:141] libmachine: (ha-329926-m02) <domain type='kvm'>
	I0501 02:32:02.960371   32853 main.go:141] libmachine: (ha-329926-m02)   <name>ha-329926-m02</name>
	I0501 02:32:02.960384   32853 main.go:141] libmachine: (ha-329926-m02)   <memory unit='MiB'>2200</memory>
	I0501 02:32:02.960396   32853 main.go:141] libmachine: (ha-329926-m02)   <vcpu>2</vcpu>
	I0501 02:32:02.960402   32853 main.go:141] libmachine: (ha-329926-m02)   <features>
	I0501 02:32:02.960414   32853 main.go:141] libmachine: (ha-329926-m02)     <acpi/>
	I0501 02:32:02.960424   32853 main.go:141] libmachine: (ha-329926-m02)     <apic/>
	I0501 02:32:02.960431   32853 main.go:141] libmachine: (ha-329926-m02)     <pae/>
	I0501 02:32:02.960440   32853 main.go:141] libmachine: (ha-329926-m02)     
	I0501 02:32:02.960450   32853 main.go:141] libmachine: (ha-329926-m02)   </features>
	I0501 02:32:02.960464   32853 main.go:141] libmachine: (ha-329926-m02)   <cpu mode='host-passthrough'>
	I0501 02:32:02.960475   32853 main.go:141] libmachine: (ha-329926-m02)   
	I0501 02:32:02.960484   32853 main.go:141] libmachine: (ha-329926-m02)   </cpu>
	I0501 02:32:02.960493   32853 main.go:141] libmachine: (ha-329926-m02)   <os>
	I0501 02:32:02.960511   32853 main.go:141] libmachine: (ha-329926-m02)     <type>hvm</type>
	I0501 02:32:02.960528   32853 main.go:141] libmachine: (ha-329926-m02)     <boot dev='cdrom'/>
	I0501 02:32:02.960570   32853 main.go:141] libmachine: (ha-329926-m02)     <boot dev='hd'/>
	I0501 02:32:02.960606   32853 main.go:141] libmachine: (ha-329926-m02)     <bootmenu enable='no'/>
	I0501 02:32:02.960620   32853 main.go:141] libmachine: (ha-329926-m02)   </os>
	I0501 02:32:02.960634   32853 main.go:141] libmachine: (ha-329926-m02)   <devices>
	I0501 02:32:02.960651   32853 main.go:141] libmachine: (ha-329926-m02)     <disk type='file' device='cdrom'>
	I0501 02:32:02.960667   32853 main.go:141] libmachine: (ha-329926-m02)       <source file='/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m02/boot2docker.iso'/>
	I0501 02:32:02.960681   32853 main.go:141] libmachine: (ha-329926-m02)       <target dev='hdc' bus='scsi'/>
	I0501 02:32:02.960697   32853 main.go:141] libmachine: (ha-329926-m02)       <readonly/>
	I0501 02:32:02.960722   32853 main.go:141] libmachine: (ha-329926-m02)     </disk>
	I0501 02:32:02.960734   32853 main.go:141] libmachine: (ha-329926-m02)     <disk type='file' device='disk'>
	I0501 02:32:02.960748   32853 main.go:141] libmachine: (ha-329926-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0501 02:32:02.960763   32853 main.go:141] libmachine: (ha-329926-m02)       <source file='/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m02/ha-329926-m02.rawdisk'/>
	I0501 02:32:02.960783   32853 main.go:141] libmachine: (ha-329926-m02)       <target dev='hda' bus='virtio'/>
	I0501 02:32:02.960795   32853 main.go:141] libmachine: (ha-329926-m02)     </disk>
	I0501 02:32:02.960810   32853 main.go:141] libmachine: (ha-329926-m02)     <interface type='network'>
	I0501 02:32:02.960823   32853 main.go:141] libmachine: (ha-329926-m02)       <source network='mk-ha-329926'/>
	I0501 02:32:02.960835   32853 main.go:141] libmachine: (ha-329926-m02)       <model type='virtio'/>
	I0501 02:32:02.960847   32853 main.go:141] libmachine: (ha-329926-m02)     </interface>
	I0501 02:32:02.960855   32853 main.go:141] libmachine: (ha-329926-m02)     <interface type='network'>
	I0501 02:32:02.960868   32853 main.go:141] libmachine: (ha-329926-m02)       <source network='default'/>
	I0501 02:32:02.960884   32853 main.go:141] libmachine: (ha-329926-m02)       <model type='virtio'/>
	I0501 02:32:02.960897   32853 main.go:141] libmachine: (ha-329926-m02)     </interface>
	I0501 02:32:02.960907   32853 main.go:141] libmachine: (ha-329926-m02)     <serial type='pty'>
	I0501 02:32:02.960918   32853 main.go:141] libmachine: (ha-329926-m02)       <target port='0'/>
	I0501 02:32:02.960929   32853 main.go:141] libmachine: (ha-329926-m02)     </serial>
	I0501 02:32:02.960940   32853 main.go:141] libmachine: (ha-329926-m02)     <console type='pty'>
	I0501 02:32:02.960951   32853 main.go:141] libmachine: (ha-329926-m02)       <target type='serial' port='0'/>
	I0501 02:32:02.960960   32853 main.go:141] libmachine: (ha-329926-m02)     </console>
	I0501 02:32:02.960972   32853 main.go:141] libmachine: (ha-329926-m02)     <rng model='virtio'>
	I0501 02:32:02.960985   32853 main.go:141] libmachine: (ha-329926-m02)       <backend model='random'>/dev/random</backend>
	I0501 02:32:02.960998   32853 main.go:141] libmachine: (ha-329926-m02)     </rng>
	I0501 02:32:02.961008   32853 main.go:141] libmachine: (ha-329926-m02)     
	I0501 02:32:02.961017   32853 main.go:141] libmachine: (ha-329926-m02)     
	I0501 02:32:02.961025   32853 main.go:141] libmachine: (ha-329926-m02)   </devices>
	I0501 02:32:02.961043   32853 main.go:141] libmachine: (ha-329926-m02) </domain>
	I0501 02:32:02.961054   32853 main.go:141] libmachine: (ha-329926-m02) 
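The block above is the full libvirt domain XML that the kvm2 driver logs before creating the VM. A minimal sketch of defining and starting such a domain, assuming the libvirt.org/go/libvirt Go bindings (this is not the driver's actual code, and the file name is a placeholder):

// Define a domain from an XML document like the one logged above, then start it.
package main

import (
	"log"
	"os"

	"libvirt.org/go/libvirt"
)

func main() {
	xml, err := os.ReadFile("ha-329926-m02.xml") // placeholder path holding the <domain> XML
	if err != nil {
		log.Fatal(err)
	}
	conn, err := libvirt.NewConnect("qemu:///system") // matches KVMQemuURI in the machine config
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(string(xml)) // "define libvirt domain using xml"
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil { // "Creating domain..." starts the defined domain
		log.Fatal(err)
	}
}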
	I0501 02:32:02.967307   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined MAC address 52:54:00:6f:35:48 in network default
	I0501 02:32:02.967939   32853 main.go:141] libmachine: (ha-329926-m02) Ensuring networks are active...
	I0501 02:32:02.967959   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:02.968665   32853 main.go:141] libmachine: (ha-329926-m02) Ensuring network default is active
	I0501 02:32:02.968978   32853 main.go:141] libmachine: (ha-329926-m02) Ensuring network mk-ha-329926 is active
	I0501 02:32:02.969344   32853 main.go:141] libmachine: (ha-329926-m02) Getting domain xml...
	I0501 02:32:02.970049   32853 main.go:141] libmachine: (ha-329926-m02) Creating domain...
	I0501 02:32:04.175671   32853 main.go:141] libmachine: (ha-329926-m02) Waiting to get IP...
	I0501 02:32:04.176721   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:04.177224   32853 main.go:141] libmachine: (ha-329926-m02) DBG | unable to find current IP address of domain ha-329926-m02 in network mk-ha-329926
	I0501 02:32:04.177270   32853 main.go:141] libmachine: (ha-329926-m02) DBG | I0501 02:32:04.177210   33274 retry.go:31] will retry after 291.477557ms: waiting for machine to come up
	I0501 02:32:04.470804   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:04.471377   32853 main.go:141] libmachine: (ha-329926-m02) DBG | unable to find current IP address of domain ha-329926-m02 in network mk-ha-329926
	I0501 02:32:04.471398   32853 main.go:141] libmachine: (ha-329926-m02) DBG | I0501 02:32:04.471334   33274 retry.go:31] will retry after 247.398331ms: waiting for machine to come up
	I0501 02:32:04.720554   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:04.720929   32853 main.go:141] libmachine: (ha-329926-m02) DBG | unable to find current IP address of domain ha-329926-m02 in network mk-ha-329926
	I0501 02:32:04.720959   32853 main.go:141] libmachine: (ha-329926-m02) DBG | I0501 02:32:04.720886   33274 retry.go:31] will retry after 470.735543ms: waiting for machine to come up
	I0501 02:32:05.193520   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:05.193999   32853 main.go:141] libmachine: (ha-329926-m02) DBG | unable to find current IP address of domain ha-329926-m02 in network mk-ha-329926
	I0501 02:32:05.194029   32853 main.go:141] libmachine: (ha-329926-m02) DBG | I0501 02:32:05.193939   33274 retry.go:31] will retry after 376.557887ms: waiting for machine to come up
	I0501 02:32:05.572714   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:05.573167   32853 main.go:141] libmachine: (ha-329926-m02) DBG | unable to find current IP address of domain ha-329926-m02 in network mk-ha-329926
	I0501 02:32:05.573199   32853 main.go:141] libmachine: (ha-329926-m02) DBG | I0501 02:32:05.573101   33274 retry.go:31] will retry after 716.277143ms: waiting for machine to come up
	I0501 02:32:06.291055   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:06.291486   32853 main.go:141] libmachine: (ha-329926-m02) DBG | unable to find current IP address of domain ha-329926-m02 in network mk-ha-329926
	I0501 02:32:06.291515   32853 main.go:141] libmachine: (ha-329926-m02) DBG | I0501 02:32:06.291451   33274 retry.go:31] will retry after 673.420155ms: waiting for machine to come up
	I0501 02:32:06.966230   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:06.966667   32853 main.go:141] libmachine: (ha-329926-m02) DBG | unable to find current IP address of domain ha-329926-m02 in network mk-ha-329926
	I0501 02:32:06.966700   32853 main.go:141] libmachine: (ha-329926-m02) DBG | I0501 02:32:06.966625   33274 retry.go:31] will retry after 763.13328ms: waiting for machine to come up
	I0501 02:32:07.732579   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:07.733018   32853 main.go:141] libmachine: (ha-329926-m02) DBG | unable to find current IP address of domain ha-329926-m02 in network mk-ha-329926
	I0501 02:32:07.733039   32853 main.go:141] libmachine: (ha-329926-m02) DBG | I0501 02:32:07.732998   33274 retry.go:31] will retry after 1.123440141s: waiting for machine to come up
	I0501 02:32:08.858360   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:08.858874   32853 main.go:141] libmachine: (ha-329926-m02) DBG | unable to find current IP address of domain ha-329926-m02 in network mk-ha-329926
	I0501 02:32:08.858907   32853 main.go:141] libmachine: (ha-329926-m02) DBG | I0501 02:32:08.858830   33274 retry.go:31] will retry after 1.476597499s: waiting for machine to come up
	I0501 02:32:10.337562   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:10.337956   32853 main.go:141] libmachine: (ha-329926-m02) DBG | unable to find current IP address of domain ha-329926-m02 in network mk-ha-329926
	I0501 02:32:10.337985   32853 main.go:141] libmachine: (ha-329926-m02) DBG | I0501 02:32:10.337918   33274 retry.go:31] will retry after 2.200841931s: waiting for machine to come up
	I0501 02:32:12.540585   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:12.541052   32853 main.go:141] libmachine: (ha-329926-m02) DBG | unable to find current IP address of domain ha-329926-m02 in network mk-ha-329926
	I0501 02:32:12.541103   32853 main.go:141] libmachine: (ha-329926-m02) DBG | I0501 02:32:12.541026   33274 retry.go:31] will retry after 2.547827016s: waiting for machine to come up
	I0501 02:32:15.091592   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:15.092126   32853 main.go:141] libmachine: (ha-329926-m02) DBG | unable to find current IP address of domain ha-329926-m02 in network mk-ha-329926
	I0501 02:32:15.092158   32853 main.go:141] libmachine: (ha-329926-m02) DBG | I0501 02:32:15.092067   33274 retry.go:31] will retry after 2.718478189s: waiting for machine to come up
	I0501 02:32:17.812506   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:17.812877   32853 main.go:141] libmachine: (ha-329926-m02) DBG | unable to find current IP address of domain ha-329926-m02 in network mk-ha-329926
	I0501 02:32:17.812903   32853 main.go:141] libmachine: (ha-329926-m02) DBG | I0501 02:32:17.812839   33274 retry.go:31] will retry after 3.715125165s: waiting for machine to come up
	I0501 02:32:21.532524   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:21.533034   32853 main.go:141] libmachine: (ha-329926-m02) DBG | unable to find current IP address of domain ha-329926-m02 in network mk-ha-329926
	I0501 02:32:21.533063   32853 main.go:141] libmachine: (ha-329926-m02) DBG | I0501 02:32:21.533000   33274 retry.go:31] will retry after 3.412402033s: waiting for machine to come up
	I0501 02:32:24.948532   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:24.948969   32853 main.go:141] libmachine: (ha-329926-m02) Found IP for machine: 192.168.39.79
	I0501 02:32:24.948994   32853 main.go:141] libmachine: (ha-329926-m02) Reserving static IP address...
	I0501 02:32:24.949009   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has current primary IP address 192.168.39.79 and MAC address 52:54:00:92:16:5f in network mk-ha-329926
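The "will retry after ... waiting for machine to come up" lines above are a poll-with-backoff loop that keeps checking the network's DHCP leases for the domain's MAC until an IP appears. A rough Go sketch of that pattern (not the driver's retry helper; lookupIP stands in for the real lease lookup):

// waitForIP polls lookupIP with a roughly doubling, jittered backoff until an
// IP address is returned or the overall deadline expires.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoLease = errors.New("unable to find current IP address")

func waitForIP(lookupIP func() (string, error), maxWait time.Duration) (string, error) {
	deadline := time.Now().Add(maxWait)
	backoff := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		wait := backoff + time.Duration(rand.Int63n(int64(backoff)/2)) // add jitter
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		if backoff < 4*time.Second {
			backoff *= 2
		}
	}
	return "", errNoLease
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		attempts++
		if attempts < 4 {
			return "", errNoLease // simulate the lease not existing yet
		}
		return "192.168.39.79", nil
	}, time.Minute)
	fmt.Println(ip, err)
}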
	I0501 02:32:24.949344   32853 main.go:141] libmachine: (ha-329926-m02) DBG | unable to find host DHCP lease matching {name: "ha-329926-m02", mac: "52:54:00:92:16:5f", ip: "192.168.39.79"} in network mk-ha-329926
	I0501 02:32:25.021976   32853 main.go:141] libmachine: (ha-329926-m02) DBG | Getting to WaitForSSH function...
	I0501 02:32:25.022006   32853 main.go:141] libmachine: (ha-329926-m02) Reserved static IP address: 192.168.39.79
	I0501 02:32:25.022019   32853 main.go:141] libmachine: (ha-329926-m02) Waiting for SSH to be available...
	I0501 02:32:25.024815   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:25.025333   32853 main.go:141] libmachine: (ha-329926-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:92:16:5f", ip: ""} in network mk-ha-329926
	I0501 02:32:25.025376   32853 main.go:141] libmachine: (ha-329926-m02) DBG | unable to find defined IP address of network mk-ha-329926 interface with MAC address 52:54:00:92:16:5f
	I0501 02:32:25.025416   32853 main.go:141] libmachine: (ha-329926-m02) DBG | Using SSH client type: external
	I0501 02:32:25.025449   32853 main.go:141] libmachine: (ha-329926-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m02/id_rsa (-rw-------)
	I0501 02:32:25.025483   32853 main.go:141] libmachine: (ha-329926-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0501 02:32:25.025498   32853 main.go:141] libmachine: (ha-329926-m02) DBG | About to run SSH command:
	I0501 02:32:25.025512   32853 main.go:141] libmachine: (ha-329926-m02) DBG | exit 0
	I0501 02:32:25.029148   32853 main.go:141] libmachine: (ha-329926-m02) DBG | SSH cmd err, output: exit status 255: 
	I0501 02:32:25.029172   32853 main.go:141] libmachine: (ha-329926-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0501 02:32:25.029183   32853 main.go:141] libmachine: (ha-329926-m02) DBG | command : exit 0
	I0501 02:32:25.029213   32853 main.go:141] libmachine: (ha-329926-m02) DBG | err     : exit status 255
	I0501 02:32:25.029227   32853 main.go:141] libmachine: (ha-329926-m02) DBG | output  : 
	I0501 02:32:28.029440   32853 main.go:141] libmachine: (ha-329926-m02) DBG | Getting to WaitForSSH function...
	I0501 02:32:28.031840   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:28.032190   32853 main.go:141] libmachine: (ha-329926-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:16:5f", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:32:18 +0000 UTC Type:0 Mac:52:54:00:92:16:5f Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-329926-m02 Clientid:01:52:54:00:92:16:5f}
	I0501 02:32:28.032214   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined IP address 192.168.39.79 and MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:28.032355   32853 main.go:141] libmachine: (ha-329926-m02) DBG | Using SSH client type: external
	I0501 02:32:28.032375   32853 main.go:141] libmachine: (ha-329926-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m02/id_rsa (-rw-------)
	I0501 02:32:28.032395   32853 main.go:141] libmachine: (ha-329926-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.79 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0501 02:32:28.032402   32853 main.go:141] libmachine: (ha-329926-m02) DBG | About to run SSH command:
	I0501 02:32:28.032413   32853 main.go:141] libmachine: (ha-329926-m02) DBG | exit 0
	I0501 02:32:28.158886   32853 main.go:141] libmachine: (ha-329926-m02) DBG | SSH cmd err, output: <nil>: 
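The WaitForSSH phase above shells out to the external ssh client with host-key checking disabled and runs "exit 0" until the command succeeds (the first attempt fails with exit status 255 because the guest is not up yet). A simplified sketch of that probe, with a placeholder key path:

// sshReady returns true once "exit 0" succeeds over ssh against the guest.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func sshReady(ip, keyPath string) bool {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@" + ip,
		"exit 0",
	}
	return exec.Command("ssh", args...).Run() == nil
}

func main() {
	for !sshReady("192.168.39.79", "/path/to/id_rsa") { // placeholder key path
		fmt.Println("Getting to WaitForSSH function...")
		time.Sleep(3 * time.Second)
	}
	fmt.Println("SSH is available")
}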
	I0501 02:32:28.159180   32853 main.go:141] libmachine: (ha-329926-m02) KVM machine creation complete!
	I0501 02:32:28.159537   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetConfigRaw
	I0501 02:32:28.160119   32853 main.go:141] libmachine: (ha-329926-m02) Calling .DriverName
	I0501 02:32:28.160324   32853 main.go:141] libmachine: (ha-329926-m02) Calling .DriverName
	I0501 02:32:28.160532   32853 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0501 02:32:28.160546   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetState
	I0501 02:32:28.161848   32853 main.go:141] libmachine: Detecting operating system of created instance...
	I0501 02:32:28.161861   32853 main.go:141] libmachine: Waiting for SSH to be available...
	I0501 02:32:28.161867   32853 main.go:141] libmachine: Getting to WaitForSSH function...
	I0501 02:32:28.161872   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHHostname
	I0501 02:32:28.163988   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:28.164322   32853 main.go:141] libmachine: (ha-329926-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:16:5f", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:32:18 +0000 UTC Type:0 Mac:52:54:00:92:16:5f Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-329926-m02 Clientid:01:52:54:00:92:16:5f}
	I0501 02:32:28.164348   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined IP address 192.168.39.79 and MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:28.164513   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHPort
	I0501 02:32:28.164673   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHKeyPath
	I0501 02:32:28.164816   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHKeyPath
	I0501 02:32:28.164914   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHUsername
	I0501 02:32:28.165101   32853 main.go:141] libmachine: Using SSH client type: native
	I0501 02:32:28.165370   32853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.79 22 <nil> <nil>}
	I0501 02:32:28.165385   32853 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0501 02:32:28.270126   32853 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0501 02:32:28.270150   32853 main.go:141] libmachine: Detecting the provisioner...
	I0501 02:32:28.270157   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHHostname
	I0501 02:32:28.272738   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:28.273164   32853 main.go:141] libmachine: (ha-329926-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:16:5f", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:32:18 +0000 UTC Type:0 Mac:52:54:00:92:16:5f Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-329926-m02 Clientid:01:52:54:00:92:16:5f}
	I0501 02:32:28.273192   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined IP address 192.168.39.79 and MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:28.273354   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHPort
	I0501 02:32:28.273547   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHKeyPath
	I0501 02:32:28.273697   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHKeyPath
	I0501 02:32:28.273825   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHUsername
	I0501 02:32:28.274027   32853 main.go:141] libmachine: Using SSH client type: native
	I0501 02:32:28.274226   32853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.79 22 <nil> <nil>}
	I0501 02:32:28.274240   32853 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0501 02:32:28.375684   32853 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0501 02:32:28.375755   32853 main.go:141] libmachine: found compatible host: buildroot
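Provisioner detection runs cat /etc/os-release on the guest and picks the provisioner from its ID field; here the ID is buildroot, so the buildroot provisioner is used. A small sketch of that parsing step (illustrative only):

// detectProvisioner extracts the ID field from /etc/os-release output.
package main

import (
	"bufio"
	"fmt"
	"strings"
)

func detectProvisioner(osRelease string) string {
	sc := bufio.NewScanner(strings.NewReader(osRelease))
	for sc.Scan() {
		line := sc.Text()
		if strings.HasPrefix(line, "ID=") {
			return strings.Trim(strings.TrimPrefix(line, "ID="), `"`)
		}
	}
	return ""
}

func main() {
	out := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
	fmt.Println("found compatible host:", detectProvisioner(out)) // buildroot
}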
	I0501 02:32:28.375766   32853 main.go:141] libmachine: Provisioning with buildroot...
	I0501 02:32:28.375782   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetMachineName
	I0501 02:32:28.376055   32853 buildroot.go:166] provisioning hostname "ha-329926-m02"
	I0501 02:32:28.376083   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetMachineName
	I0501 02:32:28.376256   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHHostname
	I0501 02:32:28.378946   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:28.379397   32853 main.go:141] libmachine: (ha-329926-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:16:5f", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:32:18 +0000 UTC Type:0 Mac:52:54:00:92:16:5f Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-329926-m02 Clientid:01:52:54:00:92:16:5f}
	I0501 02:32:28.379428   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined IP address 192.168.39.79 and MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:28.379548   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHPort
	I0501 02:32:28.379708   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHKeyPath
	I0501 02:32:28.379877   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHKeyPath
	I0501 02:32:28.380038   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHUsername
	I0501 02:32:28.380193   32853 main.go:141] libmachine: Using SSH client type: native
	I0501 02:32:28.380382   32853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.79 22 <nil> <nil>}
	I0501 02:32:28.380398   32853 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-329926-m02 && echo "ha-329926-m02" | sudo tee /etc/hostname
	I0501 02:32:28.500197   32853 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-329926-m02
	
	I0501 02:32:28.500220   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHHostname
	I0501 02:32:28.502847   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:28.503142   32853 main.go:141] libmachine: (ha-329926-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:16:5f", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:32:18 +0000 UTC Type:0 Mac:52:54:00:92:16:5f Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-329926-m02 Clientid:01:52:54:00:92:16:5f}
	I0501 02:32:28.503170   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined IP address 192.168.39.79 and MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:28.503352   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHPort
	I0501 02:32:28.503548   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHKeyPath
	I0501 02:32:28.503693   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHKeyPath
	I0501 02:32:28.503858   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHUsername
	I0501 02:32:28.504010   32853 main.go:141] libmachine: Using SSH client type: native
	I0501 02:32:28.504251   32853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.79 22 <nil> <nil>}
	I0501 02:32:28.504288   32853 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-329926-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-329926-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-329926-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0501 02:32:28.619098   32853 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0501 02:32:28.619130   32853 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18779-13391/.minikube CaCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18779-13391/.minikube}
	I0501 02:32:28.619149   32853 buildroot.go:174] setting up certificates
	I0501 02:32:28.619168   32853 provision.go:84] configureAuth start
	I0501 02:32:28.619183   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetMachineName
	I0501 02:32:28.619462   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetIP
	I0501 02:32:28.621888   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:28.622191   32853 main.go:141] libmachine: (ha-329926-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:16:5f", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:32:18 +0000 UTC Type:0 Mac:52:54:00:92:16:5f Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-329926-m02 Clientid:01:52:54:00:92:16:5f}
	I0501 02:32:28.622223   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined IP address 192.168.39.79 and MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:28.622318   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHHostname
	I0501 02:32:28.624655   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:28.624978   32853 main.go:141] libmachine: (ha-329926-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:16:5f", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:32:18 +0000 UTC Type:0 Mac:52:54:00:92:16:5f Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-329926-m02 Clientid:01:52:54:00:92:16:5f}
	I0501 02:32:28.625002   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined IP address 192.168.39.79 and MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:28.625122   32853 provision.go:143] copyHostCerts
	I0501 02:32:28.625148   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem
	I0501 02:32:28.625175   32853 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem, removing ...
	I0501 02:32:28.625184   32853 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem
	I0501 02:32:28.625243   32853 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem (1078 bytes)
	I0501 02:32:28.625313   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem
	I0501 02:32:28.625331   32853 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem, removing ...
	I0501 02:32:28.625336   32853 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem
	I0501 02:32:28.625359   32853 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem (1123 bytes)
	I0501 02:32:28.625400   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem
	I0501 02:32:28.625418   32853 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem, removing ...
	I0501 02:32:28.625424   32853 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem
	I0501 02:32:28.625445   32853 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem (1675 bytes)
	I0501 02:32:28.625498   32853 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem org=jenkins.ha-329926-m02 san=[127.0.0.1 192.168.39.79 ha-329926-m02 localhost minikube]
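The "generating server cert ... san=[...]" step signs a per-machine server certificate with the node's IP and hostnames as subject alternative names, using the shared CA key pair. A self-contained sketch of that signing with crypto/x509 (a throwaway CA is generated here for runnability; the real flow loads ca.pem/ca-key.pem from the certs directory, and the validity period below is illustrative):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA, standing in for ca.pem / ca-key.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate carrying the SANs listed in the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-329926-m02"}},
		DNSNames:     []string{"ha-329926-m02", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.79")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}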
	I0501 02:32:28.707102   32853 provision.go:177] copyRemoteCerts
	I0501 02:32:28.707154   32853 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0501 02:32:28.707177   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHHostname
	I0501 02:32:28.709603   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:28.709910   32853 main.go:141] libmachine: (ha-329926-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:16:5f", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:32:18 +0000 UTC Type:0 Mac:52:54:00:92:16:5f Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-329926-m02 Clientid:01:52:54:00:92:16:5f}
	I0501 02:32:28.709927   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined IP address 192.168.39.79 and MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:28.710078   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHPort
	I0501 02:32:28.710258   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHKeyPath
	I0501 02:32:28.710437   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHUsername
	I0501 02:32:28.710566   32853 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m02/id_rsa Username:docker}
	I0501 02:32:28.793538   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0501 02:32:28.793606   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0501 02:32:28.824782   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0501 02:32:28.824846   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0501 02:32:28.856031   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0501 02:32:28.856095   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0501 02:32:28.886602   32853 provision.go:87] duration metric: took 267.420274ms to configureAuth
	I0501 02:32:28.886636   32853 buildroot.go:189] setting minikube options for container-runtime
	I0501 02:32:28.886827   32853 config.go:182] Loaded profile config "ha-329926": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 02:32:28.886919   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHHostname
	I0501 02:32:28.889589   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:28.889945   32853 main.go:141] libmachine: (ha-329926-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:16:5f", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:32:18 +0000 UTC Type:0 Mac:52:54:00:92:16:5f Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-329926-m02 Clientid:01:52:54:00:92:16:5f}
	I0501 02:32:28.889973   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined IP address 192.168.39.79 and MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:28.890172   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHPort
	I0501 02:32:28.890351   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHKeyPath
	I0501 02:32:28.890553   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHKeyPath
	I0501 02:32:28.890699   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHUsername
	I0501 02:32:28.890856   32853 main.go:141] libmachine: Using SSH client type: native
	I0501 02:32:28.891001   32853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.79 22 <nil> <nil>}
	I0501 02:32:28.891014   32853 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0501 02:32:29.159244   32853 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
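The "%!s(MISSING)" fragments in the command above are Go's fmt marker for a %s verb left without an argument when the command was logged; the payload that actually gets written is the CRIO_MINIKUBE_OPTIONS line echoed back in the output. A sketch of assembling that remote command (illustrative, not minikube's code):

package main

import "fmt"

// crioSysconfigCmd builds the shell command that drops the insecure-registry
// option into /etc/sysconfig/crio.minikube and restarts CRI-O.
func crioSysconfigCmd(insecureRegistry string) string {
	opts := fmt.Sprintf("\nCRIO_MINIKUBE_OPTIONS='--insecure-registry %s '\n", insecureRegistry)
	return fmt.Sprintf("sudo mkdir -p /etc/sysconfig && printf %%s \"%s\" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio", opts)
}

func main() {
	fmt.Println(crioSysconfigCmd("10.96.0.0/12"))
}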
	
	I0501 02:32:29.159272   32853 main.go:141] libmachine: Checking connection to Docker...
	I0501 02:32:29.159283   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetURL
	I0501 02:32:29.160474   32853 main.go:141] libmachine: (ha-329926-m02) DBG | Using libvirt version 6000000
	I0501 02:32:29.162578   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:29.163002   32853 main.go:141] libmachine: (ha-329926-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:16:5f", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:32:18 +0000 UTC Type:0 Mac:52:54:00:92:16:5f Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-329926-m02 Clientid:01:52:54:00:92:16:5f}
	I0501 02:32:29.163032   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined IP address 192.168.39.79 and MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:29.163145   32853 main.go:141] libmachine: Docker is up and running!
	I0501 02:32:29.163159   32853 main.go:141] libmachine: Reticulating splines...
	I0501 02:32:29.163167   32853 client.go:171] duration metric: took 26.549605676s to LocalClient.Create
	I0501 02:32:29.163194   32853 start.go:167] duration metric: took 26.549670109s to libmachine.API.Create "ha-329926"
	I0501 02:32:29.163208   32853 start.go:293] postStartSetup for "ha-329926-m02" (driver="kvm2")
	I0501 02:32:29.163222   32853 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0501 02:32:29.163245   32853 main.go:141] libmachine: (ha-329926-m02) Calling .DriverName
	I0501 02:32:29.163485   32853 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0501 02:32:29.163508   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHHostname
	I0501 02:32:29.165222   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:29.165624   32853 main.go:141] libmachine: (ha-329926-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:16:5f", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:32:18 +0000 UTC Type:0 Mac:52:54:00:92:16:5f Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-329926-m02 Clientid:01:52:54:00:92:16:5f}
	I0501 02:32:29.165652   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined IP address 192.168.39.79 and MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:29.165808   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHPort
	I0501 02:32:29.165987   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHKeyPath
	I0501 02:32:29.166131   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHUsername
	I0501 02:32:29.166267   32853 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m02/id_rsa Username:docker}
	I0501 02:32:29.249614   32853 ssh_runner.go:195] Run: cat /etc/os-release
	I0501 02:32:29.254833   32853 info.go:137] Remote host: Buildroot 2023.02.9
	I0501 02:32:29.254865   32853 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13391/.minikube/addons for local assets ...
	I0501 02:32:29.254942   32853 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13391/.minikube/files for local assets ...
	I0501 02:32:29.255016   32853 filesync.go:149] local asset: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem -> 207242.pem in /etc/ssl/certs
	I0501 02:32:29.255026   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem -> /etc/ssl/certs/207242.pem
	I0501 02:32:29.255104   32853 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0501 02:32:29.265848   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem --> /etc/ssl/certs/207242.pem (1708 bytes)
	I0501 02:32:29.294379   32853 start.go:296] duration metric: took 131.157143ms for postStartSetup
	I0501 02:32:29.294455   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetConfigRaw
	I0501 02:32:29.295051   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetIP
	I0501 02:32:29.297751   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:29.298110   32853 main.go:141] libmachine: (ha-329926-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:16:5f", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:32:18 +0000 UTC Type:0 Mac:52:54:00:92:16:5f Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-329926-m02 Clientid:01:52:54:00:92:16:5f}
	I0501 02:32:29.298140   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined IP address 192.168.39.79 and MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:29.298337   32853 profile.go:143] Saving config to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/config.json ...
	I0501 02:32:29.298549   32853 start.go:128] duration metric: took 26.702914692s to createHost
	I0501 02:32:29.298571   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHHostname
	I0501 02:32:29.300678   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:29.301049   32853 main.go:141] libmachine: (ha-329926-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:16:5f", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:32:18 +0000 UTC Type:0 Mac:52:54:00:92:16:5f Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-329926-m02 Clientid:01:52:54:00:92:16:5f}
	I0501 02:32:29.301087   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined IP address 192.168.39.79 and MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:29.301201   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHPort
	I0501 02:32:29.301444   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHKeyPath
	I0501 02:32:29.301631   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHKeyPath
	I0501 02:32:29.301795   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHUsername
	I0501 02:32:29.301954   32853 main.go:141] libmachine: Using SSH client type: native
	I0501 02:32:29.302115   32853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.79 22 <nil> <nil>}
	I0501 02:32:29.302125   32853 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0501 02:32:29.404161   32853 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714530749.376998038
	
	I0501 02:32:29.404185   32853 fix.go:216] guest clock: 1714530749.376998038
	I0501 02:32:29.404194   32853 fix.go:229] Guest: 2024-05-01 02:32:29.376998038 +0000 UTC Remote: 2024-05-01 02:32:29.298561287 +0000 UTC m=+87.221058556 (delta=78.436751ms)
	I0501 02:32:29.404215   32853 fix.go:200] guest clock delta is within tolerance: 78.436751ms
	I0501 02:32:29.404222   32853 start.go:83] releasing machines lock for "ha-329926-m02", held for 26.80870233s
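
The fix.go lines above parse the guest's date +%s.%N output and accept the machine when the drift against the local clock stays inside a tolerance (here 78ms). A minimal Go sketch of that comparison, assuming a hypothetical 2-second threshold (the actual limit is not shown in this log):

// clock_delta.go - a sketch, not minikube's fix.go: parse the guest's
// "date +%s.%N" output and compare it with the local clock.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns "1714530749.376998038" into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1714530749.376998038") // sample value from the log above
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // hypothetical threshold, for illustration only
	fmt.Printf("guest clock delta: %v, within tolerance: %v\n", delta, delta <= tolerance)
}
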
	I0501 02:32:29.404253   32853 main.go:141] libmachine: (ha-329926-m02) Calling .DriverName
	I0501 02:32:29.404558   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetIP
	I0501 02:32:29.407060   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:29.407456   32853 main.go:141] libmachine: (ha-329926-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:16:5f", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:32:18 +0000 UTC Type:0 Mac:52:54:00:92:16:5f Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-329926-m02 Clientid:01:52:54:00:92:16:5f}
	I0501 02:32:29.407478   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined IP address 192.168.39.79 and MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:29.412905   32853 out.go:177] * Found network options:
	I0501 02:32:29.414075   32853 out.go:177]   - NO_PROXY=192.168.39.5
	W0501 02:32:29.415067   32853 proxy.go:119] fail to check proxy env: Error ip not in block
	I0501 02:32:29.415094   32853 main.go:141] libmachine: (ha-329926-m02) Calling .DriverName
	I0501 02:32:29.415626   32853 main.go:141] libmachine: (ha-329926-m02) Calling .DriverName
	I0501 02:32:29.415813   32853 main.go:141] libmachine: (ha-329926-m02) Calling .DriverName
	I0501 02:32:29.415878   32853 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0501 02:32:29.415923   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHHostname
	W0501 02:32:29.416037   32853 proxy.go:119] fail to check proxy env: Error ip not in block
	I0501 02:32:29.416100   32853 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0501 02:32:29.416118   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHHostname
	I0501 02:32:29.418446   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:29.418710   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:29.418743   32853 main.go:141] libmachine: (ha-329926-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:16:5f", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:32:18 +0000 UTC Type:0 Mac:52:54:00:92:16:5f Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-329926-m02 Clientid:01:52:54:00:92:16:5f}
	I0501 02:32:29.418764   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined IP address 192.168.39.79 and MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:29.418896   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHPort
	I0501 02:32:29.419059   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHKeyPath
	I0501 02:32:29.419137   32853 main.go:141] libmachine: (ha-329926-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:16:5f", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:32:18 +0000 UTC Type:0 Mac:52:54:00:92:16:5f Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-329926-m02 Clientid:01:52:54:00:92:16:5f}
	I0501 02:32:29.419166   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined IP address 192.168.39.79 and MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:29.419224   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHUsername
	I0501 02:32:29.419303   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHPort
	I0501 02:32:29.419384   32853 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m02/id_rsa Username:docker}
	I0501 02:32:29.419466   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHKeyPath
	I0501 02:32:29.419607   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHUsername
	I0501 02:32:29.419726   32853 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m02/id_rsa Username:docker}
	I0501 02:32:29.660914   32853 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0501 02:32:29.668297   32853 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0501 02:32:29.668376   32853 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0501 02:32:29.687850   32853 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
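
The two commands above sideline any pre-existing bridge/podman CNI configs by renaming them with a .mk_disabled suffix so they cannot conflict with the CNI minikube installs. A sketch of the same idea in Go, run against a scratch directory instead of /etc/cni/net.d (the directory handling here is illustrative, not minikube's cni package):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeConfigs renames bridge/podman CNI configs to <name>.mk_disabled.
func disableBridgeConfigs(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}

func main() {
	dir, _ := os.MkdirTemp("", "cni")
	os.WriteFile(filepath.Join(dir, "87-podman-bridge.conflist"), []byte("{}"), 0o644)
	out, err := disableBridgeConfigs(dir)
	fmt.Println(out, err)
}
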
	I0501 02:32:29.687883   32853 start.go:494] detecting cgroup driver to use...
	I0501 02:32:29.687972   32853 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0501 02:32:29.706565   32853 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 02:32:29.723456   32853 docker.go:217] disabling cri-docker service (if available) ...
	I0501 02:32:29.723539   32853 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0501 02:32:29.738887   32853 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0501 02:32:29.754172   32853 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0501 02:32:29.874297   32853 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0501 02:32:30.042352   32853 docker.go:233] disabling docker service ...
	I0501 02:32:30.042446   32853 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0501 02:32:30.059238   32853 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0501 02:32:30.075898   32853 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0501 02:32:30.201083   32853 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0501 02:32:30.333782   32853 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0501 02:32:30.350062   32853 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 02:32:30.371860   32853 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0501 02:32:30.371927   32853 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 02:32:30.384981   32853 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0501 02:32:30.385056   32853 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 02:32:30.398163   32853 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 02:32:30.411332   32853 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 02:32:30.426328   32853 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0501 02:32:30.441124   32853 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 02:32:30.453834   32853 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 02:32:30.474622   32853 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 02:32:30.492765   32853 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0501 02:32:30.503973   32853 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0501 02:32:30.504044   32853 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0501 02:32:30.518436   32853 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0501 02:32:30.529512   32853 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:32:30.653918   32853 ssh_runner.go:195] Run: sudo systemctl restart crio
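
The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf so CRI-O uses registry.k8s.io/pause:3.9 as its pause image and cgroupfs as its cgroup manager before the daemon is restarted. A sketch of those two substitutions as in-memory regex rewrites (the sample config content is invented for illustration):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.8"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	// Equivalent of the two sed -i 's|^.*... = .*$|...|' calls in the log.
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)

	conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

	fmt.Print(conf)
}
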
	I0501 02:32:30.808199   32853 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0501 02:32:30.808267   32853 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0501 02:32:30.814260   32853 start.go:562] Will wait 60s for crictl version
	I0501 02:32:30.814333   32853 ssh_runner.go:195] Run: which crictl
	I0501 02:32:30.818797   32853 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0501 02:32:30.858905   32853 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0501 02:32:30.858991   32853 ssh_runner.go:195] Run: crio --version
	I0501 02:32:30.890383   32853 ssh_runner.go:195] Run: crio --version
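
After restarting CRI-O, the runner waits up to 60s for /var/run/crio/crio.sock to appear and then asks crictl for the runtime version. A sketch of such a poll-with-deadline loop (the 500ms polling interval is an assumption):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists or the deadline passes.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond) // interval chosen for illustration
	}
	return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("crio socket is ready")
}
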
	I0501 02:32:30.925385   32853 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0501 02:32:30.926898   32853 out.go:177]   - env NO_PROXY=192.168.39.5
	I0501 02:32:30.927949   32853 main.go:141] libmachine: (ha-329926-m02) Calling .GetIP
	I0501 02:32:30.930381   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:30.930728   32853 main.go:141] libmachine: (ha-329926-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:16:5f", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:32:18 +0000 UTC Type:0 Mac:52:54:00:92:16:5f Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-329926-m02 Clientid:01:52:54:00:92:16:5f}
	I0501 02:32:30.930760   32853 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined IP address 192.168.39.79 and MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:32:30.930932   32853 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0501 02:32:30.935561   32853 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
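
The bash pipeline above rewrites /etc/hosts so host.minikube.internal resolves to the libvirt gateway 192.168.39.1, dropping any stale entry first. A sketch of the same rewrite performed on a string instead of the real file:

package main

import (
	"fmt"
	"strings"
)

// ensureHostEntry removes any existing line for name and appends "ip<TAB>name".
func ensureHostEntry(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if line == "" || strings.HasSuffix(line, "\t"+name) {
			continue // drop empty lines and stale entries for name
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	sample := "127.0.0.1\tlocalhost\n192.168.39.2\thost.minikube.internal\n"
	fmt.Print(ensureHostEntry(sample, "192.168.39.1", "host.minikube.internal"))
}
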
	I0501 02:32:30.949642   32853 mustload.go:65] Loading cluster: ha-329926
	I0501 02:32:30.949868   32853 config.go:182] Loaded profile config "ha-329926": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 02:32:30.950222   32853 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:32:30.950257   32853 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:32:30.964975   32853 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41109
	I0501 02:32:30.965384   32853 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:32:30.965819   32853 main.go:141] libmachine: Using API Version  1
	I0501 02:32:30.965840   32853 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:32:30.966161   32853 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:32:30.966360   32853 main.go:141] libmachine: (ha-329926) Calling .GetState
	I0501 02:32:30.967865   32853 host.go:66] Checking if "ha-329926" exists ...
	I0501 02:32:30.968220   32853 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:32:30.968247   32853 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:32:30.983656   32853 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41319
	I0501 02:32:30.984025   32853 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:32:30.984516   32853 main.go:141] libmachine: Using API Version  1
	I0501 02:32:30.984538   32853 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:32:30.984870   32853 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:32:30.985070   32853 main.go:141] libmachine: (ha-329926) Calling .DriverName
	I0501 02:32:30.985228   32853 certs.go:68] Setting up /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926 for IP: 192.168.39.79
	I0501 02:32:30.985248   32853 certs.go:194] generating shared ca certs ...
	I0501 02:32:30.985267   32853 certs.go:226] acquiring lock for ca certs: {Name:mk85a75d14c00780d4bf09cd9bdaf0f96f1b8fce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:32:30.985407   32853 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key
	I0501 02:32:30.985458   32853 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key
	I0501 02:32:30.985470   32853 certs.go:256] generating profile certs ...
	I0501 02:32:30.985562   32853 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/client.key
	I0501 02:32:30.985597   32853 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.key.d90e43f3
	I0501 02:32:30.985619   32853 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.crt.d90e43f3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.5 192.168.39.79 192.168.39.254]
	I0501 02:32:31.181206   32853 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.crt.d90e43f3 ...
	I0501 02:32:31.181238   32853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.crt.d90e43f3: {Name:mk5518d1e07d843574fb807e035ad0b363a66c7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:32:31.181440   32853 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.key.d90e43f3 ...
	I0501 02:32:31.181458   32853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.key.d90e43f3: {Name:mkb1feab49c04187ec90bd16923d434f3fa71e99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:32:31.181562   32853 certs.go:381] copying /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.crt.d90e43f3 -> /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.crt
	I0501 02:32:31.181740   32853 certs.go:385] copying /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.key.d90e43f3 -> /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.key
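
The apiserver certificate generated above carries IP SANs for the service IP (10.96.0.1), localhost, both control-plane node IPs and the kube-vip address 192.168.39.254, so the API server can be reached through any of them. A sketch of issuing such a certificate with crypto/x509, using a throwaway CA and eliding error handling for brevity (minikube's certs.go helpers differ in detail):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA for the example; errors are ignored for brevity.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Leaf certificate with the IP SANs listed in the log.
	sans := []net.IP{
		net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
		net.ParseIP("192.168.39.5"), net.ParseIP("192.168.39.79"), net.ParseIP("192.168.39.254"),
	}
	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		IPAddresses:  sans,
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)

	fmt.Printf("issued apiserver cert with %d IP SANs (%d DER bytes)\n", len(sans), len(leafDER))
}
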
	I0501 02:32:31.181920   32853 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/proxy-client.key
	I0501 02:32:31.181951   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0501 02:32:31.181971   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0501 02:32:31.181989   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0501 02:32:31.182006   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0501 02:32:31.182022   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0501 02:32:31.182036   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0501 02:32:31.182055   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0501 02:32:31.182072   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0501 02:32:31.182135   32853 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem (1338 bytes)
	W0501 02:32:31.182173   32853 certs.go:480] ignoring /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724_empty.pem, impossibly tiny 0 bytes
	I0501 02:32:31.182187   32853 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem (1679 bytes)
	I0501 02:32:31.182221   32853 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem (1078 bytes)
	I0501 02:32:31.182257   32853 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem (1123 bytes)
	I0501 02:32:31.182289   32853 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem (1675 bytes)
	I0501 02:32:31.182345   32853 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem (1708 bytes)
	I0501 02:32:31.182379   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem -> /usr/share/ca-certificates/20724.pem
	I0501 02:32:31.182414   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem -> /usr/share/ca-certificates/207242.pem
	I0501 02:32:31.182433   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:32:31.182472   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHHostname
	I0501 02:32:31.185284   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:32:31.185667   32853 main.go:141] libmachine: (ha-329926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:43", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:31:18 +0000 UTC Type:0 Mac:52:54:00:ce:d8:43 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-329926 Clientid:01:52:54:00:ce:d8:43}
	I0501 02:32:31.185696   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:32:31.185859   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHPort
	I0501 02:32:31.186079   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHKeyPath
	I0501 02:32:31.186237   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHUsername
	I0501 02:32:31.186364   32853 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926/id_rsa Username:docker}
	I0501 02:32:31.262748   32853 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0501 02:32:31.269126   32853 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0501 02:32:31.283342   32853 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0501 02:32:31.288056   32853 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0501 02:32:31.301185   32853 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0501 02:32:31.305935   32853 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0501 02:32:31.318019   32853 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0501 02:32:31.322409   32853 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0501 02:32:31.334369   32853 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0501 02:32:31.339405   32853 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0501 02:32:31.351745   32853 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0501 02:32:31.356564   32853 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0501 02:32:31.368836   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0501 02:32:31.400674   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0501 02:32:31.426866   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0501 02:32:31.454204   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0501 02:32:31.481303   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0501 02:32:31.508290   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0501 02:32:31.539629   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0501 02:32:31.570094   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0501 02:32:31.600574   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem --> /usr/share/ca-certificates/20724.pem (1338 bytes)
	I0501 02:32:31.632245   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem --> /usr/share/ca-certificates/207242.pem (1708 bytes)
	I0501 02:32:31.663620   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0501 02:32:31.694867   32853 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0501 02:32:31.716003   32853 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0501 02:32:31.737736   32853 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0501 02:32:31.759385   32853 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0501 02:32:31.781053   32853 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0501 02:32:31.801226   32853 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0501 02:32:31.820680   32853 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0501 02:32:31.840418   32853 ssh_runner.go:195] Run: openssl version
	I0501 02:32:31.846834   32853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20724.pem && ln -fs /usr/share/ca-certificates/20724.pem /etc/ssl/certs/20724.pem"
	I0501 02:32:31.859445   32853 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20724.pem
	I0501 02:32:31.864815   32853 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  1 02:20 /usr/share/ca-certificates/20724.pem
	I0501 02:32:31.864868   32853 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20724.pem
	I0501 02:32:31.871245   32853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20724.pem /etc/ssl/certs/51391683.0"
	I0501 02:32:31.883690   32853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/207242.pem && ln -fs /usr/share/ca-certificates/207242.pem /etc/ssl/certs/207242.pem"
	I0501 02:32:31.896212   32853 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/207242.pem
	I0501 02:32:31.901714   32853 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  1 02:20 /usr/share/ca-certificates/207242.pem
	I0501 02:32:31.901787   32853 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/207242.pem
	I0501 02:32:31.908364   32853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/207242.pem /etc/ssl/certs/3ec20f2e.0"
	I0501 02:32:31.921148   32853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0501 02:32:31.933834   32853 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:32:31.939556   32853 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  1 02:08 /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:32:31.939610   32853 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:32:31.946280   32853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
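
The openssl/ln steps above install each CA bundle under /usr/share/ca-certificates and add the <subject-hash>.0 symlink in /etc/ssl/certs that OpenSSL uses for lookups. A sketch of that wiring which shells out to the openssl binary (it assumes openssl is installed and that the process may write to the trust directory):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkIntoTrustStore creates the <subject-hash>.0 symlink OpenSSL looks for.
func linkIntoTrustStore(certPath, trustDir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	link := filepath.Join(trustDir, strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // ln -fs semantics: replace any existing link
	return link, os.Symlink(certPath, link)
}

func main() {
	link, err := linkIntoTrustStore("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	fmt.Println(link, err)
}
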
	I0501 02:32:31.958842   32853 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0501 02:32:31.963676   32853 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0501 02:32:31.963732   32853 kubeadm.go:928] updating node {m02 192.168.39.79 8443 v1.30.0 crio true true} ...
	I0501 02:32:31.963816   32853 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-329926-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.79
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-329926 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0501 02:32:31.963847   32853 kube-vip.go:111] generating kube-vip config ...
	I0501 02:32:31.963890   32853 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0501 02:32:31.981605   32853 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0501 02:32:31.981681   32853 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
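
The kube-vip static-pod manifest above pins the control-plane VIP 192.168.39.254 on eth0, enables leader election in kube-system, and load-balances API traffic on port 8443. A sketch of rendering the parameterized part of such a manifest with text/template (the template below is trimmed to the env section and is not minikube's kube-vip.go template):

package main

import (
	"os"
	"text/template"
)

const kubeVipEnv = `    env:
    - name: address
      value: {{ .VIP }}
    - name: port
      value: "{{ .Port }}"
    - name: vip_interface
      value: {{ .Interface }}
    - name: cp_enable
      value: "true"
    - name: lb_enable
      value: "true"
`

type vipParams struct {
	VIP       string
	Port      int
	Interface string
}

func main() {
	tmpl := template.Must(template.New("kube-vip").Parse(kubeVipEnv))
	// Values taken from the manifest in the log.
	_ = tmpl.Execute(os.Stdout, vipParams{VIP: "192.168.39.254", Port: 8443, Interface: "eth0"})
}
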
	I0501 02:32:31.981735   32853 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0501 02:32:31.992985   32853 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	
	Initiating transfer...
	I0501 02:32:31.993036   32853 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0
	I0501 02:32:32.003788   32853 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256
	I0501 02:32:32.003816   32853 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/18779-13391/.minikube/cache/linux/amd64/v1.30.0/kubeadm
	I0501 02:32:32.003819   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/linux/amd64/v1.30.0/kubectl -> /var/lib/minikube/binaries/v1.30.0/kubectl
	I0501 02:32:32.003787   32853 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/18779-13391/.minikube/cache/linux/amd64/v1.30.0/kubelet
	I0501 02:32:32.004006   32853 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubectl
	I0501 02:32:32.009102   32853 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0501 02:32:32.009132   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/cache/linux/amd64/v1.30.0/kubectl --> /var/lib/minikube/binaries/v1.30.0/kubectl (51454104 bytes)
	I0501 02:32:40.645161   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/linux/amd64/v1.30.0/kubeadm -> /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0501 02:32:40.645247   32853 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0501 02:32:40.651214   32853 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0501 02:32:40.651259   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/cache/linux/amd64/v1.30.0/kubeadm --> /var/lib/minikube/binaries/v1.30.0/kubeadm (50249880 bytes)
	I0501 02:32:46.044024   32853 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 02:32:46.061292   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/linux/amd64/v1.30.0/kubelet -> /var/lib/minikube/binaries/v1.30.0/kubelet
	I0501 02:32:46.061412   32853 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubelet
	I0501 02:32:46.066040   32853 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0501 02:32:46.066068   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/cache/linux/amd64/v1.30.0/kubelet --> /var/lib/minikube/binaries/v1.30.0/kubelet (100100024 bytes)
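
The kubectl, kubeadm and kubelet downloads above are requested with a checksum=file:...sha256 query, i.e. the downloaded bytes are verified against the published SHA-256 digest before being copied into /var/lib/minikube/binaries. A sketch of that verification step, with sample bytes standing in for a real download:

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"strings"
)

// verifyChecksum returns nil when the hex SHA-256 of data matches want
// (want may be a full "<digest>  <filename>" line, as in *.sha256 files).
func verifyChecksum(data []byte, want string) error {
	sum := sha256.Sum256(data)
	got := hex.EncodeToString(sum[:])
	want = strings.Fields(want)[0]
	if got != want {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
	}
	return nil
}

func main() {
	payload := []byte("stand-in for the kubelet binary")
	sum := sha256.Sum256(payload)
	line := hex.EncodeToString(sum[:]) + "  kubelet"
	fmt.Println("verified:", verifyChecksum(payload, line) == nil)
}
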
	I0501 02:32:46.533245   32853 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0501 02:32:46.544899   32853 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0501 02:32:46.568504   32853 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0501 02:32:46.588194   32853 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0501 02:32:46.608341   32853 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0501 02:32:46.613073   32853 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0501 02:32:46.628506   32853 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:32:46.759692   32853 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 02:32:46.779606   32853 host.go:66] Checking if "ha-329926" exists ...
	I0501 02:32:46.779999   32853 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:32:46.780033   32853 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:32:46.795346   32853 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37047
	I0501 02:32:46.795774   32853 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:32:46.796285   32853 main.go:141] libmachine: Using API Version  1
	I0501 02:32:46.796318   32853 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:32:46.796647   32853 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:32:46.796862   32853 main.go:141] libmachine: (ha-329926) Calling .DriverName
	I0501 02:32:46.797022   32853 start.go:316] joinCluster: &{Name:ha-329926 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 Cluster
Name:ha-329926 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.5 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.79 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:2
6280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 02:32:46.797115   32853 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0501 02:32:46.797131   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHHostname
	I0501 02:32:46.799894   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:32:46.800337   32853 main.go:141] libmachine: (ha-329926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:43", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:31:18 +0000 UTC Type:0 Mac:52:54:00:ce:d8:43 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-329926 Clientid:01:52:54:00:ce:d8:43}
	I0501 02:32:46.800367   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:32:46.800498   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHPort
	I0501 02:32:46.800677   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHKeyPath
	I0501 02:32:46.800834   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHUsername
	I0501 02:32:46.800981   32853 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926/id_rsa Username:docker}
	I0501 02:32:46.973707   32853 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.79 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0501 02:32:46.973760   32853 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ii3sbv.7jvk8wpzpyemm901 --discovery-token-ca-cert-hash sha256:bd94cc6acfb95fe36f70b12c6a1f2980601b8235e7c4dea65cebdf35eb514754 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-329926-m02 --control-plane --apiserver-advertise-address=192.168.39.79 --apiserver-bind-port=8443"
	I0501 02:33:10.771192   32853 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ii3sbv.7jvk8wpzpyemm901 --discovery-token-ca-cert-hash sha256:bd94cc6acfb95fe36f70b12c6a1f2980601b8235e7c4dea65cebdf35eb514754 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-329926-m02 --control-plane --apiserver-advertise-address=192.168.39.79 --apiserver-bind-port=8443": (23.797411356s)
	I0501 02:33:10.771238   32853 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0501 02:33:11.384980   32853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-329926-m02 minikube.k8s.io/updated_at=2024_05_01T02_33_11_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e minikube.k8s.io/name=ha-329926 minikube.k8s.io/primary=false
	I0501 02:33:11.526841   32853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-329926-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0501 02:33:11.662780   32853 start.go:318] duration metric: took 24.865752449s to joinCluster
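
To add m02 as a second control-plane node, the runner first asks the primary for a join command (kubeadm token create --print-join-command) and then extends it with the control-plane flags visible in the log. A sketch that assembles the same command string from its parts, using the values shown above (it only prints the command, it does not run kubeadm):

package main

import (
	"fmt"
	"strings"
)

type joinParams struct {
	Endpoint  string
	Token     string
	CAHash    string
	CRISocket string
	NodeName  string
	Advertise string
	BindPort  int
}

// command builds the control-plane join invocation from its parts.
func (p joinParams) command() string {
	return strings.Join([]string{
		"kubeadm join", p.Endpoint,
		"--token", p.Token,
		"--discovery-token-ca-cert-hash", p.CAHash,
		"--ignore-preflight-errors=all",
		"--cri-socket", p.CRISocket,
		"--node-name=" + p.NodeName,
		"--control-plane",
		"--apiserver-advertise-address=" + p.Advertise,
		fmt.Sprintf("--apiserver-bind-port=%d", p.BindPort),
	}, " ")
}

func main() {
	p := joinParams{
		Endpoint:  "control-plane.minikube.internal:8443",
		Token:     "ii3sbv.7jvk8wpzpyemm901",
		CAHash:    "sha256:bd94cc6acfb95fe36f70b12c6a1f2980601b8235e7c4dea65cebdf35eb514754",
		CRISocket: "unix:///var/run/crio/crio.sock",
		NodeName:  "ha-329926-m02",
		Advertise: "192.168.39.79",
		BindPort:  8443,
	}
	fmt.Println(p.command())
}
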
	I0501 02:33:11.662858   32853 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.39.79 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0501 02:33:11.664381   32853 out.go:177] * Verifying Kubernetes components...
	I0501 02:33:11.663177   32853 config.go:182] Loaded profile config "ha-329926": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 02:33:11.665708   32853 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:33:11.967770   32853 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 02:33:11.987701   32853 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18779-13391/kubeconfig
	I0501 02:33:11.987916   32853 kapi.go:59] client config for ha-329926: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/client.crt", KeyFile:"/home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/client.key", CAFile:"/home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02700), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0501 02:33:11.987972   32853 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.5:8443
	I0501 02:33:11.988206   32853 node_ready.go:35] waiting up to 6m0s for node "ha-329926-m02" to be "Ready" ...
	I0501 02:33:11.988325   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m02
	I0501 02:33:11.988335   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:11.988342   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:11.988348   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:12.003472   32853 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0501 02:33:12.488867   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m02
	I0501 02:33:12.488893   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:12.488901   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:12.488905   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:12.494522   32853 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:33:12.989291   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m02
	I0501 02:33:12.989314   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:12.989322   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:12.989326   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:12.994548   32853 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:33:13.489445   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m02
	I0501 02:33:13.489465   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:13.489473   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:13.489477   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:13.492740   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:33:13.989302   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m02
	I0501 02:33:13.989326   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:13.989332   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:13.989336   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:13.994574   32853 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:33:13.995174   32853 node_ready.go:53] node "ha-329926-m02" has status "Ready":"False"
	I0501 02:33:14.488543   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m02
	I0501 02:33:14.488564   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:14.488571   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:14.488575   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:14.491845   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:33:14.988951   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m02
	I0501 02:33:14.988971   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:14.988977   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:14.988981   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:14.994551   32853 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:33:15.489130   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m02
	I0501 02:33:15.489151   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:15.489162   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:15.489169   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:15.493112   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:33:15.988869   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m02
	I0501 02:33:15.988893   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:15.988901   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:15.988906   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:15.992940   32853 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:33:16.488731   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m02
	I0501 02:33:16.488756   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:16.488774   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:16.488779   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:16.492717   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:33:16.493523   32853 node_ready.go:53] node "ha-329926-m02" has status "Ready":"False"
	I0501 02:33:16.989386   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m02
	I0501 02:33:16.989415   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:16.989425   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:16.989430   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:16.993462   32853 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:33:17.489274   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m02
	I0501 02:33:17.489301   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:17.489312   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:17.489317   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:17.494823   32853 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:33:17.988542   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m02
	I0501 02:33:17.988583   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:17.988592   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:17.988596   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:17.992630   32853 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:33:18.488722   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m02
	I0501 02:33:18.488745   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:18.488753   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:18.488757   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:18.492691   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:33:18.493303   32853 node_ready.go:49] node "ha-329926-m02" has status "Ready":"True"
	I0501 02:33:18.493329   32853 node_ready.go:38] duration metric: took 6.505084484s for node "ha-329926-m02" to be "Ready" ...
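
The polling loop above repeatedly GETs /api/v1/nodes/ha-329926-m02 and stops once the node reports Ready. The check itself boils down to reading the "Ready" condition from the Node object's status; a sketch with a hard-coded sample response standing in for the API call:

package main

import (
	"encoding/json"
	"fmt"
)

// nodeStatus covers only the fields the readiness check needs.
type nodeStatus struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

// nodeReady reports whether the Node JSON has condition Ready=True.
func nodeReady(raw []byte) (bool, error) {
	var n nodeStatus
	if err := json.Unmarshal(raw, &n); err != nil {
		return false, err
	}
	for _, c := range n.Status.Conditions {
		if c.Type == "Ready" {
			return c.Status == "True", nil
		}
	}
	return false, nil
}

func main() {
	sample := []byte(`{"status":{"conditions":[{"type":"Ready","status":"True"}]}}`)
	ready, err := nodeReady(sample)
	fmt.Println(ready, err)
}
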
	I0501 02:33:18.493337   32853 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 02:33:18.493389   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods
	I0501 02:33:18.493411   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:18.493417   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:18.493421   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:18.498285   32853 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:33:18.505511   32853 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-2h8lc" in "kube-system" namespace to be "Ready" ...
	I0501 02:33:18.505611   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2h8lc
	I0501 02:33:18.505622   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:18.505633   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:18.505640   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:18.509040   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:33:18.509791   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926
	I0501 02:33:18.509807   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:18.509814   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:18.509817   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:18.513129   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:33:18.513783   32853 pod_ready.go:92] pod "coredns-7db6d8ff4d-2h8lc" in "kube-system" namespace has status "Ready":"True"
	I0501 02:33:18.513799   32853 pod_ready.go:81] duration metric: took 8.261557ms for pod "coredns-7db6d8ff4d-2h8lc" in "kube-system" namespace to be "Ready" ...
	I0501 02:33:18.513807   32853 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-cfdqc" in "kube-system" namespace to be "Ready" ...
	I0501 02:33:18.513858   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-cfdqc
	I0501 02:33:18.513866   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:18.513872   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:18.513877   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:18.517367   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:33:18.518083   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926
	I0501 02:33:18.518098   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:18.518105   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:18.518108   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:18.522084   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:33:18.522665   32853 pod_ready.go:92] pod "coredns-7db6d8ff4d-cfdqc" in "kube-system" namespace has status "Ready":"True"
	I0501 02:33:18.522682   32853 pod_ready.go:81] duration metric: took 8.866578ms for pod "coredns-7db6d8ff4d-cfdqc" in "kube-system" namespace to be "Ready" ...
	I0501 02:33:18.522690   32853 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-329926" in "kube-system" namespace to be "Ready" ...
	I0501 02:33:18.522731   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-329926
	I0501 02:33:18.522739   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:18.522745   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:18.522749   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:18.526829   32853 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:33:18.527854   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926
	I0501 02:33:18.527868   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:18.527875   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:18.527879   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:18.530937   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:33:18.531737   32853 pod_ready.go:92] pod "etcd-ha-329926" in "kube-system" namespace has status "Ready":"True"
	I0501 02:33:18.531751   32853 pod_ready.go:81] duration metric: took 9.056356ms for pod "etcd-ha-329926" in "kube-system" namespace to be "Ready" ...
	I0501 02:33:18.531759   32853 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-329926-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:33:18.531803   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-329926-m02
	I0501 02:33:18.531816   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:18.531823   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:18.531831   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:18.534721   32853 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 02:33:18.535312   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m02
	I0501 02:33:18.535325   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:18.535330   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:18.535333   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:18.539152   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:33:19.032909   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-329926-m02
	I0501 02:33:19.032936   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:19.032948   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:19.032954   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:19.036794   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:33:19.037616   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m02
	I0501 02:33:19.037636   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:19.037646   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:19.037653   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:19.040249   32853 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 02:33:19.532218   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-329926-m02
	I0501 02:33:19.532240   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:19.532248   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:19.532253   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:19.537133   32853 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:33:19.538351   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m02
	I0501 02:33:19.538367   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:19.538372   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:19.538376   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:19.541943   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:33:20.032940   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-329926-m02
	I0501 02:33:20.032959   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:20.032967   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:20.032972   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:20.038191   32853 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:33:20.039221   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m02
	I0501 02:33:20.039241   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:20.039251   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:20.039259   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:20.041847   32853 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 02:33:20.532824   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-329926-m02
	I0501 02:33:20.532845   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:20.532852   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:20.532855   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:20.536745   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:33:20.537569   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m02
	I0501 02:33:20.537588   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:20.537598   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:20.537602   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:20.540342   32853 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 02:33:20.540967   32853 pod_ready.go:102] pod "etcd-ha-329926-m02" in "kube-system" namespace has status "Ready":"False"
	I0501 02:33:21.032383   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-329926-m02
	I0501 02:33:21.032405   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:21.032412   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:21.032416   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:21.035771   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:33:21.036462   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m02
	I0501 02:33:21.036478   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:21.036486   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:21.036492   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:21.039690   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:33:21.531898   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-329926-m02
	I0501 02:33:21.531919   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:21.531925   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:21.531929   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:21.535449   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:33:21.536376   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m02
	I0501 02:33:21.536389   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:21.536395   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:21.536398   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:21.539172   32853 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 02:33:22.032935   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-329926-m02
	I0501 02:33:22.032964   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:22.032974   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:22.032981   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:22.037029   32853 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:33:22.037834   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m02
	I0501 02:33:22.037858   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:22.037874   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:22.037881   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:22.041827   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:33:22.532130   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-329926-m02
	I0501 02:33:22.532153   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:22.532161   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:22.532164   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:22.536068   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:33:22.537208   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m02
	I0501 02:33:22.537229   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:22.537248   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:22.537255   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:22.540959   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:33:22.541676   32853 pod_ready.go:102] pod "etcd-ha-329926-m02" in "kube-system" namespace has status "Ready":"False"
	I0501 02:33:23.032523   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-329926-m02
	I0501 02:33:23.032550   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:23.032558   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:23.032562   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:23.037008   32853 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:33:23.038957   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m02
	I0501 02:33:23.038978   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:23.038993   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:23.038999   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:23.041972   32853 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 02:33:23.531969   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-329926-m02
	I0501 02:33:23.531993   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:23.532003   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:23.532007   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:23.536144   32853 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:33:23.537306   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m02
	I0501 02:33:23.537338   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:23.537349   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:23.537356   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:23.541418   32853 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:33:24.032461   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-329926-m02
	I0501 02:33:24.032489   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:24.032500   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:24.032506   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:24.035563   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:33:24.036508   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m02
	I0501 02:33:24.036524   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:24.036534   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:24.036538   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:24.039302   32853 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 02:33:24.532724   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-329926-m02
	I0501 02:33:24.532758   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:24.532771   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:24.532776   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:24.536087   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:33:24.537058   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m02
	I0501 02:33:24.537070   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:24.537077   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:24.537081   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:24.539781   32853 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 02:33:25.031933   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-329926-m02
	I0501 02:33:25.031957   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:25.031965   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:25.031970   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:25.038111   32853 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:33:25.040132   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m02
	I0501 02:33:25.040150   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:25.040158   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:25.040163   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:25.044966   32853 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:33:25.046443   32853 pod_ready.go:102] pod "etcd-ha-329926-m02" in "kube-system" namespace has status "Ready":"False"
	I0501 02:33:25.532822   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-329926-m02
	I0501 02:33:25.532852   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:25.532861   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:25.532865   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:25.536720   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:33:25.537639   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m02
	I0501 02:33:25.537660   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:25.537671   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:25.537676   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:25.541010   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:33:25.541525   32853 pod_ready.go:92] pod "etcd-ha-329926-m02" in "kube-system" namespace has status "Ready":"True"
	I0501 02:33:25.541542   32853 pod_ready.go:81] duration metric: took 7.00977732s for pod "etcd-ha-329926-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:33:25.541555   32853 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-329926" in "kube-system" namespace to be "Ready" ...
	I0501 02:33:25.541603   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-329926
	I0501 02:33:25.541611   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:25.541618   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:25.541621   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:25.544520   32853 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 02:33:25.545317   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926
	I0501 02:33:25.545332   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:25.545340   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:25.545342   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:25.547604   32853 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 02:33:25.548196   32853 pod_ready.go:92] pod "kube-apiserver-ha-329926" in "kube-system" namespace has status "Ready":"True"
	I0501 02:33:25.548211   32853 pod_ready.go:81] duration metric: took 6.649613ms for pod "kube-apiserver-ha-329926" in "kube-system" namespace to be "Ready" ...
	I0501 02:33:25.548219   32853 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-329926-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:33:25.548267   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-329926-m02
	I0501 02:33:25.548274   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:25.548281   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:25.548284   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:25.550809   32853 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 02:33:25.551391   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m02
	I0501 02:33:25.551403   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:25.551410   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:25.551414   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:25.553972   32853 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 02:33:25.554820   32853 pod_ready.go:92] pod "kube-apiserver-ha-329926-m02" in "kube-system" namespace has status "Ready":"True"
	I0501 02:33:25.554833   32853 pod_ready.go:81] duration metric: took 6.608772ms for pod "kube-apiserver-ha-329926-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:33:25.554842   32853 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-329926" in "kube-system" namespace to be "Ready" ...
	I0501 02:33:25.554885   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-329926
	I0501 02:33:25.554894   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:25.554902   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:25.554910   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:25.557096   32853 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 02:33:25.557769   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926
	I0501 02:33:25.557784   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:25.557791   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:25.557795   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:25.560089   32853 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 02:33:25.560780   32853 pod_ready.go:92] pod "kube-controller-manager-ha-329926" in "kube-system" namespace has status "Ready":"True"
	I0501 02:33:25.560799   32853 pod_ready.go:81] duration metric: took 5.951704ms for pod "kube-controller-manager-ha-329926" in "kube-system" namespace to be "Ready" ...
	I0501 02:33:25.560807   32853 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-329926-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:33:25.560852   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-329926-m02
	I0501 02:33:25.560859   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:25.560866   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:25.560872   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:25.563304   32853 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 02:33:25.689395   32853 request.go:629] Waited for 125.311047ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/nodes/ha-329926-m02
	I0501 02:33:25.689473   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m02
	I0501 02:33:25.689481   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:25.689491   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:25.689495   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:25.694568   32853 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:33:25.695613   32853 pod_ready.go:92] pod "kube-controller-manager-ha-329926-m02" in "kube-system" namespace has status "Ready":"True"
	I0501 02:33:25.695631   32853 pod_ready.go:81] duration metric: took 134.818644ms for pod "kube-controller-manager-ha-329926-m02" in "kube-system" namespace to be "Ready" ...
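The "Waited for ... due to client-side throttling, not priority and fairness" lines above come from client-go's local rate limiter queueing the burst of GETs, not from the API server. A minimal sketch of where those limits live is below; the QPS/Burst values are illustrative, not what minikube configures.

    // Raise client-go's client-side rate limits (illustrative values only).
    package clientcfg

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func newClient(kubeconfig string) (*kubernetes.Clientset, error) {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            return nil, err
        }
        cfg.QPS = 50    // default is 5 requests/second
        cfg.Burst = 100 // default burst is 10
        return kubernetes.NewForConfig(cfg)
    }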
	I0501 02:33:25.695640   32853 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-msshn" in "kube-system" namespace to be "Ready" ...
	I0501 02:33:25.888934   32853 request.go:629] Waited for 193.220812ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-msshn
	I0501 02:33:25.888991   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-msshn
	I0501 02:33:25.889000   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:25.889014   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:25.889020   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:25.891885   32853 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 02:33:26.089742   32853 request.go:629] Waited for 197.064709ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/nodes/ha-329926
	I0501 02:33:26.089823   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926
	I0501 02:33:26.089832   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:26.089840   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:26.089846   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:26.094770   32853 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:33:26.096339   32853 pod_ready.go:92] pod "kube-proxy-msshn" in "kube-system" namespace has status "Ready":"True"
	I0501 02:33:26.096359   32853 pod_ready.go:81] duration metric: took 400.712232ms for pod "kube-proxy-msshn" in "kube-system" namespace to be "Ready" ...
	I0501 02:33:26.096369   32853 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rfsm8" in "kube-system" namespace to be "Ready" ...
	I0501 02:33:26.289504   32853 request.go:629] Waited for 193.059757ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rfsm8
	I0501 02:33:26.289558   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rfsm8
	I0501 02:33:26.289563   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:26.289571   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:26.289578   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:26.292679   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:33:26.488853   32853 request.go:629] Waited for 195.296934ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/nodes/ha-329926-m02
	I0501 02:33:26.488915   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m02
	I0501 02:33:26.488929   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:26.488940   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:26.488946   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:26.492008   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:33:26.492777   32853 pod_ready.go:92] pod "kube-proxy-rfsm8" in "kube-system" namespace has status "Ready":"True"
	I0501 02:33:26.492804   32853 pod_ready.go:81] duration metric: took 396.427668ms for pod "kube-proxy-rfsm8" in "kube-system" namespace to be "Ready" ...
	I0501 02:33:26.492818   32853 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-329926" in "kube-system" namespace to be "Ready" ...
	I0501 02:33:26.688800   32853 request.go:629] Waited for 195.916931ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-329926
	I0501 02:33:26.688858   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-329926
	I0501 02:33:26.688862   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:26.688871   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:26.688877   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:26.692819   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:33:26.889221   32853 request.go:629] Waited for 195.41555ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/nodes/ha-329926
	I0501 02:33:26.889280   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926
	I0501 02:33:26.889285   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:26.889293   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:26.889297   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:26.893122   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:33:26.894152   32853 pod_ready.go:92] pod "kube-scheduler-ha-329926" in "kube-system" namespace has status "Ready":"True"
	I0501 02:33:26.894171   32853 pod_ready.go:81] duration metric: took 401.345489ms for pod "kube-scheduler-ha-329926" in "kube-system" namespace to be "Ready" ...
	I0501 02:33:26.894180   32853 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-329926-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:33:27.089385   32853 request.go:629] Waited for 195.12619ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-329926-m02
	I0501 02:33:27.089452   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-329926-m02
	I0501 02:33:27.089458   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:27.089465   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:27.089469   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:27.093418   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:33:27.289741   32853 request.go:629] Waited for 195.55559ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/nodes/ha-329926-m02
	I0501 02:33:27.289799   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m02
	I0501 02:33:27.289805   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:27.289812   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:27.289817   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:27.294038   32853 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:33:27.295226   32853 pod_ready.go:92] pod "kube-scheduler-ha-329926-m02" in "kube-system" namespace has status "Ready":"True"
	I0501 02:33:27.295243   32853 pod_ready.go:81] duration metric: took 401.057138ms for pod "kube-scheduler-ha-329926-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:33:27.295253   32853 pod_ready.go:38] duration metric: took 8.801905402s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
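Each pod_ready.go wait above follows the same pattern: poll the pod until its Ready condition reports True (the paired node GETs in the log re-check the owning node between attempts). A minimal client-go sketch of that polling step follows; waitForPodReady is an illustrative helper, not minikube's actual code, and the 500ms interval is an assumption.

    // Poll a pod until its Ready condition is True (illustrative sketch).
    package readiness

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    func waitForPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
            func(ctx context.Context) (bool, error) {
                pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // transient API error: keep polling
                }
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }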
	I0501 02:33:27.295268   32853 api_server.go:52] waiting for apiserver process to appear ...
	I0501 02:33:27.295334   32853 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 02:33:27.311865   32853 api_server.go:72] duration metric: took 15.648969816s to wait for apiserver process to appear ...
	I0501 02:33:27.311894   32853 api_server.go:88] waiting for apiserver healthz status ...
	I0501 02:33:27.311919   32853 api_server.go:253] Checking apiserver healthz at https://192.168.39.5:8443/healthz ...
	I0501 02:33:27.317230   32853 api_server.go:279] https://192.168.39.5:8443/healthz returned 200:
	ok
	I0501 02:33:27.317286   32853 round_trippers.go:463] GET https://192.168.39.5:8443/version
	I0501 02:33:27.317294   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:27.317301   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:27.317307   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:27.318471   32853 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0501 02:33:27.318732   32853 api_server.go:141] control plane version: v1.30.0
	I0501 02:33:27.318751   32853 api_server.go:131] duration metric: took 6.850306ms to wait for apiserver health ...
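The healthz probe and the /version read above are two small calls against the same authenticated client. A sketch under that assumption; checkAPIServer is an illustrative name, not minikube's api_server.go helper.

    // Probe /healthz and read the server version (illustrative sketch).
    package apicheck

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
    )

    func checkAPIServer(ctx context.Context, cs *kubernetes.Clientset) error {
        // GET /healthz through the authenticated REST client; a healthy apiserver answers "ok".
        body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
        if err != nil {
            return fmt.Errorf("healthz: %w", err)
        }
        if string(body) != "ok" {
            return fmt.Errorf("healthz returned %q", body)
        }
        // GET /version; GitVersion carries the control plane version (v1.30.0 above).
        v, err := cs.Discovery().ServerVersion()
        if err != nil {
            return fmt.Errorf("version: %w", err)
        }
        fmt.Println("control plane version:", v.GitVersion)
        return nil
    }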
	I0501 02:33:27.318758   32853 system_pods.go:43] waiting for kube-system pods to appear ...
	I0501 02:33:27.489151   32853 request.go:629] Waited for 170.324079ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods
	I0501 02:33:27.489223   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods
	I0501 02:33:27.489229   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:27.489239   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:27.489251   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:27.496035   32853 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:33:27.501718   32853 system_pods.go:59] 17 kube-system pods found
	I0501 02:33:27.501745   32853 system_pods.go:61] "coredns-7db6d8ff4d-2h8lc" [937e09f0-6a7d-4387-aa19-ee959eb5a2a5] Running
	I0501 02:33:27.501750   32853 system_pods.go:61] "coredns-7db6d8ff4d-cfdqc" [a37e982e-9e4f-43bf-b957-0d6f082f4ec8] Running
	I0501 02:33:27.501754   32853 system_pods.go:61] "etcd-ha-329926" [f0e4ae2a-a8cc-42b2-9865-fb6ec3f41acb] Running
	I0501 02:33:27.501757   32853 system_pods.go:61] "etcd-ha-329926-m02" [4ed5b754-bb3d-46de-a5b9-ff46994f25ad] Running
	I0501 02:33:27.501760   32853 system_pods.go:61] "kindnet-9r8zn" [fc187c8a-a964-45e1-adb0-f5ce23922b66] Running
	I0501 02:33:27.501762   32853 system_pods.go:61] "kindnet-kcmp7" [8e15c166-9ba1-40c9-8f33-db7f83733932] Running
	I0501 02:33:27.501765   32853 system_pods.go:61] "kube-apiserver-ha-329926" [49c47f4f-663a-4407-9d46-94fa3afbf349] Running
	I0501 02:33:27.501769   32853 system_pods.go:61] "kube-apiserver-ha-329926-m02" [886d1acc-021c-4f8b-b477-b9760260aabb] Running
	I0501 02:33:27.501773   32853 system_pods.go:61] "kube-controller-manager-ha-329926" [332785d8-9966-4823-8828-fa5e90b4aac1] Running
	I0501 02:33:27.501779   32853 system_pods.go:61] "kube-controller-manager-ha-329926-m02" [91d97fa7-6409-4620-b569-c391d21a5915] Running
	I0501 02:33:27.501783   32853 system_pods.go:61] "kube-proxy-msshn" [7575fbfc-11ce-4223-bd99-ff9cdddd3568] Running
	I0501 02:33:27.501788   32853 system_pods.go:61] "kube-proxy-rfsm8" [f0510b55-1b59-4239-b529-b7af4d017c06] Running
	I0501 02:33:27.501796   32853 system_pods.go:61] "kube-scheduler-ha-329926" [7d45e3e9-cc7e-4b69-9219-61c3006013ea] Running
	I0501 02:33:27.501801   32853 system_pods.go:61] "kube-scheduler-ha-329926-m02" [075e127f-debf-4dd4-babd-be0930fb2ef7] Running
	I0501 02:33:27.501820   32853 system_pods.go:61] "kube-vip-ha-329926" [0fbbb815-441d-48d0-b0cf-1bb57ff6d993] Running
	I0501 02:33:27.501824   32853 system_pods.go:61] "kube-vip-ha-329926-m02" [92c115f8-bb9c-4a86-b914-984985a69916] Running
	I0501 02:33:27.501827   32853 system_pods.go:61] "storage-provisioner" [371423a6-a156-4e8d-bf66-812d606cc8d7] Running
	I0501 02:33:27.501833   32853 system_pods.go:74] duration metric: took 183.069484ms to wait for pod list to return data ...
	I0501 02:33:27.501842   32853 default_sa.go:34] waiting for default service account to be created ...
	I0501 02:33:27.689121   32853 request.go:629] Waited for 187.222295ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/namespaces/default/serviceaccounts
	I0501 02:33:27.689173   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/default/serviceaccounts
	I0501 02:33:27.689191   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:27.689217   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:27.689228   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:27.693649   32853 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:33:27.693890   32853 default_sa.go:45] found service account: "default"
	I0501 02:33:27.693908   32853 default_sa.go:55] duration metric: took 192.059311ms for default service account to be created ...
	I0501 02:33:27.693918   32853 system_pods.go:116] waiting for k8s-apps to be running ...
	I0501 02:33:27.889175   32853 request.go:629] Waited for 195.171272ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods
	I0501 02:33:27.889228   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods
	I0501 02:33:27.889239   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:27.889252   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:27.889260   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:27.895684   32853 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:33:27.900752   32853 system_pods.go:86] 17 kube-system pods found
	I0501 02:33:27.900784   32853 system_pods.go:89] "coredns-7db6d8ff4d-2h8lc" [937e09f0-6a7d-4387-aa19-ee959eb5a2a5] Running
	I0501 02:33:27.900792   32853 system_pods.go:89] "coredns-7db6d8ff4d-cfdqc" [a37e982e-9e4f-43bf-b957-0d6f082f4ec8] Running
	I0501 02:33:27.900798   32853 system_pods.go:89] "etcd-ha-329926" [f0e4ae2a-a8cc-42b2-9865-fb6ec3f41acb] Running
	I0501 02:33:27.900804   32853 system_pods.go:89] "etcd-ha-329926-m02" [4ed5b754-bb3d-46de-a5b9-ff46994f25ad] Running
	I0501 02:33:27.900810   32853 system_pods.go:89] "kindnet-9r8zn" [fc187c8a-a964-45e1-adb0-f5ce23922b66] Running
	I0501 02:33:27.900816   32853 system_pods.go:89] "kindnet-kcmp7" [8e15c166-9ba1-40c9-8f33-db7f83733932] Running
	I0501 02:33:27.900822   32853 system_pods.go:89] "kube-apiserver-ha-329926" [49c47f4f-663a-4407-9d46-94fa3afbf349] Running
	I0501 02:33:27.900829   32853 system_pods.go:89] "kube-apiserver-ha-329926-m02" [886d1acc-021c-4f8b-b477-b9760260aabb] Running
	I0501 02:33:27.900840   32853 system_pods.go:89] "kube-controller-manager-ha-329926" [332785d8-9966-4823-8828-fa5e90b4aac1] Running
	I0501 02:33:27.900847   32853 system_pods.go:89] "kube-controller-manager-ha-329926-m02" [91d97fa7-6409-4620-b569-c391d21a5915] Running
	I0501 02:33:27.900853   32853 system_pods.go:89] "kube-proxy-msshn" [7575fbfc-11ce-4223-bd99-ff9cdddd3568] Running
	I0501 02:33:27.900864   32853 system_pods.go:89] "kube-proxy-rfsm8" [f0510b55-1b59-4239-b529-b7af4d017c06] Running
	I0501 02:33:27.900871   32853 system_pods.go:89] "kube-scheduler-ha-329926" [7d45e3e9-cc7e-4b69-9219-61c3006013ea] Running
	I0501 02:33:27.900880   32853 system_pods.go:89] "kube-scheduler-ha-329926-m02" [075e127f-debf-4dd4-babd-be0930fb2ef7] Running
	I0501 02:33:27.900887   32853 system_pods.go:89] "kube-vip-ha-329926" [0fbbb815-441d-48d0-b0cf-1bb57ff6d993] Running
	I0501 02:33:27.900895   32853 system_pods.go:89] "kube-vip-ha-329926-m02" [92c115f8-bb9c-4a86-b914-984985a69916] Running
	I0501 02:33:27.900904   32853 system_pods.go:89] "storage-provisioner" [371423a6-a156-4e8d-bf66-812d606cc8d7] Running
	I0501 02:33:27.900913   32853 system_pods.go:126] duration metric: took 206.988594ms to wait for k8s-apps to be running ...
	I0501 02:33:27.900927   32853 system_svc.go:44] waiting for kubelet service to be running ....
	I0501 02:33:27.900977   32853 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 02:33:27.917057   32853 system_svc.go:56] duration metric: took 16.105865ms WaitForService to wait for kubelet
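The kubelet check above boils down to the exit status of a single systemctl invocation run on the node over SSH. A local-equivalent sketch; kubeletActive is an illustrative helper.

    // `systemctl is-active --quiet kubelet` exits 0 iff the unit is active.
    package svccheck

    import "os/exec"

    func kubeletActive() bool {
        // --quiet suppresses output; only the exit status matters here.
        return exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run() == nil
    }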
	I0501 02:33:27.917082   32853 kubeadm.go:576] duration metric: took 16.254189789s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0501 02:33:27.917099   32853 node_conditions.go:102] verifying NodePressure condition ...
	I0501 02:33:28.089485   32853 request.go:629] Waited for 172.305995ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/nodes
	I0501 02:33:28.089541   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes
	I0501 02:33:28.089546   32853 round_trippers.go:469] Request Headers:
	I0501 02:33:28.089553   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:28.089557   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:33:28.093499   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:33:28.094277   32853 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 02:33:28.094298   32853 node_conditions.go:123] node cpu capacity is 2
	I0501 02:33:28.094312   32853 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 02:33:28.094318   32853 node_conditions.go:123] node cpu capacity is 2
	I0501 02:33:28.094323   32853 node_conditions.go:105] duration metric: took 177.218719ms to run NodePressure ...
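The NodePressure step reads each node's capacity from the API (the 17734596Ki ephemeral-storage and 2-CPU figures above come from Node.Status.Capacity). A client-go sketch of that read; printNodeCapacity is an illustrative name.

    // List nodes and print the capacity fields checked above (illustrative sketch).
    package nodecheck

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func printNodeCapacity(ctx context.Context, cs kubernetes.Interface) error {
        nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, n := range nodes.Items {
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
        }
        return nil
    }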
	I0501 02:33:28.094336   32853 start.go:240] waiting for startup goroutines ...
	I0501 02:33:28.094364   32853 start.go:254] writing updated cluster config ...
	I0501 02:33:28.096419   32853 out.go:177] 
	I0501 02:33:28.097791   32853 config.go:182] Loaded profile config "ha-329926": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 02:33:28.097893   32853 profile.go:143] Saving config to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/config.json ...
	I0501 02:33:28.099538   32853 out.go:177] * Starting "ha-329926-m03" control-plane node in "ha-329926" cluster
	I0501 02:33:28.100767   32853 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0501 02:33:28.100801   32853 cache.go:56] Caching tarball of preloaded images
	I0501 02:33:28.100915   32853 preload.go:173] Found /home/jenkins/minikube-integration/18779-13391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0501 02:33:28.100932   32853 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0501 02:33:28.101053   32853 profile.go:143] Saving config to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/config.json ...
	I0501 02:33:28.101262   32853 start.go:360] acquireMachinesLock for ha-329926-m03: {Name:mkc4548e5afe1d4b0a833f6c522103562b0cefff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0501 02:33:28.101311   32853 start.go:364] duration metric: took 25.235µs to acquireMachinesLock for "ha-329926-m03"
	I0501 02:33:28.101336   32853 start.go:93] Provisioning new machine with config: &{Name:ha-329926 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-329926 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.5 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.79 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0501 02:33:28.101461   32853 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0501 02:33:28.103040   32853 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0501 02:33:28.103111   32853 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:33:28.103139   32853 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:33:28.117788   32853 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37789
	I0501 02:33:28.118265   32853 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:33:28.118822   32853 main.go:141] libmachine: Using API Version  1
	I0501 02:33:28.118846   32853 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:33:28.119143   32853 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:33:28.119367   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetMachineName
	I0501 02:33:28.119501   32853 main.go:141] libmachine: (ha-329926-m03) Calling .DriverName
	I0501 02:33:28.119666   32853 start.go:159] libmachine.API.Create for "ha-329926" (driver="kvm2")
	I0501 02:33:28.119696   32853 client.go:168] LocalClient.Create starting
	I0501 02:33:28.119739   32853 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem
	I0501 02:33:28.119778   32853 main.go:141] libmachine: Decoding PEM data...
	I0501 02:33:28.119800   32853 main.go:141] libmachine: Parsing certificate...
	I0501 02:33:28.119866   32853 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem
	I0501 02:33:28.119891   32853 main.go:141] libmachine: Decoding PEM data...
	I0501 02:33:28.119910   32853 main.go:141] libmachine: Parsing certificate...
	I0501 02:33:28.119931   32853 main.go:141] libmachine: Running pre-create checks...
	I0501 02:33:28.119942   32853 main.go:141] libmachine: (ha-329926-m03) Calling .PreCreateCheck
	I0501 02:33:28.120080   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetConfigRaw
	I0501 02:33:28.120474   32853 main.go:141] libmachine: Creating machine...
	I0501 02:33:28.120492   32853 main.go:141] libmachine: (ha-329926-m03) Calling .Create
	I0501 02:33:28.120604   32853 main.go:141] libmachine: (ha-329926-m03) Creating KVM machine...
	I0501 02:33:28.122036   32853 main.go:141] libmachine: (ha-329926-m03) DBG | found existing default KVM network
	I0501 02:33:28.122204   32853 main.go:141] libmachine: (ha-329926-m03) DBG | found existing private KVM network mk-ha-329926
	I0501 02:33:28.122370   32853 main.go:141] libmachine: (ha-329926-m03) Setting up store path in /home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m03 ...
	I0501 02:33:28.122409   32853 main.go:141] libmachine: (ha-329926-m03) Building disk image from file:///home/jenkins/minikube-integration/18779-13391/.minikube/cache/iso/amd64/minikube-v1.33.0-1714498396-18779-amd64.iso
	I0501 02:33:28.122457   32853 main.go:141] libmachine: (ha-329926-m03) DBG | I0501 02:33:28.122345   33738 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18779-13391/.minikube
	I0501 02:33:28.122564   32853 main.go:141] libmachine: (ha-329926-m03) Downloading /home/jenkins/minikube-integration/18779-13391/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18779-13391/.minikube/cache/iso/amd64/minikube-v1.33.0-1714498396-18779-amd64.iso...
	I0501 02:33:28.332066   32853 main.go:141] libmachine: (ha-329926-m03) DBG | I0501 02:33:28.331943   33738 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m03/id_rsa...
	I0501 02:33:28.547024   32853 main.go:141] libmachine: (ha-329926-m03) DBG | I0501 02:33:28.546919   33738 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m03/ha-329926-m03.rawdisk...
	I0501 02:33:28.547051   32853 main.go:141] libmachine: (ha-329926-m03) DBG | Writing magic tar header
	I0501 02:33:28.547061   32853 main.go:141] libmachine: (ha-329926-m03) DBG | Writing SSH key tar header
	I0501 02:33:28.547069   32853 main.go:141] libmachine: (ha-329926-m03) DBG | I0501 02:33:28.547024   33738 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m03 ...
	I0501 02:33:28.547158   32853 main.go:141] libmachine: (ha-329926-m03) Setting executable bit set on /home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m03 (perms=drwx------)
	I0501 02:33:28.547182   32853 main.go:141] libmachine: (ha-329926-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m03
	I0501 02:33:28.547190   32853 main.go:141] libmachine: (ha-329926-m03) Setting executable bit set on /home/jenkins/minikube-integration/18779-13391/.minikube/machines (perms=drwxr-xr-x)
	I0501 02:33:28.547197   32853 main.go:141] libmachine: (ha-329926-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18779-13391/.minikube/machines
	I0501 02:33:28.547207   32853 main.go:141] libmachine: (ha-329926-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18779-13391/.minikube
	I0501 02:33:28.547214   32853 main.go:141] libmachine: (ha-329926-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18779-13391
	I0501 02:33:28.547226   32853 main.go:141] libmachine: (ha-329926-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0501 02:33:28.547232   32853 main.go:141] libmachine: (ha-329926-m03) DBG | Checking permissions on dir: /home/jenkins
	I0501 02:33:28.547238   32853 main.go:141] libmachine: (ha-329926-m03) DBG | Checking permissions on dir: /home
	I0501 02:33:28.547243   32853 main.go:141] libmachine: (ha-329926-m03) DBG | Skipping /home - not owner
	I0501 02:33:28.547257   32853 main.go:141] libmachine: (ha-329926-m03) Setting executable bit set on /home/jenkins/minikube-integration/18779-13391/.minikube (perms=drwxr-xr-x)
	I0501 02:33:28.547269   32853 main.go:141] libmachine: (ha-329926-m03) Setting executable bit set on /home/jenkins/minikube-integration/18779-13391 (perms=drwxrwxr-x)
	I0501 02:33:28.547280   32853 main.go:141] libmachine: (ha-329926-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0501 02:33:28.547285   32853 main.go:141] libmachine: (ha-329926-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0501 02:33:28.547292   32853 main.go:141] libmachine: (ha-329926-m03) Creating domain...
	I0501 02:33:28.548119   32853 main.go:141] libmachine: (ha-329926-m03) define libvirt domain using xml: 
	I0501 02:33:28.548141   32853 main.go:141] libmachine: (ha-329926-m03) <domain type='kvm'>
	I0501 02:33:28.548157   32853 main.go:141] libmachine: (ha-329926-m03)   <name>ha-329926-m03</name>
	I0501 02:33:28.548166   32853 main.go:141] libmachine: (ha-329926-m03)   <memory unit='MiB'>2200</memory>
	I0501 02:33:28.548177   32853 main.go:141] libmachine: (ha-329926-m03)   <vcpu>2</vcpu>
	I0501 02:33:28.548188   32853 main.go:141] libmachine: (ha-329926-m03)   <features>
	I0501 02:33:28.548197   32853 main.go:141] libmachine: (ha-329926-m03)     <acpi/>
	I0501 02:33:28.548210   32853 main.go:141] libmachine: (ha-329926-m03)     <apic/>
	I0501 02:33:28.548229   32853 main.go:141] libmachine: (ha-329926-m03)     <pae/>
	I0501 02:33:28.548245   32853 main.go:141] libmachine: (ha-329926-m03)     
	I0501 02:33:28.548257   32853 main.go:141] libmachine: (ha-329926-m03)   </features>
	I0501 02:33:28.548272   32853 main.go:141] libmachine: (ha-329926-m03)   <cpu mode='host-passthrough'>
	I0501 02:33:28.548298   32853 main.go:141] libmachine: (ha-329926-m03)   
	I0501 02:33:28.548321   32853 main.go:141] libmachine: (ha-329926-m03)   </cpu>
	I0501 02:33:28.548330   32853 main.go:141] libmachine: (ha-329926-m03)   <os>
	I0501 02:33:28.548343   32853 main.go:141] libmachine: (ha-329926-m03)     <type>hvm</type>
	I0501 02:33:28.548358   32853 main.go:141] libmachine: (ha-329926-m03)     <boot dev='cdrom'/>
	I0501 02:33:28.548369   32853 main.go:141] libmachine: (ha-329926-m03)     <boot dev='hd'/>
	I0501 02:33:28.548378   32853 main.go:141] libmachine: (ha-329926-m03)     <bootmenu enable='no'/>
	I0501 02:33:28.548388   32853 main.go:141] libmachine: (ha-329926-m03)   </os>
	I0501 02:33:28.548396   32853 main.go:141] libmachine: (ha-329926-m03)   <devices>
	I0501 02:33:28.548407   32853 main.go:141] libmachine: (ha-329926-m03)     <disk type='file' device='cdrom'>
	I0501 02:33:28.548425   32853 main.go:141] libmachine: (ha-329926-m03)       <source file='/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m03/boot2docker.iso'/>
	I0501 02:33:28.548436   32853 main.go:141] libmachine: (ha-329926-m03)       <target dev='hdc' bus='scsi'/>
	I0501 02:33:28.548446   32853 main.go:141] libmachine: (ha-329926-m03)       <readonly/>
	I0501 02:33:28.548455   32853 main.go:141] libmachine: (ha-329926-m03)     </disk>
	I0501 02:33:28.548465   32853 main.go:141] libmachine: (ha-329926-m03)     <disk type='file' device='disk'>
	I0501 02:33:28.548476   32853 main.go:141] libmachine: (ha-329926-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0501 02:33:28.548485   32853 main.go:141] libmachine: (ha-329926-m03)       <source file='/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m03/ha-329926-m03.rawdisk'/>
	I0501 02:33:28.548496   32853 main.go:141] libmachine: (ha-329926-m03)       <target dev='hda' bus='virtio'/>
	I0501 02:33:28.548511   32853 main.go:141] libmachine: (ha-329926-m03)     </disk>
	I0501 02:33:28.548528   32853 main.go:141] libmachine: (ha-329926-m03)     <interface type='network'>
	I0501 02:33:28.548544   32853 main.go:141] libmachine: (ha-329926-m03)       <source network='mk-ha-329926'/>
	I0501 02:33:28.548556   32853 main.go:141] libmachine: (ha-329926-m03)       <model type='virtio'/>
	I0501 02:33:28.548569   32853 main.go:141] libmachine: (ha-329926-m03)     </interface>
	I0501 02:33:28.548581   32853 main.go:141] libmachine: (ha-329926-m03)     <interface type='network'>
	I0501 02:33:28.548608   32853 main.go:141] libmachine: (ha-329926-m03)       <source network='default'/>
	I0501 02:33:28.548638   32853 main.go:141] libmachine: (ha-329926-m03)       <model type='virtio'/>
	I0501 02:33:28.548649   32853 main.go:141] libmachine: (ha-329926-m03)     </interface>
	I0501 02:33:28.548660   32853 main.go:141] libmachine: (ha-329926-m03)     <serial type='pty'>
	I0501 02:33:28.548670   32853 main.go:141] libmachine: (ha-329926-m03)       <target port='0'/>
	I0501 02:33:28.548680   32853 main.go:141] libmachine: (ha-329926-m03)     </serial>
	I0501 02:33:28.548691   32853 main.go:141] libmachine: (ha-329926-m03)     <console type='pty'>
	I0501 02:33:28.548701   32853 main.go:141] libmachine: (ha-329926-m03)       <target type='serial' port='0'/>
	I0501 02:33:28.548712   32853 main.go:141] libmachine: (ha-329926-m03)     </console>
	I0501 02:33:28.548722   32853 main.go:141] libmachine: (ha-329926-m03)     <rng model='virtio'>
	I0501 02:33:28.548736   32853 main.go:141] libmachine: (ha-329926-m03)       <backend model='random'>/dev/random</backend>
	I0501 02:33:28.548752   32853 main.go:141] libmachine: (ha-329926-m03)     </rng>
	I0501 02:33:28.548764   32853 main.go:141] libmachine: (ha-329926-m03)     
	I0501 02:33:28.548775   32853 main.go:141] libmachine: (ha-329926-m03)     
	I0501 02:33:28.548787   32853 main.go:141] libmachine: (ha-329926-m03)   </devices>
	I0501 02:33:28.548797   32853 main.go:141] libmachine: (ha-329926-m03) </domain>
	I0501 02:33:28.548809   32853 main.go:141] libmachine: (ha-329926-m03) 
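The XML above is the libvirt domain definition the kvm2 driver submits for ha-329926-m03. Outside minikube, an equivalent definition could be applied with virsh; the sketch below shells out to virsh for illustration only (the driver itself talks to the libvirt API directly, and the XML path is hypothetical).

    // Define and start a libvirt domain from an XML file (illustrative sketch).
    package domain

    import (
        "fmt"
        "os/exec"
    )

    func defineAndStart(xmlPath, name string) error {
        if out, err := exec.Command("virsh", "define", xmlPath).CombinedOutput(); err != nil {
            return fmt.Errorf("virsh define: %v: %s", err, out)
        }
        if out, err := exec.Command("virsh", "start", name).CombinedOutput(); err != nil {
            return fmt.Errorf("virsh start: %v: %s", err, out)
        }
        return nil
    }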
	I0501 02:33:28.555383   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined MAC address 52:54:00:67:d4:d7 in network default
	I0501 02:33:28.555898   32853 main.go:141] libmachine: (ha-329926-m03) Ensuring networks are active...
	I0501 02:33:28.555917   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:28.556546   32853 main.go:141] libmachine: (ha-329926-m03) Ensuring network default is active
	I0501 02:33:28.556865   32853 main.go:141] libmachine: (ha-329926-m03) Ensuring network mk-ha-329926 is active
	I0501 02:33:28.557213   32853 main.go:141] libmachine: (ha-329926-m03) Getting domain xml...
	I0501 02:33:28.557937   32853 main.go:141] libmachine: (ha-329926-m03) Creating domain...
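For orientation: the XML dump above is what the kvm2 driver hands to libvirt before "Creating domain...". A minimal sketch, assuming the libvirt.org/go/libvirt bindings, of defining and booting such a domain; this is illustrative only, not minikube's driver code, and the file name and connection URI are assumptions.

package main

import (
	"log"
	"os"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	// Read a domain definition like the one logged above (hypothetical file name).
	domainXML, err := os.ReadFile("ha-329926-m03.xml")
	if err != nil {
		log.Fatal(err)
	}

	// qemu:///system is the usual URI for host-level KVM guests.
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Define the persistent domain from XML, then boot it.
	dom, err := conn.DomainDefineXML(string(domainXML))
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil {
		log.Fatal(err)
	}
	log.Println("domain defined and started; next step is polling DHCP leases for its IP")
}
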
	I0501 02:33:29.753981   32853 main.go:141] libmachine: (ha-329926-m03) Waiting to get IP...
	I0501 02:33:29.754874   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:29.755233   32853 main.go:141] libmachine: (ha-329926-m03) DBG | unable to find current IP address of domain ha-329926-m03 in network mk-ha-329926
	I0501 02:33:29.755257   32853 main.go:141] libmachine: (ha-329926-m03) DBG | I0501 02:33:29.755213   33738 retry.go:31] will retry after 264.426048ms: waiting for machine to come up
	I0501 02:33:30.021622   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:30.022090   32853 main.go:141] libmachine: (ha-329926-m03) DBG | unable to find current IP address of domain ha-329926-m03 in network mk-ha-329926
	I0501 02:33:30.022125   32853 main.go:141] libmachine: (ha-329926-m03) DBG | I0501 02:33:30.022034   33738 retry.go:31] will retry after 236.771649ms: waiting for machine to come up
	I0501 02:33:30.260504   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:30.260950   32853 main.go:141] libmachine: (ha-329926-m03) DBG | unable to find current IP address of domain ha-329926-m03 in network mk-ha-329926
	I0501 02:33:30.260982   32853 main.go:141] libmachine: (ha-329926-m03) DBG | I0501 02:33:30.260914   33738 retry.go:31] will retry after 381.572111ms: waiting for machine to come up
	I0501 02:33:30.644643   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:30.645170   32853 main.go:141] libmachine: (ha-329926-m03) DBG | unable to find current IP address of domain ha-329926-m03 in network mk-ha-329926
	I0501 02:33:30.645211   32853 main.go:141] libmachine: (ha-329926-m03) DBG | I0501 02:33:30.645114   33738 retry.go:31] will retry after 576.635524ms: waiting for machine to come up
	I0501 02:33:31.223856   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:31.224393   32853 main.go:141] libmachine: (ha-329926-m03) DBG | unable to find current IP address of domain ha-329926-m03 in network mk-ha-329926
	I0501 02:33:31.224423   32853 main.go:141] libmachine: (ha-329926-m03) DBG | I0501 02:33:31.224340   33738 retry.go:31] will retry after 695.353018ms: waiting for machine to come up
	I0501 02:33:31.920747   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:31.921137   32853 main.go:141] libmachine: (ha-329926-m03) DBG | unable to find current IP address of domain ha-329926-m03 in network mk-ha-329926
	I0501 02:33:31.921166   32853 main.go:141] libmachine: (ha-329926-m03) DBG | I0501 02:33:31.921101   33738 retry.go:31] will retry after 744.069404ms: waiting for machine to come up
	I0501 02:33:32.666979   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:32.667389   32853 main.go:141] libmachine: (ha-329926-m03) DBG | unable to find current IP address of domain ha-329926-m03 in network mk-ha-329926
	I0501 02:33:32.667414   32853 main.go:141] libmachine: (ha-329926-m03) DBG | I0501 02:33:32.667359   33738 retry.go:31] will retry after 1.005854202s: waiting for machine to come up
	I0501 02:33:33.675019   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:33.675426   32853 main.go:141] libmachine: (ha-329926-m03) DBG | unable to find current IP address of domain ha-329926-m03 in network mk-ha-329926
	I0501 02:33:33.675449   32853 main.go:141] libmachine: (ha-329926-m03) DBG | I0501 02:33:33.675390   33738 retry.go:31] will retry after 1.01541658s: waiting for machine to come up
	I0501 02:33:34.692612   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:34.693194   32853 main.go:141] libmachine: (ha-329926-m03) DBG | unable to find current IP address of domain ha-329926-m03 in network mk-ha-329926
	I0501 02:33:34.693223   32853 main.go:141] libmachine: (ha-329926-m03) DBG | I0501 02:33:34.693141   33738 retry.go:31] will retry after 1.74258816s: waiting for machine to come up
	I0501 02:33:36.437450   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:36.437789   32853 main.go:141] libmachine: (ha-329926-m03) DBG | unable to find current IP address of domain ha-329926-m03 in network mk-ha-329926
	I0501 02:33:36.437830   32853 main.go:141] libmachine: (ha-329926-m03) DBG | I0501 02:33:36.437746   33738 retry.go:31] will retry after 1.680882888s: waiting for machine to come up
	I0501 02:33:38.120586   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:38.121045   32853 main.go:141] libmachine: (ha-329926-m03) DBG | unable to find current IP address of domain ha-329926-m03 in network mk-ha-329926
	I0501 02:33:38.121070   32853 main.go:141] libmachine: (ha-329926-m03) DBG | I0501 02:33:38.121011   33738 retry.go:31] will retry after 2.761042118s: waiting for machine to come up
	I0501 02:33:40.883703   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:40.884076   32853 main.go:141] libmachine: (ha-329926-m03) DBG | unable to find current IP address of domain ha-329926-m03 in network mk-ha-329926
	I0501 02:33:40.884117   32853 main.go:141] libmachine: (ha-329926-m03) DBG | I0501 02:33:40.884008   33738 retry.go:31] will retry after 2.930624255s: waiting for machine to come up
	I0501 02:33:43.816571   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:43.816974   32853 main.go:141] libmachine: (ha-329926-m03) DBG | unable to find current IP address of domain ha-329926-m03 in network mk-ha-329926
	I0501 02:33:43.817009   32853 main.go:141] libmachine: (ha-329926-m03) DBG | I0501 02:33:43.816935   33738 retry.go:31] will retry after 3.065921207s: waiting for machine to come up
	I0501 02:33:46.884687   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:46.885111   32853 main.go:141] libmachine: (ha-329926-m03) DBG | unable to find current IP address of domain ha-329926-m03 in network mk-ha-329926
	I0501 02:33:46.885137   32853 main.go:141] libmachine: (ha-329926-m03) DBG | I0501 02:33:46.885085   33738 retry.go:31] will retry after 3.477878953s: waiting for machine to come up
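The "will retry after ..." lines above come from a polling loop that waits for the guest to obtain a DHCP lease, sleeping a growing, jittered delay between attempts. A rough generic sketch of that pattern follows; lookupIP is a hypothetical stand-in, and the backoff constants are assumptions, not the values used by minikube's retry.go.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a hypothetical stand-in for querying the libvirt network's DHCP leases.
func lookupIP(mac string) (string, error) {
	return "", errors.New("no lease yet")
}

func waitForIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(mac); err == nil {
			return ip, nil
		}
		// Jittered, growing delay, similar in spirit to the log above.
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		if delay < 5*time.Second {
			delay = delay * 3 / 2
		}
	}
	return "", fmt.Errorf("timed out waiting for IP of %s", mac)
}

func main() {
	if ip, err := waitForIP("52:54:00:f9:eb:7d", 2*time.Minute); err == nil {
		fmt.Println("found IP:", ip)
	}
}
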
	I0501 02:33:50.365711   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:50.366223   32853 main.go:141] libmachine: (ha-329926-m03) Found IP for machine: 192.168.39.115
	I0501 02:33:50.366257   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has current primary IP address 192.168.39.115 and MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:50.366266   32853 main.go:141] libmachine: (ha-329926-m03) Reserving static IP address...
	I0501 02:33:50.366601   32853 main.go:141] libmachine: (ha-329926-m03) DBG | unable to find host DHCP lease matching {name: "ha-329926-m03", mac: "52:54:00:f9:eb:7d", ip: "192.168.39.115"} in network mk-ha-329926
	I0501 02:33:50.439427   32853 main.go:141] libmachine: (ha-329926-m03) DBG | Getting to WaitForSSH function...
	I0501 02:33:50.439449   32853 main.go:141] libmachine: (ha-329926-m03) Reserved static IP address: 192.168.39.115
	I0501 02:33:50.439462   32853 main.go:141] libmachine: (ha-329926-m03) Waiting for SSH to be available...
	I0501 02:33:50.441962   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:50.442330   32853 main.go:141] libmachine: (ha-329926-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:f9:eb:7d", ip: ""} in network mk-ha-329926
	I0501 02:33:50.442357   32853 main.go:141] libmachine: (ha-329926-m03) DBG | unable to find defined IP address of network mk-ha-329926 interface with MAC address 52:54:00:f9:eb:7d
	I0501 02:33:50.442600   32853 main.go:141] libmachine: (ha-329926-m03) DBG | Using SSH client type: external
	I0501 02:33:50.442628   32853 main.go:141] libmachine: (ha-329926-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m03/id_rsa (-rw-------)
	I0501 02:33:50.442655   32853 main.go:141] libmachine: (ha-329926-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0501 02:33:50.442668   32853 main.go:141] libmachine: (ha-329926-m03) DBG | About to run SSH command:
	I0501 02:33:50.442704   32853 main.go:141] libmachine: (ha-329926-m03) DBG | exit 0
	I0501 02:33:50.446087   32853 main.go:141] libmachine: (ha-329926-m03) DBG | SSH cmd err, output: exit status 255: 
	I0501 02:33:50.446106   32853 main.go:141] libmachine: (ha-329926-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0501 02:33:50.446113   32853 main.go:141] libmachine: (ha-329926-m03) DBG | command : exit 0
	I0501 02:33:50.446121   32853 main.go:141] libmachine: (ha-329926-m03) DBG | err     : exit status 255
	I0501 02:33:50.446128   32853 main.go:141] libmachine: (ha-329926-m03) DBG | output  : 
	I0501 02:33:53.446971   32853 main.go:141] libmachine: (ha-329926-m03) DBG | Getting to WaitForSSH function...
	I0501 02:33:53.449841   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:53.450179   32853 main.go:141] libmachine: (ha-329926-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:eb:7d", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:33:44 +0000 UTC Type:0 Mac:52:54:00:f9:eb:7d Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:ha-329926-m03 Clientid:01:52:54:00:f9:eb:7d}
	I0501 02:33:53.450204   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined IP address 192.168.39.115 and MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:53.450301   32853 main.go:141] libmachine: (ha-329926-m03) DBG | Using SSH client type: external
	I0501 02:33:53.450327   32853 main.go:141] libmachine: (ha-329926-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m03/id_rsa (-rw-------)
	I0501 02:33:53.450372   32853 main.go:141] libmachine: (ha-329926-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.115 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0501 02:33:53.450389   32853 main.go:141] libmachine: (ha-329926-m03) DBG | About to run SSH command:
	I0501 02:33:53.450420   32853 main.go:141] libmachine: (ha-329926-m03) DBG | exit 0
	I0501 02:33:53.578919   32853 main.go:141] libmachine: (ha-329926-m03) DBG | SSH cmd err, output: <nil>: 
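The successful "exit 0" above is the driver's SSH liveness probe, run through the external ssh client with the option set printed in the log. A minimal equivalent using os/exec is sketched below; the flags, key path and IP are copied from the log lines above, but this is only an illustration of the probe, not the driver's implementation.

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Same option set the log shows for the external SSH client.
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", "/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m03/id_rsa",
		"-p", "22",
		"docker@192.168.39.115",
		"exit 0",
	}
	if out, err := exec.Command("ssh", args...).CombinedOutput(); err != nil {
		log.Fatalf("ssh probe failed: %v (output: %s)", err, out)
	}
	log.Println("SSH is available")
}
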
	I0501 02:33:53.579202   32853 main.go:141] libmachine: (ha-329926-m03) KVM machine creation complete!
	I0501 02:33:53.579498   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetConfigRaw
	I0501 02:33:53.580099   32853 main.go:141] libmachine: (ha-329926-m03) Calling .DriverName
	I0501 02:33:53.580316   32853 main.go:141] libmachine: (ha-329926-m03) Calling .DriverName
	I0501 02:33:53.580468   32853 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0501 02:33:53.580481   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetState
	I0501 02:33:53.581566   32853 main.go:141] libmachine: Detecting operating system of created instance...
	I0501 02:33:53.581578   32853 main.go:141] libmachine: Waiting for SSH to be available...
	I0501 02:33:53.581586   32853 main.go:141] libmachine: Getting to WaitForSSH function...
	I0501 02:33:53.581593   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHHostname
	I0501 02:33:53.584271   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:53.584731   32853 main.go:141] libmachine: (ha-329926-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:eb:7d", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:33:44 +0000 UTC Type:0 Mac:52:54:00:f9:eb:7d Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:ha-329926-m03 Clientid:01:52:54:00:f9:eb:7d}
	I0501 02:33:53.584758   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined IP address 192.168.39.115 and MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:53.584924   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHPort
	I0501 02:33:53.585094   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHKeyPath
	I0501 02:33:53.585243   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHKeyPath
	I0501 02:33:53.585381   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHUsername
	I0501 02:33:53.585530   32853 main.go:141] libmachine: Using SSH client type: native
	I0501 02:33:53.585733   32853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.115 22 <nil> <nil>}
	I0501 02:33:53.585748   32853 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0501 02:33:53.701991   32853 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0501 02:33:53.702026   32853 main.go:141] libmachine: Detecting the provisioner...
	I0501 02:33:53.702034   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHHostname
	I0501 02:33:53.704820   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:53.705152   32853 main.go:141] libmachine: (ha-329926-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:eb:7d", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:33:44 +0000 UTC Type:0 Mac:52:54:00:f9:eb:7d Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:ha-329926-m03 Clientid:01:52:54:00:f9:eb:7d}
	I0501 02:33:53.705179   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined IP address 192.168.39.115 and MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:53.705311   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHPort
	I0501 02:33:53.705484   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHKeyPath
	I0501 02:33:53.705664   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHKeyPath
	I0501 02:33:53.705762   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHUsername
	I0501 02:33:53.705942   32853 main.go:141] libmachine: Using SSH client type: native
	I0501 02:33:53.706095   32853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.115 22 <nil> <nil>}
	I0501 02:33:53.706106   32853 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0501 02:33:53.819574   32853 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0501 02:33:53.819644   32853 main.go:141] libmachine: found compatible host: buildroot
	I0501 02:33:53.819657   32853 main.go:141] libmachine: Provisioning with buildroot...
	I0501 02:33:53.819670   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetMachineName
	I0501 02:33:53.819887   32853 buildroot.go:166] provisioning hostname "ha-329926-m03"
	I0501 02:33:53.819912   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetMachineName
	I0501 02:33:53.820059   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHHostname
	I0501 02:33:53.822803   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:53.823211   32853 main.go:141] libmachine: (ha-329926-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:eb:7d", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:33:44 +0000 UTC Type:0 Mac:52:54:00:f9:eb:7d Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:ha-329926-m03 Clientid:01:52:54:00:f9:eb:7d}
	I0501 02:33:53.823238   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined IP address 192.168.39.115 and MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:53.823413   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHPort
	I0501 02:33:53.823590   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHKeyPath
	I0501 02:33:53.823759   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHKeyPath
	I0501 02:33:53.823948   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHUsername
	I0501 02:33:53.824130   32853 main.go:141] libmachine: Using SSH client type: native
	I0501 02:33:53.824345   32853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.115 22 <nil> <nil>}
	I0501 02:33:53.824365   32853 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-329926-m03 && echo "ha-329926-m03" | sudo tee /etc/hostname
	I0501 02:33:53.958301   32853 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-329926-m03
	
	I0501 02:33:53.958334   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHHostname
	I0501 02:33:53.961097   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:53.961545   32853 main.go:141] libmachine: (ha-329926-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:eb:7d", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:33:44 +0000 UTC Type:0 Mac:52:54:00:f9:eb:7d Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:ha-329926-m03 Clientid:01:52:54:00:f9:eb:7d}
	I0501 02:33:53.961576   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined IP address 192.168.39.115 and MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:53.961774   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHPort
	I0501 02:33:53.961992   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHKeyPath
	I0501 02:33:53.962163   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHKeyPath
	I0501 02:33:53.962305   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHUsername
	I0501 02:33:53.962494   32853 main.go:141] libmachine: Using SSH client type: native
	I0501 02:33:53.962643   32853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.115 22 <nil> <nil>}
	I0501 02:33:53.962660   32853 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-329926-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-329926-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-329926-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0501 02:33:54.089021   32853 main.go:141] libmachine: SSH cmd err, output: <nil>: 
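Hostname provisioning above is just a pair of shell commands executed over SSH as the docker user. A compact sketch of running such a command with golang.org/x/crypto/ssh follows; the host, user and key path mirror the log, the insecure host-key callback mirrors StrictHostKeyChecking=no, and none of this is claimed to be minikube's ssh_runner code.

package main

import (
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m03/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no
	}
	client, err := ssh.Dial("tcp", "192.168.39.115:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	// Same idea as the hostname step in the log above.
	out, err := session.CombinedOutput(`sudo hostname ha-329926-m03 && echo "ha-329926-m03" | sudo tee /etc/hostname`)
	if err != nil {
		log.Fatalf("provisioning failed: %v: %s", err, out)
	}
	log.Printf("hostname set: %s", out)
}
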
	I0501 02:33:54.089056   32853 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18779-13391/.minikube CaCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18779-13391/.minikube}
	I0501 02:33:54.089078   32853 buildroot.go:174] setting up certificates
	I0501 02:33:54.089092   32853 provision.go:84] configureAuth start
	I0501 02:33:54.089103   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetMachineName
	I0501 02:33:54.089417   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetIP
	I0501 02:33:54.091857   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:54.092181   32853 main.go:141] libmachine: (ha-329926-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:eb:7d", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:33:44 +0000 UTC Type:0 Mac:52:54:00:f9:eb:7d Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:ha-329926-m03 Clientid:01:52:54:00:f9:eb:7d}
	I0501 02:33:54.092211   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined IP address 192.168.39.115 and MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:54.092345   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHHostname
	I0501 02:33:54.094374   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:54.094820   32853 main.go:141] libmachine: (ha-329926-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:eb:7d", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:33:44 +0000 UTC Type:0 Mac:52:54:00:f9:eb:7d Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:ha-329926-m03 Clientid:01:52:54:00:f9:eb:7d}
	I0501 02:33:54.094854   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined IP address 192.168.39.115 and MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:54.095005   32853 provision.go:143] copyHostCerts
	I0501 02:33:54.095045   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem
	I0501 02:33:54.095085   32853 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem, removing ...
	I0501 02:33:54.095097   32853 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem
	I0501 02:33:54.095182   32853 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem (1078 bytes)
	I0501 02:33:54.095256   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem
	I0501 02:33:54.095276   32853 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem, removing ...
	I0501 02:33:54.095283   32853 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem
	I0501 02:33:54.095307   32853 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem (1123 bytes)
	I0501 02:33:54.095348   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem
	I0501 02:33:54.095366   32853 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem, removing ...
	I0501 02:33:54.095373   32853 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem
	I0501 02:33:54.095394   32853 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem (1675 bytes)
	I0501 02:33:54.095440   32853 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem org=jenkins.ha-329926-m03 san=[127.0.0.1 192.168.39.115 ha-329926-m03 localhost minikube]
	I0501 02:33:54.224112   32853 provision.go:177] copyRemoteCerts
	I0501 02:33:54.224166   32853 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0501 02:33:54.224187   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHHostname
	I0501 02:33:54.226746   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:54.227156   32853 main.go:141] libmachine: (ha-329926-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:eb:7d", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:33:44 +0000 UTC Type:0 Mac:52:54:00:f9:eb:7d Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:ha-329926-m03 Clientid:01:52:54:00:f9:eb:7d}
	I0501 02:33:54.227183   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined IP address 192.168.39.115 and MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:54.227375   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHPort
	I0501 02:33:54.227570   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHKeyPath
	I0501 02:33:54.227725   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHUsername
	I0501 02:33:54.227861   32853 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m03/id_rsa Username:docker}
	I0501 02:33:54.314170   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0501 02:33:54.314242   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0501 02:33:54.340949   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0501 02:33:54.341014   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0501 02:33:54.367638   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0501 02:33:54.367713   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0501 02:33:54.395065   32853 provision.go:87] duration metric: took 305.962904ms to configureAuth
	I0501 02:33:54.395096   32853 buildroot.go:189] setting minikube options for container-runtime
	I0501 02:33:54.395366   32853 config.go:182] Loaded profile config "ha-329926": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 02:33:54.395472   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHHostname
	I0501 02:33:54.398240   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:54.398716   32853 main.go:141] libmachine: (ha-329926-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:eb:7d", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:33:44 +0000 UTC Type:0 Mac:52:54:00:f9:eb:7d Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:ha-329926-m03 Clientid:01:52:54:00:f9:eb:7d}
	I0501 02:33:54.398756   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined IP address 192.168.39.115 and MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:54.398961   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHPort
	I0501 02:33:54.399148   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHKeyPath
	I0501 02:33:54.399292   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHKeyPath
	I0501 02:33:54.399469   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHUsername
	I0501 02:33:54.399651   32853 main.go:141] libmachine: Using SSH client type: native
	I0501 02:33:54.399829   32853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.115 22 <nil> <nil>}
	I0501 02:33:54.399843   32853 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0501 02:33:54.690659   32853 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0501 02:33:54.690698   32853 main.go:141] libmachine: Checking connection to Docker...
	I0501 02:33:54.690706   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetURL
	I0501 02:33:54.691918   32853 main.go:141] libmachine: (ha-329926-m03) DBG | Using libvirt version 6000000
	I0501 02:33:54.694051   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:54.694359   32853 main.go:141] libmachine: (ha-329926-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:eb:7d", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:33:44 +0000 UTC Type:0 Mac:52:54:00:f9:eb:7d Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:ha-329926-m03 Clientid:01:52:54:00:f9:eb:7d}
	I0501 02:33:54.694427   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined IP address 192.168.39.115 and MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:54.694544   32853 main.go:141] libmachine: Docker is up and running!
	I0501 02:33:54.694558   32853 main.go:141] libmachine: Reticulating splines...
	I0501 02:33:54.694565   32853 client.go:171] duration metric: took 26.574862273s to LocalClient.Create
	I0501 02:33:54.694588   32853 start.go:167] duration metric: took 26.574922123s to libmachine.API.Create "ha-329926"
	I0501 02:33:54.694601   32853 start.go:293] postStartSetup for "ha-329926-m03" (driver="kvm2")
	I0501 02:33:54.694617   32853 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0501 02:33:54.694639   32853 main.go:141] libmachine: (ha-329926-m03) Calling .DriverName
	I0501 02:33:54.694843   32853 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0501 02:33:54.694865   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHHostname
	I0501 02:33:54.698015   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:54.698491   32853 main.go:141] libmachine: (ha-329926-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:eb:7d", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:33:44 +0000 UTC Type:0 Mac:52:54:00:f9:eb:7d Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:ha-329926-m03 Clientid:01:52:54:00:f9:eb:7d}
	I0501 02:33:54.698516   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined IP address 192.168.39.115 and MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:54.698686   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHPort
	I0501 02:33:54.698867   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHKeyPath
	I0501 02:33:54.699050   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHUsername
	I0501 02:33:54.699169   32853 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m03/id_rsa Username:docker}
	I0501 02:33:54.794701   32853 ssh_runner.go:195] Run: cat /etc/os-release
	I0501 02:33:54.799872   32853 info.go:137] Remote host: Buildroot 2023.02.9
	I0501 02:33:54.799897   32853 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13391/.minikube/addons for local assets ...
	I0501 02:33:54.799955   32853 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13391/.minikube/files for local assets ...
	I0501 02:33:54.800022   32853 filesync.go:149] local asset: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem -> 207242.pem in /etc/ssl/certs
	I0501 02:33:54.800032   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem -> /etc/ssl/certs/207242.pem
	I0501 02:33:54.800120   32853 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0501 02:33:54.813593   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem --> /etc/ssl/certs/207242.pem (1708 bytes)
	I0501 02:33:54.842323   32853 start.go:296] duration metric: took 147.707876ms for postStartSetup
	I0501 02:33:54.842369   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetConfigRaw
	I0501 02:33:54.843095   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetIP
	I0501 02:33:54.845640   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:54.845998   32853 main.go:141] libmachine: (ha-329926-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:eb:7d", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:33:44 +0000 UTC Type:0 Mac:52:54:00:f9:eb:7d Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:ha-329926-m03 Clientid:01:52:54:00:f9:eb:7d}
	I0501 02:33:54.846028   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined IP address 192.168.39.115 and MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:54.846276   32853 profile.go:143] Saving config to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/config.json ...
	I0501 02:33:54.846526   32853 start.go:128] duration metric: took 26.745052966s to createHost
	I0501 02:33:54.846548   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHHostname
	I0501 02:33:54.848541   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:54.848882   32853 main.go:141] libmachine: (ha-329926-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:eb:7d", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:33:44 +0000 UTC Type:0 Mac:52:54:00:f9:eb:7d Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:ha-329926-m03 Clientid:01:52:54:00:f9:eb:7d}
	I0501 02:33:54.848912   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined IP address 192.168.39.115 and MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:54.849053   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHPort
	I0501 02:33:54.849236   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHKeyPath
	I0501 02:33:54.849419   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHKeyPath
	I0501 02:33:54.849566   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHUsername
	I0501 02:33:54.849701   32853 main.go:141] libmachine: Using SSH client type: native
	I0501 02:33:54.849843   32853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.115 22 <nil> <nil>}
	I0501 02:33:54.849853   32853 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0501 02:33:54.964413   32853 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714530834.949291052
	
	I0501 02:33:54.964439   32853 fix.go:216] guest clock: 1714530834.949291052
	I0501 02:33:54.964449   32853 fix.go:229] Guest: 2024-05-01 02:33:54.949291052 +0000 UTC Remote: 2024-05-01 02:33:54.846538738 +0000 UTC m=+172.769036006 (delta=102.752314ms)
	I0501 02:33:54.964468   32853 fix.go:200] guest clock delta is within tolerance: 102.752314ms
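The fix.go lines above compare the guest's `date +%s.%N` output against the local clock and accept the machine when the delta is within tolerance. A simplified sketch of that comparison is below; the parsing approach and the 2s tolerance are assumptions for illustration only.

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// checkClockDelta parses the guest's `date +%s.%N` output and reports
// whether it is within tolerance of the local clock.
func checkClockDelta(guestOutput string, tolerance time.Duration) (time.Duration, bool, error) {
	secs, err := strconv.ParseFloat(guestOutput, 64)
	if err != nil {
		return 0, false, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := time.Since(guest)
	return delta, math.Abs(float64(delta)) <= float64(tolerance), nil
}

func main() {
	// Guest timestamp taken from the log above.
	delta, ok, err := checkClockDelta("1714530834.949291052", 2*time.Second)
	if err != nil {
		panic(err)
	}
	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, ok)
}
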
	I0501 02:33:54.964474   32853 start.go:83] releasing machines lock for "ha-329926-m03", held for 26.863150367s
	I0501 02:33:54.964496   32853 main.go:141] libmachine: (ha-329926-m03) Calling .DriverName
	I0501 02:33:54.964764   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetIP
	I0501 02:33:54.967409   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:54.967787   32853 main.go:141] libmachine: (ha-329926-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:eb:7d", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:33:44 +0000 UTC Type:0 Mac:52:54:00:f9:eb:7d Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:ha-329926-m03 Clientid:01:52:54:00:f9:eb:7d}
	I0501 02:33:54.967819   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined IP address 192.168.39.115 and MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:54.969859   32853 out.go:177] * Found network options:
	I0501 02:33:54.971200   32853 out.go:177]   - NO_PROXY=192.168.39.5,192.168.39.79
	W0501 02:33:54.972418   32853 proxy.go:119] fail to check proxy env: Error ip not in block
	W0501 02:33:54.972447   32853 proxy.go:119] fail to check proxy env: Error ip not in block
	I0501 02:33:54.972465   32853 main.go:141] libmachine: (ha-329926-m03) Calling .DriverName
	I0501 02:33:54.972936   32853 main.go:141] libmachine: (ha-329926-m03) Calling .DriverName
	I0501 02:33:54.973099   32853 main.go:141] libmachine: (ha-329926-m03) Calling .DriverName
	I0501 02:33:54.973193   32853 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0501 02:33:54.973232   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHHostname
	W0501 02:33:54.973294   32853 proxy.go:119] fail to check proxy env: Error ip not in block
	W0501 02:33:54.973313   32853 proxy.go:119] fail to check proxy env: Error ip not in block
	I0501 02:33:54.973365   32853 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0501 02:33:54.973385   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHHostname
	I0501 02:33:54.976075   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:54.976253   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:54.976478   32853 main.go:141] libmachine: (ha-329926-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:eb:7d", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:33:44 +0000 UTC Type:0 Mac:52:54:00:f9:eb:7d Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:ha-329926-m03 Clientid:01:52:54:00:f9:eb:7d}
	I0501 02:33:54.976515   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined IP address 192.168.39.115 and MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:54.976617   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHPort
	I0501 02:33:54.976726   32853 main.go:141] libmachine: (ha-329926-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:eb:7d", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:33:44 +0000 UTC Type:0 Mac:52:54:00:f9:eb:7d Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:ha-329926-m03 Clientid:01:52:54:00:f9:eb:7d}
	I0501 02:33:54.976749   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined IP address 192.168.39.115 and MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:54.976782   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHKeyPath
	I0501 02:33:54.976915   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHPort
	I0501 02:33:54.976973   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHUsername
	I0501 02:33:54.977095   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHKeyPath
	I0501 02:33:54.977164   32853 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m03/id_rsa Username:docker}
	I0501 02:33:54.977245   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHUsername
	I0501 02:33:54.977373   32853 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m03/id_rsa Username:docker}
	I0501 02:33:55.228774   32853 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0501 02:33:55.236740   32853 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0501 02:33:55.236805   32853 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0501 02:33:55.256868   32853 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0501 02:33:55.256893   32853 start.go:494] detecting cgroup driver to use...
	I0501 02:33:55.256963   32853 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0501 02:33:55.278379   32853 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 02:33:55.295278   32853 docker.go:217] disabling cri-docker service (if available) ...
	I0501 02:33:55.295367   32853 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0501 02:33:55.310071   32853 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0501 02:33:55.324746   32853 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0501 02:33:55.450716   32853 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0501 02:33:55.601554   32853 docker.go:233] disabling docker service ...
	I0501 02:33:55.601613   32853 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0501 02:33:55.620391   32853 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0501 02:33:55.634343   32853 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0501 02:33:55.776462   32853 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0501 02:33:55.905451   32853 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0501 02:33:55.921494   32853 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 02:33:55.944306   32853 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0501 02:33:55.944374   32853 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 02:33:55.956199   32853 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0501 02:33:55.956267   32853 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 02:33:55.968518   32853 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 02:33:55.980336   32853 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 02:33:55.992865   32853 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0501 02:33:56.005561   32853 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 02:33:56.017934   32853 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 02:33:56.039177   32853 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 02:33:56.050473   32853 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0501 02:33:56.060271   32853 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0501 02:33:56.060333   32853 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0501 02:33:56.074851   32853 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0501 02:33:56.086136   32853 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:33:56.245188   32853 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0501 02:33:56.398179   32853 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0501 02:33:56.398258   32853 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0501 02:33:56.403866   32853 start.go:562] Will wait 60s for crictl version
	I0501 02:33:56.403928   32853 ssh_runner.go:195] Run: which crictl
	I0501 02:33:56.408138   32853 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0501 02:33:56.446483   32853 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0501 02:33:56.446584   32853 ssh_runner.go:195] Run: crio --version
	I0501 02:33:56.478041   32853 ssh_runner.go:195] Run: crio --version
	I0501 02:33:56.510382   32853 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0501 02:33:56.511673   32853 out.go:177]   - env NO_PROXY=192.168.39.5
	I0501 02:33:56.512946   32853 out.go:177]   - env NO_PROXY=192.168.39.5,192.168.39.79
	I0501 02:33:56.514115   32853 main.go:141] libmachine: (ha-329926-m03) Calling .GetIP
	I0501 02:33:56.516527   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:56.516881   32853 main.go:141] libmachine: (ha-329926-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:eb:7d", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:33:44 +0000 UTC Type:0 Mac:52:54:00:f9:eb:7d Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:ha-329926-m03 Clientid:01:52:54:00:f9:eb:7d}
	I0501 02:33:56.516908   32853 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined IP address 192.168.39.115 and MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:33:56.517156   32853 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0501 02:33:56.521688   32853 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0501 02:33:56.534930   32853 mustload.go:65] Loading cluster: ha-329926
	I0501 02:33:56.535180   32853 config.go:182] Loaded profile config "ha-329926": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 02:33:56.535531   32853 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:33:56.535576   32853 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:33:56.550946   32853 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34933
	I0501 02:33:56.551366   32853 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:33:56.551880   32853 main.go:141] libmachine: Using API Version  1
	I0501 02:33:56.551897   32853 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:33:56.552181   32853 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:33:56.552325   32853 main.go:141] libmachine: (ha-329926) Calling .GetState
	I0501 02:33:56.553939   32853 host.go:66] Checking if "ha-329926" exists ...
	I0501 02:33:56.554304   32853 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:33:56.554340   32853 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:33:56.568563   32853 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43319
	I0501 02:33:56.568903   32853 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:33:56.569274   32853 main.go:141] libmachine: Using API Version  1
	I0501 02:33:56.569292   32853 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:33:56.569580   32853 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:33:56.569758   32853 main.go:141] libmachine: (ha-329926) Calling .DriverName
	I0501 02:33:56.569932   32853 certs.go:68] Setting up /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926 for IP: 192.168.39.115
	I0501 02:33:56.569946   32853 certs.go:194] generating shared ca certs ...
	I0501 02:33:56.569964   32853 certs.go:226] acquiring lock for ca certs: {Name:mk85a75d14c00780d4bf09cd9bdaf0f96f1b8fce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:33:56.570109   32853 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key
	I0501 02:33:56.570162   32853 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key
	I0501 02:33:56.570175   32853 certs.go:256] generating profile certs ...
	I0501 02:33:56.570275   32853 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/client.key
	I0501 02:33:56.570309   32853 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.key.da5f17e3
	I0501 02:33:56.570329   32853 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.crt.da5f17e3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.5 192.168.39.79 192.168.39.115 192.168.39.254]
	I0501 02:33:56.836197   32853 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.crt.da5f17e3 ...
	I0501 02:33:56.836227   32853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.crt.da5f17e3: {Name:mk19e8ab336a8011f2b618a7ee80af76218cad15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:33:56.836423   32853 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.key.da5f17e3 ...
	I0501 02:33:56.836438   32853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.key.da5f17e3: {Name:mk87ba21c767b0a549751d84b1b9bc029d81cdf7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:33:56.836534   32853 certs.go:381] copying /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.crt.da5f17e3 -> /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.crt
	I0501 02:33:56.836705   32853 certs.go:385] copying /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.key.da5f17e3 -> /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.key
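The apiserver certificate generated above is an x509 serving certificate signed by the cluster CA with the SAN list printed in the log (10.96.0.1, 127.0.0.1, 10.0.0.1, the node IPs and the 192.168.39.254 VIP). A bare-bones sketch of issuing such a certificate with crypto/x509 follows; it self-generates a throwaway CA so it runs standalone, uses RSA keys, and is not minikube's crypto.go.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// In minikube these come from ca.crt / ca.key; here a CA is generated
	// purely so the sketch is runnable on its own.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Serving cert with the SAN list shown in the log.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube", Organization: []string{"jenkins.ha-329926-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-329926-m03", "localhost", "minikube"},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.5"), net.ParseIP("192.168.39.79"),
			net.ParseIP("192.168.39.115"), net.ParseIP("192.168.39.254"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
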
	I0501 02:33:56.836884   32853 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/proxy-client.key
	I0501 02:33:56.836902   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0501 02:33:56.836920   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0501 02:33:56.836939   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0501 02:33:56.836960   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0501 02:33:56.836978   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0501 02:33:56.836994   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0501 02:33:56.837011   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0501 02:33:56.837030   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0501 02:33:56.837090   32853 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem (1338 bytes)
	W0501 02:33:56.837127   32853 certs.go:480] ignoring /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724_empty.pem, impossibly tiny 0 bytes
	I0501 02:33:56.837141   32853 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem (1679 bytes)
	I0501 02:33:56.837177   32853 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem (1078 bytes)
	I0501 02:33:56.837206   32853 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem (1123 bytes)
	I0501 02:33:56.837234   32853 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem (1675 bytes)
	I0501 02:33:56.837287   32853 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem (1708 bytes)
	I0501 02:33:56.837323   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem -> /usr/share/ca-certificates/207242.pem
	I0501 02:33:56.837342   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:33:56.837363   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem -> /usr/share/ca-certificates/20724.pem
	I0501 02:33:56.837401   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHHostname
	I0501 02:33:56.840494   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:33:56.841014   32853 main.go:141] libmachine: (ha-329926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:43", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:31:18 +0000 UTC Type:0 Mac:52:54:00:ce:d8:43 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-329926 Clientid:01:52:54:00:ce:d8:43}
	I0501 02:33:56.841045   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:33:56.841252   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHPort
	I0501 02:33:56.841478   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHKeyPath
	I0501 02:33:56.841693   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHUsername
	I0501 02:33:56.841856   32853 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926/id_rsa Username:docker}
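	(Editor's note, sketch only, not minikube's sshutil: the "new ssh client" entry above opens a key-based SSH session to the primary node so the certificates can be staged over scp. The snippet below shows one way to do the same with golang.org/x/crypto/ssh; the key path is a placeholder and the host-key check is deliberately relaxed, which is only acceptable for a throwaway test VM.)

```go
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/path/to/id_rsa") // placeholder key path, not the jenkins path above
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a disposable test VM
	}
	client, err := ssh.Dial("tcp", "192.168.39.5:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()

	// Run a probe similar to the next log line (stat of sa.pub on the node).
	out, err := sess.Output("stat -c %s /var/lib/minikube/certs/sa.pub")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(string(out))
}
```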
	I0501 02:33:56.914715   32853 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0501 02:33:56.920177   32853 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0501 02:33:56.939591   32853 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0501 02:33:56.946435   32853 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0501 02:33:56.959821   32853 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0501 02:33:56.964912   32853 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0501 02:33:56.977275   32853 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0501 02:33:56.982220   32853 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0501 02:33:56.995757   32853 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0501 02:33:57.007074   32853 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0501 02:33:57.020516   32853 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0501 02:33:57.026328   32853 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0501 02:33:57.041243   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0501 02:33:57.074420   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0501 02:33:57.103724   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0501 02:33:57.134910   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0501 02:33:57.165073   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0501 02:33:57.196545   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0501 02:33:57.225326   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0501 02:33:57.252084   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0501 02:33:57.286208   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem --> /usr/share/ca-certificates/207242.pem (1708 bytes)
	I0501 02:33:57.317838   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0501 02:33:57.346679   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem --> /usr/share/ca-certificates/20724.pem (1338 bytes)
	I0501 02:33:57.375393   32853 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0501 02:33:57.394979   32853 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0501 02:33:57.414354   32853 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0501 02:33:57.433558   32853 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0501 02:33:57.455755   32853 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0501 02:33:57.476512   32853 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0501 02:33:57.497673   32853 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0501 02:33:57.517886   32853 ssh_runner.go:195] Run: openssl version
	I0501 02:33:57.524690   32853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/207242.pem && ln -fs /usr/share/ca-certificates/207242.pem /etc/ssl/certs/207242.pem"
	I0501 02:33:57.538612   32853 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/207242.pem
	I0501 02:33:57.543865   32853 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  1 02:20 /usr/share/ca-certificates/207242.pem
	I0501 02:33:57.543933   32853 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/207242.pem
	I0501 02:33:57.550789   32853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/207242.pem /etc/ssl/certs/3ec20f2e.0"
	I0501 02:33:57.565578   32853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0501 02:33:57.578568   32853 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:33:57.583785   32853 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  1 02:08 /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:33:57.583839   32853 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:33:57.590572   32853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0501 02:33:57.602927   32853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20724.pem && ln -fs /usr/share/ca-certificates/20724.pem /etc/ssl/certs/20724.pem"
	I0501 02:33:57.617155   32853 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20724.pem
	I0501 02:33:57.622337   32853 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  1 02:20 /usr/share/ca-certificates/20724.pem
	I0501 02:33:57.622391   32853 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20724.pem
	I0501 02:33:57.628805   32853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20724.pem /etc/ssl/certs/51391683.0"
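	(Editor's note: the three test/ln/openssl sequences above install each CA certificate under /usr/share/ca-certificates and then symlink it into /etc/ssl/certs under its OpenSSL subject-hash name, e.g. b5213941.0 for minikubeCA.pem, so OpenSSL-based clients can resolve it. A minimal sketch of that convention, not minikube's certs.go, shelling out to the same `openssl x509 -hash -noout` call:)

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash mirrors the `openssl x509 -hash` + `ln -fs` pair in the log:
// compute the subject hash of certPath and link <hash>.0 in certsDir to it.
func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))     // e.g. "b5213941"
	link := filepath.Join(certsDir, hash+".0") // OpenSSL looks up <hash>.0
	_ = os.Remove(link)                        // emulate the force flag of `ln -fs`
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```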
	I0501 02:33:57.641042   32853 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0501 02:33:57.645699   32853 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0501 02:33:57.645751   32853 kubeadm.go:928] updating node {m03 192.168.39.115 8443 v1.30.0 crio true true} ...
	I0501 02:33:57.645825   32853 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-329926-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.115
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-329926 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0501 02:33:57.645848   32853 kube-vip.go:111] generating kube-vip config ...
	I0501 02:33:57.645886   32853 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0501 02:33:57.664754   32853 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0501 02:33:57.664818   32853 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
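	(Editor's note: the YAML above is the kube-vip static-pod manifest that kube-vip.go generates for this control-plane node; it advertises the VIP 192.168.39.254 on port 8443 on eth0, enables leader election, and, as the previous line says, auto-enables control-plane load-balancing. It is later copied to /etc/kubernetes/manifests/kube-vip.yaml. A minimal sketch, using a hand-rolled text/template rather than minikube's actual template, of rendering a few of those env values from parameters:)

```go
package main

import (
	"os"
	"text/template"
)

type vipParams struct {
	Address   string
	Port      int
	Interface string
}

// A tiny illustrative fragment of the env section, not the full manifest.
const snippet = `    env:
    - name: address
      value: {{ .Address }}
    - name: port
      value: "{{ .Port }}"
    - name: vip_interface
      value: {{ .Interface }}
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(snippet))
	// Values taken from the generated config shown above.
	_ = t.Execute(os.Stdout, vipParams{Address: "192.168.39.254", Port: 8443, Interface: "eth0"})
}
```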
	I0501 02:33:57.664885   32853 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0501 02:33:57.675936   32853 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	
	Initiating transfer...
	I0501 02:33:57.676002   32853 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0
	I0501 02:33:57.686557   32853 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256
	I0501 02:33:57.686567   32853 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256
	I0501 02:33:57.686583   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/linux/amd64/v1.30.0/kubeadm -> /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0501 02:33:57.686590   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/linux/amd64/v1.30.0/kubectl -> /var/lib/minikube/binaries/v1.30.0/kubectl
	I0501 02:33:57.686652   32853 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0501 02:33:57.686658   32853 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl
	I0501 02:33:57.686557   32853 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256
	I0501 02:33:57.686726   32853 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 02:33:57.691485   32853 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0501 02:33:57.691516   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/cache/linux/amd64/v1.30.0/kubectl --> /var/lib/minikube/binaries/v1.30.0/kubectl (51454104 bytes)
	I0501 02:33:57.705557   32853 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0501 02:33:57.705595   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/cache/linux/amd64/v1.30.0/kubeadm --> /var/lib/minikube/binaries/v1.30.0/kubeadm (50249880 bytes)
	I0501 02:33:57.716872   32853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/linux/amd64/v1.30.0/kubelet -> /var/lib/minikube/binaries/v1.30.0/kubelet
	I0501 02:33:57.716967   32853 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet
	I0501 02:33:57.761978   32853 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0501 02:33:57.762021   32853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/cache/linux/amd64/v1.30.0/kubelet --> /var/lib/minikube/binaries/v1.30.0/kubelet (100100024 bytes)
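	(Editor's note: the block above stats each of kubeadm, kubectl and kubelet on the node, and only transfers a binary when the stat fails; the download URLs carry a `checksum=file:...sha256` companion. The sketch below shows the kind of verification that implies: hash the local binary with SHA-256 and compare it against the published digest file. It assumes the .sha256 file contains just the hex digest, which should be confirmed for the release in use; it is not minikube's download code.)

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"os"
	"strings"
)

// verify compares the SHA-256 of binaryPath with the hex digest stored in digestPath.
func verify(binaryPath, digestPath string) error {
	f, err := os.Open(binaryPath)
	if err != nil {
		return err
	}
	defer f.Close()

	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	got := hex.EncodeToString(h.Sum(nil))

	want, err := os.ReadFile(digestPath)
	if err != nil {
		return err
	}
	if got != strings.TrimSpace(string(want)) {
		return fmt.Errorf("checksum mismatch for %s: got %s", binaryPath, got)
	}
	return nil
}

func main() {
	if err := verify("kubelet", "kubelet.sha256"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("checksum OK")
}
```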
	I0501 02:33:58.711990   32853 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0501 02:33:58.724400   32853 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0501 02:33:58.744940   32853 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0501 02:33:58.767269   32853 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0501 02:33:58.787739   32853 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0501 02:33:58.792566   32853 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0501 02:33:58.809119   32853 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:33:58.945032   32853 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 02:33:58.967940   32853 host.go:66] Checking if "ha-329926" exists ...
	I0501 02:33:58.968406   32853 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:33:58.968467   32853 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:33:58.984998   32853 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35945
	I0501 02:33:58.985460   32853 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:33:58.985963   32853 main.go:141] libmachine: Using API Version  1
	I0501 02:33:58.986033   32853 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:33:58.986380   32853 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:33:58.986599   32853 main.go:141] libmachine: (ha-329926) Calling .DriverName
	I0501 02:33:58.986770   32853 start.go:316] joinCluster: &{Name:ha-329926 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-329926 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.5 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.79 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.115 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 02:33:58.986899   32853 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0501 02:33:58.986915   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHHostname
	I0501 02:33:58.989775   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:33:58.990227   32853 main.go:141] libmachine: (ha-329926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:43", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:31:18 +0000 UTC Type:0 Mac:52:54:00:ce:d8:43 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-329926 Clientid:01:52:54:00:ce:d8:43}
	I0501 02:33:58.990252   32853 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:33:58.990489   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHPort
	I0501 02:33:58.990643   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHKeyPath
	I0501 02:33:58.990791   32853 main.go:141] libmachine: (ha-329926) Calling .GetSSHUsername
	I0501 02:33:58.990957   32853 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926/id_rsa Username:docker}
	I0501 02:33:59.580409   32853 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.115 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0501 02:33:59.580458   32853 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token m7l0ya.kjhwirja5kia0ep4 --discovery-token-ca-cert-hash sha256:bd94cc6acfb95fe36f70b12c6a1f2980601b8235e7c4dea65cebdf35eb514754 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-329926-m03 --control-plane --apiserver-advertise-address=192.168.39.115 --apiserver-bind-port=8443"
	I0501 02:34:25.335526   32853 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token m7l0ya.kjhwirja5kia0ep4 --discovery-token-ca-cert-hash sha256:bd94cc6acfb95fe36f70b12c6a1f2980601b8235e7c4dea65cebdf35eb514754 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-329926-m03 --control-plane --apiserver-advertise-address=192.168.39.115 --apiserver-bind-port=8443": (25.755033767s)
	I0501 02:34:25.335571   32853 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0501 02:34:25.928467   32853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-329926-m03 minikube.k8s.io/updated_at=2024_05_01T02_34_25_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e minikube.k8s.io/name=ha-329926 minikube.k8s.io/primary=false
	I0501 02:34:26.107354   32853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-329926-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0501 02:34:26.257401   32853 start.go:318] duration metric: took 27.270627658s to joinCluster
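	(Editor's note: the join above is driven over SSH with `kubeadm join control-plane.minikube.internal:8443 ... --control-plane`, using a token and CA hash obtained from `kubeadm token create --print-join-command`, and is followed by labelling the new node and removing its control-plane NoSchedule taint. Purely as an illustration of the same invocation run locally via os/exec, with the secrets as placeholders rather than the values from the log:)

```go
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// Flags mirror those visible in the log; token and CA hash are placeholders.
	cmd := exec.Command("kubeadm", "join", "control-plane.minikube.internal:8443",
		"--token", "REPLACE_ME",
		"--discovery-token-ca-cert-hash", "sha256:REPLACE_ME",
		"--ignore-preflight-errors=all",
		"--cri-socket", "unix:///var/run/crio/crio.sock",
		"--control-plane",
		"--apiserver-advertise-address", "192.168.39.115",
		"--apiserver-bind-port", "8443",
	)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("kubeadm join failed: %v", err)
	}
}
```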
	I0501 02:34:26.257479   32853 start.go:234] Will wait 6m0s for node &{Name:m03 IP:192.168.39.115 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0501 02:34:26.259069   32853 out.go:177] * Verifying Kubernetes components...
	I0501 02:34:26.257836   32853 config.go:182] Loaded profile config "ha-329926": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 02:34:26.260445   32853 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:34:26.510765   32853 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 02:34:26.552182   32853 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18779-13391/kubeconfig
	I0501 02:34:26.552546   32853 kapi.go:59] client config for ha-329926: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/client.crt", KeyFile:"/home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/client.key", CAFile:"/home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02700), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0501 02:34:26.552636   32853 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.5:8443
	I0501 02:34:26.552909   32853 node_ready.go:35] waiting up to 6m0s for node "ha-329926-m03" to be "Ready" ...
	I0501 02:34:26.552995   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m03
	I0501 02:34:26.553008   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:26.553019   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:26.553028   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:26.559027   32853 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:34:27.053197   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m03
	I0501 02:34:27.053219   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:27.053230   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:27.053234   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:27.057217   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:34:27.553859   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m03
	I0501 02:34:27.553883   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:27.553893   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:27.553899   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:27.557231   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:34:28.053863   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m03
	I0501 02:34:28.053887   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:28.053897   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:28.053901   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:28.058194   32853 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:34:28.553549   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m03
	I0501 02:34:28.553580   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:28.553592   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:28.553599   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:28.558384   32853 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:34:28.559675   32853 node_ready.go:53] node "ha-329926-m03" has status "Ready":"False"
	I0501 02:34:29.053346   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m03
	I0501 02:34:29.053367   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:29.053375   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:29.053381   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:29.057823   32853 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:34:29.553737   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m03
	I0501 02:34:29.553765   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:29.553775   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:29.553784   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:29.557883   32853 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:34:30.053337   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m03
	I0501 02:34:30.053367   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:30.053377   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:30.053384   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:30.056878   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:34:30.553820   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m03
	I0501 02:34:30.553846   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:30.553858   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:30.553864   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:30.557495   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:34:31.053570   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m03
	I0501 02:34:31.053602   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:31.053610   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:31.053613   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:31.058281   32853 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:34:31.059035   32853 node_ready.go:53] node "ha-329926-m03" has status "Ready":"False"
	I0501 02:34:31.553683   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m03
	I0501 02:34:31.553714   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:31.553728   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:31.553732   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:31.558150   32853 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:34:32.053180   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m03
	I0501 02:34:32.053203   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:32.053210   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:32.053215   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:32.058335   32853 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:34:32.553190   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m03
	I0501 02:34:32.553211   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:32.553219   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:32.553224   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:32.556713   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:34:33.053769   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m03
	I0501 02:34:33.053793   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:33.053801   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:33.053805   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:33.058695   32853 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:34:33.059985   32853 node_ready.go:53] node "ha-329926-m03" has status "Ready":"False"
	I0501 02:34:33.553873   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m03
	I0501 02:34:33.553896   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:33.553902   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:33.553905   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:33.557865   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:34:33.558759   32853 node_ready.go:49] node "ha-329926-m03" has status "Ready":"True"
	I0501 02:34:33.558778   32853 node_ready.go:38] duration metric: took 7.005851298s for node "ha-329926-m03" to be "Ready" ...
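	(Editor's note: the node_ready wait above polls GET /api/v1/nodes/ha-329926-m03 roughly every 500ms until the node's Ready condition flips to "True", which took about 7s here. A minimal sketch of such a loop, decoding only the condition list from the node JSON; TLS and client-certificate setup, as shown in the rest.Config entry above, is omitted and the http.Client is assumed to already be configured for the API server.)

```go
package main

import (
	"encoding/json"
	"errors"
	"fmt"
	"net/http"
	"time"
)

type nodeStatus struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

// waitNodeReady polls the node object until its Ready condition is "True" or timeout expires.
func waitNodeReady(c *http.Client, url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := c.Get(url)
		if err == nil {
			var n nodeStatus
			err = json.NewDecoder(resp.Body).Decode(&n)
			resp.Body.Close()
			if err == nil {
				for _, cond := range n.Status.Conditions {
					if cond.Type == "Ready" && cond.Status == "True" {
						return nil
					}
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence visible in the log
	}
	return errors.New("timed out waiting for node to be Ready")
}

func main() {
	err := waitNodeReady(http.DefaultClient,
		"https://192.168.39.5:8443/api/v1/nodes/ha-329926-m03", 6*time.Minute)
	fmt.Println(err)
}
```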
	I0501 02:34:33.558786   32853 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 02:34:33.558841   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods
	I0501 02:34:33.558851   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:33.558858   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:33.558862   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:33.566456   32853 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0501 02:34:33.573887   32853 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-2h8lc" in "kube-system" namespace to be "Ready" ...
	I0501 02:34:33.573969   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2h8lc
	I0501 02:34:33.573977   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:33.573984   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:33.573989   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:33.580267   32853 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:34:33.581620   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926
	I0501 02:34:33.581637   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:33.581646   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:33.581651   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:33.585397   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:34:33.586157   32853 pod_ready.go:92] pod "coredns-7db6d8ff4d-2h8lc" in "kube-system" namespace has status "Ready":"True"
	I0501 02:34:33.586182   32853 pod_ready.go:81] duration metric: took 12.268357ms for pod "coredns-7db6d8ff4d-2h8lc" in "kube-system" namespace to be "Ready" ...
	I0501 02:34:33.586195   32853 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-cfdqc" in "kube-system" namespace to be "Ready" ...
	I0501 02:34:33.586262   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-cfdqc
	I0501 02:34:33.586273   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:33.586281   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:33.586290   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:33.590164   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:34:33.591076   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926
	I0501 02:34:33.591092   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:33.591099   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:33.591104   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:33.594772   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:34:33.595506   32853 pod_ready.go:92] pod "coredns-7db6d8ff4d-cfdqc" in "kube-system" namespace has status "Ready":"True"
	I0501 02:34:33.595526   32853 pod_ready.go:81] duration metric: took 9.323438ms for pod "coredns-7db6d8ff4d-cfdqc" in "kube-system" namespace to be "Ready" ...
	I0501 02:34:33.595540   32853 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-329926" in "kube-system" namespace to be "Ready" ...
	I0501 02:34:33.595609   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-329926
	I0501 02:34:33.595620   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:33.595630   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:33.595640   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:33.598884   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:34:33.599606   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926
	I0501 02:34:33.599623   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:33.599630   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:33.599635   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:33.603088   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:34:33.603694   32853 pod_ready.go:92] pod "etcd-ha-329926" in "kube-system" namespace has status "Ready":"True"
	I0501 02:34:33.603712   32853 pod_ready.go:81] duration metric: took 8.164903ms for pod "etcd-ha-329926" in "kube-system" namespace to be "Ready" ...
	I0501 02:34:33.603719   32853 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-329926-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:34:33.603788   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-329926-m02
	I0501 02:34:33.603800   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:33.603808   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:33.603811   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:33.607305   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:34:33.608451   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m02
	I0501 02:34:33.608464   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:33.608471   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:33.608474   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:33.611758   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:34:33.612507   32853 pod_ready.go:92] pod "etcd-ha-329926-m02" in "kube-system" namespace has status "Ready":"True"
	I0501 02:34:33.612529   32853 pod_ready.go:81] duration metric: took 8.802946ms for pod "etcd-ha-329926-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:34:33.612541   32853 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-329926-m03" in "kube-system" namespace to be "Ready" ...
	I0501 02:34:33.754929   32853 request.go:629] Waited for 142.321048ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-329926-m03
	I0501 02:34:33.755011   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-329926-m03
	I0501 02:34:33.755020   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:33.755028   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:33.755032   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:33.759234   32853 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:34:33.954431   32853 request.go:629] Waited for 194.370954ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/nodes/ha-329926-m03
	I0501 02:34:33.954499   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m03
	I0501 02:34:33.954506   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:33.954515   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:33.954534   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:33.958626   32853 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
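	(Editor's note: the "Waited for ... due to client-side throttling" lines come from the client's own rate limiter, not from API priority and fairness. With QPS and Burst left at 0 in the rest.Config above, client-go falls back to low defaults, on the order of 5 requests/s with a burst of 10, so tight polling loops queue for a few hundred milliseconds. The snippet below is not client-go itself, just a token-bucket illustration of that behavior using golang.org/x/time/rate.)

```go
package main

import (
	"context"
	"fmt"
	"time"

	"golang.org/x/time/rate"
)

func main() {
	limiter := rate.NewLimiter(rate.Limit(5), 10) // ~5 requests/s, burst of 10 (assumed defaults)

	start := time.Now()
	for i := 0; i < 15; i++ {
		// Wait blocks until a token is available, producing the same kind of
		// client-side delays reported in the log.
		if err := limiter.Wait(context.Background()); err != nil {
			fmt.Println("limiter:", err)
			return
		}
		// A real caller would issue the API GET here.
		fmt.Printf("request %2d sent after %v\n", i, time.Since(start).Round(time.Millisecond))
	}
}
```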
	I0501 02:34:34.154421   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-329926-m03
	I0501 02:34:34.154446   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:34.154458   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:34.154464   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:34.158327   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:34:34.354383   32853 request.go:629] Waited for 195.391735ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/nodes/ha-329926-m03
	I0501 02:34:34.354488   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m03
	I0501 02:34:34.354501   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:34.354515   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:34.354527   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:34.358227   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:34:34.613107   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-329926-m03
	I0501 02:34:34.613128   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:34.613135   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:34.613139   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:34.616497   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:34:34.754798   32853 request.go:629] Waited for 137.272351ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/nodes/ha-329926-m03
	I0501 02:34:34.754884   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m03
	I0501 02:34:34.754894   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:34.754907   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:34.754921   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:34.758624   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:34:35.113446   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-329926-m03
	I0501 02:34:35.113477   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:35.113485   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:35.113490   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:35.117286   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:34:35.154475   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m03
	I0501 02:34:35.154499   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:35.154518   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:35.154522   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:35.157638   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:34:35.613752   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-329926-m03
	I0501 02:34:35.613776   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:35.613784   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:35.613787   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:35.617712   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:34:35.618352   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m03
	I0501 02:34:35.618369   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:35.618378   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:35.618384   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:35.623517   32853 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:34:35.624078   32853 pod_ready.go:102] pod "etcd-ha-329926-m03" in "kube-system" namespace has status "Ready":"False"
	I0501 02:34:36.113120   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-329926-m03
	I0501 02:34:36.113140   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:36.113147   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:36.113151   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:36.116690   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:34:36.118173   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m03
	I0501 02:34:36.118181   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:36.118187   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:36.118192   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:36.121216   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:34:36.613137   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-329926-m03
	I0501 02:34:36.613157   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:36.613163   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:36.613169   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:36.616634   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:34:36.617476   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m03
	I0501 02:34:36.617491   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:36.617500   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:36.617509   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:36.620593   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:34:37.113710   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-329926-m03
	I0501 02:34:37.113729   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:37.113736   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:37.113741   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:37.117351   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:34:37.118184   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m03
	I0501 02:34:37.118200   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:37.118209   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:37.118217   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:37.121466   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:34:37.612955   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-329926-m03
	I0501 02:34:37.612977   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:37.612986   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:37.612990   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:37.616673   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:34:37.617588   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m03
	I0501 02:34:37.617604   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:37.617613   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:37.617619   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:37.620996   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:34:38.113342   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-329926-m03
	I0501 02:34:38.113367   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:38.113374   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:38.113377   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:38.117427   32853 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:34:38.118302   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m03
	I0501 02:34:38.118323   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:38.118331   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:38.118336   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:38.123781   32853 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:34:38.125579   32853 pod_ready.go:102] pod "etcd-ha-329926-m03" in "kube-system" namespace has status "Ready":"False"
	I0501 02:34:38.613049   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-329926-m03
	I0501 02:34:38.613070   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:38.613078   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:38.613082   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:38.616757   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:34:38.617664   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m03
	I0501 02:34:38.617687   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:38.617697   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:38.617701   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:38.620559   32853 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 02:34:39.113253   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-329926-m03
	I0501 02:34:39.113276   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:39.113286   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:39.113291   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:39.117261   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:34:39.118314   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m03
	I0501 02:34:39.118330   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:39.118339   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:39.118346   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:39.123812   32853 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:34:39.613659   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-329926-m03
	I0501 02:34:39.613681   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:39.613689   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:39.613692   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:39.617494   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:34:39.618352   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m03
	I0501 02:34:39.618371   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:39.618381   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:39.618386   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:39.621727   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:34:40.112726   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-329926-m03
	I0501 02:34:40.112749   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:40.112757   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:40.112761   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:40.116585   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:34:40.117611   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m03
	I0501 02:34:40.117626   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:40.117632   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:40.117638   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:40.125153   32853 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0501 02:34:40.125922   32853 pod_ready.go:92] pod "etcd-ha-329926-m03" in "kube-system" namespace has status "Ready":"True"
	I0501 02:34:40.125940   32853 pod_ready.go:81] duration metric: took 6.513392364s for pod "etcd-ha-329926-m03" in "kube-system" namespace to be "Ready" ...
	I0501 02:34:40.125956   32853 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-329926" in "kube-system" namespace to be "Ready" ...
	I0501 02:34:40.126001   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-329926
	I0501 02:34:40.126009   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:40.126016   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:40.126020   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:40.128839   32853 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 02:34:40.129696   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926
	I0501 02:34:40.129714   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:40.129723   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:40.129731   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:40.132582   32853 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 02:34:40.133263   32853 pod_ready.go:92] pod "kube-apiserver-ha-329926" in "kube-system" namespace has status "Ready":"True"
	I0501 02:34:40.133283   32853 pod_ready.go:81] duration metric: took 7.321354ms for pod "kube-apiserver-ha-329926" in "kube-system" namespace to be "Ready" ...
	I0501 02:34:40.133292   32853 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-329926-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:34:40.133348   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-329926-m02
	I0501 02:34:40.133356   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:40.133363   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:40.133367   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:40.135773   32853 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 02:34:40.136343   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m02
	I0501 02:34:40.136355   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:40.136361   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:40.136364   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:40.139151   32853 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 02:34:40.139786   32853 pod_ready.go:92] pod "kube-apiserver-ha-329926-m02" in "kube-system" namespace has status "Ready":"True"
	I0501 02:34:40.139806   32853 pod_ready.go:81] duration metric: took 6.506764ms for pod "kube-apiserver-ha-329926-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:34:40.139820   32853 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-329926-m03" in "kube-system" namespace to be "Ready" ...
	I0501 02:34:40.154083   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-329926-m03
	I0501 02:34:40.154097   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:40.154103   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:40.154108   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:40.156853   32853 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 02:34:40.354192   32853 request.go:629] Waited for 196.340447ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/nodes/ha-329926-m03
	I0501 02:34:40.354256   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m03
	I0501 02:34:40.354263   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:40.354272   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:40.354277   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:40.358337   32853 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:34:40.359238   32853 pod_ready.go:92] pod "kube-apiserver-ha-329926-m03" in "kube-system" namespace has status "Ready":"True"
	I0501 02:34:40.359260   32853 pod_ready.go:81] duration metric: took 219.426636ms for pod "kube-apiserver-ha-329926-m03" in "kube-system" namespace to be "Ready" ...
	I0501 02:34:40.359275   32853 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-329926" in "kube-system" namespace to be "Ready" ...
	I0501 02:34:40.554714   32853 request.go:629] Waited for 195.374385ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-329926
	I0501 02:34:40.554789   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-329926
	I0501 02:34:40.554794   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:40.554803   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:40.554807   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:40.558437   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:34:40.753934   32853 request.go:629] Waited for 194.309516ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/nodes/ha-329926
	I0501 02:34:40.753993   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926
	I0501 02:34:40.754002   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:40.754015   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:40.754028   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:40.757565   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:34:40.758538   32853 pod_ready.go:92] pod "kube-controller-manager-ha-329926" in "kube-system" namespace has status "Ready":"True"
	I0501 02:34:40.758554   32853 pod_ready.go:81] duration metric: took 399.271628ms for pod "kube-controller-manager-ha-329926" in "kube-system" namespace to be "Ready" ...
	I0501 02:34:40.758565   32853 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-329926-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:34:40.954661   32853 request.go:629] Waited for 196.021432ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-329926-m02
	I0501 02:34:40.954735   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-329926-m02
	I0501 02:34:40.954740   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:40.954747   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:40.954751   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:40.958428   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:34:41.153920   32853 request.go:629] Waited for 192.630607ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/nodes/ha-329926-m02
	I0501 02:34:41.153984   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m02
	I0501 02:34:41.153991   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:41.154007   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:41.154016   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:41.157816   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:34:41.158520   32853 pod_ready.go:92] pod "kube-controller-manager-ha-329926-m02" in "kube-system" namespace has status "Ready":"True"
	I0501 02:34:41.158537   32853 pod_ready.go:81] duration metric: took 399.964614ms for pod "kube-controller-manager-ha-329926-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:34:41.158548   32853 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-329926-m03" in "kube-system" namespace to be "Ready" ...
	I0501 02:34:41.354666   32853 request.go:629] Waited for 196.037746ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-329926-m03
	I0501 02:34:41.354720   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-329926-m03
	I0501 02:34:41.354727   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:41.354736   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:41.354742   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:41.360533   32853 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:34:41.554443   32853 request.go:629] Waited for 193.057741ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/nodes/ha-329926-m03
	I0501 02:34:41.554511   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m03
	I0501 02:34:41.554518   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:41.554529   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:41.554539   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:41.558653   32853 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:34:41.559781   32853 pod_ready.go:92] pod "kube-controller-manager-ha-329926-m03" in "kube-system" namespace has status "Ready":"True"
	I0501 02:34:41.559799   32853 pod_ready.go:81] duration metric: took 401.243411ms for pod "kube-controller-manager-ha-329926-m03" in "kube-system" namespace to be "Ready" ...
	I0501 02:34:41.559813   32853 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jfnk9" in "kube-system" namespace to be "Ready" ...
	I0501 02:34:41.754453   32853 request.go:629] Waited for 194.556038ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jfnk9
	I0501 02:34:41.754506   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jfnk9
	I0501 02:34:41.754513   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:41.754523   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:41.754531   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:41.757858   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:34:41.954922   32853 request.go:629] Waited for 196.354627ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/nodes/ha-329926-m03
	I0501 02:34:41.954987   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m03
	I0501 02:34:41.954993   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:41.955001   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:41.955005   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:41.958705   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:34:41.959342   32853 pod_ready.go:92] pod "kube-proxy-jfnk9" in "kube-system" namespace has status "Ready":"True"
	I0501 02:34:41.959368   32853 pod_ready.go:81] duration metric: took 399.547594ms for pod "kube-proxy-jfnk9" in "kube-system" namespace to be "Ready" ...
	I0501 02:34:41.959382   32853 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-msshn" in "kube-system" namespace to be "Ready" ...
	I0501 02:34:42.154797   32853 request.go:629] Waited for 195.330667ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-msshn
	I0501 02:34:42.154856   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-msshn
	I0501 02:34:42.154864   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:42.154873   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:42.154877   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:42.159411   32853 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:34:42.354538   32853 request.go:629] Waited for 194.330503ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/nodes/ha-329926
	I0501 02:34:42.354609   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926
	I0501 02:34:42.354617   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:42.354628   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:42.354648   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:42.358369   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:34:42.359153   32853 pod_ready.go:92] pod "kube-proxy-msshn" in "kube-system" namespace has status "Ready":"True"
	I0501 02:34:42.359173   32853 pod_ready.go:81] duration metric: took 399.782461ms for pod "kube-proxy-msshn" in "kube-system" namespace to be "Ready" ...
	I0501 02:34:42.359193   32853 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rfsm8" in "kube-system" namespace to be "Ready" ...
	I0501 02:34:42.554323   32853 request.go:629] Waited for 195.038073ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rfsm8
	I0501 02:34:42.554442   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rfsm8
	I0501 02:34:42.554454   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:42.554464   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:42.554473   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:42.558176   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:34:42.754211   32853 request.go:629] Waited for 195.287871ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/nodes/ha-329926-m02
	I0501 02:34:42.754282   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m02
	I0501 02:34:42.754289   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:42.754301   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:42.754321   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:42.757652   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:34:42.758429   32853 pod_ready.go:92] pod "kube-proxy-rfsm8" in "kube-system" namespace has status "Ready":"True"
	I0501 02:34:42.758447   32853 pod_ready.go:81] duration metric: took 399.247286ms for pod "kube-proxy-rfsm8" in "kube-system" namespace to be "Ready" ...
	I0501 02:34:42.758457   32853 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-329926" in "kube-system" namespace to be "Ready" ...
	I0501 02:34:42.954494   32853 request.go:629] Waited for 195.971378ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-329926
	I0501 02:34:42.954582   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-329926
	I0501 02:34:42.954590   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:42.954600   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:42.954607   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:42.958480   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:34:43.154915   32853 request.go:629] Waited for 195.448033ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/nodes/ha-329926
	I0501 02:34:43.154966   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926
	I0501 02:34:43.154971   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:43.154979   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:43.154987   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:43.158195   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:34:43.159035   32853 pod_ready.go:92] pod "kube-scheduler-ha-329926" in "kube-system" namespace has status "Ready":"True"
	I0501 02:34:43.159055   32853 pod_ready.go:81] duration metric: took 400.59166ms for pod "kube-scheduler-ha-329926" in "kube-system" namespace to be "Ready" ...
	I0501 02:34:43.159065   32853 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-329926-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:34:43.354202   32853 request.go:629] Waited for 195.054236ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-329926-m02
	I0501 02:34:43.354272   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-329926-m02
	I0501 02:34:43.354279   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:43.354296   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:43.354303   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:43.358424   32853 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:34:43.554487   32853 request.go:629] Waited for 195.193446ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/nodes/ha-329926-m02
	I0501 02:34:43.554584   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m02
	I0501 02:34:43.554595   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:43.554606   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:43.554617   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:43.558333   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:34:43.559150   32853 pod_ready.go:92] pod "kube-scheduler-ha-329926-m02" in "kube-system" namespace has status "Ready":"True"
	I0501 02:34:43.559173   32853 pod_ready.go:81] duration metric: took 400.101615ms for pod "kube-scheduler-ha-329926-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:34:43.559184   32853 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-329926-m03" in "kube-system" namespace to be "Ready" ...
	I0501 02:34:43.754323   32853 request.go:629] Waited for 195.055548ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-329926-m03
	I0501 02:34:43.754413   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-329926-m03
	I0501 02:34:43.754422   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:43.754435   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:43.754443   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:43.758790   32853 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:34:43.954606   32853 request.go:629] Waited for 195.158979ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/nodes/ha-329926-m03
	I0501 02:34:43.954659   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes/ha-329926-m03
	I0501 02:34:43.954664   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:43.954673   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:43.954678   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:43.958522   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:34:43.959350   32853 pod_ready.go:92] pod "kube-scheduler-ha-329926-m03" in "kube-system" namespace has status "Ready":"True"
	I0501 02:34:43.959375   32853 pod_ready.go:81] duration metric: took 400.183308ms for pod "kube-scheduler-ha-329926-m03" in "kube-system" namespace to be "Ready" ...
	I0501 02:34:43.959388   32853 pod_ready.go:38] duration metric: took 10.400590521s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 02:34:43.959406   32853 api_server.go:52] waiting for apiserver process to appear ...
	I0501 02:34:43.959461   32853 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 02:34:43.977544   32853 api_server.go:72] duration metric: took 17.720028948s to wait for apiserver process to appear ...
	I0501 02:34:43.977569   32853 api_server.go:88] waiting for apiserver healthz status ...
	I0501 02:34:43.977591   32853 api_server.go:253] Checking apiserver healthz at https://192.168.39.5:8443/healthz ...
	I0501 02:34:43.983701   32853 api_server.go:279] https://192.168.39.5:8443/healthz returned 200:
	ok
	I0501 02:34:43.983758   32853 round_trippers.go:463] GET https://192.168.39.5:8443/version
	I0501 02:34:43.983766   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:43.983774   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:43.983777   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:43.984732   32853 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0501 02:34:43.984783   32853 api_server.go:141] control plane version: v1.30.0
	I0501 02:34:43.984795   32853 api_server.go:131] duration metric: took 7.220912ms to wait for apiserver health ...
	I0501 02:34:43.984802   32853 system_pods.go:43] waiting for kube-system pods to appear ...
	I0501 02:34:44.154473   32853 request.go:629] Waited for 169.608236ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods
	I0501 02:34:44.154531   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods
	I0501 02:34:44.154537   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:44.154544   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:44.154549   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:44.164116   32853 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0501 02:34:44.171344   32853 system_pods.go:59] 24 kube-system pods found
	I0501 02:34:44.171371   32853 system_pods.go:61] "coredns-7db6d8ff4d-2h8lc" [937e09f0-6a7d-4387-aa19-ee959eb5a2a5] Running
	I0501 02:34:44.171376   32853 system_pods.go:61] "coredns-7db6d8ff4d-cfdqc" [a37e982e-9e4f-43bf-b957-0d6f082f4ec8] Running
	I0501 02:34:44.171380   32853 system_pods.go:61] "etcd-ha-329926" [f0e4ae2a-a8cc-42b2-9865-fb6ec3f41acb] Running
	I0501 02:34:44.171384   32853 system_pods.go:61] "etcd-ha-329926-m02" [4ed5b754-bb3d-46de-a5b9-ff46994f25ad] Running
	I0501 02:34:44.171389   32853 system_pods.go:61] "etcd-ha-329926-m03" [5b1f17c9-f09d-4e25-9069-125ca6756bb9] Running
	I0501 02:34:44.171392   32853 system_pods.go:61] "kindnet-7gr9n" [acd0ac11-9caa-47ae-b1f9-40dbb9f25b9c] Running
	I0501 02:34:44.171395   32853 system_pods.go:61] "kindnet-9r8zn" [fc187c8a-a964-45e1-adb0-f5ce23922b66] Running
	I0501 02:34:44.171399   32853 system_pods.go:61] "kindnet-kcmp7" [8e15c166-9ba1-40c9-8f33-db7f83733932] Running
	I0501 02:34:44.171404   32853 system_pods.go:61] "kube-apiserver-ha-329926" [49c47f4f-663a-4407-9d46-94fa3afbf349] Running
	I0501 02:34:44.171409   32853 system_pods.go:61] "kube-apiserver-ha-329926-m02" [886d1acc-021c-4f8b-b477-b9760260aabb] Running
	I0501 02:34:44.171414   32853 system_pods.go:61] "kube-apiserver-ha-329926-m03" [1d9a8819-b7a1-4b6d-b633-912974f051ce] Running
	I0501 02:34:44.171419   32853 system_pods.go:61] "kube-controller-manager-ha-329926" [332785d8-9966-4823-8828-fa5e90b4aac1] Running
	I0501 02:34:44.171425   32853 system_pods.go:61] "kube-controller-manager-ha-329926-m02" [91d97fa7-6409-4620-b569-c391d21a5915] Running
	I0501 02:34:44.171431   32853 system_pods.go:61] "kube-controller-manager-ha-329926-m03" [623b64bf-d9cc-44fd-91d4-ab8296a2d0a8] Running
	I0501 02:34:44.171441   32853 system_pods.go:61] "kube-proxy-jfnk9" [a0d4b9ce-a0b5-4810-b2ea-34b1ad295e88] Running
	I0501 02:34:44.171445   32853 system_pods.go:61] "kube-proxy-msshn" [7575fbfc-11ce-4223-bd99-ff9cdddd3568] Running
	I0501 02:34:44.171448   32853 system_pods.go:61] "kube-proxy-rfsm8" [f0510b55-1b59-4239-b529-b7af4d017c06] Running
	I0501 02:34:44.171452   32853 system_pods.go:61] "kube-scheduler-ha-329926" [7d45e3e9-cc7e-4b69-9219-61c3006013ea] Running
	I0501 02:34:44.171455   32853 system_pods.go:61] "kube-scheduler-ha-329926-m02" [075e127f-debf-4dd4-babd-be0930fb2ef7] Running
	I0501 02:34:44.171461   32853 system_pods.go:61] "kube-scheduler-ha-329926-m03" [057d5d0d-b546-4007-b922-4e4db5232918] Running
	I0501 02:34:44.171464   32853 system_pods.go:61] "kube-vip-ha-329926" [0fbbb815-441d-48d0-b0cf-1bb57ff6d993] Running
	I0501 02:34:44.171467   32853 system_pods.go:61] "kube-vip-ha-329926-m02" [92c115f8-bb9c-4a86-b914-984985a69916] Running
	I0501 02:34:44.171470   32853 system_pods.go:61] "kube-vip-ha-329926-m03" [a66ba3bd-e5c6-4e6c-9f95-bac5a111bc0e] Running
	I0501 02:34:44.171473   32853 system_pods.go:61] "storage-provisioner" [371423a6-a156-4e8d-bf66-812d606cc8d7] Running
	I0501 02:34:44.171479   32853 system_pods.go:74] duration metric: took 186.669098ms to wait for pod list to return data ...
	I0501 02:34:44.171489   32853 default_sa.go:34] waiting for default service account to be created ...
	I0501 02:34:44.354917   32853 request.go:629] Waited for 183.348562ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/namespaces/default/serviceaccounts
	I0501 02:34:44.354981   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/default/serviceaccounts
	I0501 02:34:44.354986   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:44.354993   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:44.354997   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:44.357877   32853 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 02:34:44.358002   32853 default_sa.go:45] found service account: "default"
	I0501 02:34:44.358022   32853 default_sa.go:55] duration metric: took 186.526043ms for default service account to be created ...
	I0501 02:34:44.358032   32853 system_pods.go:116] waiting for k8s-apps to be running ...
	I0501 02:34:44.554486   32853 request.go:629] Waited for 196.380023ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods
	I0501 02:34:44.554547   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/namespaces/kube-system/pods
	I0501 02:34:44.554552   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:44.554560   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:44.554567   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:44.562004   32853 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0501 02:34:44.568771   32853 system_pods.go:86] 24 kube-system pods found
	I0501 02:34:44.568796   32853 system_pods.go:89] "coredns-7db6d8ff4d-2h8lc" [937e09f0-6a7d-4387-aa19-ee959eb5a2a5] Running
	I0501 02:34:44.568801   32853 system_pods.go:89] "coredns-7db6d8ff4d-cfdqc" [a37e982e-9e4f-43bf-b957-0d6f082f4ec8] Running
	I0501 02:34:44.568806   32853 system_pods.go:89] "etcd-ha-329926" [f0e4ae2a-a8cc-42b2-9865-fb6ec3f41acb] Running
	I0501 02:34:44.568810   32853 system_pods.go:89] "etcd-ha-329926-m02" [4ed5b754-bb3d-46de-a5b9-ff46994f25ad] Running
	I0501 02:34:44.568813   32853 system_pods.go:89] "etcd-ha-329926-m03" [5b1f17c9-f09d-4e25-9069-125ca6756bb9] Running
	I0501 02:34:44.568817   32853 system_pods.go:89] "kindnet-7gr9n" [acd0ac11-9caa-47ae-b1f9-40dbb9f25b9c] Running
	I0501 02:34:44.568821   32853 system_pods.go:89] "kindnet-9r8zn" [fc187c8a-a964-45e1-adb0-f5ce23922b66] Running
	I0501 02:34:44.568824   32853 system_pods.go:89] "kindnet-kcmp7" [8e15c166-9ba1-40c9-8f33-db7f83733932] Running
	I0501 02:34:44.568828   32853 system_pods.go:89] "kube-apiserver-ha-329926" [49c47f4f-663a-4407-9d46-94fa3afbf349] Running
	I0501 02:34:44.568834   32853 system_pods.go:89] "kube-apiserver-ha-329926-m02" [886d1acc-021c-4f8b-b477-b9760260aabb] Running
	I0501 02:34:44.568838   32853 system_pods.go:89] "kube-apiserver-ha-329926-m03" [1d9a8819-b7a1-4b6d-b633-912974f051ce] Running
	I0501 02:34:44.568843   32853 system_pods.go:89] "kube-controller-manager-ha-329926" [332785d8-9966-4823-8828-fa5e90b4aac1] Running
	I0501 02:34:44.568847   32853 system_pods.go:89] "kube-controller-manager-ha-329926-m02" [91d97fa7-6409-4620-b569-c391d21a5915] Running
	I0501 02:34:44.568854   32853 system_pods.go:89] "kube-controller-manager-ha-329926-m03" [623b64bf-d9cc-44fd-91d4-ab8296a2d0a8] Running
	I0501 02:34:44.568857   32853 system_pods.go:89] "kube-proxy-jfnk9" [a0d4b9ce-a0b5-4810-b2ea-34b1ad295e88] Running
	I0501 02:34:44.568863   32853 system_pods.go:89] "kube-proxy-msshn" [7575fbfc-11ce-4223-bd99-ff9cdddd3568] Running
	I0501 02:34:44.568867   32853 system_pods.go:89] "kube-proxy-rfsm8" [f0510b55-1b59-4239-b529-b7af4d017c06] Running
	I0501 02:34:44.568871   32853 system_pods.go:89] "kube-scheduler-ha-329926" [7d45e3e9-cc7e-4b69-9219-61c3006013ea] Running
	I0501 02:34:44.568874   32853 system_pods.go:89] "kube-scheduler-ha-329926-m02" [075e127f-debf-4dd4-babd-be0930fb2ef7] Running
	I0501 02:34:44.568878   32853 system_pods.go:89] "kube-scheduler-ha-329926-m03" [057d5d0d-b546-4007-b922-4e4db5232918] Running
	I0501 02:34:44.568884   32853 system_pods.go:89] "kube-vip-ha-329926" [0fbbb815-441d-48d0-b0cf-1bb57ff6d993] Running
	I0501 02:34:44.568887   32853 system_pods.go:89] "kube-vip-ha-329926-m02" [92c115f8-bb9c-4a86-b914-984985a69916] Running
	I0501 02:34:44.568891   32853 system_pods.go:89] "kube-vip-ha-329926-m03" [a66ba3bd-e5c6-4e6c-9f95-bac5a111bc0e] Running
	I0501 02:34:44.568894   32853 system_pods.go:89] "storage-provisioner" [371423a6-a156-4e8d-bf66-812d606cc8d7] Running
	I0501 02:34:44.568902   32853 system_pods.go:126] duration metric: took 210.864899ms to wait for k8s-apps to be running ...
	I0501 02:34:44.568911   32853 system_svc.go:44] waiting for kubelet service to be running ....
	I0501 02:34:44.568950   32853 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 02:34:44.586737   32853 system_svc.go:56] duration metric: took 17.813264ms WaitForService to wait for kubelet
	I0501 02:34:44.586769   32853 kubeadm.go:576] duration metric: took 18.329255466s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0501 02:34:44.586793   32853 node_conditions.go:102] verifying NodePressure condition ...
	I0501 02:34:44.754243   32853 request.go:629] Waited for 167.353029ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.5:8443/api/v1/nodes
	I0501 02:34:44.754293   32853 round_trippers.go:463] GET https://192.168.39.5:8443/api/v1/nodes
	I0501 02:34:44.754298   32853 round_trippers.go:469] Request Headers:
	I0501 02:34:44.754306   32853 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:34:44.754317   32853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0501 02:34:44.757904   32853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:34:44.759059   32853 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 02:34:44.759079   32853 node_conditions.go:123] node cpu capacity is 2
	I0501 02:34:44.759088   32853 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 02:34:44.759092   32853 node_conditions.go:123] node cpu capacity is 2
	I0501 02:34:44.759098   32853 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 02:34:44.759103   32853 node_conditions.go:123] node cpu capacity is 2
	I0501 02:34:44.759111   32853 node_conditions.go:105] duration metric: took 172.311739ms to run NodePressure ...
	I0501 02:34:44.759125   32853 start.go:240] waiting for startup goroutines ...
	I0501 02:34:44.759151   32853 start.go:254] writing updated cluster config ...
	I0501 02:34:44.759456   32853 ssh_runner.go:195] Run: rm -f paused
	I0501 02:34:44.808612   32853 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0501 02:34:44.811518   32853 out.go:177] * Done! kubectl is now configured to use "ha-329926" cluster and "default" namespace by default
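	
	The readiness loop recorded above is nothing more than repeated GETs of each control-plane pod followed by a check of its Ready condition, with the "Waited ... due to client-side throttling" lines coming from client-go's own rate limiter. As a rough illustration only (not minikube's actual code), a minimal client-go sketch of that polling pattern could look like the following; the kubeconfig path, the 500ms sleep, and the hard-coded pod name etcd-ha-329926-m03 are assumptions taken from the log for the sake of the example:
	
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// podReady reports whether the pod's Ready condition is True, i.e. the
	// `has status "Ready":"True"` result printed by pod_ready.go above.
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
	
		ctx := context.Background()
		deadline := time.Now().Add(6 * time.Minute) // the log waits "up to 6m0s" per pod
		for time.Now().Before(deadline) {
			// Mirrors the GET /api/v1/namespaces/kube-system/pods/<name> requests above.
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "etcd-ha-329926-m03", metav1.GetOptions{})
			if err == nil && podReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(500 * time.Millisecond) // crude stand-in for minikube's own retry interval
		}
		fmt.Println("timed out waiting for pod")
	}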
	
	
	==> CRI-O <==
	May 01 02:39:20 ha-329926 crio[686]: time="2024-05-01 02:39:20.926492387Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714531160926468324,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3da1766b-cf63-4279-815f-8d6f2abbeda3 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 02:39:20 ha-329926 crio[686]: time="2024-05-01 02:39:20.927058147Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=43a3d387-9b1c-4d15-adde-2383660843fe name=/runtime.v1.RuntimeService/ListContainers
	May 01 02:39:20 ha-329926 crio[686]: time="2024-05-01 02:39:20.927146534Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=43a3d387-9b1c-4d15-adde-2383660843fe name=/runtime.v1.RuntimeService/ListContainers
	May 01 02:39:20 ha-329926 crio[686]: time="2024-05-01 02:39:20.927397601Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4d8c54a9eb6fd7f6b4514ddfa0201da9a28692806db71b7d277493e9f9b90233,PodSandboxId:abf4acd7dd09ff0cf728fe61b1bf3c73291eacc97a98f9f2d79b9fe49a629266,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714530889047890387,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-nwj5x,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0cfb5bda-fca7-479f-98d3-6be9bddf0e1c,},Annotations:map[string]string{io.kubernetes.container.hash: b50f62c1,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:619f66869569c782fd59143653e79694cc74f9ae89927d4f16dfcb10f47d0e03,PodSandboxId:0fe93b95f6356e43c599c74942976a3c1c2025ad4e52377711767c38c88d3d63,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714530725115289651,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cfdqc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a37e982e-9e4f-43bf-b957-0d6f082f4ec8,},Annotations:map[string]string{io.kubernetes.container.hash: d10dbf58,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:693a12cd2b2c6de7a7c03e4d290e69cdcef9f4f17b75ff84f84fb81b8297cd63,PodSandboxId:1771f42c6abecb905d783dc2c43071807fc9138d9eae9432b2b132602a450cb2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714530725082746654,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2h8lc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
937e09f0-6a7d-4387-aa19-ee959eb5a2a5,},Annotations:map[string]string{io.kubernetes.container.hash: 64cd0f7d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbc7b6bc224b5b53e156316187f05c941fd17da22bca2cc7fecf5071d8eb4d38,PodSandboxId:05fed297415fe992b6ceac2c7aef1f62bcd2e60cf49b1d9d743697eee2cb3af3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1714530724054226796,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 371423a6-a156-4e8d-bf66-812d606cc8d7,},Annotations:map[string]string{io.kubernetes.container.hash: 79218c39,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:257c4b72e49ea613701bb138700cc82cde325fb0c005942fc50bd070378cf0eb,PodSandboxId:ad0b43789b437ced381dd7eb2d9868a7746a793b32c75f341a8f9efae3a1de24,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:17145307
22097649549,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-kcmp7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e15c166-9ba1-40c9-8f33-db7f83733932,},Annotations:map[string]string{io.kubernetes.container.hash: fdceac74,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ab64850e34b66cbe91e099c91014051472b87888130297cd7c47a4a78992140,PodSandboxId:f6611da96d51a594f93865322c664539feca749ea5a59ee33b637aae006635ba,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714530722007148476,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-msshn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7575fbfc-11ce-4223-bd99-ff9cdddd3568,},Annotations:map[string]string{io.kubernetes.container.hash: 725b7ad5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9563ee09b7dc14582bda46368040d65e26370cf354a48e6db28fb4d5169a41db,PodSandboxId:8e4b8a65b029e97b7caac8a0741c84135d0828b6c08c910ffe39c62fad15b348,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1714530704705366179,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-329926,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c3b226201c27ab5f848e6c44c130330,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3ffc6d046e214333ed85219819ad54bfa5c3ac0b10167beacf72edbd6035736,PodSandboxId:170d412885089b502fe3701809ec5f3271a9fd4f9181bf1c293cd527d144907b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714530701588736902,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-329926,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50544e7f95cae164184f9b27f78747c6,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d24a4adfe9096e0063099c3390b72f12094c22465e8b666eb999e30740b77ea3,PodSandboxId:f0b4ec2fbb3da1f22c55229886d7442b77bfddb7283930fbd8a5792aab374edd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714530701591213003,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-329926,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d6c0ce9d370e02811c06c5c50fb7da1,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f36a128ab65a069e1cafa5abf1b7774272b73428ea3826f595758d5e99b4e93,PodSandboxId:0c17dc8e917b3fee067acb3ff1b63beb8742f444a12d58ee96183b021052a9ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714530701461255731,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name:
etcd-ha-329926,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 684463257510837a1c150a7df713bf62,},Annotations:map[string]string{io.kubernetes.container.hash: d6da2a03,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:347407ef9dd66d0f2a44d6bc871649c2f38c1263ef6f3a33d6574f0e149ab701,PodSandboxId:65643d458b7e95f734a62743c303ec72adbb23f0caf328e66b40f003fc10141e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714530701541408038,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-329926,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c603c91fe09a36a9d3862475188142a,},Annotations:map[string]string{io.kubernetes.container.hash: 740b8b39,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=43a3d387-9b1c-4d15-adde-2383660843fe name=/runtime.v1.RuntimeService/ListContainers
	May 01 02:39:20 ha-329926 crio[686]: time="2024-05-01 02:39:20.973020247Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9d59f635-a072-4d9c-970b-4ec1fa8bf559 name=/runtime.v1.RuntimeService/Version
	May 01 02:39:20 ha-329926 crio[686]: time="2024-05-01 02:39:20.973792153Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9d59f635-a072-4d9c-970b-4ec1fa8bf559 name=/runtime.v1.RuntimeService/Version
	May 01 02:39:20 ha-329926 crio[686]: time="2024-05-01 02:39:20.975163340Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8be37cc1-b08f-4a0d-a855-7bad47d20bb8 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 02:39:20 ha-329926 crio[686]: time="2024-05-01 02:39:20.975588252Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714531160975567464,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8be37cc1-b08f-4a0d-a855-7bad47d20bb8 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 02:39:20 ha-329926 crio[686]: time="2024-05-01 02:39:20.976916747Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ca601b7b-779a-43d8-b8cf-5a5d19c64ee1 name=/runtime.v1.RuntimeService/ListContainers
	May 01 02:39:20 ha-329926 crio[686]: time="2024-05-01 02:39:20.976971269Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ca601b7b-779a-43d8-b8cf-5a5d19c64ee1 name=/runtime.v1.RuntimeService/ListContainers
	May 01 02:39:20 ha-329926 crio[686]: time="2024-05-01 02:39:20.977198897Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4d8c54a9eb6fd7f6b4514ddfa0201da9a28692806db71b7d277493e9f9b90233,PodSandboxId:abf4acd7dd09ff0cf728fe61b1bf3c73291eacc97a98f9f2d79b9fe49a629266,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714530889047890387,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-nwj5x,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0cfb5bda-fca7-479f-98d3-6be9bddf0e1c,},Annotations:map[string]string{io.kubernetes.container.hash: b50f62c1,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:619f66869569c782fd59143653e79694cc74f9ae89927d4f16dfcb10f47d0e03,PodSandboxId:0fe93b95f6356e43c599c74942976a3c1c2025ad4e52377711767c38c88d3d63,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714530725115289651,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cfdqc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a37e982e-9e4f-43bf-b957-0d6f082f4ec8,},Annotations:map[string]string{io.kubernetes.container.hash: d10dbf58,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:693a12cd2b2c6de7a7c03e4d290e69cdcef9f4f17b75ff84f84fb81b8297cd63,PodSandboxId:1771f42c6abecb905d783dc2c43071807fc9138d9eae9432b2b132602a450cb2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714530725082746654,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2h8lc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
937e09f0-6a7d-4387-aa19-ee959eb5a2a5,},Annotations:map[string]string{io.kubernetes.container.hash: 64cd0f7d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbc7b6bc224b5b53e156316187f05c941fd17da22bca2cc7fecf5071d8eb4d38,PodSandboxId:05fed297415fe992b6ceac2c7aef1f62bcd2e60cf49b1d9d743697eee2cb3af3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1714530724054226796,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 371423a6-a156-4e8d-bf66-812d606cc8d7,},Annotations:map[string]string{io.kubernetes.container.hash: 79218c39,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:257c4b72e49ea613701bb138700cc82cde325fb0c005942fc50bd070378cf0eb,PodSandboxId:ad0b43789b437ced381dd7eb2d9868a7746a793b32c75f341a8f9efae3a1de24,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:17145307
22097649549,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-kcmp7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e15c166-9ba1-40c9-8f33-db7f83733932,},Annotations:map[string]string{io.kubernetes.container.hash: fdceac74,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ab64850e34b66cbe91e099c91014051472b87888130297cd7c47a4a78992140,PodSandboxId:f6611da96d51a594f93865322c664539feca749ea5a59ee33b637aae006635ba,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714530722007148476,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-msshn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7575fbfc-11ce-4223-bd99-ff9cdddd3568,},Annotations:map[string]string{io.kubernetes.container.hash: 725b7ad5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9563ee09b7dc14582bda46368040d65e26370cf354a48e6db28fb4d5169a41db,PodSandboxId:8e4b8a65b029e97b7caac8a0741c84135d0828b6c08c910ffe39c62fad15b348,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1714530704705366179,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-329926,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c3b226201c27ab5f848e6c44c130330,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3ffc6d046e214333ed85219819ad54bfa5c3ac0b10167beacf72edbd6035736,PodSandboxId:170d412885089b502fe3701809ec5f3271a9fd4f9181bf1c293cd527d144907b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714530701588736902,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-329926,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50544e7f95cae164184f9b27f78747c6,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d24a4adfe9096e0063099c3390b72f12094c22465e8b666eb999e30740b77ea3,PodSandboxId:f0b4ec2fbb3da1f22c55229886d7442b77bfddb7283930fbd8a5792aab374edd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714530701591213003,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-329926,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d6c0ce9d370e02811c06c5c50fb7da1,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f36a128ab65a069e1cafa5abf1b7774272b73428ea3826f595758d5e99b4e93,PodSandboxId:0c17dc8e917b3fee067acb3ff1b63beb8742f444a12d58ee96183b021052a9ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714530701461255731,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name:
etcd-ha-329926,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 684463257510837a1c150a7df713bf62,},Annotations:map[string]string{io.kubernetes.container.hash: d6da2a03,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:347407ef9dd66d0f2a44d6bc871649c2f38c1263ef6f3a33d6574f0e149ab701,PodSandboxId:65643d458b7e95f734a62743c303ec72adbb23f0caf328e66b40f003fc10141e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714530701541408038,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-329926,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c603c91fe09a36a9d3862475188142a,},Annotations:map[string]string{io.kubernetes.container.hash: 740b8b39,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ca601b7b-779a-43d8-b8cf-5a5d19c64ee1 name=/runtime.v1.RuntimeService/ListContainers
	May 01 02:39:21 ha-329926 crio[686]: time="2024-05-01 02:39:21.035191195Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=949f1b9e-d2ff-42f0-8166-ecafde144417 name=/runtime.v1.RuntimeService/Version
	May 01 02:39:21 ha-329926 crio[686]: time="2024-05-01 02:39:21.035273916Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=949f1b9e-d2ff-42f0-8166-ecafde144417 name=/runtime.v1.RuntimeService/Version
	May 01 02:39:21 ha-329926 crio[686]: time="2024-05-01 02:39:21.037493930Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=18ee3da3-1760-4dff-95ad-0eb7ba5fc6a9 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 02:39:21 ha-329926 crio[686]: time="2024-05-01 02:39:21.038088879Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714531161038059873,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=18ee3da3-1760-4dff-95ad-0eb7ba5fc6a9 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 02:39:21 ha-329926 crio[686]: time="2024-05-01 02:39:21.039146335Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=baa79fa6-3818-4f37-a2fc-48d5fa9778b6 name=/runtime.v1.RuntimeService/ListContainers
	May 01 02:39:21 ha-329926 crio[686]: time="2024-05-01 02:39:21.039227764Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=baa79fa6-3818-4f37-a2fc-48d5fa9778b6 name=/runtime.v1.RuntimeService/ListContainers
	May 01 02:39:21 ha-329926 crio[686]: time="2024-05-01 02:39:21.039455274Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4d8c54a9eb6fd7f6b4514ddfa0201da9a28692806db71b7d277493e9f9b90233,PodSandboxId:abf4acd7dd09ff0cf728fe61b1bf3c73291eacc97a98f9f2d79b9fe49a629266,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714530889047890387,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-nwj5x,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0cfb5bda-fca7-479f-98d3-6be9bddf0e1c,},Annotations:map[string]string{io.kubernetes.container.hash: b50f62c1,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:619f66869569c782fd59143653e79694cc74f9ae89927d4f16dfcb10f47d0e03,PodSandboxId:0fe93b95f6356e43c599c74942976a3c1c2025ad4e52377711767c38c88d3d63,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714530725115289651,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cfdqc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a37e982e-9e4f-43bf-b957-0d6f082f4ec8,},Annotations:map[string]string{io.kubernetes.container.hash: d10dbf58,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:693a12cd2b2c6de7a7c03e4d290e69cdcef9f4f17b75ff84f84fb81b8297cd63,PodSandboxId:1771f42c6abecb905d783dc2c43071807fc9138d9eae9432b2b132602a450cb2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714530725082746654,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2h8lc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
937e09f0-6a7d-4387-aa19-ee959eb5a2a5,},Annotations:map[string]string{io.kubernetes.container.hash: 64cd0f7d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbc7b6bc224b5b53e156316187f05c941fd17da22bca2cc7fecf5071d8eb4d38,PodSandboxId:05fed297415fe992b6ceac2c7aef1f62bcd2e60cf49b1d9d743697eee2cb3af3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1714530724054226796,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 371423a6-a156-4e8d-bf66-812d606cc8d7,},Annotations:map[string]string{io.kubernetes.container.hash: 79218c39,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:257c4b72e49ea613701bb138700cc82cde325fb0c005942fc50bd070378cf0eb,PodSandboxId:ad0b43789b437ced381dd7eb2d9868a7746a793b32c75f341a8f9efae3a1de24,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:17145307
22097649549,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-kcmp7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e15c166-9ba1-40c9-8f33-db7f83733932,},Annotations:map[string]string{io.kubernetes.container.hash: fdceac74,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ab64850e34b66cbe91e099c91014051472b87888130297cd7c47a4a78992140,PodSandboxId:f6611da96d51a594f93865322c664539feca749ea5a59ee33b637aae006635ba,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714530722007148476,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-msshn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7575fbfc-11ce-4223-bd99-ff9cdddd3568,},Annotations:map[string]string{io.kubernetes.container.hash: 725b7ad5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9563ee09b7dc14582bda46368040d65e26370cf354a48e6db28fb4d5169a41db,PodSandboxId:8e4b8a65b029e97b7caac8a0741c84135d0828b6c08c910ffe39c62fad15b348,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1714530704705366179,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-329926,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c3b226201c27ab5f848e6c44c130330,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3ffc6d046e214333ed85219819ad54bfa5c3ac0b10167beacf72edbd6035736,PodSandboxId:170d412885089b502fe3701809ec5f3271a9fd4f9181bf1c293cd527d144907b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714530701588736902,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-329926,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50544e7f95cae164184f9b27f78747c6,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d24a4adfe9096e0063099c3390b72f12094c22465e8b666eb999e30740b77ea3,PodSandboxId:f0b4ec2fbb3da1f22c55229886d7442b77bfddb7283930fbd8a5792aab374edd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714530701591213003,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-329926,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d6c0ce9d370e02811c06c5c50fb7da1,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f36a128ab65a069e1cafa5abf1b7774272b73428ea3826f595758d5e99b4e93,PodSandboxId:0c17dc8e917b3fee067acb3ff1b63beb8742f444a12d58ee96183b021052a9ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714530701461255731,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name:
etcd-ha-329926,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 684463257510837a1c150a7df713bf62,},Annotations:map[string]string{io.kubernetes.container.hash: d6da2a03,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:347407ef9dd66d0f2a44d6bc871649c2f38c1263ef6f3a33d6574f0e149ab701,PodSandboxId:65643d458b7e95f734a62743c303ec72adbb23f0caf328e66b40f003fc10141e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714530701541408038,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-329926,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c603c91fe09a36a9d3862475188142a,},Annotations:map[string]string{io.kubernetes.container.hash: 740b8b39,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=baa79fa6-3818-4f37-a2fc-48d5fa9778b6 name=/runtime.v1.RuntimeService/ListContainers
	May 01 02:39:21 ha-329926 crio[686]: time="2024-05-01 02:39:21.086271287Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=faf4a449-0ff7-4c58-b803-da3dfd7bed13 name=/runtime.v1.RuntimeService/Version
	May 01 02:39:21 ha-329926 crio[686]: time="2024-05-01 02:39:21.086354494Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=faf4a449-0ff7-4c58-b803-da3dfd7bed13 name=/runtime.v1.RuntimeService/Version
	May 01 02:39:21 ha-329926 crio[686]: time="2024-05-01 02:39:21.089043269Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b7edc049-7a52-460c-8a81-c34b5fa6fda7 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 02:39:21 ha-329926 crio[686]: time="2024-05-01 02:39:21.089467846Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714531161089442243,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b7edc049-7a52-460c-8a81-c34b5fa6fda7 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 02:39:21 ha-329926 crio[686]: time="2024-05-01 02:39:21.090099244Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c0033292-f5c5-4252-bd0f-4394e3bc759f name=/runtime.v1.RuntimeService/ListContainers
	May 01 02:39:21 ha-329926 crio[686]: time="2024-05-01 02:39:21.090148786Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c0033292-f5c5-4252-bd0f-4394e3bc759f name=/runtime.v1.RuntimeService/ListContainers
	May 01 02:39:21 ha-329926 crio[686]: time="2024-05-01 02:39:21.090463731Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4d8c54a9eb6fd7f6b4514ddfa0201da9a28692806db71b7d277493e9f9b90233,PodSandboxId:abf4acd7dd09ff0cf728fe61b1bf3c73291eacc97a98f9f2d79b9fe49a629266,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714530889047890387,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-nwj5x,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0cfb5bda-fca7-479f-98d3-6be9bddf0e1c,},Annotations:map[string]string{io.kubernetes.container.hash: b50f62c1,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:619f66869569c782fd59143653e79694cc74f9ae89927d4f16dfcb10f47d0e03,PodSandboxId:0fe93b95f6356e43c599c74942976a3c1c2025ad4e52377711767c38c88d3d63,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714530725115289651,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cfdqc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a37e982e-9e4f-43bf-b957-0d6f082f4ec8,},Annotations:map[string]string{io.kubernetes.container.hash: d10dbf58,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:693a12cd2b2c6de7a7c03e4d290e69cdcef9f4f17b75ff84f84fb81b8297cd63,PodSandboxId:1771f42c6abecb905d783dc2c43071807fc9138d9eae9432b2b132602a450cb2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714530725082746654,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2h8lc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
937e09f0-6a7d-4387-aa19-ee959eb5a2a5,},Annotations:map[string]string{io.kubernetes.container.hash: 64cd0f7d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbc7b6bc224b5b53e156316187f05c941fd17da22bca2cc7fecf5071d8eb4d38,PodSandboxId:05fed297415fe992b6ceac2c7aef1f62bcd2e60cf49b1d9d743697eee2cb3af3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1714530724054226796,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 371423a6-a156-4e8d-bf66-812d606cc8d7,},Annotations:map[string]string{io.kubernetes.container.hash: 79218c39,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:257c4b72e49ea613701bb138700cc82cde325fb0c005942fc50bd070378cf0eb,PodSandboxId:ad0b43789b437ced381dd7eb2d9868a7746a793b32c75f341a8f9efae3a1de24,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:17145307
22097649549,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-kcmp7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e15c166-9ba1-40c9-8f33-db7f83733932,},Annotations:map[string]string{io.kubernetes.container.hash: fdceac74,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ab64850e34b66cbe91e099c91014051472b87888130297cd7c47a4a78992140,PodSandboxId:f6611da96d51a594f93865322c664539feca749ea5a59ee33b637aae006635ba,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714530722007148476,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-msshn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7575fbfc-11ce-4223-bd99-ff9cdddd3568,},Annotations:map[string]string{io.kubernetes.container.hash: 725b7ad5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9563ee09b7dc14582bda46368040d65e26370cf354a48e6db28fb4d5169a41db,PodSandboxId:8e4b8a65b029e97b7caac8a0741c84135d0828b6c08c910ffe39c62fad15b348,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1714530704705366179,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-329926,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c3b226201c27ab5f848e6c44c130330,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3ffc6d046e214333ed85219819ad54bfa5c3ac0b10167beacf72edbd6035736,PodSandboxId:170d412885089b502fe3701809ec5f3271a9fd4f9181bf1c293cd527d144907b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714530701588736902,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-329926,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50544e7f95cae164184f9b27f78747c6,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d24a4adfe9096e0063099c3390b72f12094c22465e8b666eb999e30740b77ea3,PodSandboxId:f0b4ec2fbb3da1f22c55229886d7442b77bfddb7283930fbd8a5792aab374edd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714530701591213003,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-329926,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d6c0ce9d370e02811c06c5c50fb7da1,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f36a128ab65a069e1cafa5abf1b7774272b73428ea3826f595758d5e99b4e93,PodSandboxId:0c17dc8e917b3fee067acb3ff1b63beb8742f444a12d58ee96183b021052a9ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714530701461255731,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name:
etcd-ha-329926,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 684463257510837a1c150a7df713bf62,},Annotations:map[string]string{io.kubernetes.container.hash: d6da2a03,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:347407ef9dd66d0f2a44d6bc871649c2f38c1263ef6f3a33d6574f0e149ab701,PodSandboxId:65643d458b7e95f734a62743c303ec72adbb23f0caf328e66b40f003fc10141e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714530701541408038,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-329926,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c603c91fe09a36a9d3862475188142a,},Annotations:map[string]string{io.kubernetes.container.hash: 740b8b39,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c0033292-f5c5-4252-bd0f-4394e3bc759f name=/runtime.v1.RuntimeService/ListContainers
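The repeated Version / ImageFsInfo / ListContainers exchanges above are CRI clients (most likely the kubelet plus the log collector) polling CRI-O over its local socket; the empty ContainerFilter is why crio logs "No filters were applied, returning full container list". For reference only, and not part of the minikube test code, a minimal Go sketch using the standard k8s.io/cri-api v1 stubs that issues the same unfiltered ListContainers call against the socket advertised in the node annotation (unix:///var/run/crio/crio.sock) could look like this:

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// grpc-go understands the unix:// scheme, so the CRI-O socket can be
	// dialed directly without a custom dialer.
	conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)

	// An empty filter matches the "No filters were applied" debug lines above.
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%s  %-25s  %s\n", c.Id[:13], c.Metadata.Name, c.State)
	}
}

The "==> container status <==" table below is essentially the human-readable rendering of the same response.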
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4d8c54a9eb6fd       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   abf4acd7dd09f       busybox-fc5497c4f-nwj5x
	619f66869569c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago       Running             coredns                   0                   0fe93b95f6356       coredns-7db6d8ff4d-cfdqc
	693a12cd2b2c6       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago       Running             coredns                   0                   1771f42c6abec       coredns-7db6d8ff4d-2h8lc
	fbc7b6bc224b5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago       Running             storage-provisioner       0                   05fed297415fe       storage-provisioner
	257c4b72e49ea       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      7 minutes ago       Running             kindnet-cni               0                   ad0b43789b437       kindnet-kcmp7
	2ab64850e34b6       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      7 minutes ago       Running             kube-proxy                0                   f6611da96d51a       kube-proxy-msshn
	9563ee09b7dc1       ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a     7 minutes ago       Running             kube-vip                  0                   8e4b8a65b029e       kube-vip-ha-329926
	d24a4adfe9096       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      7 minutes ago       Running             kube-controller-manager   0                   f0b4ec2fbb3da       kube-controller-manager-ha-329926
	e3ffc6d046e21       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      7 minutes ago       Running             kube-scheduler            0                   170d412885089       kube-scheduler-ha-329926
	347407ef9dd66       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      7 minutes ago       Running             kube-apiserver            0                   65643d458b7e9       kube-apiserver-ha-329926
	9f36a128ab65a       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago       Running             etcd                      0                   0c17dc8e917b3       etcd-ha-329926
	
	
	==> coredns [619f66869569c782fd59143653e79694cc74f9ae89927d4f16dfcb10f47d0e03] <==
	[INFO] 10.244.1.2:53229 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000157631s
	[INFO] 10.244.1.2:58661 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.013593602s
	[INFO] 10.244.1.2:38209 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000174169s
	[INFO] 10.244.1.2:49411 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000226927s
	[INFO] 10.244.0.4:36823 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000251251s
	[INFO] 10.244.0.4:50159 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001217267s
	[INFO] 10.244.0.4:40861 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000095644s
	[INFO] 10.244.0.4:39347 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000037736s
	[INFO] 10.244.2.2:41105 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000265426s
	[INFO] 10.244.2.2:60245 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000092358s
	[INFO] 10.244.2.2:33866 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00027339s
	[INFO] 10.244.2.2:40430 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000118178s
	[INFO] 10.244.2.2:34835 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000101675s
	[INFO] 10.244.1.2:50970 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000173405s
	[INFO] 10.244.1.2:45808 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000138806s
	[INFO] 10.244.0.4:35255 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000156547s
	[INFO] 10.244.0.4:41916 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000142712s
	[INFO] 10.244.0.4:47485 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000089433s
	[INFO] 10.244.2.2:53686 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000133335s
	[INFO] 10.244.2.2:36841 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000214942s
	[INFO] 10.244.2.2:60707 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000154s
	[INFO] 10.244.1.2:56577 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000484498s
	[INFO] 10.244.0.4:54313 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000184738s
	[INFO] 10.244.0.4:52463 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000369344s
	[INFO] 10.244.2.2:41039 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000224698s
	
	
	==> coredns [693a12cd2b2c6de7a7c03e4d290e69cdcef9f4f17b75ff84f84fb81b8297cd63] <==
	[INFO] 10.244.1.2:53262 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.005416004s
	[INFO] 10.244.0.4:55487 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000178658s
	[INFO] 10.244.1.2:56056 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000290654s
	[INFO] 10.244.1.2:49988 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000174539s
	[INFO] 10.244.1.2:51093 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.019474136s
	[INFO] 10.244.1.2:60518 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00017936s
	[INFO] 10.244.0.4:49957 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000203599s
	[INFO] 10.244.0.4:42538 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001710693s
	[INFO] 10.244.0.4:56099 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000083655s
	[INFO] 10.244.0.4:32984 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000156518s
	[INFO] 10.244.2.2:55668 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001793326s
	[INFO] 10.244.2.2:50808 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001174633s
	[INFO] 10.244.2.2:44291 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000119382s
	[INFO] 10.244.1.2:38278 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000204436s
	[INFO] 10.244.1.2:59141 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000117309s
	[INFO] 10.244.0.4:37516 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00005532s
	[INFO] 10.244.2.2:57332 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000189855s
	[INFO] 10.244.1.2:34171 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00024042s
	[INFO] 10.244.1.2:37491 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000234774s
	[INFO] 10.244.1.2:47588 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000815872s
	[INFO] 10.244.0.4:38552 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000135078s
	[INFO] 10.244.0.4:37827 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000154857s
	[INFO] 10.244.2.2:47767 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000154967s
	[INFO] 10.244.2.2:56393 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000156764s
	[INFO] 10.244.2.2:38616 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000127045s
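A note on the query pattern in both CoreDNS logs: the NXDOMAIN answers for kubernetes.default.default.svc.cluster.local followed by NOERROR for kubernetes.default.svc.cluster.local are the expected effect of the pod resolv.conf search path (<namespace>.svc.cluster.local, svc.cluster.local, cluster.local, with ndots:5) expanding the short name kubernetes.default, and the PTR lookups for 1.0.96.10.in-addr.arpa and 10.0.96.10.in-addr.arpa are reverse lookups of 10.96.0.1 and 10.96.0.10, which in a default minikube cluster are typically the kubernetes and kube-dns Service ClusterIPs. A trivial, hypothetical Go sketch that, run from inside any pod in this cluster, would generate exactly that sequence of CoreDNS log lines:

package main

import (
	"fmt"
	"net"
)

func main() {
	// Resolving the short name goes through the pod's resolv.conf search
	// path, producing the NXDOMAIN/NOERROR sequence seen in the logs above.
	addrs, err := net.LookupHost("kubernetes.default")
	if err != nil {
		panic(err)
	}
	fmt.Println(addrs) // typically the kubernetes Service ClusterIP, e.g. 10.96.0.1
}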
	
	
	==> describe nodes <==
	Name:               ha-329926
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-329926
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e
	                    minikube.k8s.io/name=ha-329926
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_01T02_31_49_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 May 2024 02:31:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-329926
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 May 2024 02:39:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 May 2024 02:35:22 +0000   Wed, 01 May 2024 02:31:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 May 2024 02:35:22 +0000   Wed, 01 May 2024 02:31:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 May 2024 02:35:22 +0000   Wed, 01 May 2024 02:31:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 May 2024 02:35:22 +0000   Wed, 01 May 2024 02:32:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.5
	  Hostname:    ha-329926
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f2958e1e59474320901fe20ba723db00
	  System UUID:                f2958e1e-5947-4320-901f-e20ba723db00
	  Boot ID:                    29fc4c0c-83d6-4af9-8767-4e1b7b7102d4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-nwj5x              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m36s
	  kube-system                 coredns-7db6d8ff4d-2h8lc             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m20s
	  kube-system                 coredns-7db6d8ff4d-cfdqc             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m20s
	  kube-system                 etcd-ha-329926                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m33s
	  kube-system                 kindnet-kcmp7                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m20s
	  kube-system                 kube-apiserver-ha-329926             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m33s
	  kube-system                 kube-controller-manager-ha-329926    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m35s
	  kube-system                 kube-proxy-msshn                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m20s
	  kube-system                 kube-scheduler-ha-329926             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m33s
	  kube-system                 kube-vip-ha-329926                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m36s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m19s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 7m18s  kube-proxy       
	  Normal  Starting                 7m33s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m33s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m33s  kubelet          Node ha-329926 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m33s  kubelet          Node ha-329926 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m33s  kubelet          Node ha-329926 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m21s  node-controller  Node ha-329926 event: Registered Node ha-329926 in Controller
	  Normal  NodeReady                7m18s  kubelet          Node ha-329926 status is now: NodeReady
	  Normal  RegisteredNode           5m56s  node-controller  Node ha-329926 event: Registered Node ha-329926 in Controller
	  Normal  RegisteredNode           4m41s  node-controller  Node ha-329926 event: Registered Node ha-329926 in Controller
	
	
	Name:               ha-329926-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-329926-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e
	                    minikube.k8s.io/name=ha-329926
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_01T02_33_11_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 May 2024 02:33:07 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-329926-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 May 2024 02:35:52 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 01 May 2024 02:35:09 +0000   Wed, 01 May 2024 02:36:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 01 May 2024 02:35:09 +0000   Wed, 01 May 2024 02:36:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 01 May 2024 02:35:09 +0000   Wed, 01 May 2024 02:36:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 01 May 2024 02:35:09 +0000   Wed, 01 May 2024 02:36:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.79
	  Hostname:    ha-329926-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 135aac161d694487846d436743753149
	  System UUID:                135aac16-1d69-4487-846d-436743753149
	  Boot ID:                    34317182-9a7b-42af-9a1d-807830167258
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-h8dxv                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m36s
	  kube-system                 etcd-ha-329926-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m12s
	  kube-system                 kindnet-9r8zn                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m14s
	  kube-system                 kube-apiserver-ha-329926-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m13s
	  kube-system                 kube-controller-manager-ha-329926-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m12s
	  kube-system                 kube-proxy-rfsm8                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m14s
	  kube-system                 kube-scheduler-ha-329926-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m13s
	  kube-system                 kube-vip-ha-329926-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m9s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  6m14s (x8 over 6m14s)  kubelet          Node ha-329926-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m14s (x8 over 6m14s)  kubelet          Node ha-329926-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m14s (x7 over 6m14s)  kubelet          Node ha-329926-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m14s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m11s                  node-controller  Node ha-329926-m02 event: Registered Node ha-329926-m02 in Controller
	  Normal  RegisteredNode           5m56s                  node-controller  Node ha-329926-m02 event: Registered Node ha-329926-m02 in Controller
	  Normal  RegisteredNode           4m41s                  node-controller  Node ha-329926-m02 event: Registered Node ha-329926-m02 in Controller
	  Normal  NodeNotReady             2m46s                  node-controller  Node ha-329926-m02 status is now: NodeNotReady
	
	
	Name:               ha-329926-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-329926-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e
	                    minikube.k8s.io/name=ha-329926
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_01T02_34_25_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 May 2024 02:34:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-329926-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 May 2024 02:39:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 May 2024 02:34:51 +0000   Wed, 01 May 2024 02:34:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 May 2024 02:34:51 +0000   Wed, 01 May 2024 02:34:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 May 2024 02:34:51 +0000   Wed, 01 May 2024 02:34:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 May 2024 02:34:51 +0000   Wed, 01 May 2024 02:34:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.115
	  Hostname:    ha-329926-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1767eff05cce4be88efdc97aef5d41f4
	  System UUID:                1767eff0-5cce-4be8-8efd-c97aef5d41f4
	  Boot ID:                    cb6b191b-c518-4f73-b29a-a16a5fcd9713
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-s528n                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m36s
	  kube-system                 etcd-ha-329926-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m58s
	  kube-system                 kindnet-7gr9n                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m
	  kube-system                 kube-apiserver-ha-329926-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 kube-controller-manager-ha-329926-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 kube-proxy-jfnk9                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  kube-system                 kube-scheduler-ha-329926-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 kube-vip-ha-329926-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age              From             Message
	  ----    ------                   ----             ----             -------
	  Normal  Starting                 4m54s            kube-proxy       
	  Normal  NodeHasSufficientMemory  5m (x8 over 5m)  kubelet          Node ha-329926-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m (x8 over 5m)  kubelet          Node ha-329926-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m (x7 over 5m)  kubelet          Node ha-329926-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m               kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m56s            node-controller  Node ha-329926-m03 event: Registered Node ha-329926-m03 in Controller
	  Normal  RegisteredNode           4m56s            node-controller  Node ha-329926-m03 event: Registered Node ha-329926-m03 in Controller
	  Normal  RegisteredNode           4m41s            node-controller  Node ha-329926-m03 event: Registered Node ha-329926-m03 in Controller
	
	
	Name:               ha-329926-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-329926-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e
	                    minikube.k8s.io/name=ha-329926
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_01T02_35_25_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 May 2024 02:35:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-329926-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 May 2024 02:39:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 May 2024 02:35:55 +0000   Wed, 01 May 2024 02:35:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 May 2024 02:35:55 +0000   Wed, 01 May 2024 02:35:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 May 2024 02:35:55 +0000   Wed, 01 May 2024 02:35:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 May 2024 02:35:55 +0000   Wed, 01 May 2024 02:35:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.84
	  Hostname:    ha-329926-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b19ce422aa224cda91e88f6cd8b003f9
	  System UUID:                b19ce422-aa22-4cda-91e8-8f6cd8b003f9
	  Boot ID:                    8e829b97-ffa9-4d75-abf6-2a174d768e30
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-86ngt       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m57s
	  kube-system                 kube-proxy-9492r    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m51s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m57s (x2 over 3m57s)  kubelet          Node ha-329926-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m57s (x2 over 3m57s)  kubelet          Node ha-329926-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m57s (x2 over 3m57s)  kubelet          Node ha-329926-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m57s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m56s                  node-controller  Node ha-329926-m04 event: Registered Node ha-329926-m04 in Controller
	  Normal  RegisteredNode           3m56s                  node-controller  Node ha-329926-m04 event: Registered Node ha-329926-m04 in Controller
	  Normal  RegisteredNode           3m56s                  node-controller  Node ha-329926-m04 event: Registered Node ha-329926-m04 in Controller
	  Normal  NodeReady                3m46s                  kubelet          Node ha-329926-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[May 1 02:31] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052256] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043894] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.694169] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.641579] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.672174] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.723194] systemd-fstab-generator[602]: Ignoring "noauto" option for root device
	[  +0.059078] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.050190] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[  +0.172804] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.147592] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.297725] systemd-fstab-generator[671]: Ignoring "noauto" option for root device
	[  +4.784571] systemd-fstab-generator[771]: Ignoring "noauto" option for root device
	[  +0.063787] kauditd_printk_skb: 130 callbacks suppressed
	[  +5.533501] systemd-fstab-generator[961]: Ignoring "noauto" option for root device
	[  +0.060916] kauditd_printk_skb: 18 callbacks suppressed
	[  +7.479829] systemd-fstab-generator[1381]: Ignoring "noauto" option for root device
	[  +0.092024] kauditd_printk_skb: 79 callbacks suppressed
	[May 1 02:32] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.650154] kauditd_printk_skb: 74 callbacks suppressed
	
	
	==> etcd [9f36a128ab65a069e1cafa5abf1b7774272b73428ea3826f595758d5e99b4e93] <==
	{"level":"warn","ts":"2024-05-01T02:39:21.323595Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"64dbb1bdcfddc92c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-01T02:39:21.371042Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"64dbb1bdcfddc92c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-01T02:39:21.382138Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.79:2380/version","remote-member-id":"64dbb1bdcfddc92c","error":"Get \"https://192.168.39.79:2380/version\": dial tcp 192.168.39.79:2380: i/o timeout"}
	{"level":"warn","ts":"2024-05-01T02:39:21.38224Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"64dbb1bdcfddc92c","error":"Get \"https://192.168.39.79:2380/version\": dial tcp 192.168.39.79:2380: i/o timeout"}
	{"level":"warn","ts":"2024-05-01T02:39:21.424032Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"64dbb1bdcfddc92c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-01T02:39:21.435428Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"64dbb1bdcfddc92c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-01T02:39:21.441066Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"64dbb1bdcfddc92c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-01T02:39:21.45276Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"64dbb1bdcfddc92c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-01T02:39:21.460649Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"64dbb1bdcfddc92c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-01T02:39:21.471174Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"64dbb1bdcfddc92c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-01T02:39:21.472754Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"64dbb1bdcfddc92c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-01T02:39:21.481979Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"64dbb1bdcfddc92c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-01T02:39:21.486434Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"64dbb1bdcfddc92c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-01T02:39:21.487551Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"64dbb1bdcfddc92c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-01T02:39:21.49953Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"64dbb1bdcfddc92c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-01T02:39:21.518117Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"64dbb1bdcfddc92c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-01T02:39:21.553208Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"64dbb1bdcfddc92c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-01T02:39:21.561893Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"64dbb1bdcfddc92c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-01T02:39:21.570738Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"64dbb1bdcfddc92c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-01T02:39:21.573567Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"64dbb1bdcfddc92c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-01T02:39:21.5786Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"64dbb1bdcfddc92c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-01T02:39:21.58647Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"64dbb1bdcfddc92c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-01T02:39:21.592416Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"64dbb1bdcfddc92c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-01T02:39:21.600396Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"64dbb1bdcfddc92c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-01T02:39:21.670807Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c5263387c79c0223","from":"c5263387c79c0223","remote-peer-id":"64dbb1bdcfddc92c","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 02:39:21 up 8 min,  0 users,  load average: 0.58, 0.49, 0.28
	Linux ha-329926 5.10.207 #1 SMP Tue Apr 30 22:38:43 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [257c4b72e49ea613701bb138700cc82cde325fb0c005942fc50bd070378cf0eb] <==
	I0501 02:38:43.843581       1 main.go:250] Node ha-329926-m04 has CIDR [10.244.3.0/24] 
	I0501 02:38:53.849998       1 main.go:223] Handling node with IPs: map[192.168.39.5:{}]
	I0501 02:38:53.850046       1 main.go:227] handling current node
	I0501 02:38:53.850057       1 main.go:223] Handling node with IPs: map[192.168.39.79:{}]
	I0501 02:38:53.850063       1 main.go:250] Node ha-329926-m02 has CIDR [10.244.1.0/24] 
	I0501 02:38:53.850162       1 main.go:223] Handling node with IPs: map[192.168.39.115:{}]
	I0501 02:38:53.850167       1 main.go:250] Node ha-329926-m03 has CIDR [10.244.2.0/24] 
	I0501 02:38:53.850205       1 main.go:223] Handling node with IPs: map[192.168.39.84:{}]
	I0501 02:38:53.850241       1 main.go:250] Node ha-329926-m04 has CIDR [10.244.3.0/24] 
	I0501 02:39:03.868857       1 main.go:223] Handling node with IPs: map[192.168.39.5:{}]
	I0501 02:39:03.868904       1 main.go:227] handling current node
	I0501 02:39:03.868916       1 main.go:223] Handling node with IPs: map[192.168.39.79:{}]
	I0501 02:39:03.868922       1 main.go:250] Node ha-329926-m02 has CIDR [10.244.1.0/24] 
	I0501 02:39:03.869022       1 main.go:223] Handling node with IPs: map[192.168.39.115:{}]
	I0501 02:39:03.869055       1 main.go:250] Node ha-329926-m03 has CIDR [10.244.2.0/24] 
	I0501 02:39:03.869107       1 main.go:223] Handling node with IPs: map[192.168.39.84:{}]
	I0501 02:39:03.869138       1 main.go:250] Node ha-329926-m04 has CIDR [10.244.3.0/24] 
	I0501 02:39:13.877997       1 main.go:223] Handling node with IPs: map[192.168.39.5:{}]
	I0501 02:39:13.878150       1 main.go:227] handling current node
	I0501 02:39:13.878187       1 main.go:223] Handling node with IPs: map[192.168.39.79:{}]
	I0501 02:39:13.878208       1 main.go:250] Node ha-329926-m02 has CIDR [10.244.1.0/24] 
	I0501 02:39:13.878396       1 main.go:223] Handling node with IPs: map[192.168.39.115:{}]
	I0501 02:39:13.878425       1 main.go:250] Node ha-329926-m03 has CIDR [10.244.2.0/24] 
	I0501 02:39:13.878493       1 main.go:223] Handling node with IPs: map[192.168.39.84:{}]
	I0501 02:39:13.878512       1 main.go:250] Node ha-329926-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [347407ef9dd66d0f2a44d6bc871649c2f38c1263ef6f3a33d6574f0e149ab701] <==
	I0501 02:31:48.173859       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0501 02:31:48.202132       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0501 02:31:48.233574       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0501 02:32:01.259025       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0501 02:32:01.406512       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0501 02:34:22.090167       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0501 02:34:22.090264       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0501 02:34:22.090350       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 56.4µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0501 02:34:22.091592       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0501 02:34:22.091760       1 timeout.go:142] post-timeout activity - time-elapsed: 1.723373ms, POST "/api/v1/namespaces/kube-system/events" result: <nil>
	E0501 02:34:51.977591       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54016: use of closed network connection
	E0501 02:34:52.184940       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54034: use of closed network connection
	E0501 02:34:52.404232       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54048: use of closed network connection
	E0501 02:34:52.645445       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54076: use of closed network connection
	E0501 02:34:52.872942       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54094: use of closed network connection
	E0501 02:34:53.072199       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54102: use of closed network connection
	E0501 02:34:53.269091       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54128: use of closed network connection
	E0501 02:34:53.473976       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54142: use of closed network connection
	E0501 02:34:53.675277       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54160: use of closed network connection
	E0501 02:34:53.992299       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54184: use of closed network connection
	E0501 02:34:54.200055       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54200: use of closed network connection
	E0501 02:34:54.405926       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54218: use of closed network connection
	E0501 02:34:54.607335       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54244: use of closed network connection
	E0501 02:34:55.030335       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54268: use of closed network connection
	W0501 02:36:17.051220       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.115 192.168.39.5]
	
	
	==> kube-controller-manager [d24a4adfe9096e0063099c3390b72f12094c22465e8b666eb999e30740b77ea3] <==
	I0501 02:34:46.423487       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="62.197µs"
	I0501 02:34:47.410301       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="111.689µs"
	I0501 02:34:47.424534       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="120.812µs"
	I0501 02:34:47.436181       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.729µs"
	I0501 02:34:47.460575       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="82.202µs"
	I0501 02:34:47.464058       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="62.501µs"
	I0501 02:34:47.481353       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="78.292µs"
	I0501 02:34:47.589609       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="85.126µs"
	I0501 02:34:48.132319       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="69.791µs"
	I0501 02:34:50.025583       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.52337ms"
	I0501 02:34:50.027423       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="114.319µs"
	I0501 02:34:50.080334       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.407484ms"
	I0501 02:34:50.080442       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.624µs"
	I0501 02:34:51.488074       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.558313ms"
	I0501 02:34:51.488940       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="606.851µs"
	E0501 02:35:24.461939       1 certificate_controller.go:146] Sync csr-kt9bz failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-kt9bz": the object has been modified; please apply your changes to the latest version and try again
	I0501 02:35:24.761549       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-329926-m04\" does not exist"
	I0501 02:35:24.778000       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-329926-m04" podCIDRs=["10.244.3.0/24"]
	E0501 02:35:24.943411       1 daemon_controller.go:324] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"089e39dc-d22b-4162-b254-170ffb790464", ResourceVersion:"932", Generation:1, CreationTimestamp:time.Date(2024, time.May, 1, 2, 31, 48, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001b14bc0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0
, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolume
Source)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc0020c2240), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001ad8480), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolum
eSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.
VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001ad8498), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersist
entDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"registry.k8s.io/kube-proxy:v1.30.0", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001b14c00)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil), Claims:[]v1.ResourceClaim(nil)}, ResizePolicy:[]v1.ContainerResizePolicy(nil), RestartPolicy:(*v1.ContainerRestartPolicy)(nil), VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-
proxy", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc001f7f080), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContai
ner(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00206b8d8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001babd00), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPoli
cy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil), HostUsers:(*bool)(nil), SchedulingGates:[]v1.PodSchedulingGate(nil), ResourceClaims:[]v1.PodResourceClaim(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc002214250)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc00206ba10)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:3, NumberMisscheduled:0, DesiredNumberScheduled:3, NumberReady:3, ObservedGeneration:1, UpdatedNumberScheduled:3, NumberAvailable:3, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	I0501 02:35:25.657560       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-329926-m04"
	I0501 02:35:35.822369       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-329926-m04"
	I0501 02:36:35.074777       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-329926-m04"
	I0501 02:36:35.260779       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.655882ms"
	I0501 02:36:35.262620       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="57.334µs"
	
	
	==> kube-proxy [2ab64850e34b66cbe91e099c91014051472b87888130297cd7c47a4a78992140] <==
	I0501 02:32:02.374716       1 server_linux.go:69] "Using iptables proxy"
	I0501 02:32:02.384514       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.5"]
	I0501 02:32:02.544454       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0501 02:32:02.544529       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0501 02:32:02.544548       1 server_linux.go:165] "Using iptables Proxier"
	I0501 02:32:02.550009       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0501 02:32:02.550292       1 server.go:872] "Version info" version="v1.30.0"
	I0501 02:32:02.550331       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 02:32:02.560773       1 config.go:192] "Starting service config controller"
	I0501 02:32:02.560817       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0501 02:32:02.560846       1 config.go:101] "Starting endpoint slice config controller"
	I0501 02:32:02.560850       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0501 02:32:02.568529       1 config.go:319] "Starting node config controller"
	I0501 02:32:02.568571       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0501 02:32:02.660905       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0501 02:32:02.660950       1 shared_informer.go:320] Caches are synced for service config
	I0501 02:32:02.669133       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [e3ffc6d046e214333ed85219819ad54bfa5c3ac0b10167beacf72edbd6035736] <==
	I0501 02:34:45.753527       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="6a244374-9326-48de-9c65-1f46061e6e1c" pod="default/busybox-fc5497c4f-h8dxv" assumedNode="ha-329926-m02" currentNode="ha-329926-m03"
	E0501 02:34:45.779238       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-h8dxv\": pod busybox-fc5497c4f-h8dxv is already assigned to node \"ha-329926-m02\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-h8dxv" node="ha-329926-m03"
	E0501 02:34:45.781832       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 6a244374-9326-48de-9c65-1f46061e6e1c(default/busybox-fc5497c4f-h8dxv) was assumed on ha-329926-m03 but assigned to ha-329926-m02" pod="default/busybox-fc5497c4f-h8dxv"
	E0501 02:34:45.781931       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-h8dxv\": pod busybox-fc5497c4f-h8dxv is already assigned to node \"ha-329926-m02\"" pod="default/busybox-fc5497c4f-h8dxv"
	I0501 02:34:45.782004       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-h8dxv" node="ha-329926-m02"
	E0501 02:35:24.875138       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-86ngt\": pod kindnet-86ngt is already assigned to node \"ha-329926-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-86ngt" node="ha-329926-m04"
	E0501 02:35:24.875288       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 64f4f56d-5f20-47a6-8cdb-bb56d4515758(kube-system/kindnet-86ngt) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-86ngt"
	E0501 02:35:24.875317       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-86ngt\": pod kindnet-86ngt is already assigned to node \"ha-329926-m04\"" pod="kube-system/kindnet-86ngt"
	I0501 02:35:24.875337       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-86ngt" node="ha-329926-m04"
	E0501 02:35:24.884208       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-j728v\": pod kube-proxy-j728v is already assigned to node \"ha-329926-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-j728v" node="ha-329926-m04"
	E0501 02:35:24.884314       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 3af4ad58-4beb-45c6-9152-4549816009a5(kube-system/kube-proxy-j728v) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-j728v"
	E0501 02:35:24.884350       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-j728v\": pod kube-proxy-j728v is already assigned to node \"ha-329926-m04\"" pod="kube-system/kube-proxy-j728v"
	I0501 02:35:24.884486       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-j728v" node="ha-329926-m04"
	E0501 02:35:24.886508       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-cc2wd\": pod kindnet-cc2wd is already assigned to node \"ha-329926-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-cc2wd" node="ha-329926-m04"
	E0501 02:35:24.886591       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 9cf82faf-4728-47f7-83e4-36b674b85759(kube-system/kindnet-cc2wd) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-cc2wd"
	E0501 02:35:24.886629       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-cc2wd\": pod kindnet-cc2wd is already assigned to node \"ha-329926-m04\"" pod="kube-system/kindnet-cc2wd"
	I0501 02:35:24.886734       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-cc2wd" node="ha-329926-m04"
	E0501 02:35:25.032187       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-zvz47\": pod kindnet-zvz47 is already assigned to node \"ha-329926-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-zvz47" node="ha-329926-m04"
	E0501 02:35:25.032285       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 1de4bf64-3ff4-42ee-afb5-fe7629e1e992(kube-system/kindnet-zvz47) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-zvz47"
	E0501 02:35:25.032343       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-zvz47\": pod kindnet-zvz47 is already assigned to node \"ha-329926-m04\"" pod="kube-system/kindnet-zvz47"
	I0501 02:35:25.032475       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-zvz47" node="ha-329926-m04"
	E0501 02:35:25.040119       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-77fqn\": pod kube-proxy-77fqn is already assigned to node \"ha-329926-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-77fqn" node="ha-329926-m04"
	E0501 02:35:25.040231       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 9a3678b0-5806-435a-ad11-9368201f3377(kube-system/kube-proxy-77fqn) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-77fqn"
	E0501 02:35:25.040255       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-77fqn\": pod kube-proxy-77fqn is already assigned to node \"ha-329926-m04\"" pod="kube-system/kube-proxy-77fqn"
	I0501 02:35:25.040287       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-77fqn" node="ha-329926-m04"
	
	
	==> kubelet <==
	May 01 02:34:48 ha-329926 kubelet[1388]: E0501 02:34:48.137367    1388 iptables.go:577] "Could not set up iptables canary" err=<
	May 01 02:34:48 ha-329926 kubelet[1388]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 01 02:34:48 ha-329926 kubelet[1388]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 01 02:34:48 ha-329926 kubelet[1388]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 01 02:34:48 ha-329926 kubelet[1388]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 01 02:35:48 ha-329926 kubelet[1388]: E0501 02:35:48.136754    1388 iptables.go:577] "Could not set up iptables canary" err=<
	May 01 02:35:48 ha-329926 kubelet[1388]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 01 02:35:48 ha-329926 kubelet[1388]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 01 02:35:48 ha-329926 kubelet[1388]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 01 02:35:48 ha-329926 kubelet[1388]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 01 02:36:48 ha-329926 kubelet[1388]: E0501 02:36:48.135875    1388 iptables.go:577] "Could not set up iptables canary" err=<
	May 01 02:36:48 ha-329926 kubelet[1388]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 01 02:36:48 ha-329926 kubelet[1388]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 01 02:36:48 ha-329926 kubelet[1388]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 01 02:36:48 ha-329926 kubelet[1388]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 01 02:37:48 ha-329926 kubelet[1388]: E0501 02:37:48.134896    1388 iptables.go:577] "Could not set up iptables canary" err=<
	May 01 02:37:48 ha-329926 kubelet[1388]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 01 02:37:48 ha-329926 kubelet[1388]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 01 02:37:48 ha-329926 kubelet[1388]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 01 02:37:48 ha-329926 kubelet[1388]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 01 02:38:48 ha-329926 kubelet[1388]: E0501 02:38:48.136257    1388 iptables.go:577] "Could not set up iptables canary" err=<
	May 01 02:38:48 ha-329926 kubelet[1388]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 01 02:38:48 ha-329926 kubelet[1388]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 01 02:38:48 ha-329926 kubelet[1388]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 01 02:38:48 ha-329926 kubelet[1388]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-329926 -n ha-329926
helpers_test.go:261: (dbg) Run:  kubectl --context ha-329926 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (59.68s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (421.9s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-329926 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-329926 -v=7 --alsologtostderr
E0501 02:39:56.198355   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/functional-960026/client.crt: no such file or directory
E0501 02:40:23.883494   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/functional-960026/client.crt: no such file or directory
E0501 02:41:24.419429   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/client.crt: no such file or directory
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-329926 -v=7 --alsologtostderr: exit status 82 (2m2.731257132s)

                                                
                                                
-- stdout --
	* Stopping node "ha-329926-m04"  ...
	* Stopping node "ha-329926-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0501 02:39:23.153032   38748 out.go:291] Setting OutFile to fd 1 ...
	I0501 02:39:23.153144   38748 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:39:23.153156   38748 out.go:304] Setting ErrFile to fd 2...
	I0501 02:39:23.153163   38748 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:39:23.153375   38748 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18779-13391/.minikube/bin
	I0501 02:39:23.153609   38748 out.go:298] Setting JSON to false
	I0501 02:39:23.153683   38748 mustload.go:65] Loading cluster: ha-329926
	I0501 02:39:23.154009   38748 config.go:182] Loaded profile config "ha-329926": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 02:39:23.154103   38748 profile.go:143] Saving config to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/config.json ...
	I0501 02:39:23.154264   38748 mustload.go:65] Loading cluster: ha-329926
	I0501 02:39:23.154433   38748 config.go:182] Loaded profile config "ha-329926": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 02:39:23.154467   38748 stop.go:39] StopHost: ha-329926-m04
	I0501 02:39:23.154881   38748 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:39:23.154919   38748 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:39:23.170149   38748 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36351
	I0501 02:39:23.170621   38748 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:39:23.171172   38748 main.go:141] libmachine: Using API Version  1
	I0501 02:39:23.171198   38748 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:39:23.171500   38748 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:39:23.174046   38748 out.go:177] * Stopping node "ha-329926-m04"  ...
	I0501 02:39:23.175295   38748 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0501 02:39:23.175327   38748 main.go:141] libmachine: (ha-329926-m04) Calling .DriverName
	I0501 02:39:23.175571   38748 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0501 02:39:23.175605   38748 main.go:141] libmachine: (ha-329926-m04) Calling .GetSSHHostname
	I0501 02:39:23.178171   38748 main.go:141] libmachine: (ha-329926-m04) DBG | domain ha-329926-m04 has defined MAC address 52:54:00:95:f4:8b in network mk-ha-329926
	I0501 02:39:23.178551   38748 main.go:141] libmachine: (ha-329926-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:f4:8b", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:35:11 +0000 UTC Type:0 Mac:52:54:00:95:f4:8b Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-329926-m04 Clientid:01:52:54:00:95:f4:8b}
	I0501 02:39:23.178609   38748 main.go:141] libmachine: (ha-329926-m04) DBG | domain ha-329926-m04 has defined IP address 192.168.39.84 and MAC address 52:54:00:95:f4:8b in network mk-ha-329926
	I0501 02:39:23.178716   38748 main.go:141] libmachine: (ha-329926-m04) Calling .GetSSHPort
	I0501 02:39:23.178884   38748 main.go:141] libmachine: (ha-329926-m04) Calling .GetSSHKeyPath
	I0501 02:39:23.179033   38748 main.go:141] libmachine: (ha-329926-m04) Calling .GetSSHUsername
	I0501 02:39:23.179136   38748 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m04/id_rsa Username:docker}
	I0501 02:39:23.274538   38748 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0501 02:39:23.329380   38748 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0501 02:39:23.386222   38748 main.go:141] libmachine: Stopping "ha-329926-m04"...
	I0501 02:39:23.386254   38748 main.go:141] libmachine: (ha-329926-m04) Calling .GetState
	I0501 02:39:23.387767   38748 main.go:141] libmachine: (ha-329926-m04) Calling .Stop
	I0501 02:39:23.390869   38748 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 0/120
	I0501 02:39:24.393095   38748 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 1/120
	I0501 02:39:25.394812   38748 main.go:141] libmachine: (ha-329926-m04) Calling .GetState
	I0501 02:39:25.396177   38748 main.go:141] libmachine: Machine "ha-329926-m04" was stopped.
	I0501 02:39:25.396196   38748 stop.go:75] duration metric: took 2.220902342s to stop
	I0501 02:39:25.396214   38748 stop.go:39] StopHost: ha-329926-m03
	I0501 02:39:25.396516   38748 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:39:25.396596   38748 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:39:25.412958   38748 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45103
	I0501 02:39:25.413438   38748 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:39:25.413906   38748 main.go:141] libmachine: Using API Version  1
	I0501 02:39:25.413930   38748 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:39:25.414262   38748 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:39:25.415922   38748 out.go:177] * Stopping node "ha-329926-m03"  ...
	I0501 02:39:25.417036   38748 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0501 02:39:25.417058   38748 main.go:141] libmachine: (ha-329926-m03) Calling .DriverName
	I0501 02:39:25.417263   38748 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0501 02:39:25.417285   38748 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHHostname
	I0501 02:39:25.420213   38748 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:39:25.420701   38748 main.go:141] libmachine: (ha-329926-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:eb:7d", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:33:44 +0000 UTC Type:0 Mac:52:54:00:f9:eb:7d Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:ha-329926-m03 Clientid:01:52:54:00:f9:eb:7d}
	I0501 02:39:25.420737   38748 main.go:141] libmachine: (ha-329926-m03) DBG | domain ha-329926-m03 has defined IP address 192.168.39.115 and MAC address 52:54:00:f9:eb:7d in network mk-ha-329926
	I0501 02:39:25.420813   38748 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHPort
	I0501 02:39:25.420974   38748 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHKeyPath
	I0501 02:39:25.421123   38748 main.go:141] libmachine: (ha-329926-m03) Calling .GetSSHUsername
	I0501 02:39:25.421269   38748 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m03/id_rsa Username:docker}
	I0501 02:39:25.521168   38748 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0501 02:39:25.577508   38748 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0501 02:39:25.638421   38748 main.go:141] libmachine: Stopping "ha-329926-m03"...
	I0501 02:39:25.638452   38748 main.go:141] libmachine: (ha-329926-m03) Calling .GetState
	I0501 02:39:25.639961   38748 main.go:141] libmachine: (ha-329926-m03) Calling .Stop
	I0501 02:39:25.643520   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 0/120
	I0501 02:39:26.645564   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 1/120
	I0501 02:39:27.646979   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 2/120
	I0501 02:39:28.648275   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 3/120
	I0501 02:39:29.649509   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 4/120
	I0501 02:39:30.651453   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 5/120
	I0501 02:39:31.653158   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 6/120
	I0501 02:39:32.655046   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 7/120
	I0501 02:39:33.656604   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 8/120
	I0501 02:39:34.658117   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 9/120
	I0501 02:39:35.659572   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 10/120
	I0501 02:39:36.661146   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 11/120
	I0501 02:39:37.662600   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 12/120
	I0501 02:39:38.664873   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 13/120
	I0501 02:39:39.666199   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 14/120
	I0501 02:39:40.668364   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 15/120
	I0501 02:39:41.669803   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 16/120
	I0501 02:39:42.671330   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 17/120
	I0501 02:39:43.672619   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 18/120
	I0501 02:39:44.673897   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 19/120
	I0501 02:39:45.675554   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 20/120
	I0501 02:39:46.677136   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 21/120
	I0501 02:39:47.678701   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 22/120
	I0501 02:39:48.680923   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 23/120
	I0501 02:39:49.682258   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 24/120
	I0501 02:39:50.683930   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 25/120
	I0501 02:39:51.685518   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 26/120
	I0501 02:39:52.686872   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 27/120
	I0501 02:39:53.688789   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 28/120
	I0501 02:39:54.690452   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 29/120
	I0501 02:39:55.692596   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 30/120
	I0501 02:39:56.693896   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 31/120
	I0501 02:39:57.695353   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 32/120
	I0501 02:39:58.696545   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 33/120
	I0501 02:39:59.697789   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 34/120
	I0501 02:40:00.699296   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 35/120
	I0501 02:40:01.700615   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 36/120
	I0501 02:40:02.701968   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 37/120
	I0501 02:40:03.703554   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 38/120
	I0501 02:40:04.704805   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 39/120
	I0501 02:40:05.706618   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 40/120
	I0501 02:40:06.707837   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 41/120
	I0501 02:40:07.709311   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 42/120
	I0501 02:40:08.710541   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 43/120
	I0501 02:40:09.712141   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 44/120
	I0501 02:40:10.713709   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 45/120
	I0501 02:40:11.715031   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 46/120
	I0501 02:40:12.716545   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 47/120
	I0501 02:40:13.717814   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 48/120
	I0501 02:40:14.719069   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 49/120
	I0501 02:40:15.720550   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 50/120
	I0501 02:40:16.721894   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 51/120
	I0501 02:40:17.723332   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 52/120
	I0501 02:40:18.724689   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 53/120
	I0501 02:40:19.726094   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 54/120
	I0501 02:40:20.728547   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 55/120
	I0501 02:40:21.730042   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 56/120
	I0501 02:40:22.731727   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 57/120
	I0501 02:40:23.733132   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 58/120
	I0501 02:40:24.734658   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 59/120
	I0501 02:40:25.736577   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 60/120
	I0501 02:40:26.738020   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 61/120
	I0501 02:40:27.739519   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 62/120
	I0501 02:40:28.740772   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 63/120
	I0501 02:40:29.742306   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 64/120
	I0501 02:40:30.744146   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 65/120
	I0501 02:40:31.745386   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 66/120
	I0501 02:40:32.746747   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 67/120
	I0501 02:40:33.748099   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 68/120
	I0501 02:40:34.749291   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 69/120
	I0501 02:40:35.750960   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 70/120
	I0501 02:40:36.752342   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 71/120
	I0501 02:40:37.753745   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 72/120
	I0501 02:40:38.755106   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 73/120
	I0501 02:40:39.756363   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 74/120
	I0501 02:40:40.758114   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 75/120
	I0501 02:40:41.759484   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 76/120
	I0501 02:40:42.760664   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 77/120
	I0501 02:40:43.762788   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 78/120
	I0501 02:40:44.764049   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 79/120
	I0501 02:40:45.765750   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 80/120
	I0501 02:40:46.766984   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 81/120
	I0501 02:40:47.768213   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 82/120
	I0501 02:40:48.769490   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 83/120
	I0501 02:40:49.770925   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 84/120
	I0501 02:40:50.772159   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 85/120
	I0501 02:40:51.773428   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 86/120
	I0501 02:40:52.774789   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 87/120
	I0501 02:40:53.776048   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 88/120
	I0501 02:40:54.777421   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 89/120
	I0501 02:40:55.779277   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 90/120
	I0501 02:40:56.781158   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 91/120
	I0501 02:40:57.782338   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 92/120
	I0501 02:40:58.783701   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 93/120
	I0501 02:40:59.785035   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 94/120
	I0501 02:41:00.786853   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 95/120
	I0501 02:41:01.789129   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 96/120
	I0501 02:41:02.790622   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 97/120
	I0501 02:41:03.791801   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 98/120
	I0501 02:41:04.793124   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 99/120
	I0501 02:41:05.794500   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 100/120
	I0501 02:41:06.795854   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 101/120
	I0501 02:41:07.797133   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 102/120
	I0501 02:41:08.798471   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 103/120
	I0501 02:41:09.799832   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 104/120
	I0501 02:41:10.801556   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 105/120
	I0501 02:41:11.803031   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 106/120
	I0501 02:41:12.804266   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 107/120
	I0501 02:41:13.806448   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 108/120
	I0501 02:41:14.807869   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 109/120
	I0501 02:41:15.809475   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 110/120
	I0501 02:41:16.810935   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 111/120
	I0501 02:41:17.812309   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 112/120
	I0501 02:41:18.813782   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 113/120
	I0501 02:41:19.815446   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 114/120
	I0501 02:41:20.817168   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 115/120
	I0501 02:41:21.818466   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 116/120
	I0501 02:41:22.820039   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 117/120
	I0501 02:41:23.821341   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 118/120
	I0501 02:41:24.822636   38748 main.go:141] libmachine: (ha-329926-m03) Waiting for machine to stop 119/120
	I0501 02:41:25.823413   38748 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0501 02:41:25.823490   38748 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0501 02:41:25.825274   38748 out.go:177] 
	W0501 02:41:25.826602   38748 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0501 02:41:25.826616   38748 out.go:239] * 
	* 
	W0501 02:41:25.828902   38748 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0501 02:41:25.831019   38748 out.go:177] 

                                                
                                                
** /stderr **
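Before each node in the stderr trace above is stopped, minikube first backs up /etc/cni and /etc/kubernetes on the guest into /var/lib/minikube/backup (the mkdir and rsync commands run through ssh_runner at 02:39:23 and 02:39:25). The sketch below reproduces only that backup sequence over SSH so it can be tried against a node by hand; it is an illustration, not minikube's ssh_runner code, and the address, user and key path are simply copied from the "new ssh client" line for ha-329926-m04.

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Values taken from the "new ssh client" line in the log above.
	const (
		addr    = "192.168.39.84:22"
		user    = "docker"
		keyPath = "/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m04/id_rsa"
	)

	key, err := os.ReadFile(keyPath)
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// The same three commands ssh_runner executes before stopping the node.
	cmds := []string{
		"sudo mkdir -p /var/lib/minikube/backup",
		"sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup",
		"sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup",
	}
	for _, cmd := range cmds {
		sess, err := client.NewSession()
		if err != nil {
			log.Fatal(err)
		}
		out, err := sess.CombinedOutput(cmd)
		sess.Close()
		if err != nil {
			log.Fatalf("%s: %v\n%s", cmd, err, out)
		}
		fmt.Printf("ok: %s\n", cmd)
	}
}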
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-329926 -v=7 --alsologtostderr" : exit status 82
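The stop itself is what timed out: ha-329926-m04 went down after two polls, but ha-329926-m03 was polled once per second for the full 0/120 through 119/120 window without ever leaving the "Running" state, so minikube gave up with GUEST_STOP_TIMEOUT and the command returned exit status 82. A minimal sketch of that bounded one-second polling pattern follows; the function and variable names are made up for illustration and this is not minikube's stop.go implementation.

package main

import (
	"errors"
	"fmt"
	"time"
)

// errStopTimeout mirrors the "unable to stop vm, current state Running"
// error reported at 02:41:25 above; the name is illustrative.
var errStopTimeout = errors.New(`unable to stop vm, current state "Running"`)

// waitForStop asks stateFn for the machine state once per second, up to
// maxRetries times, and gives up with errStopTimeout if it never reports
// "Stopped" - the same bounded wait visible in the 0/120..119/120 lines.
func waitForStop(stateFn func() (string, error), maxRetries int) error {
	for i := 0; i < maxRetries; i++ {
		state, err := stateFn()
		if err != nil {
			return err
		}
		if state == "Stopped" {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxRetries)
		time.Sleep(time.Second)
	}
	return errStopTimeout
}

func main() {
	// Simulate a domain that, like ha-329926-m03 here, never shuts down.
	// 3 retries instead of 120 so the demo finishes quickly.
	err := waitForStop(func() (string, error) { return "Running", nil }, 3)
	fmt.Println("stop err:", err)
}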
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-329926 --wait=true -v=7 --alsologtostderr
E0501 02:44:56.198467   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/functional-960026/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-329926 --wait=true -v=7 --alsologtostderr: (4m56.140731053s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-329926
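Once the `start -p ha-329926 --wait=true` at ha_test.go:467 comes back after 4m56s, the test immediately lists the nodes again (ha_test.go:472). A small, self-contained way to run that same check by hand is sketched below; the four expected names are read off the log above, and the comparison logic is only an illustration, not the assertion the test itself makes.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// The same command ha_test.go:472 runs after the restart, executed
	// from the repository root where out/minikube-linux-amd64 lives.
	out, err := exec.Command("out/minikube-linux-amd64", "node", "list", "-p", "ha-329926").CombinedOutput()
	if err != nil {
		fmt.Printf("node list failed: %v\n%s", err, out)
		return
	}
	// One primary control plane plus m02-m04, as seen throughout the log.
	for _, name := range []string{"ha-329926", "ha-329926-m02", "ha-329926-m03", "ha-329926-m04"} {
		if !strings.Contains(string(out), name) {
			fmt.Printf("expected node %q in output:\n%s", name, out)
			return
		}
	}
	fmt.Println("all four ha-329926 nodes are listed after the restart")
}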
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-329926 -n ha-329926
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-329926 logs -n 25
E0501 02:46:24.420078   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-329926 logs -n 25: (2.173372089s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-329926 cp ha-329926-m03:/home/docker/cp-test.txt                             | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:35 UTC | 01 May 24 02:35 UTC |
	|         | ha-329926-m02:/home/docker/cp-test_ha-329926-m03_ha-329926-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-329926 ssh -n                                                                | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:35 UTC | 01 May 24 02:35 UTC |
	|         | ha-329926-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-329926 ssh -n ha-329926-m02 sudo cat                                         | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:35 UTC | 01 May 24 02:35 UTC |
	|         | /home/docker/cp-test_ha-329926-m03_ha-329926-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-329926 cp ha-329926-m03:/home/docker/cp-test.txt                             | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:35 UTC | 01 May 24 02:35 UTC |
	|         | ha-329926-m04:/home/docker/cp-test_ha-329926-m03_ha-329926-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-329926 ssh -n                                                                | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:35 UTC | 01 May 24 02:35 UTC |
	|         | ha-329926-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-329926 ssh -n ha-329926-m04 sudo cat                                         | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:35 UTC | 01 May 24 02:35 UTC |
	|         | /home/docker/cp-test_ha-329926-m03_ha-329926-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-329926 cp testdata/cp-test.txt                                               | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:35 UTC | 01 May 24 02:35 UTC |
	|         | ha-329926-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-329926 ssh -n                                                                | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:35 UTC | 01 May 24 02:35 UTC |
	|         | ha-329926-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-329926 cp ha-329926-m04:/home/docker/cp-test.txt                             | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:35 UTC | 01 May 24 02:35 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile895580191/001/cp-test_ha-329926-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-329926 ssh -n                                                                | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:35 UTC | 01 May 24 02:35 UTC |
	|         | ha-329926-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-329926 cp ha-329926-m04:/home/docker/cp-test.txt                             | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:35 UTC | 01 May 24 02:35 UTC |
	|         | ha-329926:/home/docker/cp-test_ha-329926-m04_ha-329926.txt                      |           |         |         |                     |                     |
	| ssh     | ha-329926 ssh -n                                                                | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:35 UTC | 01 May 24 02:35 UTC |
	|         | ha-329926-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-329926 ssh -n ha-329926 sudo cat                                             | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:35 UTC | 01 May 24 02:35 UTC |
	|         | /home/docker/cp-test_ha-329926-m04_ha-329926.txt                                |           |         |         |                     |                     |
	| cp      | ha-329926 cp ha-329926-m04:/home/docker/cp-test.txt                             | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:35 UTC | 01 May 24 02:35 UTC |
	|         | ha-329926-m02:/home/docker/cp-test_ha-329926-m04_ha-329926-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-329926 ssh -n                                                                | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:35 UTC | 01 May 24 02:35 UTC |
	|         | ha-329926-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-329926 ssh -n ha-329926-m02 sudo cat                                         | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:35 UTC | 01 May 24 02:35 UTC |
	|         | /home/docker/cp-test_ha-329926-m04_ha-329926-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-329926 cp ha-329926-m04:/home/docker/cp-test.txt                             | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:35 UTC | 01 May 24 02:35 UTC |
	|         | ha-329926-m03:/home/docker/cp-test_ha-329926-m04_ha-329926-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-329926 ssh -n                                                                | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:35 UTC | 01 May 24 02:35 UTC |
	|         | ha-329926-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-329926 ssh -n ha-329926-m03 sudo cat                                         | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:35 UTC | 01 May 24 02:35 UTC |
	|         | /home/docker/cp-test_ha-329926-m04_ha-329926-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-329926 node stop m02 -v=7                                                    | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:35 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | ha-329926 node start m02 -v=7                                                   | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:38 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-329926 -v=7                                                          | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:39 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| stop    | -p ha-329926 -v=7                                                               | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:39 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| start   | -p ha-329926 --wait=true -v=7                                                   | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:41 UTC | 01 May 24 02:46 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-329926                                                               | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:46 UTC |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/01 02:41:25
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0501 02:41:25.892088   39235 out.go:291] Setting OutFile to fd 1 ...
	I0501 02:41:25.892219   39235 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:41:25.892229   39235 out.go:304] Setting ErrFile to fd 2...
	I0501 02:41:25.892234   39235 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:41:25.892432   39235 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18779-13391/.minikube/bin
	I0501 02:41:25.893010   39235 out.go:298] Setting JSON to false
	I0501 02:41:25.893898   39235 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5029,"bootTime":1714526257,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0501 02:41:25.893955   39235 start.go:139] virtualization: kvm guest
	I0501 02:41:25.896237   39235 out.go:177] * [ha-329926] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0501 02:41:25.897626   39235 out.go:177]   - MINIKUBE_LOCATION=18779
	I0501 02:41:25.897587   39235 notify.go:220] Checking for updates...
	I0501 02:41:25.899156   39235 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0501 02:41:25.900581   39235 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18779-13391/kubeconfig
	I0501 02:41:25.901680   39235 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18779-13391/.minikube
	I0501 02:41:25.903091   39235 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0501 02:41:25.904286   39235 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0501 02:41:25.905925   39235 config.go:182] Loaded profile config "ha-329926": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 02:41:25.906030   39235 driver.go:392] Setting default libvirt URI to qemu:///system
	I0501 02:41:25.906452   39235 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:41:25.906505   39235 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:41:25.922177   39235 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43331
	I0501 02:41:25.922553   39235 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:41:25.923049   39235 main.go:141] libmachine: Using API Version  1
	I0501 02:41:25.923071   39235 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:41:25.923374   39235 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:41:25.923534   39235 main.go:141] libmachine: (ha-329926) Calling .DriverName
	I0501 02:41:25.958389   39235 out.go:177] * Using the kvm2 driver based on existing profile
	I0501 02:41:25.959560   39235 start.go:297] selected driver: kvm2
	I0501 02:41:25.959571   39235 start.go:901] validating driver "kvm2" against &{Name:ha-329926 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.0 ClusterName:ha-329926 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.5 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.79 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.115 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.84 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:f
alse freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:
docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 02:41:25.959704   39235 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0501 02:41:25.960008   39235 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0501 02:41:25.960075   39235 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18779-13391/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0501 02:41:25.974627   39235 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0501 02:41:25.975302   39235 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0501 02:41:25.975373   39235 cni.go:84] Creating CNI manager for ""
	I0501 02:41:25.975387   39235 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0501 02:41:25.975443   39235 start.go:340] cluster config:
	{Name:ha-329926 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-329926 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.5 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.79 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.115 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.84 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller
:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 02:41:25.975578   39235 iso.go:125] acquiring lock: {Name:mkcd2496aadb29931e179193d707c731bfda98b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0501 02:41:25.977222   39235 out.go:177] * Starting "ha-329926" primary control-plane node in "ha-329926" cluster
	I0501 02:41:25.978372   39235 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0501 02:41:25.978418   39235 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0501 02:41:25.978433   39235 cache.go:56] Caching tarball of preloaded images
	I0501 02:41:25.978536   39235 preload.go:173] Found /home/jenkins/minikube-integration/18779-13391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0501 02:41:25.978550   39235 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0501 02:41:25.978667   39235 profile.go:143] Saving config to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/config.json ...
	I0501 02:41:25.978848   39235 start.go:360] acquireMachinesLock for ha-329926: {Name:mkc4548e5afe1d4b0a833f6c522103562b0cefff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0501 02:41:25.978907   39235 start.go:364] duration metric: took 41.656µs to acquireMachinesLock for "ha-329926"
	I0501 02:41:25.978925   39235 start.go:96] Skipping create...Using existing machine configuration
	I0501 02:41:25.978932   39235 fix.go:54] fixHost starting: 
	I0501 02:41:25.979164   39235 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:41:25.979195   39235 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:41:25.992928   39235 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33517
	I0501 02:41:25.993323   39235 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:41:25.993738   39235 main.go:141] libmachine: Using API Version  1
	I0501 02:41:25.993759   39235 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:41:25.994054   39235 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:41:25.994212   39235 main.go:141] libmachine: (ha-329926) Calling .DriverName
	I0501 02:41:25.994359   39235 main.go:141] libmachine: (ha-329926) Calling .GetState
	I0501 02:41:25.995800   39235 fix.go:112] recreateIfNeeded on ha-329926: state=Running err=<nil>
	W0501 02:41:25.995818   39235 fix.go:138] unexpected machine state, will restart: <nil>
	I0501 02:41:25.997468   39235 out.go:177] * Updating the running kvm2 "ha-329926" VM ...
	I0501 02:41:25.998518   39235 machine.go:94] provisionDockerMachine start ...
	I0501 02:41:25.998537   39235 main.go:141] libmachine: (ha-329926) Calling .DriverName
	I0501 02:41:25.998739   39235 main.go:141] libmachine: (ha-329926) Calling .GetSSHHostname
	I0501 02:41:26.000881   39235 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:41:26.001348   39235 main.go:141] libmachine: (ha-329926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:43", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:31:18 +0000 UTC Type:0 Mac:52:54:00:ce:d8:43 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-329926 Clientid:01:52:54:00:ce:d8:43}
	I0501 02:41:26.001377   39235 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:41:26.001489   39235 main.go:141] libmachine: (ha-329926) Calling .GetSSHPort
	I0501 02:41:26.001666   39235 main.go:141] libmachine: (ha-329926) Calling .GetSSHKeyPath
	I0501 02:41:26.001826   39235 main.go:141] libmachine: (ha-329926) Calling .GetSSHKeyPath
	I0501 02:41:26.001953   39235 main.go:141] libmachine: (ha-329926) Calling .GetSSHUsername
	I0501 02:41:26.002103   39235 main.go:141] libmachine: Using SSH client type: native
	I0501 02:41:26.002279   39235 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I0501 02:41:26.002297   39235 main.go:141] libmachine: About to run SSH command:
	hostname
	I0501 02:41:26.107909   39235 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-329926
	
	I0501 02:41:26.107932   39235 main.go:141] libmachine: (ha-329926) Calling .GetMachineName
	I0501 02:41:26.108181   39235 buildroot.go:166] provisioning hostname "ha-329926"
	I0501 02:41:26.108207   39235 main.go:141] libmachine: (ha-329926) Calling .GetMachineName
	I0501 02:41:26.108414   39235 main.go:141] libmachine: (ha-329926) Calling .GetSSHHostname
	I0501 02:41:26.111150   39235 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:41:26.111500   39235 main.go:141] libmachine: (ha-329926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:43", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:31:18 +0000 UTC Type:0 Mac:52:54:00:ce:d8:43 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-329926 Clientid:01:52:54:00:ce:d8:43}
	I0501 02:41:26.111532   39235 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:41:26.111673   39235 main.go:141] libmachine: (ha-329926) Calling .GetSSHPort
	I0501 02:41:26.111874   39235 main.go:141] libmachine: (ha-329926) Calling .GetSSHKeyPath
	I0501 02:41:26.112022   39235 main.go:141] libmachine: (ha-329926) Calling .GetSSHKeyPath
	I0501 02:41:26.112134   39235 main.go:141] libmachine: (ha-329926) Calling .GetSSHUsername
	I0501 02:41:26.112273   39235 main.go:141] libmachine: Using SSH client type: native
	I0501 02:41:26.112456   39235 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I0501 02:41:26.112470   39235 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-329926 && echo "ha-329926" | sudo tee /etc/hostname
	I0501 02:41:26.240369   39235 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-329926
	
	I0501 02:41:26.240401   39235 main.go:141] libmachine: (ha-329926) Calling .GetSSHHostname
	I0501 02:41:26.243048   39235 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:41:26.243396   39235 main.go:141] libmachine: (ha-329926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:43", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:31:18 +0000 UTC Type:0 Mac:52:54:00:ce:d8:43 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-329926 Clientid:01:52:54:00:ce:d8:43}
	I0501 02:41:26.243429   39235 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:41:26.243611   39235 main.go:141] libmachine: (ha-329926) Calling .GetSSHPort
	I0501 02:41:26.243803   39235 main.go:141] libmachine: (ha-329926) Calling .GetSSHKeyPath
	I0501 02:41:26.243998   39235 main.go:141] libmachine: (ha-329926) Calling .GetSSHKeyPath
	I0501 02:41:26.244137   39235 main.go:141] libmachine: (ha-329926) Calling .GetSSHUsername
	I0501 02:41:26.244302   39235 main.go:141] libmachine: Using SSH client type: native
	I0501 02:41:26.244467   39235 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I0501 02:41:26.244482   39235 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-329926' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-329926/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-329926' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0501 02:41:26.352537   39235 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0501 02:41:26.352585   39235 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18779-13391/.minikube CaCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18779-13391/.minikube}
	I0501 02:41:26.352641   39235 buildroot.go:174] setting up certificates
	I0501 02:41:26.352649   39235 provision.go:84] configureAuth start
	I0501 02:41:26.352659   39235 main.go:141] libmachine: (ha-329926) Calling .GetMachineName
	I0501 02:41:26.352949   39235 main.go:141] libmachine: (ha-329926) Calling .GetIP
	I0501 02:41:26.355545   39235 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:41:26.355872   39235 main.go:141] libmachine: (ha-329926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:43", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:31:18 +0000 UTC Type:0 Mac:52:54:00:ce:d8:43 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-329926 Clientid:01:52:54:00:ce:d8:43}
	I0501 02:41:26.355900   39235 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:41:26.356059   39235 main.go:141] libmachine: (ha-329926) Calling .GetSSHHostname
	I0501 02:41:26.357978   39235 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:41:26.358248   39235 main.go:141] libmachine: (ha-329926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:43", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:31:18 +0000 UTC Type:0 Mac:52:54:00:ce:d8:43 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-329926 Clientid:01:52:54:00:ce:d8:43}
	I0501 02:41:26.358270   39235 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:41:26.358421   39235 provision.go:143] copyHostCerts
	I0501 02:41:26.358448   39235 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem
	I0501 02:41:26.358489   39235 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem, removing ...
	I0501 02:41:26.358505   39235 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem
	I0501 02:41:26.358569   39235 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem (1078 bytes)
	I0501 02:41:26.358637   39235 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem
	I0501 02:41:26.358654   39235 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem, removing ...
	I0501 02:41:26.358661   39235 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem
	I0501 02:41:26.358683   39235 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem (1123 bytes)
	I0501 02:41:26.358721   39235 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem
	I0501 02:41:26.358739   39235 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem, removing ...
	I0501 02:41:26.358745   39235 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem
	I0501 02:41:26.358769   39235 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem (1675 bytes)
	I0501 02:41:26.358810   39235 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem org=jenkins.ha-329926 san=[127.0.0.1 192.168.39.5 ha-329926 localhost minikube]
	I0501 02:41:26.530762   39235 provision.go:177] copyRemoteCerts
	I0501 02:41:26.530813   39235 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0501 02:41:26.530835   39235 main.go:141] libmachine: (ha-329926) Calling .GetSSHHostname
	I0501 02:41:26.533227   39235 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:41:26.533560   39235 main.go:141] libmachine: (ha-329926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:43", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:31:18 +0000 UTC Type:0 Mac:52:54:00:ce:d8:43 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-329926 Clientid:01:52:54:00:ce:d8:43}
	I0501 02:41:26.533597   39235 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:41:26.533767   39235 main.go:141] libmachine: (ha-329926) Calling .GetSSHPort
	I0501 02:41:26.533949   39235 main.go:141] libmachine: (ha-329926) Calling .GetSSHKeyPath
	I0501 02:41:26.534099   39235 main.go:141] libmachine: (ha-329926) Calling .GetSSHUsername
	I0501 02:41:26.534242   39235 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926/id_rsa Username:docker}
	I0501 02:41:26.618426   39235 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0501 02:41:26.618484   39235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0501 02:41:26.647551   39235 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0501 02:41:26.647612   39235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0501 02:41:26.677751   39235 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0501 02:41:26.677835   39235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0501 02:41:26.706632   39235 provision.go:87] duration metric: took 353.97227ms to configureAuth
	I0501 02:41:26.706653   39235 buildroot.go:189] setting minikube options for container-runtime
	I0501 02:41:26.706892   39235 config.go:182] Loaded profile config "ha-329926": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 02:41:26.706956   39235 main.go:141] libmachine: (ha-329926) Calling .GetSSHHostname
	I0501 02:41:26.709324   39235 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:41:26.709656   39235 main.go:141] libmachine: (ha-329926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:43", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:31:18 +0000 UTC Type:0 Mac:52:54:00:ce:d8:43 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-329926 Clientid:01:52:54:00:ce:d8:43}
	I0501 02:41:26.709683   39235 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:41:26.709847   39235 main.go:141] libmachine: (ha-329926) Calling .GetSSHPort
	I0501 02:41:26.710055   39235 main.go:141] libmachine: (ha-329926) Calling .GetSSHKeyPath
	I0501 02:41:26.710210   39235 main.go:141] libmachine: (ha-329926) Calling .GetSSHKeyPath
	I0501 02:41:26.710377   39235 main.go:141] libmachine: (ha-329926) Calling .GetSSHUsername
	I0501 02:41:26.710555   39235 main.go:141] libmachine: Using SSH client type: native
	I0501 02:41:26.710708   39235 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I0501 02:41:26.710741   39235 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0501 02:42:57.618717   39235 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0501 02:42:57.618746   39235 machine.go:97] duration metric: took 1m31.620213044s to provisionDockerMachine
	I0501 02:42:57.618759   39235 start.go:293] postStartSetup for "ha-329926" (driver="kvm2")
	I0501 02:42:57.618770   39235 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0501 02:42:57.618785   39235 main.go:141] libmachine: (ha-329926) Calling .DriverName
	I0501 02:42:57.619180   39235 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0501 02:42:57.619214   39235 main.go:141] libmachine: (ha-329926) Calling .GetSSHHostname
	I0501 02:42:57.622423   39235 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:42:57.622836   39235 main.go:141] libmachine: (ha-329926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:43", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:31:18 +0000 UTC Type:0 Mac:52:54:00:ce:d8:43 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-329926 Clientid:01:52:54:00:ce:d8:43}
	I0501 02:42:57.622874   39235 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:42:57.623038   39235 main.go:141] libmachine: (ha-329926) Calling .GetSSHPort
	I0501 02:42:57.623209   39235 main.go:141] libmachine: (ha-329926) Calling .GetSSHKeyPath
	I0501 02:42:57.623363   39235 main.go:141] libmachine: (ha-329926) Calling .GetSSHUsername
	I0501 02:42:57.623474   39235 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926/id_rsa Username:docker}
	I0501 02:42:57.707404   39235 ssh_runner.go:195] Run: cat /etc/os-release
	I0501 02:42:57.712359   39235 info.go:137] Remote host: Buildroot 2023.02.9
	I0501 02:42:57.712388   39235 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13391/.minikube/addons for local assets ...
	I0501 02:42:57.712464   39235 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13391/.minikube/files for local assets ...
	I0501 02:42:57.712553   39235 filesync.go:149] local asset: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem -> 207242.pem in /etc/ssl/certs
	I0501 02:42:57.712575   39235 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem -> /etc/ssl/certs/207242.pem
	I0501 02:42:57.712684   39235 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0501 02:42:57.723985   39235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem --> /etc/ssl/certs/207242.pem (1708 bytes)
	I0501 02:42:57.753277   39235 start.go:296] duration metric: took 134.502924ms for postStartSetup
	I0501 02:42:57.753320   39235 main.go:141] libmachine: (ha-329926) Calling .DriverName
	I0501 02:42:57.753611   39235 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0501 02:42:57.753644   39235 main.go:141] libmachine: (ha-329926) Calling .GetSSHHostname
	I0501 02:42:57.756543   39235 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:42:57.756949   39235 main.go:141] libmachine: (ha-329926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:43", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:31:18 +0000 UTC Type:0 Mac:52:54:00:ce:d8:43 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-329926 Clientid:01:52:54:00:ce:d8:43}
	I0501 02:42:57.756978   39235 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:42:57.757069   39235 main.go:141] libmachine: (ha-329926) Calling .GetSSHPort
	I0501 02:42:57.757225   39235 main.go:141] libmachine: (ha-329926) Calling .GetSSHKeyPath
	I0501 02:42:57.757390   39235 main.go:141] libmachine: (ha-329926) Calling .GetSSHUsername
	I0501 02:42:57.757525   39235 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926/id_rsa Username:docker}
	W0501 02:42:57.837763   39235 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0501 02:42:57.837796   39235 fix.go:56] duration metric: took 1m31.858862807s for fixHost
	I0501 02:42:57.837824   39235 main.go:141] libmachine: (ha-329926) Calling .GetSSHHostname
	I0501 02:42:57.840530   39235 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:42:57.840813   39235 main.go:141] libmachine: (ha-329926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:43", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:31:18 +0000 UTC Type:0 Mac:52:54:00:ce:d8:43 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-329926 Clientid:01:52:54:00:ce:d8:43}
	I0501 02:42:57.840841   39235 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:42:57.840995   39235 main.go:141] libmachine: (ha-329926) Calling .GetSSHPort
	I0501 02:42:57.841179   39235 main.go:141] libmachine: (ha-329926) Calling .GetSSHKeyPath
	I0501 02:42:57.841354   39235 main.go:141] libmachine: (ha-329926) Calling .GetSSHKeyPath
	I0501 02:42:57.841476   39235 main.go:141] libmachine: (ha-329926) Calling .GetSSHUsername
	I0501 02:42:57.841655   39235 main.go:141] libmachine: Using SSH client type: native
	I0501 02:42:57.841819   39235 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I0501 02:42:57.841830   39235 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0501 02:42:57.944094   39235 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714531377.896474154
	
	I0501 02:42:57.944121   39235 fix.go:216] guest clock: 1714531377.896474154
	I0501 02:42:57.944143   39235 fix.go:229] Guest: 2024-05-01 02:42:57.896474154 +0000 UTC Remote: 2024-05-01 02:42:57.837806525 +0000 UTC m=+91.996092869 (delta=58.667629ms)
	I0501 02:42:57.944164   39235 fix.go:200] guest clock delta is within tolerance: 58.667629ms
	I0501 02:42:57.944169   39235 start.go:83] releasing machines lock for "ha-329926", held for 1m31.965251882s
	I0501 02:42:57.944194   39235 main.go:141] libmachine: (ha-329926) Calling .DriverName
	I0501 02:42:57.944468   39235 main.go:141] libmachine: (ha-329926) Calling .GetIP
	I0501 02:42:57.947110   39235 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:42:57.947398   39235 main.go:141] libmachine: (ha-329926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:43", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:31:18 +0000 UTC Type:0 Mac:52:54:00:ce:d8:43 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-329926 Clientid:01:52:54:00:ce:d8:43}
	I0501 02:42:57.947418   39235 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:42:57.947548   39235 main.go:141] libmachine: (ha-329926) Calling .DriverName
	I0501 02:42:57.948042   39235 main.go:141] libmachine: (ha-329926) Calling .DriverName
	I0501 02:42:57.948235   39235 main.go:141] libmachine: (ha-329926) Calling .DriverName
	I0501 02:42:57.948377   39235 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0501 02:42:57.948420   39235 main.go:141] libmachine: (ha-329926) Calling .GetSSHHostname
	I0501 02:42:57.948473   39235 ssh_runner.go:195] Run: cat /version.json
	I0501 02:42:57.948491   39235 main.go:141] libmachine: (ha-329926) Calling .GetSSHHostname
	I0501 02:42:57.951064   39235 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:42:57.951283   39235 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:42:57.951439   39235 main.go:141] libmachine: (ha-329926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:43", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:31:18 +0000 UTC Type:0 Mac:52:54:00:ce:d8:43 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-329926 Clientid:01:52:54:00:ce:d8:43}
	I0501 02:42:57.951462   39235 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:42:57.951567   39235 main.go:141] libmachine: (ha-329926) Calling .GetSSHPort
	I0501 02:42:57.951685   39235 main.go:141] libmachine: (ha-329926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:43", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:31:18 +0000 UTC Type:0 Mac:52:54:00:ce:d8:43 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-329926 Clientid:01:52:54:00:ce:d8:43}
	I0501 02:42:57.951712   39235 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:42:57.951715   39235 main.go:141] libmachine: (ha-329926) Calling .GetSSHKeyPath
	I0501 02:42:57.951866   39235 main.go:141] libmachine: (ha-329926) Calling .GetSSHPort
	I0501 02:42:57.951890   39235 main.go:141] libmachine: (ha-329926) Calling .GetSSHUsername
	I0501 02:42:57.951979   39235 main.go:141] libmachine: (ha-329926) Calling .GetSSHKeyPath
	I0501 02:42:57.952037   39235 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926/id_rsa Username:docker}
	I0501 02:42:57.952095   39235 main.go:141] libmachine: (ha-329926) Calling .GetSSHUsername
	I0501 02:42:57.952239   39235 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926/id_rsa Username:docker}
	I0501 02:42:58.062263   39235 ssh_runner.go:195] Run: systemctl --version
	I0501 02:42:58.069272   39235 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0501 02:42:58.236351   39235 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0501 02:42:58.243386   39235 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0501 02:42:58.243455   39235 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0501 02:42:58.253640   39235 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0501 02:42:58.253663   39235 start.go:494] detecting cgroup driver to use...
	I0501 02:42:58.253729   39235 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0501 02:42:58.272557   39235 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 02:42:58.289266   39235 docker.go:217] disabling cri-docker service (if available) ...
	I0501 02:42:58.289326   39235 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0501 02:42:58.305053   39235 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0501 02:42:58.319905   39235 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0501 02:42:58.480664   39235 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0501 02:42:58.636046   39235 docker.go:233] disabling docker service ...
	I0501 02:42:58.636104   39235 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0501 02:42:58.655237   39235 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0501 02:42:58.669819   39235 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0501 02:42:58.829003   39235 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0501 02:42:58.989996   39235 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0501 02:42:59.006703   39235 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 02:42:59.030216   39235 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0501 02:42:59.030294   39235 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 02:42:59.043172   39235 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0501 02:42:59.043242   39235 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 02:42:59.057452   39235 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 02:42:59.069954   39235 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 02:42:59.082603   39235 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0501 02:42:59.095265   39235 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 02:42:59.109262   39235 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 02:42:59.121195   39235 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 02:42:59.133777   39235 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0501 02:42:59.144697   39235 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0501 02:42:59.157218   39235 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:42:59.327577   39235 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0501 02:43:09.315864   39235 ssh_runner.go:235] Completed: sudo systemctl restart crio: (9.988252796s)
	I0501 02:43:09.315898   39235 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0501 02:43:09.315948   39235 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0501 02:43:09.321755   39235 start.go:562] Will wait 60s for crictl version
	I0501 02:43:09.321810   39235 ssh_runner.go:195] Run: which crictl
	I0501 02:43:09.326174   39235 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0501 02:43:09.364999   39235 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0501 02:43:09.365102   39235 ssh_runner.go:195] Run: crio --version
	I0501 02:43:09.396220   39235 ssh_runner.go:195] Run: crio --version
	I0501 02:43:09.429215   39235 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0501 02:43:09.430455   39235 main.go:141] libmachine: (ha-329926) Calling .GetIP
	I0501 02:43:09.433192   39235 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:43:09.433537   39235 main.go:141] libmachine: (ha-329926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:43", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:31:18 +0000 UTC Type:0 Mac:52:54:00:ce:d8:43 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-329926 Clientid:01:52:54:00:ce:d8:43}
	I0501 02:43:09.433562   39235 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:43:09.433785   39235 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0501 02:43:09.439468   39235 kubeadm.go:877] updating cluster {Name:ha-329926 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 Cl
usterName:ha-329926 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.5 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.79 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.115 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.84 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Moun
tIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0501 02:43:09.439669   39235 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0501 02:43:09.439733   39235 ssh_runner.go:195] Run: sudo crictl images --output json
	I0501 02:43:09.498983   39235 crio.go:514] all images are preloaded for cri-o runtime.
	I0501 02:43:09.499010   39235 crio.go:433] Images already preloaded, skipping extraction
	I0501 02:43:09.499065   39235 ssh_runner.go:195] Run: sudo crictl images --output json
	I0501 02:43:09.545195   39235 crio.go:514] all images are preloaded for cri-o runtime.
	I0501 02:43:09.545221   39235 cache_images.go:84] Images are preloaded, skipping loading
	I0501 02:43:09.545232   39235 kubeadm.go:928] updating node { 192.168.39.5 8443 v1.30.0 crio true true} ...
	I0501 02:43:09.545352   39235 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-329926 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-329926 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0501 02:43:09.545437   39235 ssh_runner.go:195] Run: crio config
	I0501 02:43:09.614383   39235 cni.go:84] Creating CNI manager for ""
	I0501 02:43:09.614422   39235 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0501 02:43:09.614434   39235 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0501 02:43:09.614461   39235 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.5 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-329926 NodeName:ha-329926 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0501 02:43:09.614639   39235 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-329926"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0501 02:43:09.614663   39235 kube-vip.go:111] generating kube-vip config ...
	I0501 02:43:09.614711   39235 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0501 02:43:09.630044   39235 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0501 02:43:09.630136   39235 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0501 02:43:09.630191   39235 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0501 02:43:09.643362   39235 binaries.go:44] Found k8s binaries, skipping transfer
	I0501 02:43:09.643424   39235 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0501 02:43:09.656163   39235 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0501 02:43:09.677024   39235 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0501 02:43:09.696909   39235 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2147 bytes)
	I0501 02:43:09.717210   39235 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0501 02:43:09.737838   39235 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0501 02:43:09.742941   39235 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:43:09.912331   39235 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 02:43:09.929488   39235 certs.go:68] Setting up /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926 for IP: 192.168.39.5
	I0501 02:43:09.929514   39235 certs.go:194] generating shared ca certs ...
	I0501 02:43:09.929535   39235 certs.go:226] acquiring lock for ca certs: {Name:mk85a75d14c00780d4bf09cd9bdaf0f96f1b8fce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:43:09.929723   39235 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key
	I0501 02:43:09.929777   39235 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key
	I0501 02:43:09.929790   39235 certs.go:256] generating profile certs ...
	I0501 02:43:09.929909   39235 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/client.key
	I0501 02:43:09.929944   39235 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.key.870d3c0e
	I0501 02:43:09.929962   39235 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.crt.870d3c0e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.5 192.168.39.79 192.168.39.115 192.168.39.254]
	I0501 02:43:10.012851   39235 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.crt.870d3c0e ...
	I0501 02:43:10.012885   39235 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.crt.870d3c0e: {Name:mk4ccaf90fd6dcf78b8e9e2b8db11f9737a1bd70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:43:10.013054   39235 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.key.870d3c0e ...
	I0501 02:43:10.013066   39235 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.key.870d3c0e: {Name:mk2c3b7f593cad4d68b6ae9c2deae1c15fbc0249 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:43:10.013129   39235 certs.go:381] copying /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.crt.870d3c0e -> /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.crt
	I0501 02:43:10.013284   39235 certs.go:385] copying /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.key.870d3c0e -> /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.key
	I0501 02:43:10.013413   39235 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/proxy-client.key
	I0501 02:43:10.013436   39235 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0501 02:43:10.013448   39235 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0501 02:43:10.013462   39235 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0501 02:43:10.013474   39235 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0501 02:43:10.013483   39235 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0501 02:43:10.013494   39235 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0501 02:43:10.013509   39235 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0501 02:43:10.013521   39235 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0501 02:43:10.013567   39235 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem (1338 bytes)
	W0501 02:43:10.013593   39235 certs.go:480] ignoring /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724_empty.pem, impossibly tiny 0 bytes
	I0501 02:43:10.013602   39235 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem (1679 bytes)
	I0501 02:43:10.013624   39235 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem (1078 bytes)
	I0501 02:43:10.013645   39235 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem (1123 bytes)
	I0501 02:43:10.013665   39235 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem (1675 bytes)
	I0501 02:43:10.013705   39235 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem (1708 bytes)
	I0501 02:43:10.013729   39235 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:43:10.013763   39235 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem -> /usr/share/ca-certificates/20724.pem
	I0501 02:43:10.013776   39235 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem -> /usr/share/ca-certificates/207242.pem
	I0501 02:43:10.014356   39235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0501 02:43:10.044218   39235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0501 02:43:10.071457   39235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0501 02:43:10.099415   39235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0501 02:43:10.126994   39235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0501 02:43:10.154491   39235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0501 02:43:10.181074   39235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0501 02:43:10.207477   39235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0501 02:43:10.234294   39235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0501 02:43:10.260359   39235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem --> /usr/share/ca-certificates/20724.pem (1338 bytes)
	I0501 02:43:10.285469   39235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem --> /usr/share/ca-certificates/207242.pem (1708 bytes)
	I0501 02:43:10.312946   39235 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0501 02:43:10.338880   39235 ssh_runner.go:195] Run: openssl version
	I0501 02:43:10.345377   39235 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0501 02:43:10.357312   39235 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:43:10.362191   39235 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  1 02:08 /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:43:10.362226   39235 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:43:10.368239   39235 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0501 02:43:10.378439   39235 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20724.pem && ln -fs /usr/share/ca-certificates/20724.pem /etc/ssl/certs/20724.pem"
	I0501 02:43:10.390421   39235 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20724.pem
	I0501 02:43:10.395446   39235 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  1 02:20 /usr/share/ca-certificates/20724.pem
	I0501 02:43:10.395497   39235 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20724.pem
	I0501 02:43:10.401896   39235 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20724.pem /etc/ssl/certs/51391683.0"
	I0501 02:43:10.412204   39235 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/207242.pem && ln -fs /usr/share/ca-certificates/207242.pem /etc/ssl/certs/207242.pem"
	I0501 02:43:10.424343   39235 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/207242.pem
	I0501 02:43:10.429120   39235 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  1 02:20 /usr/share/ca-certificates/207242.pem
	I0501 02:43:10.429180   39235 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/207242.pem
	I0501 02:43:10.435171   39235 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/207242.pem /etc/ssl/certs/3ec20f2e.0"
	I0501 02:43:10.446115   39235 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0501 02:43:10.451405   39235 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0501 02:43:10.457897   39235 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0501 02:43:10.464098   39235 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0501 02:43:10.471263   39235 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0501 02:43:10.477507   39235 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0501 02:43:10.483955   39235 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0501 02:43:10.490367   39235 kubeadm.go:391] StartCluster: {Name:ha-329926 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 Clust
erName:ha-329926 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.5 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.79 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.115 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.84 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:
false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 02:43:10.490526   39235 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0501 02:43:10.490571   39235 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0501 02:43:10.538838   39235 cri.go:89] found id: "33f44ace2eeffb23232e362caee4613e5d7141986d53d906034466c53cbfe7a8"
	I0501 02:43:10.538857   39235 cri.go:89] found id: "0558df584d79d0d8bc6e53073c4ad9708e838bb70f9953ce93d7593344c0385a"
	I0501 02:43:10.538862   39235 cri.go:89] found id: "9dc26cb1281fef5591fd3b938f25bc2b690517cc2fdb2d5506f19a18dd738057"
	I0501 02:43:10.538866   39235 cri.go:89] found id: "778cfaa464ec8ad52820f633702fc3f620188a6295206ae2173142199f71f48e"
	I0501 02:43:10.538870   39235 cri.go:89] found id: "619f66869569c782fd59143653e79694cc74f9ae89927d4f16dfcb10f47d0e03"
	I0501 02:43:10.538874   39235 cri.go:89] found id: "693a12cd2b2c6de7a7c03e4d290e69cdcef9f4f17b75ff84f84fb81b8297cd63"
	I0501 02:43:10.538878   39235 cri.go:89] found id: "fbc7b6bc224b5b53e156316187f05c941fd17da22bca2cc7fecf5071d8eb4d38"
	I0501 02:43:10.538882   39235 cri.go:89] found id: "2ab64850e34b66cbe91e099c91014051472b87888130297cd7c47a4a78992140"
	I0501 02:43:10.538886   39235 cri.go:89] found id: "9563ee09b7dc14582bda46368040d65e26370cf354a48e6db28fb4d5169a41db"
	I0501 02:43:10.538893   39235 cri.go:89] found id: "d24a4adfe9096e0063099c3390b72f12094c22465e8b666eb999e30740b77ea3"
	I0501 02:43:10.538897   39235 cri.go:89] found id: "e3ffc6d046e214333ed85219819ad54bfa5c3ac0b10167beacf72edbd6035736"
	I0501 02:43:10.538900   39235 cri.go:89] found id: "347407ef9dd66d0f2a44d6bc871649c2f38c1263ef6f3a33d6574f0e149ab701"
	I0501 02:43:10.538907   39235 cri.go:89] found id: "9f36a128ab65a069e1cafa5abf1b7774272b73428ea3826f595758d5e99b4e93"
	I0501 02:43:10.538910   39235 cri.go:89] found id: ""
	I0501 02:43:10.538957   39235 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	May 01 02:46:22 ha-329926 crio[3818]: time="2024-05-01 02:46:22.770461466Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714531582770424816,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=566afcd4-02da-4c9c-b4ec-b0351a55c8ce name=/runtime.v1.ImageService/ImageFsInfo
	May 01 02:46:22 ha-329926 crio[3818]: time="2024-05-01 02:46:22.771592824Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=df91756d-33aa-40ad-84db-9dd1d707e316 name=/runtime.v1.RuntimeService/ListContainers
	May 01 02:46:22 ha-329926 crio[3818]: time="2024-05-01 02:46:22.771793133Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=df91756d-33aa-40ad-84db-9dd1d707e316 name=/runtime.v1.RuntimeService/ListContainers
	May 01 02:46:22 ha-329926 crio[3818]: time="2024-05-01 02:46:22.772250476Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0b3a57c2195832ebe5e7032bc917987f944db828a8e5925d886ea10fb432f1ab,PodSandboxId:222f0baa90487bc1f8b94ec1b03db6568034bc14d984f1da1cd8b2ca7b480596,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714531488140417174,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 371423a6-a156-4e8d-bf66-812d606cc8d7,},Annotations:map[string]string{io.kubernetes.container.hash: 79218c39,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68c9b61e07b789bb1c441a33b66eeb07476719d85f4affe9c264e34bd73d8008,PodSandboxId:5c187637e4af957fa3481369cfb179dd7f043b24bd7436ca0fa0e524b347859a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714531481120651978,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-kcmp7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e15c166-9ba1-40c9-8f33-db7f83733932,},Annotations:map[string]string{io.kubernetes.container.hash: fdceac74,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8f3963bbca5a5d8ca96fd7bf715f2b551bcaf4380803b443a346bccff25655b,PodSandboxId:ec79d09460adcc3f5e74bae2063a9f639a550ab2e2282a391540873353174ac5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714531441122972737,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-329926,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c603c91fe09a36a9d3862475188142a,},Annotations:map[string]string{io.kubernetes.container.hash: 740b8b39,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edfe1ca8adcff5c57dd48a6e4e52f6129014ec43e797455a799c8abb8dddf9ad,PodSandboxId:639834849a63b0c5b6034ba67902819baf2c7bab7e147320753381a3fadfc9ef,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714531439122290070,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-329926,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d6c0ce9d370e02811c06c5c50fb7da1,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c768225fa391861ab12a75d643a4757892824fd20ef1294ba1e9b817cbe81f3,PodSandboxId:222f0baa90487bc1f8b94ec1b03db6568034bc14d984f1da1cd8b2ca7b480596,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714531436122032584,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 371423a6-a156-4e8d-bf66-812d606cc8d7,},Annotations:map[string]string{io.kubernetes.container.hash: 79218c39,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bb33341c42d847e3c54695cda601e3d3ee0fe3e95d113fabdb40a8b79ee00ac,PodSandboxId:7ec7a58f19ea4590ecf46d3c8faea8e7ab579c87ea9e31da82f9173c6e67e371,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714531429689100495,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-nwj5x,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0cfb5bda-fca7-479f-98d3-6be9bddf0e1c,},Annotations:map[string]string{io.kubernetes.container.hash: b50f62c1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45d5832e610c8d7398d46f4d0dc6929742b56ee579172c6289a7ebcedd229113,PodSandboxId:45ff3dc59db5afc538c97abf460cf706199fe452449ab01ed2f230cf7248cf45,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1714531411902616259,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-329926,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7e0afd8727d0417c20407d0e8880765,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:3b19253347b3c6a6483c37f96f8593931755f784d085f793c13e001ae0d76794,PodSandboxId:6bd635b17d9c0d981c6e2c3a943281df18c7780f9ff5380d3554dfb073340194,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714531396113172009,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-msshn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7575fbfc-11ce-4223-bd99-ff9cdddd3568,},Annotations:map[string]string{io.kubernetes.container.hash: 725b7ad5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4005dbd
94fbfed92cf6c4bb53e9b2208adb17302eabfa52ea663e83fa24fef7,PodSandboxId:15d9ac760b30773086acc72880e8a01cf304d780f29db72009315c976cb517ff,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714531396470001536,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cfdqc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a37e982e-9e4f-43bf-b957-0d6f082f4ec8,},Annotations:map[string]string{io.kubernetes.container.hash: d10dbf58,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d21d7f0022d1d1edaee4b7f45614bc8d98a407b0ba70c272d9fcdbc67fdba53,PodSandboxId:77261f211cf7416433533bfbdf670550fc76f5c15415fa7ad3d2c30a90d5c656,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714531396224192528,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2h8lc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 937e09f0-6a7d-4387-aa19-ee959eb5a2a5,},Annotations:map[string]string{io.kubernetes.container.hash: 64cd0f7d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a409ec7ab3a3389b868353ff5b180728bff4d9fd6e9ee235408658387a54e865,PodSandboxId:639834849a63b0c5b6034ba67902819baf2c7bab7e147320753381a3fadfc9ef,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714531396158062617,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-329926,io.kuber
netes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d6c0ce9d370e02811c06c5c50fb7da1,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc712ce5191365a1b74a4fb690a3fe1fca3ef109f0525a60d88ecff10b96a61b,PodSandboxId:33a2511848b79ed0b27f51b17c8e8d0380da02cb35f4dd0ab8930ed674b8a9e3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714531396049539667,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-329926,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 50544e7f95cae164184f9b27f78747c6,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:028fe64855c8942e704a66fc8d7d80db9662c05c5252b9ae01043eb95134a0a6,PodSandboxId:ec79d09460adcc3f5e74bae2063a9f639a550ab2e2282a391540873353174ac5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714531395925134025,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-329926,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 4c603c91fe09a36a9d3862475188142a,},Annotations:map[string]string{io.kubernetes.container.hash: 740b8b39,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4940e86ab3aeeda6fefe1a1c3eceee2908fbf5e3ebc1584761c2744b7a04e3e,PodSandboxId:27f30cfd71acf5bbb1ccf09c13482fbe21411ba6499ce9959099bd47c7ce537f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714531395866512593,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-329926,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 684463257510837a1c150a7df713bf62,},Ann
otations:map[string]string{io.kubernetes.container.hash: d6da2a03,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:042c499301cf07675ef97e386fb802a0684efc1ff197389bf8b7458ca853493f,PodSandboxId:5c187637e4af957fa3481369cfb179dd7f043b24bd7436ca0fa0e524b347859a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714531392201491789,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-kcmp7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e15c166-9ba1-40c9-8f33-db7f83733932,},Annotations:map[string]string{io.kuber
netes.container.hash: fdceac74,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d8c54a9eb6fd7f6b4514ddfa0201da9a28692806db71b7d277493e9f9b90233,PodSandboxId:abf4acd7dd09ff0cf728fe61b1bf3c73291eacc97a98f9f2d79b9fe49a629266,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714530889047946688,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-nwj5x,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0cfb5bda-fca7-479f-98d3-6be9bddf0e1c,},Annotations:map[string]string{io.kuberne
tes.container.hash: b50f62c1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:619f66869569c782fd59143653e79694cc74f9ae89927d4f16dfcb10f47d0e03,PodSandboxId:0fe93b95f6356e43c599c74942976a3c1c2025ad4e52377711767c38c88d3d63,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714530725115421538,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cfdqc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a37e982e-9e4f-43bf-b957-0d6f082f4ec8,},Annotations:map[string]string{io.kubernetes.container.hash: d10dbf58,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:693a12cd2b2c6de7a7c03e4d290e69cdcef9f4f17b75ff84f84fb81b8297cd63,PodSandboxId:1771f42c6abecb905d783dc2c43071807fc9138d9eae9432b2b132602a450cb2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714530725082845913,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-7db6d8ff4d-2h8lc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 937e09f0-6a7d-4387-aa19-ee959eb5a2a5,},Annotations:map[string]string{io.kubernetes.container.hash: 64cd0f7d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ab64850e34b66cbe91e099c91014051472b87888130297cd7c47a4a78992140,PodSandboxId:f6611da96d51a594f93865322c664539feca749ea5a59ee33b637aae006635ba,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf4
31fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714530722007160196,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-msshn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7575fbfc-11ce-4223-bd99-ff9cdddd3568,},Annotations:map[string]string{io.kubernetes.container.hash: 725b7ad5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3ffc6d046e214333ed85219819ad54bfa5c3ac0b10167beacf72edbd6035736,PodSandboxId:170d412885089b502fe3701809ec5f3271a9fd4f9181bf1c293cd527d144907b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929d
c8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714530701589505342,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-329926,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50544e7f95cae164184f9b27f78747c6,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f36a128ab65a069e1cafa5abf1b7774272b73428ea3826f595758d5e99b4e93,PodSandboxId:0c17dc8e917b3fee067acb3ff1b63beb8742f444a12d58ee96183b021052a9ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTA
INER_EXITED,CreatedAt:1714530701461361460,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-329926,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 684463257510837a1c150a7df713bf62,},Annotations:map[string]string{io.kubernetes.container.hash: d6da2a03,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=df91756d-33aa-40ad-84db-9dd1d707e316 name=/runtime.v1.RuntimeService/ListContainers
	May 01 02:46:22 ha-329926 crio[3818]: time="2024-05-01 02:46:22.824012159Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=85d8e728-c98e-4729-b5af-45e6e3ef35cf name=/runtime.v1.RuntimeService/Version
	May 01 02:46:22 ha-329926 crio[3818]: time="2024-05-01 02:46:22.824292477Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=85d8e728-c98e-4729-b5af-45e6e3ef35cf name=/runtime.v1.RuntimeService/Version
	May 01 02:46:22 ha-329926 crio[3818]: time="2024-05-01 02:46:22.832449073Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=316f30d0-e206-4d71-bf75-bfe8236a4ea0 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 02:46:22 ha-329926 crio[3818]: time="2024-05-01 02:46:22.833219123Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714531582833186191,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=316f30d0-e206-4d71-bf75-bfe8236a4ea0 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 02:46:22 ha-329926 crio[3818]: time="2024-05-01 02:46:22.838207294Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=de2c59c4-e23b-4668-bb2b-91274ae5c2f9 name=/runtime.v1.RuntimeService/ListContainers
	May 01 02:46:22 ha-329926 crio[3818]: time="2024-05-01 02:46:22.838295653Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=de2c59c4-e23b-4668-bb2b-91274ae5c2f9 name=/runtime.v1.RuntimeService/ListContainers
	May 01 02:46:22 ha-329926 crio[3818]: time="2024-05-01 02:46:22.838989064Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0b3a57c2195832ebe5e7032bc917987f944db828a8e5925d886ea10fb432f1ab,PodSandboxId:222f0baa90487bc1f8b94ec1b03db6568034bc14d984f1da1cd8b2ca7b480596,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714531488140417174,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 371423a6-a156-4e8d-bf66-812d606cc8d7,},Annotations:map[string]string{io.kubernetes.container.hash: 79218c39,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68c9b61e07b789bb1c441a33b66eeb07476719d85f4affe9c264e34bd73d8008,PodSandboxId:5c187637e4af957fa3481369cfb179dd7f043b24bd7436ca0fa0e524b347859a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714531481120651978,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-kcmp7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e15c166-9ba1-40c9-8f33-db7f83733932,},Annotations:map[string]string{io.kubernetes.container.hash: fdceac74,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8f3963bbca5a5d8ca96fd7bf715f2b551bcaf4380803b443a346bccff25655b,PodSandboxId:ec79d09460adcc3f5e74bae2063a9f639a550ab2e2282a391540873353174ac5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714531441122972737,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-329926,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c603c91fe09a36a9d3862475188142a,},Annotations:map[string]string{io.kubernetes.container.hash: 740b8b39,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edfe1ca8adcff5c57dd48a6e4e52f6129014ec43e797455a799c8abb8dddf9ad,PodSandboxId:639834849a63b0c5b6034ba67902819baf2c7bab7e147320753381a3fadfc9ef,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714531439122290070,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-329926,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d6c0ce9d370e02811c06c5c50fb7da1,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c768225fa391861ab12a75d643a4757892824fd20ef1294ba1e9b817cbe81f3,PodSandboxId:222f0baa90487bc1f8b94ec1b03db6568034bc14d984f1da1cd8b2ca7b480596,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714531436122032584,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 371423a6-a156-4e8d-bf66-812d606cc8d7,},Annotations:map[string]string{io.kubernetes.container.hash: 79218c39,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bb33341c42d847e3c54695cda601e3d3ee0fe3e95d113fabdb40a8b79ee00ac,PodSandboxId:7ec7a58f19ea4590ecf46d3c8faea8e7ab579c87ea9e31da82f9173c6e67e371,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714531429689100495,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-nwj5x,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0cfb5bda-fca7-479f-98d3-6be9bddf0e1c,},Annotations:map[string]string{io.kubernetes.container.hash: b50f62c1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45d5832e610c8d7398d46f4d0dc6929742b56ee579172c6289a7ebcedd229113,PodSandboxId:45ff3dc59db5afc538c97abf460cf706199fe452449ab01ed2f230cf7248cf45,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1714531411902616259,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-329926,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7e0afd8727d0417c20407d0e8880765,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:3b19253347b3c6a6483c37f96f8593931755f784d085f793c13e001ae0d76794,PodSandboxId:6bd635b17d9c0d981c6e2c3a943281df18c7780f9ff5380d3554dfb073340194,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714531396113172009,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-msshn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7575fbfc-11ce-4223-bd99-ff9cdddd3568,},Annotations:map[string]string{io.kubernetes.container.hash: 725b7ad5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4005dbd
94fbfed92cf6c4bb53e9b2208adb17302eabfa52ea663e83fa24fef7,PodSandboxId:15d9ac760b30773086acc72880e8a01cf304d780f29db72009315c976cb517ff,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714531396470001536,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cfdqc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a37e982e-9e4f-43bf-b957-0d6f082f4ec8,},Annotations:map[string]string{io.kubernetes.container.hash: d10dbf58,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d21d7f0022d1d1edaee4b7f45614bc8d98a407b0ba70c272d9fcdbc67fdba53,PodSandboxId:77261f211cf7416433533bfbdf670550fc76f5c15415fa7ad3d2c30a90d5c656,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714531396224192528,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2h8lc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 937e09f0-6a7d-4387-aa19-ee959eb5a2a5,},Annotations:map[string]string{io.kubernetes.container.hash: 64cd0f7d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a409ec7ab3a3389b868353ff5b180728bff4d9fd6e9ee235408658387a54e865,PodSandboxId:639834849a63b0c5b6034ba67902819baf2c7bab7e147320753381a3fadfc9ef,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714531396158062617,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-329926,io.kuber
netes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d6c0ce9d370e02811c06c5c50fb7da1,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc712ce5191365a1b74a4fb690a3fe1fca3ef109f0525a60d88ecff10b96a61b,PodSandboxId:33a2511848b79ed0b27f51b17c8e8d0380da02cb35f4dd0ab8930ed674b8a9e3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714531396049539667,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-329926,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 50544e7f95cae164184f9b27f78747c6,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:028fe64855c8942e704a66fc8d7d80db9662c05c5252b9ae01043eb95134a0a6,PodSandboxId:ec79d09460adcc3f5e74bae2063a9f639a550ab2e2282a391540873353174ac5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714531395925134025,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-329926,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 4c603c91fe09a36a9d3862475188142a,},Annotations:map[string]string{io.kubernetes.container.hash: 740b8b39,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4940e86ab3aeeda6fefe1a1c3eceee2908fbf5e3ebc1584761c2744b7a04e3e,PodSandboxId:27f30cfd71acf5bbb1ccf09c13482fbe21411ba6499ce9959099bd47c7ce537f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714531395866512593,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-329926,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 684463257510837a1c150a7df713bf62,},Ann
otations:map[string]string{io.kubernetes.container.hash: d6da2a03,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:042c499301cf07675ef97e386fb802a0684efc1ff197389bf8b7458ca853493f,PodSandboxId:5c187637e4af957fa3481369cfb179dd7f043b24bd7436ca0fa0e524b347859a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714531392201491789,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-kcmp7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e15c166-9ba1-40c9-8f33-db7f83733932,},Annotations:map[string]string{io.kuber
netes.container.hash: fdceac74,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d8c54a9eb6fd7f6b4514ddfa0201da9a28692806db71b7d277493e9f9b90233,PodSandboxId:abf4acd7dd09ff0cf728fe61b1bf3c73291eacc97a98f9f2d79b9fe49a629266,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714530889047946688,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-nwj5x,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0cfb5bda-fca7-479f-98d3-6be9bddf0e1c,},Annotations:map[string]string{io.kuberne
tes.container.hash: b50f62c1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:619f66869569c782fd59143653e79694cc74f9ae89927d4f16dfcb10f47d0e03,PodSandboxId:0fe93b95f6356e43c599c74942976a3c1c2025ad4e52377711767c38c88d3d63,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714530725115421538,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cfdqc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a37e982e-9e4f-43bf-b957-0d6f082f4ec8,},Annotations:map[string]string{io.kubernetes.container.hash: d10dbf58,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:693a12cd2b2c6de7a7c03e4d290e69cdcef9f4f17b75ff84f84fb81b8297cd63,PodSandboxId:1771f42c6abecb905d783dc2c43071807fc9138d9eae9432b2b132602a450cb2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714530725082845913,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-7db6d8ff4d-2h8lc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 937e09f0-6a7d-4387-aa19-ee959eb5a2a5,},Annotations:map[string]string{io.kubernetes.container.hash: 64cd0f7d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ab64850e34b66cbe91e099c91014051472b87888130297cd7c47a4a78992140,PodSandboxId:f6611da96d51a594f93865322c664539feca749ea5a59ee33b637aae006635ba,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf4
31fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714530722007160196,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-msshn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7575fbfc-11ce-4223-bd99-ff9cdddd3568,},Annotations:map[string]string{io.kubernetes.container.hash: 725b7ad5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3ffc6d046e214333ed85219819ad54bfa5c3ac0b10167beacf72edbd6035736,PodSandboxId:170d412885089b502fe3701809ec5f3271a9fd4f9181bf1c293cd527d144907b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929d
c8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714530701589505342,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-329926,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50544e7f95cae164184f9b27f78747c6,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f36a128ab65a069e1cafa5abf1b7774272b73428ea3826f595758d5e99b4e93,PodSandboxId:0c17dc8e917b3fee067acb3ff1b63beb8742f444a12d58ee96183b021052a9ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTA
INER_EXITED,CreatedAt:1714530701461361460,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-329926,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 684463257510837a1c150a7df713bf62,},Annotations:map[string]string{io.kubernetes.container.hash: d6da2a03,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=de2c59c4-e23b-4668-bb2b-91274ae5c2f9 name=/runtime.v1.RuntimeService/ListContainers
	May 01 02:46:22 ha-329926 crio[3818]: time="2024-05-01 02:46:22.939984494Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=50c1bc1c-29cd-4304-a3e9-1abd9f289c09 name=/runtime.v1.RuntimeService/Version
	May 01 02:46:22 ha-329926 crio[3818]: time="2024-05-01 02:46:22.940063570Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=50c1bc1c-29cd-4304-a3e9-1abd9f289c09 name=/runtime.v1.RuntimeService/Version
	May 01 02:46:22 ha-329926 crio[3818]: time="2024-05-01 02:46:22.942377247Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f2ce5761-b0be-44c1-a38f-79e2a3463fff name=/runtime.v1.ImageService/ImageFsInfo
	May 01 02:46:22 ha-329926 crio[3818]: time="2024-05-01 02:46:22.943056531Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714531582943016154,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f2ce5761-b0be-44c1-a38f-79e2a3463fff name=/runtime.v1.ImageService/ImageFsInfo
	May 01 02:46:22 ha-329926 crio[3818]: time="2024-05-01 02:46:22.943652334Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=faf61994-1209-4fd4-a552-824f9ecacb4e name=/runtime.v1.RuntimeService/ListContainers
	May 01 02:46:22 ha-329926 crio[3818]: time="2024-05-01 02:46:22.943795135Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=faf61994-1209-4fd4-a552-824f9ecacb4e name=/runtime.v1.RuntimeService/ListContainers
	May 01 02:46:22 ha-329926 crio[3818]: time="2024-05-01 02:46:22.944197074Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0b3a57c2195832ebe5e7032bc917987f944db828a8e5925d886ea10fb432f1ab,PodSandboxId:222f0baa90487bc1f8b94ec1b03db6568034bc14d984f1da1cd8b2ca7b480596,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714531488140417174,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 371423a6-a156-4e8d-bf66-812d606cc8d7,},Annotations:map[string]string{io.kubernetes.container.hash: 79218c39,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68c9b61e07b789bb1c441a33b66eeb07476719d85f4affe9c264e34bd73d8008,PodSandboxId:5c187637e4af957fa3481369cfb179dd7f043b24bd7436ca0fa0e524b347859a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714531481120651978,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-kcmp7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e15c166-9ba1-40c9-8f33-db7f83733932,},Annotations:map[string]string{io.kubernetes.container.hash: fdceac74,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8f3963bbca5a5d8ca96fd7bf715f2b551bcaf4380803b443a346bccff25655b,PodSandboxId:ec79d09460adcc3f5e74bae2063a9f639a550ab2e2282a391540873353174ac5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714531441122972737,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-329926,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c603c91fe09a36a9d3862475188142a,},Annotations:map[string]string{io.kubernetes.container.hash: 740b8b39,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edfe1ca8adcff5c57dd48a6e4e52f6129014ec43e797455a799c8abb8dddf9ad,PodSandboxId:639834849a63b0c5b6034ba67902819baf2c7bab7e147320753381a3fadfc9ef,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714531439122290070,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-329926,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d6c0ce9d370e02811c06c5c50fb7da1,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c768225fa391861ab12a75d643a4757892824fd20ef1294ba1e9b817cbe81f3,PodSandboxId:222f0baa90487bc1f8b94ec1b03db6568034bc14d984f1da1cd8b2ca7b480596,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714531436122032584,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 371423a6-a156-4e8d-bf66-812d606cc8d7,},Annotations:map[string]string{io.kubernetes.container.hash: 79218c39,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bb33341c42d847e3c54695cda601e3d3ee0fe3e95d113fabdb40a8b79ee00ac,PodSandboxId:7ec7a58f19ea4590ecf46d3c8faea8e7ab579c87ea9e31da82f9173c6e67e371,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714531429689100495,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-nwj5x,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0cfb5bda-fca7-479f-98d3-6be9bddf0e1c,},Annotations:map[string]string{io.kubernetes.container.hash: b50f62c1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45d5832e610c8d7398d46f4d0dc6929742b56ee579172c6289a7ebcedd229113,PodSandboxId:45ff3dc59db5afc538c97abf460cf706199fe452449ab01ed2f230cf7248cf45,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1714531411902616259,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-329926,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7e0afd8727d0417c20407d0e8880765,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:3b19253347b3c6a6483c37f96f8593931755f784d085f793c13e001ae0d76794,PodSandboxId:6bd635b17d9c0d981c6e2c3a943281df18c7780f9ff5380d3554dfb073340194,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714531396113172009,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-msshn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7575fbfc-11ce-4223-bd99-ff9cdddd3568,},Annotations:map[string]string{io.kubernetes.container.hash: 725b7ad5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4005dbd
94fbfed92cf6c4bb53e9b2208adb17302eabfa52ea663e83fa24fef7,PodSandboxId:15d9ac760b30773086acc72880e8a01cf304d780f29db72009315c976cb517ff,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714531396470001536,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cfdqc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a37e982e-9e4f-43bf-b957-0d6f082f4ec8,},Annotations:map[string]string{io.kubernetes.container.hash: d10dbf58,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d21d7f0022d1d1edaee4b7f45614bc8d98a407b0ba70c272d9fcdbc67fdba53,PodSandboxId:77261f211cf7416433533bfbdf670550fc76f5c15415fa7ad3d2c30a90d5c656,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714531396224192528,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2h8lc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 937e09f0-6a7d-4387-aa19-ee959eb5a2a5,},Annotations:map[string]string{io.kubernetes.container.hash: 64cd0f7d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a409ec7ab3a3389b868353ff5b180728bff4d9fd6e9ee235408658387a54e865,PodSandboxId:639834849a63b0c5b6034ba67902819baf2c7bab7e147320753381a3fadfc9ef,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714531396158062617,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-329926,io.kuber
netes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d6c0ce9d370e02811c06c5c50fb7da1,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc712ce5191365a1b74a4fb690a3fe1fca3ef109f0525a60d88ecff10b96a61b,PodSandboxId:33a2511848b79ed0b27f51b17c8e8d0380da02cb35f4dd0ab8930ed674b8a9e3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714531396049539667,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-329926,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 50544e7f95cae164184f9b27f78747c6,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:028fe64855c8942e704a66fc8d7d80db9662c05c5252b9ae01043eb95134a0a6,PodSandboxId:ec79d09460adcc3f5e74bae2063a9f639a550ab2e2282a391540873353174ac5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714531395925134025,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-329926,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 4c603c91fe09a36a9d3862475188142a,},Annotations:map[string]string{io.kubernetes.container.hash: 740b8b39,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4940e86ab3aeeda6fefe1a1c3eceee2908fbf5e3ebc1584761c2744b7a04e3e,PodSandboxId:27f30cfd71acf5bbb1ccf09c13482fbe21411ba6499ce9959099bd47c7ce537f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714531395866512593,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-329926,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 684463257510837a1c150a7df713bf62,},Ann
otations:map[string]string{io.kubernetes.container.hash: d6da2a03,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:042c499301cf07675ef97e386fb802a0684efc1ff197389bf8b7458ca853493f,PodSandboxId:5c187637e4af957fa3481369cfb179dd7f043b24bd7436ca0fa0e524b347859a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714531392201491789,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-kcmp7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e15c166-9ba1-40c9-8f33-db7f83733932,},Annotations:map[string]string{io.kuber
netes.container.hash: fdceac74,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d8c54a9eb6fd7f6b4514ddfa0201da9a28692806db71b7d277493e9f9b90233,PodSandboxId:abf4acd7dd09ff0cf728fe61b1bf3c73291eacc97a98f9f2d79b9fe49a629266,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714530889047946688,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-nwj5x,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0cfb5bda-fca7-479f-98d3-6be9bddf0e1c,},Annotations:map[string]string{io.kuberne
tes.container.hash: b50f62c1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:619f66869569c782fd59143653e79694cc74f9ae89927d4f16dfcb10f47d0e03,PodSandboxId:0fe93b95f6356e43c599c74942976a3c1c2025ad4e52377711767c38c88d3d63,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714530725115421538,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cfdqc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a37e982e-9e4f-43bf-b957-0d6f082f4ec8,},Annotations:map[string]string{io.kubernetes.container.hash: d10dbf58,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:693a12cd2b2c6de7a7c03e4d290e69cdcef9f4f17b75ff84f84fb81b8297cd63,PodSandboxId:1771f42c6abecb905d783dc2c43071807fc9138d9eae9432b2b132602a450cb2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714530725082845913,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-7db6d8ff4d-2h8lc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 937e09f0-6a7d-4387-aa19-ee959eb5a2a5,},Annotations:map[string]string{io.kubernetes.container.hash: 64cd0f7d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ab64850e34b66cbe91e099c91014051472b87888130297cd7c47a4a78992140,PodSandboxId:f6611da96d51a594f93865322c664539feca749ea5a59ee33b637aae006635ba,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf4
31fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714530722007160196,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-msshn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7575fbfc-11ce-4223-bd99-ff9cdddd3568,},Annotations:map[string]string{io.kubernetes.container.hash: 725b7ad5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3ffc6d046e214333ed85219819ad54bfa5c3ac0b10167beacf72edbd6035736,PodSandboxId:170d412885089b502fe3701809ec5f3271a9fd4f9181bf1c293cd527d144907b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929d
c8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714530701589505342,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-329926,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50544e7f95cae164184f9b27f78747c6,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f36a128ab65a069e1cafa5abf1b7774272b73428ea3826f595758d5e99b4e93,PodSandboxId:0c17dc8e917b3fee067acb3ff1b63beb8742f444a12d58ee96183b021052a9ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTA
INER_EXITED,CreatedAt:1714530701461361460,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-329926,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 684463257510837a1c150a7df713bf62,},Annotations:map[string]string{io.kubernetes.container.hash: d6da2a03,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=faf61994-1209-4fd4-a552-824f9ecacb4e name=/runtime.v1.RuntimeService/ListContainers
	May 01 02:46:23 ha-329926 crio[3818]: time="2024-05-01 02:46:23.015018852Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5cc6169f-0c6c-4e2c-b1f2-0fdf02255c00 name=/runtime.v1.RuntimeService/Version
	May 01 02:46:23 ha-329926 crio[3818]: time="2024-05-01 02:46:23.015124326Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5cc6169f-0c6c-4e2c-b1f2-0fdf02255c00 name=/runtime.v1.RuntimeService/Version
	May 01 02:46:23 ha-329926 crio[3818]: time="2024-05-01 02:46:23.016461697Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=74da875e-6398-4b0f-ba1e-2b0431e8e8e2 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 02:46:23 ha-329926 crio[3818]: time="2024-05-01 02:46:23.017105629Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714531583017077108,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=74da875e-6398-4b0f-ba1e-2b0431e8e8e2 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 02:46:23 ha-329926 crio[3818]: time="2024-05-01 02:46:23.018299831Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=80d9f977-a1af-4bb9-beeb-59b1178f5611 name=/runtime.v1.RuntimeService/ListContainers
	May 01 02:46:23 ha-329926 crio[3818]: time="2024-05-01 02:46:23.018381492Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=80d9f977-a1af-4bb9-beeb-59b1178f5611 name=/runtime.v1.RuntimeService/ListContainers
	May 01 02:46:23 ha-329926 crio[3818]: time="2024-05-01 02:46:23.018919733Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0b3a57c2195832ebe5e7032bc917987f944db828a8e5925d886ea10fb432f1ab,PodSandboxId:222f0baa90487bc1f8b94ec1b03db6568034bc14d984f1da1cd8b2ca7b480596,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714531488140417174,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 371423a6-a156-4e8d-bf66-812d606cc8d7,},Annotations:map[string]string{io.kubernetes.container.hash: 79218c39,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68c9b61e07b789bb1c441a33b66eeb07476719d85f4affe9c264e34bd73d8008,PodSandboxId:5c187637e4af957fa3481369cfb179dd7f043b24bd7436ca0fa0e524b347859a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714531481120651978,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-kcmp7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e15c166-9ba1-40c9-8f33-db7f83733932,},Annotations:map[string]string{io.kubernetes.container.hash: fdceac74,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8f3963bbca5a5d8ca96fd7bf715f2b551bcaf4380803b443a346bccff25655b,PodSandboxId:ec79d09460adcc3f5e74bae2063a9f639a550ab2e2282a391540873353174ac5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714531441122972737,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-329926,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c603c91fe09a36a9d3862475188142a,},Annotations:map[string]string{io.kubernetes.container.hash: 740b8b39,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edfe1ca8adcff5c57dd48a6e4e52f6129014ec43e797455a799c8abb8dddf9ad,PodSandboxId:639834849a63b0c5b6034ba67902819baf2c7bab7e147320753381a3fadfc9ef,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714531439122290070,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-329926,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d6c0ce9d370e02811c06c5c50fb7da1,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c768225fa391861ab12a75d643a4757892824fd20ef1294ba1e9b817cbe81f3,PodSandboxId:222f0baa90487bc1f8b94ec1b03db6568034bc14d984f1da1cd8b2ca7b480596,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714531436122032584,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 371423a6-a156-4e8d-bf66-812d606cc8d7,},Annotations:map[string]string{io.kubernetes.container.hash: 79218c39,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bb33341c42d847e3c54695cda601e3d3ee0fe3e95d113fabdb40a8b79ee00ac,PodSandboxId:7ec7a58f19ea4590ecf46d3c8faea8e7ab579c87ea9e31da82f9173c6e67e371,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714531429689100495,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-nwj5x,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0cfb5bda-fca7-479f-98d3-6be9bddf0e1c,},Annotations:map[string]string{io.kubernetes.container.hash: b50f62c1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45d5832e610c8d7398d46f4d0dc6929742b56ee579172c6289a7ebcedd229113,PodSandboxId:45ff3dc59db5afc538c97abf460cf706199fe452449ab01ed2f230cf7248cf45,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1714531411902616259,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-329926,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7e0afd8727d0417c20407d0e8880765,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:3b19253347b3c6a6483c37f96f8593931755f784d085f793c13e001ae0d76794,PodSandboxId:6bd635b17d9c0d981c6e2c3a943281df18c7780f9ff5380d3554dfb073340194,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714531396113172009,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-msshn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7575fbfc-11ce-4223-bd99-ff9cdddd3568,},Annotations:map[string]string{io.kubernetes.container.hash: 725b7ad5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4005dbd
94fbfed92cf6c4bb53e9b2208adb17302eabfa52ea663e83fa24fef7,PodSandboxId:15d9ac760b30773086acc72880e8a01cf304d780f29db72009315c976cb517ff,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714531396470001536,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cfdqc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a37e982e-9e4f-43bf-b957-0d6f082f4ec8,},Annotations:map[string]string{io.kubernetes.container.hash: d10dbf58,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d21d7f0022d1d1edaee4b7f45614bc8d98a407b0ba70c272d9fcdbc67fdba53,PodSandboxId:77261f211cf7416433533bfbdf670550fc76f5c15415fa7ad3d2c30a90d5c656,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714531396224192528,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2h8lc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 937e09f0-6a7d-4387-aa19-ee959eb5a2a5,},Annotations:map[string]string{io.kubernetes.container.hash: 64cd0f7d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a409ec7ab3a3389b868353ff5b180728bff4d9fd6e9ee235408658387a54e865,PodSandboxId:639834849a63b0c5b6034ba67902819baf2c7bab7e147320753381a3fadfc9ef,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714531396158062617,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-329926,io.kuber
netes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d6c0ce9d370e02811c06c5c50fb7da1,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc712ce5191365a1b74a4fb690a3fe1fca3ef109f0525a60d88ecff10b96a61b,PodSandboxId:33a2511848b79ed0b27f51b17c8e8d0380da02cb35f4dd0ab8930ed674b8a9e3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714531396049539667,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-329926,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 50544e7f95cae164184f9b27f78747c6,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:028fe64855c8942e704a66fc8d7d80db9662c05c5252b9ae01043eb95134a0a6,PodSandboxId:ec79d09460adcc3f5e74bae2063a9f639a550ab2e2282a391540873353174ac5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714531395925134025,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-329926,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 4c603c91fe09a36a9d3862475188142a,},Annotations:map[string]string{io.kubernetes.container.hash: 740b8b39,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4940e86ab3aeeda6fefe1a1c3eceee2908fbf5e3ebc1584761c2744b7a04e3e,PodSandboxId:27f30cfd71acf5bbb1ccf09c13482fbe21411ba6499ce9959099bd47c7ce537f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714531395866512593,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-329926,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 684463257510837a1c150a7df713bf62,},Ann
otations:map[string]string{io.kubernetes.container.hash: d6da2a03,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:042c499301cf07675ef97e386fb802a0684efc1ff197389bf8b7458ca853493f,PodSandboxId:5c187637e4af957fa3481369cfb179dd7f043b24bd7436ca0fa0e524b347859a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714531392201491789,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-kcmp7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e15c166-9ba1-40c9-8f33-db7f83733932,},Annotations:map[string]string{io.kuber
netes.container.hash: fdceac74,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d8c54a9eb6fd7f6b4514ddfa0201da9a28692806db71b7d277493e9f9b90233,PodSandboxId:abf4acd7dd09ff0cf728fe61b1bf3c73291eacc97a98f9f2d79b9fe49a629266,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714530889047946688,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-nwj5x,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0cfb5bda-fca7-479f-98d3-6be9bddf0e1c,},Annotations:map[string]string{io.kuberne
tes.container.hash: b50f62c1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:619f66869569c782fd59143653e79694cc74f9ae89927d4f16dfcb10f47d0e03,PodSandboxId:0fe93b95f6356e43c599c74942976a3c1c2025ad4e52377711767c38c88d3d63,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714530725115421538,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cfdqc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a37e982e-9e4f-43bf-b957-0d6f082f4ec8,},Annotations:map[string]string{io.kubernetes.container.hash: d10dbf58,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:693a12cd2b2c6de7a7c03e4d290e69cdcef9f4f17b75ff84f84fb81b8297cd63,PodSandboxId:1771f42c6abecb905d783dc2c43071807fc9138d9eae9432b2b132602a450cb2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714530725082845913,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-7db6d8ff4d-2h8lc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 937e09f0-6a7d-4387-aa19-ee959eb5a2a5,},Annotations:map[string]string{io.kubernetes.container.hash: 64cd0f7d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ab64850e34b66cbe91e099c91014051472b87888130297cd7c47a4a78992140,PodSandboxId:f6611da96d51a594f93865322c664539feca749ea5a59ee33b637aae006635ba,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf4
31fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714530722007160196,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-msshn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7575fbfc-11ce-4223-bd99-ff9cdddd3568,},Annotations:map[string]string{io.kubernetes.container.hash: 725b7ad5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3ffc6d046e214333ed85219819ad54bfa5c3ac0b10167beacf72edbd6035736,PodSandboxId:170d412885089b502fe3701809ec5f3271a9fd4f9181bf1c293cd527d144907b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929d
c8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714530701589505342,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-329926,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50544e7f95cae164184f9b27f78747c6,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f36a128ab65a069e1cafa5abf1b7774272b73428ea3826f595758d5e99b4e93,PodSandboxId:0c17dc8e917b3fee067acb3ff1b63beb8742f444a12d58ee96183b021052a9ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTA
INER_EXITED,CreatedAt:1714530701461361460,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-329926,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 684463257510837a1c150a7df713bf62,},Annotations:map[string]string{io.kubernetes.container.hash: d6da2a03,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=80d9f977-a1af-4bb9-beeb-59b1178f5611 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	0b3a57c219583       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       4                   222f0baa90487       storage-provisioner
	68c9b61e07b78       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      About a minute ago   Running             kindnet-cni               3                   5c187637e4af9       kindnet-kcmp7
	c8f3963bbca5a       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      2 minutes ago        Running             kube-apiserver            3                   ec79d09460adc       kube-apiserver-ha-329926
	edfe1ca8adcff       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      2 minutes ago        Running             kube-controller-manager   2                   639834849a63b       kube-controller-manager-ha-329926
	6c768225fa391       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago        Exited              storage-provisioner       3                   222f0baa90487       storage-provisioner
	0bb33341c42d8       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      2 minutes ago        Running             busybox                   1                   7ec7a58f19ea4       busybox-fc5497c4f-nwj5x
	45d5832e610c8       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      2 minutes ago        Running             kube-vip                  0                   45ff3dc59db5a       kube-vip-ha-329926
	b4005dbd94fbf       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      3 minutes ago        Running             coredns                   1                   15d9ac760b307       coredns-7db6d8ff4d-cfdqc
	2d21d7f0022d1       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      3 minutes ago        Running             coredns                   1                   77261f211cf74       coredns-7db6d8ff4d-2h8lc
	a409ec7ab3a33       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      3 minutes ago        Exited              kube-controller-manager   1                   639834849a63b       kube-controller-manager-ha-329926
	3b19253347b3c       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      3 minutes ago        Running             kube-proxy                1                   6bd635b17d9c0       kube-proxy-msshn
	fc712ce519136       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      3 minutes ago        Running             kube-scheduler            1                   33a2511848b79       kube-scheduler-ha-329926
	028fe64855c89       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      3 minutes ago        Exited              kube-apiserver            2                   ec79d09460adc       kube-apiserver-ha-329926
	d4940e86ab3ae       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      3 minutes ago        Running             etcd                      1                   27f30cfd71acf       etcd-ha-329926
	042c499301cf0       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      3 minutes ago        Exited              kindnet-cni               2                   5c187637e4af9       kindnet-kcmp7
	4d8c54a9eb6fd       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   11 minutes ago       Exited              busybox                   0                   abf4acd7dd09f       busybox-fc5497c4f-nwj5x
	619f66869569c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      14 minutes ago       Exited              coredns                   0                   0fe93b95f6356       coredns-7db6d8ff4d-cfdqc
	693a12cd2b2c6       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      14 minutes ago       Exited              coredns                   0                   1771f42c6abec       coredns-7db6d8ff4d-2h8lc
	2ab64850e34b6       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      14 minutes ago       Exited              kube-proxy                0                   f6611da96d51a       kube-proxy-msshn
	e3ffc6d046e21       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      14 minutes ago       Exited              kube-scheduler            0                   170d412885089       kube-scheduler-ha-329926
	9f36a128ab65a       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      14 minutes ago       Exited              etcd                      0                   0c17dc8e917b3       etcd-ha-329926
	
	
	==> coredns [2d21d7f0022d1d1edaee4b7f45614bc8d98a407b0ba70c272d9fcdbc67fdba53] <==
	[INFO] plugin/kubernetes: Trace[1392783372]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (01-May-2024 02:43:28.125) (total time: 10185ms):
	Trace[1392783372]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:58212->10.96.0.1:443: read: connection reset by peer 10184ms (02:43:38.309)
	Trace[1392783372]: [10.185012457s] [10.185012457s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:58212->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [619f66869569c782fd59143653e79694cc74f9ae89927d4f16dfcb10f47d0e03] <==
	[INFO] 10.244.1.2:38209 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000174169s
	[INFO] 10.244.1.2:49411 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000226927s
	[INFO] 10.244.0.4:36823 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000251251s
	[INFO] 10.244.0.4:50159 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001217267s
	[INFO] 10.244.0.4:40861 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000095644s
	[INFO] 10.244.0.4:39347 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000037736s
	[INFO] 10.244.2.2:41105 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000265426s
	[INFO] 10.244.2.2:60245 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000092358s
	[INFO] 10.244.2.2:33866 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00027339s
	[INFO] 10.244.2.2:40430 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000118178s
	[INFO] 10.244.2.2:34835 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000101675s
	[INFO] 10.244.1.2:50970 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000173405s
	[INFO] 10.244.1.2:45808 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000138806s
	[INFO] 10.244.0.4:35255 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000156547s
	[INFO] 10.244.0.4:41916 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000142712s
	[INFO] 10.244.0.4:47485 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000089433s
	[INFO] 10.244.2.2:53686 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000133335s
	[INFO] 10.244.2.2:36841 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000214942s
	[INFO] 10.244.2.2:60707 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000154s
	[INFO] 10.244.1.2:56577 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000484498s
	[INFO] 10.244.0.4:54313 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000184738s
	[INFO] 10.244.0.4:52463 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000369344s
	[INFO] 10.244.2.2:41039 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000224698s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [693a12cd2b2c6de7a7c03e4d290e69cdcef9f4f17b75ff84f84fb81b8297cd63] <==
	[INFO] 10.244.1.2:60518 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00017936s
	[INFO] 10.244.0.4:49957 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000203599s
	[INFO] 10.244.0.4:42538 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001710693s
	[INFO] 10.244.0.4:56099 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000083655s
	[INFO] 10.244.0.4:32984 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000156518s
	[INFO] 10.244.2.2:55668 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001793326s
	[INFO] 10.244.2.2:50808 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001174633s
	[INFO] 10.244.2.2:44291 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000119382s
	[INFO] 10.244.1.2:38278 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000204436s
	[INFO] 10.244.1.2:59141 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000117309s
	[INFO] 10.244.0.4:37516 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00005532s
	[INFO] 10.244.2.2:57332 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000189855s
	[INFO] 10.244.1.2:34171 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00024042s
	[INFO] 10.244.1.2:37491 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000234774s
	[INFO] 10.244.1.2:47588 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000815872s
	[INFO] 10.244.0.4:38552 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000135078s
	[INFO] 10.244.0.4:37827 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000154857s
	[INFO] 10.244.2.2:47767 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000154967s
	[INFO] 10.244.2.2:56393 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000156764s
	[INFO] 10.244.2.2:38616 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000127045s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	
	
	==> coredns [b4005dbd94fbfed92cf6c4bb53e9b2208adb17302eabfa52ea663e83fa24fef7] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:36528->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:49274->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:49274->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:49260->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1398911808]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (01-May-2024 02:43:28.488) (total time: 12236ms):
	Trace[1398911808]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:49260->10.96.0.1:443: read: connection reset by peer 12235ms (02:43:40.724)
	Trace[1398911808]: [12.2361642s] [12.2361642s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:49260->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-329926
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-329926
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e
	                    minikube.k8s.io/name=ha-329926
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_01T02_31_49_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 May 2024 02:31:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-329926
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 May 2024 02:46:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 May 2024 02:43:59 +0000   Wed, 01 May 2024 02:31:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 May 2024 02:43:59 +0000   Wed, 01 May 2024 02:31:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 May 2024 02:43:59 +0000   Wed, 01 May 2024 02:31:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 May 2024 02:43:59 +0000   Wed, 01 May 2024 02:32:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.5
	  Hostname:    ha-329926
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f2958e1e59474320901fe20ba723db00
	  System UUID:                f2958e1e-5947-4320-901f-e20ba723db00
	  Boot ID:                    29fc4c0c-83d6-4af9-8767-4e1b7b7102d4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-nwj5x              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-7db6d8ff4d-2h8lc             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-7db6d8ff4d-cfdqc             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-ha-329926                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-kcmp7                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-ha-329926             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-329926    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-msshn                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-329926             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-329926                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 14m                    kube-proxy       
	  Normal   Starting                 2m23s                  kube-proxy       
	  Normal   NodeHasNoDiskPressure    14m                    kubelet          Node ha-329926 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 14m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  14m                    kubelet          Node ha-329926 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     14m                    kubelet          Node ha-329926 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           14m                    node-controller  Node ha-329926 event: Registered Node ha-329926 in Controller
	  Normal   NodeReady                14m                    kubelet          Node ha-329926 status is now: NodeReady
	  Normal   RegisteredNode           12m                    node-controller  Node ha-329926 event: Registered Node ha-329926 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-329926 event: Registered Node ha-329926 in Controller
	  Warning  ContainerGCFailed        3m35s (x2 over 4m35s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           2m21s                  node-controller  Node ha-329926 event: Registered Node ha-329926 in Controller
	  Normal   RegisteredNode           2m8s                   node-controller  Node ha-329926 event: Registered Node ha-329926 in Controller
	  Normal   RegisteredNode           31s                    node-controller  Node ha-329926 event: Registered Node ha-329926 in Controller
	
	
	Name:               ha-329926-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-329926-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e
	                    minikube.k8s.io/name=ha-329926
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_01T02_33_11_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 May 2024 02:33:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-329926-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 May 2024 02:46:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 May 2024 02:44:41 +0000   Wed, 01 May 2024 02:44:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 May 2024 02:44:41 +0000   Wed, 01 May 2024 02:44:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 May 2024 02:44:41 +0000   Wed, 01 May 2024 02:44:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 May 2024 02:44:41 +0000   Wed, 01 May 2024 02:44:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.79
	  Hostname:    ha-329926-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 135aac161d694487846d436743753149
	  System UUID:                135aac16-1d69-4487-846d-436743753149
	  Boot ID:                    fcfabe85-9cad-4538-b8cf-2825508a7ab0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-h8dxv                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-329926-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-9r8zn                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-329926-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-329926-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-rfsm8                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-329926-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-329926-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 117s                   kube-proxy       
	  Normal  Starting                 13m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)      kubelet          Node ha-329926-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)      kubelet          Node ha-329926-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)      kubelet          Node ha-329926-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13m                    node-controller  Node ha-329926-m02 event: Registered Node ha-329926-m02 in Controller
	  Normal  RegisteredNode           12m                    node-controller  Node ha-329926-m02 event: Registered Node ha-329926-m02 in Controller
	  Normal  RegisteredNode           11m                    node-controller  Node ha-329926-m02 event: Registered Node ha-329926-m02 in Controller
	  Normal  NodeNotReady             9m48s                  node-controller  Node ha-329926-m02 status is now: NodeNotReady
	  Normal  Starting                 2m51s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m51s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m50s (x8 over 2m51s)  kubelet          Node ha-329926-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m50s (x8 over 2m51s)  kubelet          Node ha-329926-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m50s (x7 over 2m51s)  kubelet          Node ha-329926-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2m21s                  node-controller  Node ha-329926-m02 event: Registered Node ha-329926-m02 in Controller
	  Normal  RegisteredNode           2m8s                   node-controller  Node ha-329926-m02 event: Registered Node ha-329926-m02 in Controller
	  Normal  RegisteredNode           31s                    node-controller  Node ha-329926-m02 event: Registered Node ha-329926-m02 in Controller
	
	
	Name:               ha-329926-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-329926-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e
	                    minikube.k8s.io/name=ha-329926
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_01T02_34_25_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 May 2024 02:34:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-329926-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 May 2024 02:46:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 May 2024 02:46:22 +0000   Wed, 01 May 2024 02:45:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 May 2024 02:46:22 +0000   Wed, 01 May 2024 02:45:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 May 2024 02:46:22 +0000   Wed, 01 May 2024 02:45:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 May 2024 02:46:22 +0000   Wed, 01 May 2024 02:45:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.115
	  Hostname:    ha-329926-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1767eff05cce4be88efdc97aef5d41f4
	  System UUID:                1767eff0-5cce-4be8-8efd-c97aef5d41f4
	  Boot ID:                    37d3bdbc-a7a8-4e6a-8e84-b917819d6ccb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-s528n                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-329926-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-7gr9n                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-329926-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-329926-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-jfnk9                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-329926-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-329926-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 33s                kube-proxy       
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node ha-329926-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node ha-329926-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node ha-329926-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           11m                node-controller  Node ha-329926-m03 event: Registered Node ha-329926-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-329926-m03 event: Registered Node ha-329926-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-329926-m03 event: Registered Node ha-329926-m03 in Controller
	  Normal   RegisteredNode           2m20s              node-controller  Node ha-329926-m03 event: Registered Node ha-329926-m03 in Controller
	  Normal   RegisteredNode           2m8s               node-controller  Node ha-329926-m03 event: Registered Node ha-329926-m03 in Controller
	  Normal   NodeNotReady             100s               node-controller  Node ha-329926-m03 status is now: NodeNotReady
	  Normal   Starting                 62s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  62s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  62s (x2 over 62s)  kubelet          Node ha-329926-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    62s (x2 over 62s)  kubelet          Node ha-329926-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     62s (x2 over 62s)  kubelet          Node ha-329926-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 62s                kubelet          Node ha-329926-m03 has been rebooted, boot id: 37d3bdbc-a7a8-4e6a-8e84-b917819d6ccb
	  Normal   NodeReady                62s                kubelet          Node ha-329926-m03 status is now: NodeReady
	  Normal   RegisteredNode           31s                node-controller  Node ha-329926-m03 event: Registered Node ha-329926-m03 in Controller
	
	
	Name:               ha-329926-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-329926-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e
	                    minikube.k8s.io/name=ha-329926
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_01T02_35_25_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 May 2024 02:35:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-329926-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 May 2024 02:46:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 May 2024 02:46:14 +0000   Wed, 01 May 2024 02:46:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 May 2024 02:46:14 +0000   Wed, 01 May 2024 02:46:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 May 2024 02:46:14 +0000   Wed, 01 May 2024 02:46:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 May 2024 02:46:14 +0000   Wed, 01 May 2024 02:46:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.84
	  Hostname:    ha-329926-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b19ce422aa224cda91e88f6cd8b003f9
	  System UUID:                b19ce422-aa22-4cda-91e8-8f6cd8b003f9
	  Boot ID:                    6ddbb389-e2ec-49d2-a7e2-c9728da82050
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-86ngt       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-9492r    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 5s                 kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   NodeHasSufficientMemory  10m (x2 over 10m)  kubelet          Node ha-329926-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x2 over 10m)  kubelet          Node ha-329926-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x2 over 10m)  kubelet          Node ha-329926-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node ha-329926-m04 event: Registered Node ha-329926-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-329926-m04 event: Registered Node ha-329926-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-329926-m04 event: Registered Node ha-329926-m04 in Controller
	  Normal   NodeReady                10m                kubelet          Node ha-329926-m04 status is now: NodeReady
	  Normal   RegisteredNode           2m20s              node-controller  Node ha-329926-m04 event: Registered Node ha-329926-m04 in Controller
	  Normal   RegisteredNode           2m8s               node-controller  Node ha-329926-m04 event: Registered Node ha-329926-m04 in Controller
	  Normal   NodeNotReady             100s               node-controller  Node ha-329926-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           31s                node-controller  Node ha-329926-m04 event: Registered Node ha-329926-m04 in Controller
	  Normal   Starting                 9s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  9s                 kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 9s (x2 over 9s)    kubelet          Node ha-329926-m04 has been rebooted, boot id: 6ddbb389-e2ec-49d2-a7e2-c9728da82050
	  Normal   NodeHasSufficientMemory  9s (x3 over 9s)    kubelet          Node ha-329926-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9s (x3 over 9s)    kubelet          Node ha-329926-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9s (x3 over 9s)    kubelet          Node ha-329926-m04 status is now: NodeHasSufficientPID
	  Normal   NodeNotReady             9s                 kubelet          Node ha-329926-m04 status is now: NodeNotReady
	  Normal   NodeReady                9s                 kubelet          Node ha-329926-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.059078] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.050190] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[  +0.172804] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.147592] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.297725] systemd-fstab-generator[671]: Ignoring "noauto" option for root device
	[  +4.784571] systemd-fstab-generator[771]: Ignoring "noauto" option for root device
	[  +0.063787] kauditd_printk_skb: 130 callbacks suppressed
	[  +5.533501] systemd-fstab-generator[961]: Ignoring "noauto" option for root device
	[  +0.060916] kauditd_printk_skb: 18 callbacks suppressed
	[  +7.479829] systemd-fstab-generator[1381]: Ignoring "noauto" option for root device
	[  +0.092024] kauditd_printk_skb: 79 callbacks suppressed
	[May 1 02:32] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.650154] kauditd_printk_skb: 74 callbacks suppressed
	[May 1 02:42] systemd-fstab-generator[3736]: Ignoring "noauto" option for root device
	[  +0.159126] systemd-fstab-generator[3748]: Ignoring "noauto" option for root device
	[  +0.187836] systemd-fstab-generator[3762]: Ignoring "noauto" option for root device
	[  +0.169864] systemd-fstab-generator[3774]: Ignoring "noauto" option for root device
	[  +0.322099] systemd-fstab-generator[3802]: Ignoring "noauto" option for root device
	[May 1 02:43] systemd-fstab-generator[3905]: Ignoring "noauto" option for root device
	[  +0.090449] kauditd_printk_skb: 100 callbacks suppressed
	[  +5.633568] kauditd_printk_skb: 22 callbacks suppressed
	[ +12.620460] kauditd_printk_skb: 75 callbacks suppressed
	[ +10.059902] kauditd_printk_skb: 1 callbacks suppressed
	[ +17.995655] kauditd_printk_skb: 5 callbacks suppressed
	[May 1 02:44] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [9f36a128ab65a069e1cafa5abf1b7774272b73428ea3826f595758d5e99b4e93] <==
	{"level":"info","ts":"2024-05-01T02:41:26.853995Z","caller":"traceutil/trace.go:171","msg":"trace[1798900376] range","detail":"{range_begin:/registry/ingressclasses/; range_end:/registry/ingressclasses0; }","duration":"7.944992517s","start":"2024-05-01T02:41:18.908998Z","end":"2024-05-01T02:41:26.853991Z","steps":["trace[1798900376] 'agreement among raft nodes before linearized reading'  (duration: 7.944988995s)"],"step_count":1}
	2024/05/01 02:41:26 WARNING: [core] [Server #5] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2024-05-01T02:41:26.854093Z","caller":"traceutil/trace.go:171","msg":"trace[184740758] range","detail":"{range_begin:/registry/horizontalpodautoscalers/; range_end:/registry/horizontalpodautoscalers0; }","duration":"8.03083736s","start":"2024-05-01T02:41:18.823253Z","end":"2024-05-01T02:41:26.85409Z","steps":["trace[184740758] 'agreement among raft nodes before linearized reading'  (duration: 8.008399377s)"],"step_count":1}
	2024/05/01 02:41:26 WARNING: [core] [Server #5] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-05-01T02:41:26.900352Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":154124257143701688,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-05-01T02:41:26.914809Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.5:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-01T02:41:26.914877Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.5:2379: use of closed network connection"}
	{"level":"info","ts":"2024-05-01T02:41:26.914964Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"c5263387c79c0223","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-05-01T02:41:26.915152Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"64dbb1bdcfddc92c"}
	{"level":"info","ts":"2024-05-01T02:41:26.915207Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"64dbb1bdcfddc92c"}
	{"level":"info","ts":"2024-05-01T02:41:26.91523Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"64dbb1bdcfddc92c"}
	{"level":"info","ts":"2024-05-01T02:41:26.915329Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"c5263387c79c0223","remote-peer-id":"64dbb1bdcfddc92c"}
	{"level":"info","ts":"2024-05-01T02:41:26.915396Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"c5263387c79c0223","remote-peer-id":"64dbb1bdcfddc92c"}
	{"level":"info","ts":"2024-05-01T02:41:26.915428Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"c5263387c79c0223","remote-peer-id":"64dbb1bdcfddc92c"}
	{"level":"info","ts":"2024-05-01T02:41:26.915438Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"64dbb1bdcfddc92c"}
	{"level":"info","ts":"2024-05-01T02:41:26.915444Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"d9d7aed2183d5ca6"}
	{"level":"info","ts":"2024-05-01T02:41:26.915452Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"d9d7aed2183d5ca6"}
	{"level":"info","ts":"2024-05-01T02:41:26.915494Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"d9d7aed2183d5ca6"}
	{"level":"info","ts":"2024-05-01T02:41:26.915565Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"c5263387c79c0223","remote-peer-id":"d9d7aed2183d5ca6"}
	{"level":"info","ts":"2024-05-01T02:41:26.915622Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"c5263387c79c0223","remote-peer-id":"d9d7aed2183d5ca6"}
	{"level":"info","ts":"2024-05-01T02:41:26.915651Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"c5263387c79c0223","remote-peer-id":"d9d7aed2183d5ca6"}
	{"level":"info","ts":"2024-05-01T02:41:26.915753Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"d9d7aed2183d5ca6"}
	{"level":"info","ts":"2024-05-01T02:41:26.919307Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.5:2380"}
	{"level":"info","ts":"2024-05-01T02:41:26.919485Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.5:2380"}
	{"level":"info","ts":"2024-05-01T02:41:26.919524Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-329926","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.5:2380"],"advertise-client-urls":["https://192.168.39.5:2379"]}
	
	
	==> etcd [d4940e86ab3aeeda6fefe1a1c3eceee2908fbf5e3ebc1584761c2744b7a04e3e] <==
	{"level":"warn","ts":"2024-05-01T02:45:23.00372Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.115:2380/version","remote-member-id":"d9d7aed2183d5ca6","error":"Get \"https://192.168.39.115:2380/version\": dial tcp 192.168.39.115:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-01T02:45:23.003825Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"d9d7aed2183d5ca6","error":"Get \"https://192.168.39.115:2380/version\": dial tcp 192.168.39.115:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-01T02:45:27.005357Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.115:2380/version","remote-member-id":"d9d7aed2183d5ca6","error":"Get \"https://192.168.39.115:2380/version\": dial tcp 192.168.39.115:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-01T02:45:27.005447Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"d9d7aed2183d5ca6","error":"Get \"https://192.168.39.115:2380/version\": dial tcp 192.168.39.115:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-01T02:45:27.054981Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"d9d7aed2183d5ca6","rtt":"0s","error":"dial tcp 192.168.39.115:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-01T02:45:27.05496Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"d9d7aed2183d5ca6","rtt":"0s","error":"dial tcp 192.168.39.115:2380: connect: connection refused"}
	{"level":"info","ts":"2024-05-01T02:45:28.09491Z","caller":"traceutil/trace.go:171","msg":"trace[1973116624] linearizableReadLoop","detail":"{readStateIndex:2884; appliedIndex:2884; }","duration":"109.022341ms","start":"2024-05-01T02:45:27.985836Z","end":"2024-05-01T02:45:28.094858Z","steps":["trace[1973116624] 'read index received'  (duration: 109.016835ms)","trace[1973116624] 'applied index is now lower than readState.Index'  (duration: 3.972µs)"],"step_count":2}
	{"level":"warn","ts":"2024-05-01T02:45:28.095164Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"109.336458ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-05-01T02:45:28.095198Z","caller":"traceutil/trace.go:171","msg":"trace[2052039144] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2465; }","duration":"109.415418ms","start":"2024-05-01T02:45:27.98577Z","end":"2024-05-01T02:45:28.095185Z","steps":["trace[2052039144] 'agreement among raft nodes before linearized reading'  (duration: 109.306387ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-01T02:45:31.00837Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.115:2380/version","remote-member-id":"d9d7aed2183d5ca6","error":"Get \"https://192.168.39.115:2380/version\": dial tcp 192.168.39.115:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-01T02:45:31.008481Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"d9d7aed2183d5ca6","error":"Get \"https://192.168.39.115:2380/version\": dial tcp 192.168.39.115:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-01T02:45:32.055921Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"d9d7aed2183d5ca6","rtt":"0s","error":"dial tcp 192.168.39.115:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-01T02:45:32.056013Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"d9d7aed2183d5ca6","rtt":"0s","error":"dial tcp 192.168.39.115:2380: connect: connection refused"}
	{"level":"info","ts":"2024-05-01T02:45:34.054906Z","caller":"traceutil/trace.go:171","msg":"trace[2036015493] transaction","detail":"{read_only:false; response_revision:2487; number_of_response:1; }","duration":"132.522011ms","start":"2024-05-01T02:45:33.922372Z","end":"2024-05-01T02:45:34.054894Z","steps":["trace[2036015493] 'process raft request'  (duration: 132.437771ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-01T02:45:34.837805Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"d9d7aed2183d5ca6"}
	{"level":"warn","ts":"2024-05-01T02:45:34.838385Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"d9d7aed2183d5ca6","error":"failed to dial d9d7aed2183d5ca6 on stream MsgApp v2 (peer d9d7aed2183d5ca6 failed to find local node c5263387c79c0223)"}
	{"level":"info","ts":"2024-05-01T02:45:34.888134Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"c5263387c79c0223","to":"d9d7aed2183d5ca6","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-05-01T02:45:34.888225Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"d9d7aed2183d5ca6"}
	{"level":"info","ts":"2024-05-01T02:45:34.888247Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"c5263387c79c0223","remote-peer-id":"d9d7aed2183d5ca6"}
	{"level":"info","ts":"2024-05-01T02:45:34.915451Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"c5263387c79c0223","to":"d9d7aed2183d5ca6","stream-type":"stream Message"}
	{"level":"info","ts":"2024-05-01T02:45:34.915591Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"c5263387c79c0223","remote-peer-id":"d9d7aed2183d5ca6"}
	{"level":"info","ts":"2024-05-01T02:45:34.927127Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"c5263387c79c0223","remote-peer-id":"d9d7aed2183d5ca6"}
	{"level":"warn","ts":"2024-05-01T02:45:34.939281Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.115:43130","server-name":"","error":"EOF"}
	{"level":"info","ts":"2024-05-01T02:45:34.940791Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"c5263387c79c0223","remote-peer-id":"d9d7aed2183d5ca6"}
	{"level":"info","ts":"2024-05-01T02:45:45.085323Z","caller":"traceutil/trace.go:171","msg":"trace[1669874535] transaction","detail":"{read_only:false; response_revision:2526; number_of_response:1; }","duration":"155.393831ms","start":"2024-05-01T02:45:44.929886Z","end":"2024-05-01T02:45:45.085279Z","steps":["trace[1669874535] 'process raft request'  (duration: 155.246153ms)"],"step_count":1}
	
	
	==> kernel <==
	 02:46:23 up 15 min,  0 users,  load average: 0.12, 0.42, 0.35
	Linux ha-329926 5.10.207 #1 SMP Tue Apr 30 22:38:43 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [042c499301cf07675ef97e386fb802a0684efc1ff197389bf8b7458ca853493f] <==
	I0501 02:43:12.755398       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0501 02:43:12.755491       1 main.go:107] hostIP = 192.168.39.5
	podIP = 192.168.39.5
	I0501 02:43:12.755776       1 main.go:116] setting mtu 1500 for CNI 
	I0501 02:43:12.755825       1 main.go:146] kindnetd IP family: "ipv4"
	I0501 02:43:12.755847       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0501 02:43:16.150932       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0501 02:43:16.151255       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0501 02:43:27.155983       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": net/http: TLS handshake timeout
	I0501 02:43:40.724541       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 192.168.122.184:59676->10.96.0.1:443: read: connection reset by peer
	I0501 02:43:43.796893       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:195 +0xd3d
	
	
	==> kindnet [68c9b61e07b789bb1c441a33b66eeb07476719d85f4affe9c264e34bd73d8008] <==
	I0501 02:45:52.372284       1 main.go:250] Node ha-329926-m04 has CIDR [10.244.3.0/24] 
	I0501 02:46:02.390309       1 main.go:223] Handling node with IPs: map[192.168.39.5:{}]
	I0501 02:46:02.390393       1 main.go:227] handling current node
	I0501 02:46:02.390416       1 main.go:223] Handling node with IPs: map[192.168.39.79:{}]
	I0501 02:46:02.390433       1 main.go:250] Node ha-329926-m02 has CIDR [10.244.1.0/24] 
	I0501 02:46:02.390539       1 main.go:223] Handling node with IPs: map[192.168.39.115:{}]
	I0501 02:46:02.390558       1 main.go:250] Node ha-329926-m03 has CIDR [10.244.2.0/24] 
	I0501 02:46:02.390612       1 main.go:223] Handling node with IPs: map[192.168.39.84:{}]
	I0501 02:46:02.390629       1 main.go:250] Node ha-329926-m04 has CIDR [10.244.3.0/24] 
	I0501 02:46:12.406439       1 main.go:223] Handling node with IPs: map[192.168.39.5:{}]
	I0501 02:46:12.406905       1 main.go:227] handling current node
	I0501 02:46:12.406985       1 main.go:223] Handling node with IPs: map[192.168.39.79:{}]
	I0501 02:46:12.407016       1 main.go:250] Node ha-329926-m02 has CIDR [10.244.1.0/24] 
	I0501 02:46:12.407427       1 main.go:223] Handling node with IPs: map[192.168.39.115:{}]
	I0501 02:46:12.407481       1 main.go:250] Node ha-329926-m03 has CIDR [10.244.2.0/24] 
	I0501 02:46:12.407606       1 main.go:223] Handling node with IPs: map[192.168.39.84:{}]
	I0501 02:46:12.407642       1 main.go:250] Node ha-329926-m04 has CIDR [10.244.3.0/24] 
	I0501 02:46:22.431480       1 main.go:223] Handling node with IPs: map[192.168.39.5:{}]
	I0501 02:46:22.431594       1 main.go:227] handling current node
	I0501 02:46:22.431622       1 main.go:223] Handling node with IPs: map[192.168.39.79:{}]
	I0501 02:46:22.431645       1 main.go:250] Node ha-329926-m02 has CIDR [10.244.1.0/24] 
	I0501 02:46:22.431883       1 main.go:223] Handling node with IPs: map[192.168.39.115:{}]
	I0501 02:46:22.431976       1 main.go:250] Node ha-329926-m03 has CIDR [10.244.2.0/24] 
	I0501 02:46:22.432115       1 main.go:223] Handling node with IPs: map[192.168.39.84:{}]
	I0501 02:46:22.432147       1 main.go:250] Node ha-329926-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [028fe64855c8942e704a66fc8d7d80db9662c05c5252b9ae01043eb95134a0a6] <==
	I0501 02:43:16.759959       1 options.go:221] external host was not specified, using 192.168.39.5
	I0501 02:43:16.761150       1 server.go:148] Version: v1.30.0
	I0501 02:43:16.761205       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 02:43:17.277407       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0501 02:43:17.295422       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0501 02:43:17.297749       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0501 02:43:17.298565       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0501 02:43:17.298855       1 instance.go:299] Using reconciler: lease
	W0501 02:43:37.270800       1 logging.go:59] [core] [Channel #1 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0501 02:43:37.278266       1 logging.go:59] [core] [Channel #2 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0501 02:43:37.303106       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0501 02:43:37.303114       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [c8f3963bbca5a5d8ca96fd7bf715f2b551bcaf4380803b443a346bccff25655b] <==
	I0501 02:44:03.284302       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0501 02:44:03.284929       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0501 02:44:03.286205       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0501 02:44:03.276058       1 available_controller.go:423] Starting AvailableConditionController
	I0501 02:44:03.287970       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0501 02:44:03.379527       1 shared_informer.go:320] Caches are synced for configmaps
	I0501 02:44:03.384276       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0501 02:44:03.384504       1 policy_source.go:224] refreshing policies
	I0501 02:44:03.388182       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0501 02:44:03.389215       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0501 02:44:03.389973       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0501 02:44:03.393950       1 aggregator.go:165] initial CRD sync complete...
	I0501 02:44:03.394016       1 autoregister_controller.go:141] Starting autoregister controller
	I0501 02:44:03.394042       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0501 02:44:03.394067       1 cache.go:39] Caches are synced for autoregister controller
	I0501 02:44:03.472614       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0501 02:44:03.472761       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0501 02:44:03.473456       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0501 02:44:03.476144       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0501 02:44:03.478606       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0501 02:44:03.482541       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0501 02:44:04.284362       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0501 02:44:04.931098       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.5 192.168.39.79]
	I0501 02:44:04.932629       1 controller.go:615] quota admission added evaluator for: endpoints
	I0501 02:44:04.941518       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [a409ec7ab3a3389b868353ff5b180728bff4d9fd6e9ee235408658387a54e865] <==
	I0501 02:43:17.430540       1 serving.go:380] Generated self-signed cert in-memory
	I0501 02:43:17.668047       1 controllermanager.go:189] "Starting" version="v1.30.0"
	I0501 02:43:17.669744       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 02:43:17.671414       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0501 02:43:17.673261       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0501 02:43:17.673277       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0501 02:43:17.673289       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0501 02:43:38.309421       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.5:8443/healthz\": dial tcp 192.168.39.5:8443: connect: connection refused"
	
	
	==> kube-controller-manager [edfe1ca8adcff5c57dd48a6e4e52f6129014ec43e797455a799c8abb8dddf9ad] <==
	I0501 02:44:15.680547       1 shared_informer.go:320] Caches are synced for resource quota
	I0501 02:44:15.686241       1 shared_informer.go:320] Caches are synced for attach detach
	I0501 02:44:15.712263       1 shared_informer.go:320] Caches are synced for resource quota
	I0501 02:44:16.159981       1 shared_informer.go:320] Caches are synced for garbage collector
	I0501 02:44:16.204088       1 shared_informer.go:320] Caches are synced for garbage collector
	I0501 02:44:16.204214       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0501 02:44:22.922969       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="388.545µs"
	I0501 02:44:29.416788       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.43124ms"
	I0501 02:44:29.416923       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="52.599µs"
	I0501 02:44:34.950472       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-kd6jp EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-kd6jp\": the object has been modified; please apply your changes to the latest version and try again"
	I0501 02:44:34.954119       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"2732fada-1ff2-46d0-a3d4-bb90399b26b0", APIVersion:"v1", ResourceVersion:"250", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-kd6jp EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-kd6jp": the object has been modified; please apply your changes to the latest version and try again
	I0501 02:44:34.976333       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="87.097203ms"
	I0501 02:44:34.976605       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="151.917µs"
	I0501 02:44:43.199532       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="25.611905ms"
	I0501 02:44:43.200277       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="107.681µs"
	E0501 02:44:43.284342       1 daemon_controller.go:324] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"1e7303a0-cd90-4aa2-8ede-ded46d60d9b3", ResourceVersion:"2294", Generation:1, CreationTimestamp:time.Date(2024, time.May, 1, 2, 31, 49, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"kindnet\"}},\"template\":{\"metadata\":
{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"docker.io/kindest/kindnetd:v20240202-8f1494ea\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\":\"Exists\"}],\"volumes\":[{\"hostPat
h\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001688300), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", Volu
meSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00201e498), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.P
hotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00201e4b0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), Downwa
rdAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00201e4c8), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCS
IVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{
Name:"kindnet-cni", Image:"docker.io/kindest/kindnetd:v20240202-8f1494ea", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001688320)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001688360)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resou
rce.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Claims:[]v1.ResourceClaim(nil)}, ResizePolicy:[]v1.ContainerResizePolicy(nil), RestartPolicy:(*v1.ContainerRestartPolicy)(nil), VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v
1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc0025c4540), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0012e6e88), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002660900), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, Hos
tAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil), HostUsers:(*bool)(nil), SchedulingGates:[]v1.PodSchedulingGate(nil), ResourceClaims:[]v1.PodResourceClaim(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc0023b2b50)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc0012e6ed0)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:4, NumberMisscheduled:0, DesiredNumberScheduled:4, NumberReady:4, ObservedGeneration:1, UpdatedNumberScheduled:4, NumberAvailable:4, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on d
aemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	I0501 02:44:44.905345       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-kd6jp EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-kd6jp\": the object has been modified; please apply your changes to the latest version and try again"
	I0501 02:44:44.905924       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"2732fada-1ff2-46d0-a3d4-bb90399b26b0", APIVersion:"v1", ResourceVersion:"250", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-kd6jp EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-kd6jp": the object has been modified; please apply your changes to the latest version and try again
	I0501 02:44:44.941780       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="61.245184ms"
	I0501 02:44:44.942357       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="290.048µs"
	I0501 02:45:22.644179       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="61.203µs"
	I0501 02:45:47.934575       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.503885ms"
	I0501 02:45:47.934723       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="92.071µs"
	I0501 02:46:14.622752       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-329926-m04"
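	
	Editor's note: the "Operation cannot be fulfilled on daemonsets.apps \"kindnet\" / endpointslices.discovery.k8s.io \"kube-dns-kd6jp\": the object has been modified" messages above are ordinary optimistic-concurrency conflicts (HTTP 409): the writer held a stale resourceVersion and has to re-read the object before updating, which the controllers do on their own, so these entries are benign. A minimal client-go sketch of that re-read-and-retry pattern is below; the kubeconfig location and the label change are illustrative assumptions, not something the test performs.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

func main() {
	// Assumption: a reachable kubeconfig in the default location.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Re-fetch the latest DaemonSet on every attempt and retry only on 409
	// conflicts, which is how a controller recovers from the error logged above.
	err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
		ds, err := client.AppsV1().DaemonSets("kube-system").Get(context.TODO(), "kindnet", metav1.GetOptions{})
		if err != nil {
			return err
		}
		if ds.Labels == nil {
			ds.Labels = map[string]string{}
		}
		ds.Labels["example.invalid/touched"] = "true" // hypothetical change, for the sketch only
		_, err = client.AppsV1().DaemonSets("kube-system").Update(context.TODO(), ds, metav1.UpdateOptions{})
		return err
	})
	if err != nil {
		fmt.Println("update failed after retries:", err)
	}
}
```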
	
	
	==> kube-proxy [2ab64850e34b66cbe91e099c91014051472b87888130297cd7c47a4a78992140] <==
	E0501 02:40:21.302633       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1811": dial tcp 192.168.39.254:8443: connect: no route to host
	W0501 02:40:24.372540       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-329926&resourceVersion=1867": dial tcp 192.168.39.254:8443: connect: no route to host
	E0501 02:40:24.372608       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-329926&resourceVersion=1867": dial tcp 192.168.39.254:8443: connect: no route to host
	W0501 02:40:24.372747       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1811": dial tcp 192.168.39.254:8443: connect: no route to host
	E0501 02:40:24.372794       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1811": dial tcp 192.168.39.254:8443: connect: no route to host
	W0501 02:40:24.372851       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1834": dial tcp 192.168.39.254:8443: connect: no route to host
	E0501 02:40:24.372897       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1834": dial tcp 192.168.39.254:8443: connect: no route to host
	W0501 02:40:30.517127       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1834": dial tcp 192.168.39.254:8443: connect: no route to host
	E0501 02:40:30.517203       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1834": dial tcp 192.168.39.254:8443: connect: no route to host
	W0501 02:40:30.517286       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-329926&resourceVersion=1867": dial tcp 192.168.39.254:8443: connect: no route to host
	E0501 02:40:30.517336       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-329926&resourceVersion=1867": dial tcp 192.168.39.254:8443: connect: no route to host
	W0501 02:40:30.517308       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1811": dial tcp 192.168.39.254:8443: connect: no route to host
	E0501 02:40:30.517404       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1811": dial tcp 192.168.39.254:8443: connect: no route to host
	W0501 02:40:39.732596       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-329926&resourceVersion=1867": dial tcp 192.168.39.254:8443: connect: no route to host
	E0501 02:40:39.732718       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-329926&resourceVersion=1867": dial tcp 192.168.39.254:8443: connect: no route to host
	W0501 02:40:42.806445       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1811": dial tcp 192.168.39.254:8443: connect: no route to host
	W0501 02:40:42.806483       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1834": dial tcp 192.168.39.254:8443: connect: no route to host
	E0501 02:40:42.806845       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1834": dial tcp 192.168.39.254:8443: connect: no route to host
	E0501 02:40:42.806890       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1811": dial tcp 192.168.39.254:8443: connect: no route to host
	W0501 02:40:58.166492       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1834": dial tcp 192.168.39.254:8443: connect: no route to host
	E0501 02:40:58.166767       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1834": dial tcp 192.168.39.254:8443: connect: no route to host
	W0501 02:41:01.237978       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-329926&resourceVersion=1867": dial tcp 192.168.39.254:8443: connect: no route to host
	E0501 02:41:01.238302       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-329926&resourceVersion=1867": dial tcp 192.168.39.254:8443: connect: no route to host
	W0501 02:41:04.309884       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1811": dial tcp 192.168.39.254:8443: connect: no route to host
	E0501 02:41:04.310043       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1811": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-proxy [3b19253347b3c6a6483c37f96f8593931755f784d085f793c13e001ae0d76794] <==
	I0501 02:43:17.726855       1 server_linux.go:69] "Using iptables proxy"
	E0501 02:43:19.476429       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-329926\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0501 02:43:22.549893       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-329926\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0501 02:43:25.620569       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-329926\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0501 02:43:31.764380       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-329926\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0501 02:43:40.980955       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-329926\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0501 02:43:59.449793       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.5"]
	I0501 02:43:59.554344       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0501 02:43:59.554410       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0501 02:43:59.554429       1 server_linux.go:165] "Using iptables Proxier"
	I0501 02:43:59.561077       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0501 02:43:59.561313       1 server.go:872] "Version info" version="v1.30.0"
	I0501 02:43:59.561356       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 02:43:59.565389       1 config.go:192] "Starting service config controller"
	I0501 02:43:59.565443       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0501 02:43:59.565533       1 config.go:101] "Starting endpoint slice config controller"
	I0501 02:43:59.565564       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0501 02:43:59.566188       1 config.go:319] "Starting node config controller"
	I0501 02:43:59.566223       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0501 02:43:59.666555       1 shared_informer.go:320] Caches are synced for node config
	I0501 02:43:59.667097       1 shared_informer.go:320] Caches are synced for service config
	I0501 02:43:59.667187       1 shared_informer.go:320] Caches are synced for endpoint slice config
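	
	Editor's note: this second kube-proxy instance spends roughly 40 seconds retrying "Failed to retrieve node info" while the apiserver behind 192.168.39.254:8443 is unreachable, then configures the iptables proxier once a node lookup finally succeeds. A rough sketch of that wait-then-proceed pattern is below, assuming a kubeconfig-based client; the node name comes from the log, everything else (intervals, timeout, config lookup) is illustrative.

```go
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: a reachable kubeconfig; kube-proxy itself uses in-cluster config.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Keep retrying the node lookup until the apiserver answers or we give up,
	// mirroring the repeated "Failed to retrieve node info" lines above.
	err = wait.PollUntilContextTimeout(context.Background(), 3*time.Second, 2*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			_, err := client.CoreV1().Nodes().Get(ctx, "ha-329926", metav1.GetOptions{})
			if err != nil {
				fmt.Println("node not reachable yet:", err)
				return false, nil // swallow the error so polling continues
			}
			return true, nil
		})
	if err != nil {
		fmt.Println("gave up waiting for node info:", err)
		return
	}
	fmt.Println("node info retrieved; proxier setup would continue here")
}
```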
	
	
	==> kube-scheduler [e3ffc6d046e214333ed85219819ad54bfa5c3ac0b10167beacf72edbd6035736] <==
	W0501 02:41:23.464064       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0501 02:41:23.464138       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0501 02:41:23.504867       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0501 02:41:23.504954       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0501 02:41:23.536950       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0501 02:41:23.537066       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0501 02:41:23.706312       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0501 02:41:23.706435       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0501 02:41:23.856853       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0501 02:41:23.856901       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0501 02:41:23.965118       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0501 02:41:23.965176       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0501 02:41:23.986437       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0501 02:41:23.986560       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0501 02:41:24.051323       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0501 02:41:24.051517       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0501 02:41:24.101844       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0501 02:41:24.102022       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0501 02:41:24.190866       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0501 02:41:24.190896       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0501 02:41:24.261737       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0501 02:41:24.261829       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0501 02:41:24.303498       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0501 02:41:24.303726       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0501 02:41:26.815795       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [fc712ce5191365a1b74a4fb690a3fe1fca3ef109f0525a60d88ecff10b96a61b] <==
	W0501 02:43:55.950147       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.5:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	E0501 02:43:55.950214       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.5:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	W0501 02:43:56.010081       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.5:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	E0501 02:43:56.010168       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.5:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	W0501 02:43:56.208294       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.5:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	E0501 02:43:56.208365       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.5:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	W0501 02:43:57.327171       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.5:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	E0501 02:43:57.327253       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.5:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	W0501 02:43:57.551219       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.5:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	E0501 02:43:57.551293       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.5:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	W0501 02:43:57.836838       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.5:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	E0501 02:43:57.836907       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.5:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	W0501 02:43:58.059423       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.5:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	E0501 02:43:58.059539       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.5:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	W0501 02:43:58.245087       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.5:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	E0501 02:43:58.245213       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.5:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	W0501 02:43:58.448578       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.5:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	E0501 02:43:58.448814       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.5:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	W0501 02:43:58.604044       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.5:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	E0501 02:43:58.604164       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.5:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	W0501 02:43:59.228847       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.5:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	E0501 02:43:59.228949       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.5:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	W0501 02:43:59.997177       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.5:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	E0501 02:43:59.997262       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.5:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	I0501 02:44:19.614216       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 01 02:44:14 ha-329926 kubelet[1388]: I0501 02:44:14.105873    1388 scope.go:117] "RemoveContainer" containerID="042c499301cf07675ef97e386fb802a0684efc1ff197389bf8b7458ca853493f"
	May 01 02:44:14 ha-329926 kubelet[1388]: E0501 02:44:14.106263    1388 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-kcmp7_kube-system(8e15c166-9ba1-40c9-8f33-db7f83733932)\"" pod="kube-system/kindnet-kcmp7" podUID="8e15c166-9ba1-40c9-8f33-db7f83733932"
	May 01 02:44:22 ha-329926 kubelet[1388]: I0501 02:44:22.105195    1388 scope.go:117] "RemoveContainer" containerID="6c768225fa391861ab12a75d643a4757892824fd20ef1294ba1e9b817cbe81f3"
	May 01 02:44:22 ha-329926 kubelet[1388]: E0501 02:44:22.105435    1388 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(371423a6-a156-4e8d-bf66-812d606cc8d7)\"" pod="kube-system/storage-provisioner" podUID="371423a6-a156-4e8d-bf66-812d606cc8d7"
	May 01 02:44:27 ha-329926 kubelet[1388]: I0501 02:44:27.105101    1388 scope.go:117] "RemoveContainer" containerID="042c499301cf07675ef97e386fb802a0684efc1ff197389bf8b7458ca853493f"
	May 01 02:44:27 ha-329926 kubelet[1388]: E0501 02:44:27.105428    1388 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-kcmp7_kube-system(8e15c166-9ba1-40c9-8f33-db7f83733932)\"" pod="kube-system/kindnet-kcmp7" podUID="8e15c166-9ba1-40c9-8f33-db7f83733932"
	May 01 02:44:29 ha-329926 kubelet[1388]: I0501 02:44:29.365020    1388 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-fc5497c4f-nwj5x" podStartSLOduration=581.738311197 podStartE2EDuration="9m44.364988036s" podCreationTimestamp="2024-05-01 02:34:45 +0000 UTC" firstStartedPulling="2024-05-01 02:34:46.40241935 +0000 UTC m=+178.455766984" lastFinishedPulling="2024-05-01 02:34:49.029096198 +0000 UTC m=+181.082443823" observedRunningTime="2024-05-01 02:34:50.014298265 +0000 UTC m=+182.067645907" watchObservedRunningTime="2024-05-01 02:44:29.364988036 +0000 UTC m=+761.418335676"
	May 01 02:44:33 ha-329926 kubelet[1388]: I0501 02:44:33.105502    1388 scope.go:117] "RemoveContainer" containerID="6c768225fa391861ab12a75d643a4757892824fd20ef1294ba1e9b817cbe81f3"
	May 01 02:44:33 ha-329926 kubelet[1388]: E0501 02:44:33.106757    1388 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(371423a6-a156-4e8d-bf66-812d606cc8d7)\"" pod="kube-system/storage-provisioner" podUID="371423a6-a156-4e8d-bf66-812d606cc8d7"
	May 01 02:44:41 ha-329926 kubelet[1388]: I0501 02:44:41.105113    1388 scope.go:117] "RemoveContainer" containerID="042c499301cf07675ef97e386fb802a0684efc1ff197389bf8b7458ca853493f"
	May 01 02:44:48 ha-329926 kubelet[1388]: I0501 02:44:48.106732    1388 scope.go:117] "RemoveContainer" containerID="6c768225fa391861ab12a75d643a4757892824fd20ef1294ba1e9b817cbe81f3"
	May 01 02:44:48 ha-329926 kubelet[1388]: E0501 02:44:48.143560    1388 iptables.go:577] "Could not set up iptables canary" err=<
	May 01 02:44:48 ha-329926 kubelet[1388]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 01 02:44:48 ha-329926 kubelet[1388]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 01 02:44:48 ha-329926 kubelet[1388]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 01 02:44:48 ha-329926 kubelet[1388]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 01 02:44:55 ha-329926 kubelet[1388]: I0501 02:44:55.105344    1388 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-vip-ha-329926" podUID="0fbbb815-441d-48d0-b0cf-1bb57ff6d993"
	May 01 02:44:55 ha-329926 kubelet[1388]: I0501 02:44:55.145938    1388 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-329926"
	May 01 02:44:56 ha-329926 kubelet[1388]: I0501 02:44:56.143566    1388 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-vip-ha-329926" podUID="0fbbb815-441d-48d0-b0cf-1bb57ff6d993"
	May 01 02:44:58 ha-329926 kubelet[1388]: I0501 02:44:58.128956    1388 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-329926" podStartSLOduration=3.128924506 podStartE2EDuration="3.128924506s" podCreationTimestamp="2024-05-01 02:44:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-01 02:44:58.12715467 +0000 UTC m=+790.180502315" watchObservedRunningTime="2024-05-01 02:44:58.128924506 +0000 UTC m=+790.182272176"
	May 01 02:45:48 ha-329926 kubelet[1388]: E0501 02:45:48.135237    1388 iptables.go:577] "Could not set up iptables canary" err=<
	May 01 02:45:48 ha-329926 kubelet[1388]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 01 02:45:48 ha-329926 kubelet[1388]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 01 02:45:48 ha-329926 kubelet[1388]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 01 02:45:48 ha-329926 kubelet[1388]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0501 02:46:22.445313   40795 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18779-13391/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-329926 -n ha-329926
helpers_test.go:261: (dbg) Run:  kubectl --context ha-329926 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (421.90s)
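
Editor's note: the stderr line "failed to read file .../lastStart.txt: bufio.Scanner: token too long" in the post-mortem above is a Go-side limitation rather than a cluster problem: bufio.Scanner rejects any line longer than its buffer cap (64 KiB by default), and lastStart.txt contains very long single-line entries. A small sketch of how a reader can raise that cap, with an illustrative file name:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	// Illustrative file name; the report refers to minikube's lastStart.txt.
	f, err := os.Open("lastStart.txt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// The default cap is bufio.MaxScanTokenSize (64 KiB); one oversized log
	// line makes Scan fail with bufio.ErrTooLong ("token too long").
	// Giving the scanner a larger maximum buffer avoids that.
	sc.Buffer(make([]byte, 0, 64*1024), 16*1024*1024)

	lines := 0
	for sc.Scan() {
		lines++
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "scan error:", err)
		return
	}
	fmt.Println("read", lines, "lines")
}
```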

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (142s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-329926 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-329926 stop -v=7 --alsologtostderr: exit status 82 (2m0.497006886s)

                                                
                                                
-- stdout --
	* Stopping node "ha-329926-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0501 02:46:43.083099   41203 out.go:291] Setting OutFile to fd 1 ...
	I0501 02:46:43.083203   41203 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:46:43.083212   41203 out.go:304] Setting ErrFile to fd 2...
	I0501 02:46:43.083216   41203 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:46:43.083404   41203 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18779-13391/.minikube/bin
	I0501 02:46:43.083622   41203 out.go:298] Setting JSON to false
	I0501 02:46:43.083692   41203 mustload.go:65] Loading cluster: ha-329926
	I0501 02:46:43.084042   41203 config.go:182] Loaded profile config "ha-329926": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 02:46:43.084152   41203 profile.go:143] Saving config to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/config.json ...
	I0501 02:46:43.084339   41203 mustload.go:65] Loading cluster: ha-329926
	I0501 02:46:43.084463   41203 config.go:182] Loaded profile config "ha-329926": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 02:46:43.084484   41203 stop.go:39] StopHost: ha-329926-m04
	I0501 02:46:43.084808   41203 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:46:43.084847   41203 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:46:43.099020   41203 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41009
	I0501 02:46:43.099466   41203 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:46:43.099948   41203 main.go:141] libmachine: Using API Version  1
	I0501 02:46:43.099973   41203 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:46:43.100244   41203 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:46:43.102625   41203 out.go:177] * Stopping node "ha-329926-m04"  ...
	I0501 02:46:43.104075   41203 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0501 02:46:43.104112   41203 main.go:141] libmachine: (ha-329926-m04) Calling .DriverName
	I0501 02:46:43.104342   41203 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0501 02:46:43.104367   41203 main.go:141] libmachine: (ha-329926-m04) Calling .GetSSHHostname
	I0501 02:46:43.107050   41203 main.go:141] libmachine: (ha-329926-m04) DBG | domain ha-329926-m04 has defined MAC address 52:54:00:95:f4:8b in network mk-ha-329926
	I0501 02:46:43.107487   41203 main.go:141] libmachine: (ha-329926-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:f4:8b", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:46:08 +0000 UTC Type:0 Mac:52:54:00:95:f4:8b Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-329926-m04 Clientid:01:52:54:00:95:f4:8b}
	I0501 02:46:43.107519   41203 main.go:141] libmachine: (ha-329926-m04) DBG | domain ha-329926-m04 has defined IP address 192.168.39.84 and MAC address 52:54:00:95:f4:8b in network mk-ha-329926
	I0501 02:46:43.107618   41203 main.go:141] libmachine: (ha-329926-m04) Calling .GetSSHPort
	I0501 02:46:43.107801   41203 main.go:141] libmachine: (ha-329926-m04) Calling .GetSSHKeyPath
	I0501 02:46:43.107943   41203 main.go:141] libmachine: (ha-329926-m04) Calling .GetSSHUsername
	I0501 02:46:43.108068   41203 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m04/id_rsa Username:docker}
	I0501 02:46:43.201313   41203 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0501 02:46:43.256457   41203 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0501 02:46:43.312276   41203 main.go:141] libmachine: Stopping "ha-329926-m04"...
	I0501 02:46:43.312310   41203 main.go:141] libmachine: (ha-329926-m04) Calling .GetState
	I0501 02:46:43.313917   41203 main.go:141] libmachine: (ha-329926-m04) Calling .Stop
	I0501 02:46:43.317312   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 0/120
	I0501 02:46:44.318702   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 1/120
	I0501 02:46:45.320434   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 2/120
	I0501 02:46:46.322229   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 3/120
	I0501 02:46:47.323601   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 4/120
	I0501 02:46:48.325505   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 5/120
	I0501 02:46:49.326961   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 6/120
	I0501 02:46:50.328723   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 7/120
	I0501 02:46:51.330026   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 8/120
	I0501 02:46:52.331490   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 9/120
	I0501 02:46:53.333651   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 10/120
	I0501 02:46:54.334952   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 11/120
	I0501 02:46:55.337309   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 12/120
	I0501 02:46:56.339026   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 13/120
	I0501 02:46:57.341018   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 14/120
	I0501 02:46:58.343161   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 15/120
	I0501 02:46:59.344386   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 16/120
	I0501 02:47:00.345744   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 17/120
	I0501 02:47:01.347391   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 18/120
	I0501 02:47:02.348660   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 19/120
	I0501 02:47:03.350748   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 20/120
	I0501 02:47:04.352776   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 21/120
	I0501 02:47:05.354222   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 22/120
	I0501 02:47:06.355580   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 23/120
	I0501 02:47:07.356839   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 24/120
	I0501 02:47:08.358246   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 25/120
	I0501 02:47:09.359640   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 26/120
	I0501 02:47:10.360924   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 27/120
	I0501 02:47:11.362523   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 28/120
	I0501 02:47:12.363906   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 29/120
	I0501 02:47:13.366027   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 30/120
	I0501 02:47:14.367372   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 31/120
	I0501 02:47:15.369894   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 32/120
	I0501 02:47:16.371183   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 33/120
	I0501 02:47:17.372864   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 34/120
	I0501 02:47:18.375197   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 35/120
	I0501 02:47:19.376853   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 36/120
	I0501 02:47:20.378187   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 37/120
	I0501 02:47:21.379582   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 38/120
	I0501 02:47:22.380897   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 39/120
	I0501 02:47:23.382690   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 40/120
	I0501 02:47:24.384970   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 41/120
	I0501 02:47:25.386559   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 42/120
	I0501 02:47:26.389084   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 43/120
	I0501 02:47:27.390561   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 44/120
	I0501 02:47:28.392479   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 45/120
	I0501 02:47:29.393931   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 46/120
	I0501 02:47:30.395319   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 47/120
	I0501 02:47:31.396768   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 48/120
	I0501 02:47:32.398164   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 49/120
	I0501 02:47:33.400151   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 50/120
	I0501 02:47:34.402251   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 51/120
	I0501 02:47:35.403609   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 52/120
	I0501 02:47:36.405596   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 53/120
	I0501 02:47:37.407207   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 54/120
	I0501 02:47:38.409224   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 55/120
	I0501 02:47:39.410734   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 56/120
	I0501 02:47:40.412911   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 57/120
	I0501 02:47:41.415131   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 58/120
	I0501 02:47:42.416348   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 59/120
	I0501 02:47:43.417749   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 60/120
	I0501 02:47:44.419388   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 61/120
	I0501 02:47:45.420925   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 62/120
	I0501 02:47:46.422344   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 63/120
	I0501 02:47:47.423649   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 64/120
	I0501 02:47:48.425337   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 65/120
	I0501 02:47:49.426749   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 66/120
	I0501 02:47:50.428716   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 67/120
	I0501 02:47:51.430323   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 68/120
	I0501 02:47:52.431700   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 69/120
	I0501 02:47:53.433687   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 70/120
	I0501 02:47:54.435124   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 71/120
	I0501 02:47:55.437025   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 72/120
	I0501 02:47:56.438627   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 73/120
	I0501 02:47:57.441491   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 74/120
	I0501 02:47:58.443437   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 75/120
	I0501 02:47:59.444937   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 76/120
	I0501 02:48:00.446524   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 77/120
	I0501 02:48:01.447706   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 78/120
	I0501 02:48:02.449608   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 79/120
	I0501 02:48:03.451611   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 80/120
	I0501 02:48:04.452904   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 81/120
	I0501 02:48:05.454197   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 82/120
	I0501 02:48:06.455550   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 83/120
	I0501 02:48:07.457045   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 84/120
	I0501 02:48:08.459037   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 85/120
	I0501 02:48:09.461035   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 86/120
	I0501 02:48:10.462352   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 87/120
	I0501 02:48:11.463631   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 88/120
	I0501 02:48:12.464993   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 89/120
	I0501 02:48:13.467097   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 90/120
	I0501 02:48:14.468436   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 91/120
	I0501 02:48:15.469651   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 92/120
	I0501 02:48:16.472041   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 93/120
	I0501 02:48:17.473408   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 94/120
	I0501 02:48:18.475455   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 95/120
	I0501 02:48:19.476820   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 96/120
	I0501 02:48:20.478125   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 97/120
	I0501 02:48:21.479425   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 98/120
	I0501 02:48:22.480933   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 99/120
	I0501 02:48:23.482935   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 100/120
	I0501 02:48:24.484964   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 101/120
	I0501 02:48:25.486481   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 102/120
	I0501 02:48:26.487936   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 103/120
	I0501 02:48:27.490246   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 104/120
	I0501 02:48:28.492204   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 105/120
	I0501 02:48:29.494189   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 106/120
	I0501 02:48:30.495559   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 107/120
	I0501 02:48:31.497959   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 108/120
	I0501 02:48:32.499285   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 109/120
	I0501 02:48:33.501671   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 110/120
	I0501 02:48:34.503012   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 111/120
	I0501 02:48:35.504266   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 112/120
	I0501 02:48:36.505504   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 113/120
	I0501 02:48:37.506863   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 114/120
	I0501 02:48:38.508899   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 115/120
	I0501 02:48:39.510939   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 116/120
	I0501 02:48:40.512939   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 117/120
	I0501 02:48:41.514260   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 118/120
	I0501 02:48:42.515616   41203 main.go:141] libmachine: (ha-329926-m04) Waiting for machine to stop 119/120
	I0501 02:48:43.516351   41203 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0501 02:48:43.516399   41203 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0501 02:48:43.518125   41203 out.go:177] 
	W0501 02:48:43.519292   41203 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0501 02:48:43.519310   41203 out.go:239] * 
	* 
	W0501 02:48:43.521507   41203 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0501 02:48:43.522733   41203 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-329926 stop -v=7 --alsologtostderr": exit status 82
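Note: the log above shows the shape of this failure. `minikube stop` polls the libmachine driver once per second, prints "Waiting for machine to stop i/120", and gives up after 120 attempts with the state still "Running", which the CLI reports as GUEST_STOP_TIMEOUT and the test observes as exit status 82. The following is a minimal standalone sketch of that polling pattern, not minikube's actual stop.go; `getState` and `waitForStop` are hypothetical names used only for illustration.

	// A minimal sketch (assumed names, standard library only) of the
	// poll-until-stopped loop recorded in the log: check the VM state once
	// per second, give up after 120 attempts, and surface a timeout error
	// if the state never leaves "Running".
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// getState stands in for the kvm2 driver's state call; in the failing
	// run above it returned "Running" for all 120 attempts.
	func getState() string { return "Running" }

	func waitForStop(attempts int) error {
		for i := 0; i < attempts; i++ {
			if getState() == "Stopped" {
				return nil
			}
			fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
			time.Sleep(1 * time.Second)
		}
		return errors.New(`unable to stop vm, current state "Running"`)
	}

	func main() {
		// The real binary maps this error to GUEST_STOP_TIMEOUT
		// (exit status 82), which is what ha_test.go:533 observed.
		if err := waitForStop(120); err != nil {
			fmt.Println("stop err:", err)
		}
	}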
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-329926 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-329926 status -v=7 --alsologtostderr: exit status 3 (18.914708721s)

                                                
                                                
-- stdout --
	ha-329926
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-329926-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-329926-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0501 02:48:43.584033   41621 out.go:291] Setting OutFile to fd 1 ...
	I0501 02:48:43.584318   41621 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:48:43.584330   41621 out.go:304] Setting ErrFile to fd 2...
	I0501 02:48:43.584337   41621 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:48:43.584585   41621 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18779-13391/.minikube/bin
	I0501 02:48:43.584835   41621 out.go:298] Setting JSON to false
	I0501 02:48:43.584863   41621 mustload.go:65] Loading cluster: ha-329926
	I0501 02:48:43.584923   41621 notify.go:220] Checking for updates...
	I0501 02:48:43.585418   41621 config.go:182] Loaded profile config "ha-329926": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 02:48:43.585439   41621 status.go:255] checking status of ha-329926 ...
	I0501 02:48:43.586044   41621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:48:43.586108   41621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:48:43.604958   41621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38473
	I0501 02:48:43.605451   41621 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:48:43.606088   41621 main.go:141] libmachine: Using API Version  1
	I0501 02:48:43.606121   41621 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:48:43.606509   41621 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:48:43.606747   41621 main.go:141] libmachine: (ha-329926) Calling .GetState
	I0501 02:48:43.608516   41621 status.go:330] ha-329926 host status = "Running" (err=<nil>)
	I0501 02:48:43.608530   41621 host.go:66] Checking if "ha-329926" exists ...
	I0501 02:48:43.608862   41621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:48:43.608900   41621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:48:43.624258   41621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36951
	I0501 02:48:43.624662   41621 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:48:43.625114   41621 main.go:141] libmachine: Using API Version  1
	I0501 02:48:43.625156   41621 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:48:43.625485   41621 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:48:43.625674   41621 main.go:141] libmachine: (ha-329926) Calling .GetIP
	I0501 02:48:43.628495   41621 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:48:43.628954   41621 main.go:141] libmachine: (ha-329926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:43", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:31:18 +0000 UTC Type:0 Mac:52:54:00:ce:d8:43 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-329926 Clientid:01:52:54:00:ce:d8:43}
	I0501 02:48:43.628975   41621 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:48:43.629099   41621 host.go:66] Checking if "ha-329926" exists ...
	I0501 02:48:43.629402   41621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:48:43.629442   41621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:48:43.644074   41621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39365
	I0501 02:48:43.644463   41621 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:48:43.644935   41621 main.go:141] libmachine: Using API Version  1
	I0501 02:48:43.644957   41621 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:48:43.645239   41621 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:48:43.645449   41621 main.go:141] libmachine: (ha-329926) Calling .DriverName
	I0501 02:48:43.645618   41621 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0501 02:48:43.645638   41621 main.go:141] libmachine: (ha-329926) Calling .GetSSHHostname
	I0501 02:48:43.648590   41621 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:48:43.649084   41621 main.go:141] libmachine: (ha-329926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:43", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:31:18 +0000 UTC Type:0 Mac:52:54:00:ce:d8:43 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-329926 Clientid:01:52:54:00:ce:d8:43}
	I0501 02:48:43.649113   41621 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:48:43.649275   41621 main.go:141] libmachine: (ha-329926) Calling .GetSSHPort
	I0501 02:48:43.649450   41621 main.go:141] libmachine: (ha-329926) Calling .GetSSHKeyPath
	I0501 02:48:43.649596   41621 main.go:141] libmachine: (ha-329926) Calling .GetSSHUsername
	I0501 02:48:43.649698   41621 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926/id_rsa Username:docker}
	I0501 02:48:43.733219   41621 ssh_runner.go:195] Run: systemctl --version
	I0501 02:48:43.742587   41621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 02:48:43.764281   41621 kubeconfig.go:125] found "ha-329926" server: "https://192.168.39.254:8443"
	I0501 02:48:43.764318   41621 api_server.go:166] Checking apiserver status ...
	I0501 02:48:43.764366   41621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 02:48:43.782690   41621 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5250/cgroup
	W0501 02:48:43.795644   41621 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5250/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0501 02:48:43.795705   41621 ssh_runner.go:195] Run: ls
	I0501 02:48:43.801528   41621 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0501 02:48:43.807599   41621 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0501 02:48:43.807623   41621 status.go:422] ha-329926 apiserver status = Running (err=<nil>)
	I0501 02:48:43.807632   41621 status.go:257] ha-329926 status: &{Name:ha-329926 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0501 02:48:43.807648   41621 status.go:255] checking status of ha-329926-m02 ...
	I0501 02:48:43.807938   41621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:48:43.807987   41621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:48:43.825044   41621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33183
	I0501 02:48:43.825477   41621 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:48:43.825916   41621 main.go:141] libmachine: Using API Version  1
	I0501 02:48:43.825936   41621 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:48:43.826189   41621 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:48:43.826370   41621 main.go:141] libmachine: (ha-329926-m02) Calling .GetState
	I0501 02:48:43.827985   41621 status.go:330] ha-329926-m02 host status = "Running" (err=<nil>)
	I0501 02:48:43.828002   41621 host.go:66] Checking if "ha-329926-m02" exists ...
	I0501 02:48:43.828305   41621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:48:43.828366   41621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:48:43.843515   41621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46459
	I0501 02:48:43.843942   41621 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:48:43.844401   41621 main.go:141] libmachine: Using API Version  1
	I0501 02:48:43.844424   41621 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:48:43.844713   41621 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:48:43.844918   41621 main.go:141] libmachine: (ha-329926-m02) Calling .GetIP
	I0501 02:48:43.847726   41621 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:48:43.848214   41621 main.go:141] libmachine: (ha-329926-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:16:5f", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:43:22 +0000 UTC Type:0 Mac:52:54:00:92:16:5f Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-329926-m02 Clientid:01:52:54:00:92:16:5f}
	I0501 02:48:43.848240   41621 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined IP address 192.168.39.79 and MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:48:43.848355   41621 host.go:66] Checking if "ha-329926-m02" exists ...
	I0501 02:48:43.848729   41621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:48:43.848787   41621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:48:43.864738   41621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36043
	I0501 02:48:43.865204   41621 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:48:43.865706   41621 main.go:141] libmachine: Using API Version  1
	I0501 02:48:43.865726   41621 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:48:43.866113   41621 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:48:43.866331   41621 main.go:141] libmachine: (ha-329926-m02) Calling .DriverName
	I0501 02:48:43.866579   41621 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0501 02:48:43.866604   41621 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHHostname
	I0501 02:48:43.869643   41621 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:48:43.870006   41621 main.go:141] libmachine: (ha-329926-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:16:5f", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:43:22 +0000 UTC Type:0 Mac:52:54:00:92:16:5f Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-329926-m02 Clientid:01:52:54:00:92:16:5f}
	I0501 02:48:43.870033   41621 main.go:141] libmachine: (ha-329926-m02) DBG | domain ha-329926-m02 has defined IP address 192.168.39.79 and MAC address 52:54:00:92:16:5f in network mk-ha-329926
	I0501 02:48:43.870185   41621 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHPort
	I0501 02:48:43.870375   41621 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHKeyPath
	I0501 02:48:43.870583   41621 main.go:141] libmachine: (ha-329926-m02) Calling .GetSSHUsername
	I0501 02:48:43.870760   41621 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m02/id_rsa Username:docker}
	I0501 02:48:43.958920   41621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 02:48:43.978653   41621 kubeconfig.go:125] found "ha-329926" server: "https://192.168.39.254:8443"
	I0501 02:48:43.978686   41621 api_server.go:166] Checking apiserver status ...
	I0501 02:48:43.978726   41621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 02:48:43.996374   41621 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1495/cgroup
	W0501 02:48:44.008177   41621 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1495/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0501 02:48:44.008234   41621 ssh_runner.go:195] Run: ls
	I0501 02:48:44.013610   41621 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0501 02:48:44.018186   41621 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0501 02:48:44.018211   41621 status.go:422] ha-329926-m02 apiserver status = Running (err=<nil>)
	I0501 02:48:44.018220   41621 status.go:257] ha-329926-m02 status: &{Name:ha-329926-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0501 02:48:44.018240   41621 status.go:255] checking status of ha-329926-m04 ...
	I0501 02:48:44.018634   41621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:48:44.018671   41621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:48:44.033346   41621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40077
	I0501 02:48:44.033839   41621 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:48:44.034478   41621 main.go:141] libmachine: Using API Version  1
	I0501 02:48:44.034509   41621 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:48:44.034871   41621 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:48:44.035101   41621 main.go:141] libmachine: (ha-329926-m04) Calling .GetState
	I0501 02:48:44.036851   41621 status.go:330] ha-329926-m04 host status = "Running" (err=<nil>)
	I0501 02:48:44.036867   41621 host.go:66] Checking if "ha-329926-m04" exists ...
	I0501 02:48:44.037153   41621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:48:44.037199   41621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:48:44.052303   41621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34453
	I0501 02:48:44.052785   41621 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:48:44.053315   41621 main.go:141] libmachine: Using API Version  1
	I0501 02:48:44.053339   41621 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:48:44.053642   41621 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:48:44.053874   41621 main.go:141] libmachine: (ha-329926-m04) Calling .GetIP
	I0501 02:48:44.057002   41621 main.go:141] libmachine: (ha-329926-m04) DBG | domain ha-329926-m04 has defined MAC address 52:54:00:95:f4:8b in network mk-ha-329926
	I0501 02:48:44.057524   41621 main.go:141] libmachine: (ha-329926-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:f4:8b", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:46:08 +0000 UTC Type:0 Mac:52:54:00:95:f4:8b Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-329926-m04 Clientid:01:52:54:00:95:f4:8b}
	I0501 02:48:44.057559   41621 main.go:141] libmachine: (ha-329926-m04) DBG | domain ha-329926-m04 has defined IP address 192.168.39.84 and MAC address 52:54:00:95:f4:8b in network mk-ha-329926
	I0501 02:48:44.057758   41621 host.go:66] Checking if "ha-329926-m04" exists ...
	I0501 02:48:44.058065   41621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:48:44.058109   41621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:48:44.073132   41621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46113
	I0501 02:48:44.073599   41621 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:48:44.074069   41621 main.go:141] libmachine: Using API Version  1
	I0501 02:48:44.074093   41621 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:48:44.074462   41621 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:48:44.074650   41621 main.go:141] libmachine: (ha-329926-m04) Calling .DriverName
	I0501 02:48:44.074847   41621 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0501 02:48:44.074866   41621 main.go:141] libmachine: (ha-329926-m04) Calling .GetSSHHostname
	I0501 02:48:44.077632   41621 main.go:141] libmachine: (ha-329926-m04) DBG | domain ha-329926-m04 has defined MAC address 52:54:00:95:f4:8b in network mk-ha-329926
	I0501 02:48:44.078098   41621 main.go:141] libmachine: (ha-329926-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:f4:8b", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:46:08 +0000 UTC Type:0 Mac:52:54:00:95:f4:8b Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-329926-m04 Clientid:01:52:54:00:95:f4:8b}
	I0501 02:48:44.078144   41621 main.go:141] libmachine: (ha-329926-m04) DBG | domain ha-329926-m04 has defined IP address 192.168.39.84 and MAC address 52:54:00:95:f4:8b in network mk-ha-329926
	I0501 02:48:44.078309   41621 main.go:141] libmachine: (ha-329926-m04) Calling .GetSSHPort
	I0501 02:48:44.078529   41621 main.go:141] libmachine: (ha-329926-m04) Calling .GetSSHKeyPath
	I0501 02:48:44.078684   41621 main.go:141] libmachine: (ha-329926-m04) Calling .GetSSHUsername
	I0501 02:48:44.078846   41621 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926-m04/id_rsa Username:docker}
	W0501 02:49:02.438646   41621 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.84:22: connect: no route to host
	W0501 02:49:02.438742   41621 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.84:22: connect: no route to host
	E0501 02:49:02.438756   41621 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.84:22: connect: no route to host
	I0501 02:49:02.438766   41621 status.go:257] ha-329926-m04 status: &{Name:ha-329926-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0501 02:49:02.438805   41621 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.84:22: connect: no route to host

                                                
                                                
** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-329926 status -v=7 --alsologtostderr" : exit status 3
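Note: the stderr above spells out how `minikube status` probes each node: it launches the kvm2 driver plugin, reads the machine state, opens an SSH session to run `df -h /var` and `systemctl is-active kubelet`, and for control-plane nodes checks https://192.168.39.254:8443/healthz. When the SSH dial to 192.168.39.84:22 fails with "no route to host", the node is reported as Host:Error / Kubelet:Nonexistent and the command exits 3. The sketch below is a rough standalone approximation of that decision flow, not minikube's status.go: `probe` and `nodeStatus` are hypothetical names, a bare TCP dial stands in for the real SSH session, and InsecureSkipVerify is assumed only because the apiserver certificate is not trusted in this sketch.

	// A rough sketch of the per-node checks described above: reachability
	// over tcp/22, then an apiserver health check against the HA VIP for
	// control-plane nodes.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net"
		"net/http"
		"time"
	)

	// nodeStatus holds the fields printed by `minikube status` above.
	type nodeStatus struct {
		Name, Host, Kubelet, APIServer string
	}

	func probe(name, ip string, controlPlane bool) nodeStatus {
		st := nodeStatus{Name: name, Host: "Running", Kubelet: "Running", APIServer: "Irrelevant"}

		// The real code runs `df -h /var` and `systemctl is-active kubelet`
		// over SSH; a plain dial is enough to reproduce the
		// "no route to host" branch seen for ha-329926-m04.
		conn, err := net.DialTimeout("tcp", ip+":22", 5*time.Second)
		if err != nil {
			return nodeStatus{Name: name, Host: "Error", Kubelet: "Nonexistent", APIServer: "Irrelevant"}
		}
		conn.Close()

		if controlPlane {
			client := &http.Client{
				Timeout:   5 * time.Second,
				Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			}
			// Health is checked against the shared VIP, not the node IP.
			resp, err := client.Get("https://192.168.39.254:8443/healthz")
			if err == nil && resp.StatusCode == 200 {
				resp.Body.Close()
				st.APIServer = "Running"
			} else {
				st.APIServer = "Stopped"
			}
		}
		return st
	}

	func main() {
		nodes := []struct {
			name, ip string
			cp       bool
		}{
			{"ha-329926", "192.168.39.5", true},
			{"ha-329926-m02", "192.168.39.79", true},
			{"ha-329926-m04", "192.168.39.84", false},
		}
		for _, n := range nodes {
			fmt.Printf("%+v\n", probe(n.name, n.ip, n.cp))
		}
	}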
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-329926 -n ha-329926
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-329926 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-329926 logs -n 25: (1.898599203s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-329926 ssh -n ha-329926-m02 sudo cat                                         | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:35 UTC | 01 May 24 02:35 UTC |
	|         | /home/docker/cp-test_ha-329926-m03_ha-329926-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-329926 cp ha-329926-m03:/home/docker/cp-test.txt                             | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:35 UTC | 01 May 24 02:35 UTC |
	|         | ha-329926-m04:/home/docker/cp-test_ha-329926-m03_ha-329926-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-329926 ssh -n                                                                | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:35 UTC | 01 May 24 02:35 UTC |
	|         | ha-329926-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-329926 ssh -n ha-329926-m04 sudo cat                                         | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:35 UTC | 01 May 24 02:35 UTC |
	|         | /home/docker/cp-test_ha-329926-m03_ha-329926-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-329926 cp testdata/cp-test.txt                                               | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:35 UTC | 01 May 24 02:35 UTC |
	|         | ha-329926-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-329926 ssh -n                                                                | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:35 UTC | 01 May 24 02:35 UTC |
	|         | ha-329926-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-329926 cp ha-329926-m04:/home/docker/cp-test.txt                             | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:35 UTC | 01 May 24 02:35 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile895580191/001/cp-test_ha-329926-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-329926 ssh -n                                                                | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:35 UTC | 01 May 24 02:35 UTC |
	|         | ha-329926-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-329926 cp ha-329926-m04:/home/docker/cp-test.txt                             | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:35 UTC | 01 May 24 02:35 UTC |
	|         | ha-329926:/home/docker/cp-test_ha-329926-m04_ha-329926.txt                      |           |         |         |                     |                     |
	| ssh     | ha-329926 ssh -n                                                                | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:35 UTC | 01 May 24 02:35 UTC |
	|         | ha-329926-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-329926 ssh -n ha-329926 sudo cat                                             | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:35 UTC | 01 May 24 02:35 UTC |
	|         | /home/docker/cp-test_ha-329926-m04_ha-329926.txt                                |           |         |         |                     |                     |
	| cp      | ha-329926 cp ha-329926-m04:/home/docker/cp-test.txt                             | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:35 UTC | 01 May 24 02:35 UTC |
	|         | ha-329926-m02:/home/docker/cp-test_ha-329926-m04_ha-329926-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-329926 ssh -n                                                                | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:35 UTC | 01 May 24 02:35 UTC |
	|         | ha-329926-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-329926 ssh -n ha-329926-m02 sudo cat                                         | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:35 UTC | 01 May 24 02:35 UTC |
	|         | /home/docker/cp-test_ha-329926-m04_ha-329926-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-329926 cp ha-329926-m04:/home/docker/cp-test.txt                             | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:35 UTC | 01 May 24 02:35 UTC |
	|         | ha-329926-m03:/home/docker/cp-test_ha-329926-m04_ha-329926-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-329926 ssh -n                                                                | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:35 UTC | 01 May 24 02:35 UTC |
	|         | ha-329926-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-329926 ssh -n ha-329926-m03 sudo cat                                         | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:35 UTC | 01 May 24 02:35 UTC |
	|         | /home/docker/cp-test_ha-329926-m04_ha-329926-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-329926 node stop m02 -v=7                                                    | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:35 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | ha-329926 node start m02 -v=7                                                   | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:38 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-329926 -v=7                                                          | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:39 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| stop    | -p ha-329926 -v=7                                                               | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:39 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| start   | -p ha-329926 --wait=true -v=7                                                   | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:41 UTC | 01 May 24 02:46 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-329926                                                               | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:46 UTC |                     |
	| node    | ha-329926 node delete m03 -v=7                                                  | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:46 UTC | 01 May 24 02:46 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| stop    | ha-329926 stop -v=7                                                             | ha-329926 | jenkins | v1.33.0 | 01 May 24 02:46 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/01 02:41:25
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0501 02:41:25.892088   39235 out.go:291] Setting OutFile to fd 1 ...
	I0501 02:41:25.892219   39235 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:41:25.892229   39235 out.go:304] Setting ErrFile to fd 2...
	I0501 02:41:25.892234   39235 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:41:25.892432   39235 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18779-13391/.minikube/bin
	I0501 02:41:25.893010   39235 out.go:298] Setting JSON to false
	I0501 02:41:25.893898   39235 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5029,"bootTime":1714526257,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0501 02:41:25.893955   39235 start.go:139] virtualization: kvm guest
	I0501 02:41:25.896237   39235 out.go:177] * [ha-329926] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0501 02:41:25.897626   39235 out.go:177]   - MINIKUBE_LOCATION=18779
	I0501 02:41:25.897587   39235 notify.go:220] Checking for updates...
	I0501 02:41:25.899156   39235 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0501 02:41:25.900581   39235 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18779-13391/kubeconfig
	I0501 02:41:25.901680   39235 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18779-13391/.minikube
	I0501 02:41:25.903091   39235 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0501 02:41:25.904286   39235 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0501 02:41:25.905925   39235 config.go:182] Loaded profile config "ha-329926": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 02:41:25.906030   39235 driver.go:392] Setting default libvirt URI to qemu:///system
	I0501 02:41:25.906452   39235 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:41:25.906505   39235 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:41:25.922177   39235 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43331
	I0501 02:41:25.922553   39235 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:41:25.923049   39235 main.go:141] libmachine: Using API Version  1
	I0501 02:41:25.923071   39235 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:41:25.923374   39235 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:41:25.923534   39235 main.go:141] libmachine: (ha-329926) Calling .DriverName
	I0501 02:41:25.958389   39235 out.go:177] * Using the kvm2 driver based on existing profile
	I0501 02:41:25.959560   39235 start.go:297] selected driver: kvm2
	I0501 02:41:25.959571   39235 start.go:901] validating driver "kvm2" against &{Name:ha-329926 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-329926 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.5 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.79 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.115 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.84 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 02:41:25.959704   39235 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0501 02:41:25.960008   39235 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0501 02:41:25.960075   39235 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18779-13391/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0501 02:41:25.974627   39235 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0501 02:41:25.975302   39235 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0501 02:41:25.975373   39235 cni.go:84] Creating CNI manager for ""
	I0501 02:41:25.975387   39235 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0501 02:41:25.975443   39235 start.go:340] cluster config:
	{Name:ha-329926 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-329926 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.5 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.79 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.115 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.84 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 02:41:25.975578   39235 iso.go:125] acquiring lock: {Name:mkcd2496aadb29931e179193d707c731bfda98b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0501 02:41:25.977222   39235 out.go:177] * Starting "ha-329926" primary control-plane node in "ha-329926" cluster
	I0501 02:41:25.978372   39235 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0501 02:41:25.978418   39235 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0501 02:41:25.978433   39235 cache.go:56] Caching tarball of preloaded images
	I0501 02:41:25.978536   39235 preload.go:173] Found /home/jenkins/minikube-integration/18779-13391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0501 02:41:25.978550   39235 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0501 02:41:25.978667   39235 profile.go:143] Saving config to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/config.json ...
	I0501 02:41:25.978848   39235 start.go:360] acquireMachinesLock for ha-329926: {Name:mkc4548e5afe1d4b0a833f6c522103562b0cefff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0501 02:41:25.978907   39235 start.go:364] duration metric: took 41.656µs to acquireMachinesLock for "ha-329926"
	I0501 02:41:25.978925   39235 start.go:96] Skipping create...Using existing machine configuration
	I0501 02:41:25.978932   39235 fix.go:54] fixHost starting: 
	I0501 02:41:25.979164   39235 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:41:25.979195   39235 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:41:25.992928   39235 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33517
	I0501 02:41:25.993323   39235 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:41:25.993738   39235 main.go:141] libmachine: Using API Version  1
	I0501 02:41:25.993759   39235 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:41:25.994054   39235 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:41:25.994212   39235 main.go:141] libmachine: (ha-329926) Calling .DriverName
	I0501 02:41:25.994359   39235 main.go:141] libmachine: (ha-329926) Calling .GetState
	I0501 02:41:25.995800   39235 fix.go:112] recreateIfNeeded on ha-329926: state=Running err=<nil>
	W0501 02:41:25.995818   39235 fix.go:138] unexpected machine state, will restart: <nil>
	I0501 02:41:25.997468   39235 out.go:177] * Updating the running kvm2 "ha-329926" VM ...
	I0501 02:41:25.998518   39235 machine.go:94] provisionDockerMachine start ...
	I0501 02:41:25.998537   39235 main.go:141] libmachine: (ha-329926) Calling .DriverName
	I0501 02:41:25.998739   39235 main.go:141] libmachine: (ha-329926) Calling .GetSSHHostname
	I0501 02:41:26.000881   39235 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:41:26.001348   39235 main.go:141] libmachine: (ha-329926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:43", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:31:18 +0000 UTC Type:0 Mac:52:54:00:ce:d8:43 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-329926 Clientid:01:52:54:00:ce:d8:43}
	I0501 02:41:26.001377   39235 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:41:26.001489   39235 main.go:141] libmachine: (ha-329926) Calling .GetSSHPort
	I0501 02:41:26.001666   39235 main.go:141] libmachine: (ha-329926) Calling .GetSSHKeyPath
	I0501 02:41:26.001826   39235 main.go:141] libmachine: (ha-329926) Calling .GetSSHKeyPath
	I0501 02:41:26.001953   39235 main.go:141] libmachine: (ha-329926) Calling .GetSSHUsername
	I0501 02:41:26.002103   39235 main.go:141] libmachine: Using SSH client type: native
	I0501 02:41:26.002279   39235 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I0501 02:41:26.002297   39235 main.go:141] libmachine: About to run SSH command:
	hostname
	I0501 02:41:26.107909   39235 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-329926
	
	I0501 02:41:26.107932   39235 main.go:141] libmachine: (ha-329926) Calling .GetMachineName
	I0501 02:41:26.108181   39235 buildroot.go:166] provisioning hostname "ha-329926"
	I0501 02:41:26.108207   39235 main.go:141] libmachine: (ha-329926) Calling .GetMachineName
	I0501 02:41:26.108414   39235 main.go:141] libmachine: (ha-329926) Calling .GetSSHHostname
	I0501 02:41:26.111150   39235 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:41:26.111500   39235 main.go:141] libmachine: (ha-329926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:43", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:31:18 +0000 UTC Type:0 Mac:52:54:00:ce:d8:43 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-329926 Clientid:01:52:54:00:ce:d8:43}
	I0501 02:41:26.111532   39235 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:41:26.111673   39235 main.go:141] libmachine: (ha-329926) Calling .GetSSHPort
	I0501 02:41:26.111874   39235 main.go:141] libmachine: (ha-329926) Calling .GetSSHKeyPath
	I0501 02:41:26.112022   39235 main.go:141] libmachine: (ha-329926) Calling .GetSSHKeyPath
	I0501 02:41:26.112134   39235 main.go:141] libmachine: (ha-329926) Calling .GetSSHUsername
	I0501 02:41:26.112273   39235 main.go:141] libmachine: Using SSH client type: native
	I0501 02:41:26.112456   39235 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I0501 02:41:26.112470   39235 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-329926 && echo "ha-329926" | sudo tee /etc/hostname
	I0501 02:41:26.240369   39235 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-329926
	
	I0501 02:41:26.240401   39235 main.go:141] libmachine: (ha-329926) Calling .GetSSHHostname
	I0501 02:41:26.243048   39235 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:41:26.243396   39235 main.go:141] libmachine: (ha-329926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:43", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:31:18 +0000 UTC Type:0 Mac:52:54:00:ce:d8:43 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-329926 Clientid:01:52:54:00:ce:d8:43}
	I0501 02:41:26.243429   39235 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:41:26.243611   39235 main.go:141] libmachine: (ha-329926) Calling .GetSSHPort
	I0501 02:41:26.243803   39235 main.go:141] libmachine: (ha-329926) Calling .GetSSHKeyPath
	I0501 02:41:26.243998   39235 main.go:141] libmachine: (ha-329926) Calling .GetSSHKeyPath
	I0501 02:41:26.244137   39235 main.go:141] libmachine: (ha-329926) Calling .GetSSHUsername
	I0501 02:41:26.244302   39235 main.go:141] libmachine: Using SSH client type: native
	I0501 02:41:26.244467   39235 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I0501 02:41:26.244482   39235 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-329926' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-329926/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-329926' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0501 02:41:26.352537   39235 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0501 02:41:26.352585   39235 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18779-13391/.minikube CaCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18779-13391/.minikube}
	I0501 02:41:26.352641   39235 buildroot.go:174] setting up certificates
	I0501 02:41:26.352649   39235 provision.go:84] configureAuth start
	I0501 02:41:26.352659   39235 main.go:141] libmachine: (ha-329926) Calling .GetMachineName
	I0501 02:41:26.352949   39235 main.go:141] libmachine: (ha-329926) Calling .GetIP
	I0501 02:41:26.355545   39235 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:41:26.355872   39235 main.go:141] libmachine: (ha-329926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:43", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:31:18 +0000 UTC Type:0 Mac:52:54:00:ce:d8:43 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-329926 Clientid:01:52:54:00:ce:d8:43}
	I0501 02:41:26.355900   39235 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:41:26.356059   39235 main.go:141] libmachine: (ha-329926) Calling .GetSSHHostname
	I0501 02:41:26.357978   39235 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:41:26.358248   39235 main.go:141] libmachine: (ha-329926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:43", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:31:18 +0000 UTC Type:0 Mac:52:54:00:ce:d8:43 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-329926 Clientid:01:52:54:00:ce:d8:43}
	I0501 02:41:26.358270   39235 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:41:26.358421   39235 provision.go:143] copyHostCerts
	I0501 02:41:26.358448   39235 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem
	I0501 02:41:26.358489   39235 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem, removing ...
	I0501 02:41:26.358505   39235 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem
	I0501 02:41:26.358569   39235 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem (1078 bytes)
	I0501 02:41:26.358637   39235 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem
	I0501 02:41:26.358654   39235 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem, removing ...
	I0501 02:41:26.358661   39235 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem
	I0501 02:41:26.358683   39235 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem (1123 bytes)
	I0501 02:41:26.358721   39235 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem
	I0501 02:41:26.358739   39235 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem, removing ...
	I0501 02:41:26.358745   39235 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem
	I0501 02:41:26.358769   39235 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem (1675 bytes)
	I0501 02:41:26.358810   39235 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem org=jenkins.ha-329926 san=[127.0.0.1 192.168.39.5 ha-329926 localhost minikube]
	I0501 02:41:26.530762   39235 provision.go:177] copyRemoteCerts
	I0501 02:41:26.530813   39235 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0501 02:41:26.530835   39235 main.go:141] libmachine: (ha-329926) Calling .GetSSHHostname
	I0501 02:41:26.533227   39235 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:41:26.533560   39235 main.go:141] libmachine: (ha-329926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:43", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:31:18 +0000 UTC Type:0 Mac:52:54:00:ce:d8:43 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-329926 Clientid:01:52:54:00:ce:d8:43}
	I0501 02:41:26.533597   39235 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:41:26.533767   39235 main.go:141] libmachine: (ha-329926) Calling .GetSSHPort
	I0501 02:41:26.533949   39235 main.go:141] libmachine: (ha-329926) Calling .GetSSHKeyPath
	I0501 02:41:26.534099   39235 main.go:141] libmachine: (ha-329926) Calling .GetSSHUsername
	I0501 02:41:26.534242   39235 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926/id_rsa Username:docker}
	I0501 02:41:26.618426   39235 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0501 02:41:26.618484   39235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0501 02:41:26.647551   39235 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0501 02:41:26.647612   39235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0501 02:41:26.677751   39235 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0501 02:41:26.677835   39235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0501 02:41:26.706632   39235 provision.go:87] duration metric: took 353.97227ms to configureAuth
	I0501 02:41:26.706653   39235 buildroot.go:189] setting minikube options for container-runtime
	I0501 02:41:26.706892   39235 config.go:182] Loaded profile config "ha-329926": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 02:41:26.706956   39235 main.go:141] libmachine: (ha-329926) Calling .GetSSHHostname
	I0501 02:41:26.709324   39235 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:41:26.709656   39235 main.go:141] libmachine: (ha-329926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:43", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:31:18 +0000 UTC Type:0 Mac:52:54:00:ce:d8:43 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-329926 Clientid:01:52:54:00:ce:d8:43}
	I0501 02:41:26.709683   39235 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:41:26.709847   39235 main.go:141] libmachine: (ha-329926) Calling .GetSSHPort
	I0501 02:41:26.710055   39235 main.go:141] libmachine: (ha-329926) Calling .GetSSHKeyPath
	I0501 02:41:26.710210   39235 main.go:141] libmachine: (ha-329926) Calling .GetSSHKeyPath
	I0501 02:41:26.710377   39235 main.go:141] libmachine: (ha-329926) Calling .GetSSHUsername
	I0501 02:41:26.710555   39235 main.go:141] libmachine: Using SSH client type: native
	I0501 02:41:26.710708   39235 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I0501 02:41:26.710741   39235 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0501 02:42:57.618717   39235 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0501 02:42:57.618746   39235 machine.go:97] duration metric: took 1m31.620213044s to provisionDockerMachine
	I0501 02:42:57.618759   39235 start.go:293] postStartSetup for "ha-329926" (driver="kvm2")
	I0501 02:42:57.618770   39235 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0501 02:42:57.618785   39235 main.go:141] libmachine: (ha-329926) Calling .DriverName
	I0501 02:42:57.619180   39235 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0501 02:42:57.619214   39235 main.go:141] libmachine: (ha-329926) Calling .GetSSHHostname
	I0501 02:42:57.622423   39235 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:42:57.622836   39235 main.go:141] libmachine: (ha-329926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:43", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:31:18 +0000 UTC Type:0 Mac:52:54:00:ce:d8:43 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-329926 Clientid:01:52:54:00:ce:d8:43}
	I0501 02:42:57.622874   39235 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:42:57.623038   39235 main.go:141] libmachine: (ha-329926) Calling .GetSSHPort
	I0501 02:42:57.623209   39235 main.go:141] libmachine: (ha-329926) Calling .GetSSHKeyPath
	I0501 02:42:57.623363   39235 main.go:141] libmachine: (ha-329926) Calling .GetSSHUsername
	I0501 02:42:57.623474   39235 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926/id_rsa Username:docker}
	I0501 02:42:57.707404   39235 ssh_runner.go:195] Run: cat /etc/os-release
	I0501 02:42:57.712359   39235 info.go:137] Remote host: Buildroot 2023.02.9
	I0501 02:42:57.712388   39235 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13391/.minikube/addons for local assets ...
	I0501 02:42:57.712464   39235 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13391/.minikube/files for local assets ...
	I0501 02:42:57.712553   39235 filesync.go:149] local asset: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem -> 207242.pem in /etc/ssl/certs
	I0501 02:42:57.712575   39235 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem -> /etc/ssl/certs/207242.pem
	I0501 02:42:57.712684   39235 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0501 02:42:57.723985   39235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem --> /etc/ssl/certs/207242.pem (1708 bytes)
	I0501 02:42:57.753277   39235 start.go:296] duration metric: took 134.502924ms for postStartSetup
	I0501 02:42:57.753320   39235 main.go:141] libmachine: (ha-329926) Calling .DriverName
	I0501 02:42:57.753611   39235 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0501 02:42:57.753644   39235 main.go:141] libmachine: (ha-329926) Calling .GetSSHHostname
	I0501 02:42:57.756543   39235 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:42:57.756949   39235 main.go:141] libmachine: (ha-329926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:43", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:31:18 +0000 UTC Type:0 Mac:52:54:00:ce:d8:43 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-329926 Clientid:01:52:54:00:ce:d8:43}
	I0501 02:42:57.756978   39235 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:42:57.757069   39235 main.go:141] libmachine: (ha-329926) Calling .GetSSHPort
	I0501 02:42:57.757225   39235 main.go:141] libmachine: (ha-329926) Calling .GetSSHKeyPath
	I0501 02:42:57.757390   39235 main.go:141] libmachine: (ha-329926) Calling .GetSSHUsername
	I0501 02:42:57.757525   39235 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926/id_rsa Username:docker}
	W0501 02:42:57.837763   39235 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0501 02:42:57.837796   39235 fix.go:56] duration metric: took 1m31.858862807s for fixHost
	I0501 02:42:57.837824   39235 main.go:141] libmachine: (ha-329926) Calling .GetSSHHostname
	I0501 02:42:57.840530   39235 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:42:57.840813   39235 main.go:141] libmachine: (ha-329926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:43", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:31:18 +0000 UTC Type:0 Mac:52:54:00:ce:d8:43 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-329926 Clientid:01:52:54:00:ce:d8:43}
	I0501 02:42:57.840841   39235 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:42:57.840995   39235 main.go:141] libmachine: (ha-329926) Calling .GetSSHPort
	I0501 02:42:57.841179   39235 main.go:141] libmachine: (ha-329926) Calling .GetSSHKeyPath
	I0501 02:42:57.841354   39235 main.go:141] libmachine: (ha-329926) Calling .GetSSHKeyPath
	I0501 02:42:57.841476   39235 main.go:141] libmachine: (ha-329926) Calling .GetSSHUsername
	I0501 02:42:57.841655   39235 main.go:141] libmachine: Using SSH client type: native
	I0501 02:42:57.841819   39235 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I0501 02:42:57.841830   39235 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0501 02:42:57.944094   39235 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714531377.896474154
	
	I0501 02:42:57.944121   39235 fix.go:216] guest clock: 1714531377.896474154
	I0501 02:42:57.944143   39235 fix.go:229] Guest: 2024-05-01 02:42:57.896474154 +0000 UTC Remote: 2024-05-01 02:42:57.837806525 +0000 UTC m=+91.996092869 (delta=58.667629ms)
	I0501 02:42:57.944164   39235 fix.go:200] guest clock delta is within tolerance: 58.667629ms
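	A rough manual reproduction of the clock comparison above (a sketch only; it reuses the SSH key path and guest address already shown in this log and assumes awk is available on the host):
	    guest=$(ssh -i /home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926/id_rsa docker@192.168.39.5 'date +%s.%N')
	    host=$(date +%s.%N)
	    # positive delta => guest clock runs ahead of the host; minikube only requires the drift to stay small
	    awk -v g="$guest" -v h="$host" 'BEGIN { printf "guest-host delta: %.6fs\n", g - h }'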
	I0501 02:42:57.944169   39235 start.go:83] releasing machines lock for "ha-329926", held for 1m31.965251882s
	I0501 02:42:57.944194   39235 main.go:141] libmachine: (ha-329926) Calling .DriverName
	I0501 02:42:57.944468   39235 main.go:141] libmachine: (ha-329926) Calling .GetIP
	I0501 02:42:57.947110   39235 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:42:57.947398   39235 main.go:141] libmachine: (ha-329926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:43", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:31:18 +0000 UTC Type:0 Mac:52:54:00:ce:d8:43 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-329926 Clientid:01:52:54:00:ce:d8:43}
	I0501 02:42:57.947418   39235 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:42:57.947548   39235 main.go:141] libmachine: (ha-329926) Calling .DriverName
	I0501 02:42:57.948042   39235 main.go:141] libmachine: (ha-329926) Calling .DriverName
	I0501 02:42:57.948235   39235 main.go:141] libmachine: (ha-329926) Calling .DriverName
	I0501 02:42:57.948377   39235 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0501 02:42:57.948420   39235 main.go:141] libmachine: (ha-329926) Calling .GetSSHHostname
	I0501 02:42:57.948473   39235 ssh_runner.go:195] Run: cat /version.json
	I0501 02:42:57.948491   39235 main.go:141] libmachine: (ha-329926) Calling .GetSSHHostname
	I0501 02:42:57.951064   39235 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:42:57.951283   39235 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:42:57.951439   39235 main.go:141] libmachine: (ha-329926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:43", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:31:18 +0000 UTC Type:0 Mac:52:54:00:ce:d8:43 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-329926 Clientid:01:52:54:00:ce:d8:43}
	I0501 02:42:57.951462   39235 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:42:57.951567   39235 main.go:141] libmachine: (ha-329926) Calling .GetSSHPort
	I0501 02:42:57.951685   39235 main.go:141] libmachine: (ha-329926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:43", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:31:18 +0000 UTC Type:0 Mac:52:54:00:ce:d8:43 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-329926 Clientid:01:52:54:00:ce:d8:43}
	I0501 02:42:57.951712   39235 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:42:57.951715   39235 main.go:141] libmachine: (ha-329926) Calling .GetSSHKeyPath
	I0501 02:42:57.951866   39235 main.go:141] libmachine: (ha-329926) Calling .GetSSHPort
	I0501 02:42:57.951890   39235 main.go:141] libmachine: (ha-329926) Calling .GetSSHUsername
	I0501 02:42:57.951979   39235 main.go:141] libmachine: (ha-329926) Calling .GetSSHKeyPath
	I0501 02:42:57.952037   39235 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926/id_rsa Username:docker}
	I0501 02:42:57.952095   39235 main.go:141] libmachine: (ha-329926) Calling .GetSSHUsername
	I0501 02:42:57.952239   39235 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/ha-329926/id_rsa Username:docker}
	I0501 02:42:58.062263   39235 ssh_runner.go:195] Run: systemctl --version
	I0501 02:42:58.069272   39235 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0501 02:42:58.236351   39235 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0501 02:42:58.243386   39235 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0501 02:42:58.243455   39235 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0501 02:42:58.253640   39235 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0501 02:42:58.253663   39235 start.go:494] detecting cgroup driver to use...
	I0501 02:42:58.253729   39235 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0501 02:42:58.272557   39235 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 02:42:58.289266   39235 docker.go:217] disabling cri-docker service (if available) ...
	I0501 02:42:58.289326   39235 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0501 02:42:58.305053   39235 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0501 02:42:58.319905   39235 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0501 02:42:58.480664   39235 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0501 02:42:58.636046   39235 docker.go:233] disabling docker service ...
	I0501 02:42:58.636104   39235 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0501 02:42:58.655237   39235 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0501 02:42:58.669819   39235 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0501 02:42:58.829003   39235 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0501 02:42:58.989996   39235 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0501 02:42:59.006703   39235 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 02:42:59.030216   39235 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0501 02:42:59.030294   39235 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 02:42:59.043172   39235 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0501 02:42:59.043242   39235 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 02:42:59.057452   39235 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 02:42:59.069954   39235 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 02:42:59.082603   39235 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0501 02:42:59.095265   39235 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 02:42:59.109262   39235 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 02:42:59.121195   39235 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 02:42:59.133777   39235 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0501 02:42:59.144697   39235 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0501 02:42:59.157218   39235 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:42:59.327577   39235 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0501 02:43:09.315864   39235 ssh_runner.go:235] Completed: sudo systemctl restart crio: (9.988252796s)
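	For reference, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with the settings below; the spot check is a sketch, and the expected values are inferred from the commands in this log rather than re-read from the node:
	    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	    # expected:
	    #   pause_image = "registry.k8s.io/pause:3.9"
	    #   cgroup_manager = "cgroupfs"
	    #   conmon_cgroup = "pod"
	    #   "net.ipv4.ip_unprivileged_port_start=0",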
	I0501 02:43:09.315898   39235 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0501 02:43:09.315948   39235 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0501 02:43:09.321755   39235 start.go:562] Will wait 60s for crictl version
	I0501 02:43:09.321810   39235 ssh_runner.go:195] Run: which crictl
	I0501 02:43:09.326174   39235 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0501 02:43:09.364999   39235 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0501 02:43:09.365102   39235 ssh_runner.go:195] Run: crio --version
	I0501 02:43:09.396220   39235 ssh_runner.go:195] Run: crio --version
	I0501 02:43:09.429215   39235 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0501 02:43:09.430455   39235 main.go:141] libmachine: (ha-329926) Calling .GetIP
	I0501 02:43:09.433192   39235 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:43:09.433537   39235 main.go:141] libmachine: (ha-329926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:43", ip: ""} in network mk-ha-329926: {Iface:virbr1 ExpiryTime:2024-05-01 03:31:18 +0000 UTC Type:0 Mac:52:54:00:ce:d8:43 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-329926 Clientid:01:52:54:00:ce:d8:43}
	I0501 02:43:09.433562   39235 main.go:141] libmachine: (ha-329926) DBG | domain ha-329926 has defined IP address 192.168.39.5 and MAC address 52:54:00:ce:d8:43 in network mk-ha-329926
	I0501 02:43:09.433785   39235 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0501 02:43:09.439468   39235 kubeadm.go:877] updating cluster {Name:ha-329926 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 Cl
usterName:ha-329926 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.5 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.79 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.115 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.84 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Moun
tIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0501 02:43:09.439669   39235 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0501 02:43:09.439733   39235 ssh_runner.go:195] Run: sudo crictl images --output json
	I0501 02:43:09.498983   39235 crio.go:514] all images are preloaded for cri-o runtime.
	I0501 02:43:09.499010   39235 crio.go:433] Images already preloaded, skipping extraction
	I0501 02:43:09.499065   39235 ssh_runner.go:195] Run: sudo crictl images --output json
	I0501 02:43:09.545195   39235 crio.go:514] all images are preloaded for cri-o runtime.
	I0501 02:43:09.545221   39235 cache_images.go:84] Images are preloaded, skipping loading
	I0501 02:43:09.545232   39235 kubeadm.go:928] updating node { 192.168.39.5 8443 v1.30.0 crio true true} ...
	I0501 02:43:09.545352   39235 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-329926 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-329926 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
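	Once written (see the scp steps below), the kubelet drop-in above lands at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf; a quick way to confirm the flags kubelet actually picked up (a sketch, run on the node over SSH):
	    cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	    systemctl show kubelet -p ExecStart --no-pager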
	I0501 02:43:09.545437   39235 ssh_runner.go:195] Run: crio config
	I0501 02:43:09.614383   39235 cni.go:84] Creating CNI manager for ""
	I0501 02:43:09.614422   39235 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0501 02:43:09.614434   39235 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0501 02:43:09.614461   39235 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.5 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-329926 NodeName:ha-329926 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0501 02:43:09.614639   39235 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-329926"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
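	After this config is written out as /var/tmp/minikube/kubeadm.yaml.new (see the scp step below), it can be sanity-checked with the bundled kubeadm binary; this is a sketch that assumes the v1.30 `kubeadm config validate` subcommand is available:
	    sudo /var/lib/minikube/binaries/v1.30.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new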
	
	I0501 02:43:09.614663   39235 kube-vip.go:111] generating kube-vip config ...
	I0501 02:43:09.614711   39235 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0501 02:43:09.630044   39235 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0501 02:43:09.630136   39235 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
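	Per the manifest above, kube-vip advertises 192.168.39.254 via ARP and load-balances the API server on port 8443 once a control-plane node holds the plndr-cp-lock lease; a minimal reachability probe (a sketch, run from any cluster node) would be:
	    curl -ksS https://192.168.39.254:8443/healthz; echo
	    # expected output once the VIP is up: ok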
	I0501 02:43:09.630191   39235 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0501 02:43:09.643362   39235 binaries.go:44] Found k8s binaries, skipping transfer
	I0501 02:43:09.643424   39235 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0501 02:43:09.656163   39235 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0501 02:43:09.677024   39235 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0501 02:43:09.696909   39235 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2147 bytes)
	I0501 02:43:09.717210   39235 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0501 02:43:09.737838   39235 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0501 02:43:09.742941   39235 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:43:09.912331   39235 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 02:43:09.929488   39235 certs.go:68] Setting up /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926 for IP: 192.168.39.5
	I0501 02:43:09.929514   39235 certs.go:194] generating shared ca certs ...
	I0501 02:43:09.929535   39235 certs.go:226] acquiring lock for ca certs: {Name:mk85a75d14c00780d4bf09cd9bdaf0f96f1b8fce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:43:09.929723   39235 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key
	I0501 02:43:09.929777   39235 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key
	I0501 02:43:09.929790   39235 certs.go:256] generating profile certs ...
	I0501 02:43:09.929909   39235 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/client.key
	I0501 02:43:09.929944   39235 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.key.870d3c0e
	I0501 02:43:09.929962   39235 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.crt.870d3c0e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.5 192.168.39.79 192.168.39.115 192.168.39.254]
	I0501 02:43:10.012851   39235 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.crt.870d3c0e ...
	I0501 02:43:10.012885   39235 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.crt.870d3c0e: {Name:mk4ccaf90fd6dcf78b8e9e2b8db11f9737a1bd70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:43:10.013054   39235 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.key.870d3c0e ...
	I0501 02:43:10.013066   39235 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.key.870d3c0e: {Name:mk2c3b7f593cad4d68b6ae9c2deae1c15fbc0249 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:43:10.013129   39235 certs.go:381] copying /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.crt.870d3c0e -> /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.crt
	I0501 02:43:10.013284   39235 certs.go:385] copying /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.key.870d3c0e -> /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.key
	I0501 02:43:10.013413   39235 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/proxy-client.key
	I0501 02:43:10.013436   39235 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0501 02:43:10.013448   39235 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0501 02:43:10.013462   39235 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0501 02:43:10.013474   39235 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0501 02:43:10.013483   39235 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0501 02:43:10.013494   39235 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0501 02:43:10.013509   39235 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0501 02:43:10.013521   39235 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0501 02:43:10.013567   39235 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem (1338 bytes)
	W0501 02:43:10.013593   39235 certs.go:480] ignoring /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724_empty.pem, impossibly tiny 0 bytes
	I0501 02:43:10.013602   39235 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem (1679 bytes)
	I0501 02:43:10.013624   39235 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem (1078 bytes)
	I0501 02:43:10.013645   39235 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem (1123 bytes)
	I0501 02:43:10.013665   39235 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem (1675 bytes)
	I0501 02:43:10.013705   39235 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem (1708 bytes)
	I0501 02:43:10.013729   39235 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:43:10.013763   39235 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem -> /usr/share/ca-certificates/20724.pem
	I0501 02:43:10.013776   39235 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem -> /usr/share/ca-certificates/207242.pem
	I0501 02:43:10.014356   39235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0501 02:43:10.044218   39235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0501 02:43:10.071457   39235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0501 02:43:10.099415   39235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0501 02:43:10.126994   39235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0501 02:43:10.154491   39235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0501 02:43:10.181074   39235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0501 02:43:10.207477   39235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/ha-329926/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0501 02:43:10.234294   39235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0501 02:43:10.260359   39235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem --> /usr/share/ca-certificates/20724.pem (1338 bytes)
	I0501 02:43:10.285469   39235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem --> /usr/share/ca-certificates/207242.pem (1708 bytes)
	I0501 02:43:10.312946   39235 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0501 02:43:10.338880   39235 ssh_runner.go:195] Run: openssl version
	I0501 02:43:10.345377   39235 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0501 02:43:10.357312   39235 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:43:10.362191   39235 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  1 02:08 /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:43:10.362226   39235 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:43:10.368239   39235 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0501 02:43:10.378439   39235 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20724.pem && ln -fs /usr/share/ca-certificates/20724.pem /etc/ssl/certs/20724.pem"
	I0501 02:43:10.390421   39235 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20724.pem
	I0501 02:43:10.395446   39235 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  1 02:20 /usr/share/ca-certificates/20724.pem
	I0501 02:43:10.395497   39235 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20724.pem
	I0501 02:43:10.401896   39235 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20724.pem /etc/ssl/certs/51391683.0"
	I0501 02:43:10.412204   39235 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/207242.pem && ln -fs /usr/share/ca-certificates/207242.pem /etc/ssl/certs/207242.pem"
	I0501 02:43:10.424343   39235 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/207242.pem
	I0501 02:43:10.429120   39235 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  1 02:20 /usr/share/ca-certificates/207242.pem
	I0501 02:43:10.429180   39235 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/207242.pem
	I0501 02:43:10.435171   39235 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/207242.pem /etc/ssl/certs/3ec20f2e.0"
	I0501 02:43:10.446115   39235 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0501 02:43:10.451405   39235 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0501 02:43:10.457897   39235 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0501 02:43:10.464098   39235 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0501 02:43:10.471263   39235 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0501 02:43:10.477507   39235 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0501 02:43:10.483955   39235 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
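	The `-checkend 86400` runs above exit zero only if the certificate stays valid for the next 86400 seconds (24 h); checked explicitly against the apiserver cert copied earlier, for example:
	    openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
	      && echo "apiserver.crt valid for at least 24h" \
	      || echo "apiserver.crt expires within 24h"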
	I0501 02:43:10.490367   39235 kubeadm.go:391] StartCluster: {Name:ha-329926 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 Clust
erName:ha-329926 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.5 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.79 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.115 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.84 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:
false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 02:43:10.490526   39235 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0501 02:43:10.490571   39235 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0501 02:43:10.538838   39235 cri.go:89] found id: "33f44ace2eeffb23232e362caee4613e5d7141986d53d906034466c53cbfe7a8"
	I0501 02:43:10.538857   39235 cri.go:89] found id: "0558df584d79d0d8bc6e53073c4ad9708e838bb70f9953ce93d7593344c0385a"
	I0501 02:43:10.538862   39235 cri.go:89] found id: "9dc26cb1281fef5591fd3b938f25bc2b690517cc2fdb2d5506f19a18dd738057"
	I0501 02:43:10.538866   39235 cri.go:89] found id: "778cfaa464ec8ad52820f633702fc3f620188a6295206ae2173142199f71f48e"
	I0501 02:43:10.538870   39235 cri.go:89] found id: "619f66869569c782fd59143653e79694cc74f9ae89927d4f16dfcb10f47d0e03"
	I0501 02:43:10.538874   39235 cri.go:89] found id: "693a12cd2b2c6de7a7c03e4d290e69cdcef9f4f17b75ff84f84fb81b8297cd63"
	I0501 02:43:10.538878   39235 cri.go:89] found id: "fbc7b6bc224b5b53e156316187f05c941fd17da22bca2cc7fecf5071d8eb4d38"
	I0501 02:43:10.538882   39235 cri.go:89] found id: "2ab64850e34b66cbe91e099c91014051472b87888130297cd7c47a4a78992140"
	I0501 02:43:10.538886   39235 cri.go:89] found id: "9563ee09b7dc14582bda46368040d65e26370cf354a48e6db28fb4d5169a41db"
	I0501 02:43:10.538893   39235 cri.go:89] found id: "d24a4adfe9096e0063099c3390b72f12094c22465e8b666eb999e30740b77ea3"
	I0501 02:43:10.538897   39235 cri.go:89] found id: "e3ffc6d046e214333ed85219819ad54bfa5c3ac0b10167beacf72edbd6035736"
	I0501 02:43:10.538900   39235 cri.go:89] found id: "347407ef9dd66d0f2a44d6bc871649c2f38c1263ef6f3a33d6574f0e149ab701"
	I0501 02:43:10.538907   39235 cri.go:89] found id: "9f36a128ab65a069e1cafa5abf1b7774272b73428ea3826f595758d5e99b4e93"
	I0501 02:43:10.538910   39235 cri.go:89] found id: ""
	I0501 02:43:10.538957   39235 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	May 01 02:49:03 ha-329926 crio[3818]: time="2024-05-01 02:49:03.109772305Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714531743109743321,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=457842b5-f1c9-4e51-a892-e3a1285818bb name=/runtime.v1.ImageService/ImageFsInfo
	May 01 02:49:03 ha-329926 crio[3818]: time="2024-05-01 02:49:03.110532178Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=02f4579f-159e-4b6a-b537-e34732044ce4 name=/runtime.v1.RuntimeService/ListContainers
	May 01 02:49:03 ha-329926 crio[3818]: time="2024-05-01 02:49:03.110595102Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=02f4579f-159e-4b6a-b537-e34732044ce4 name=/runtime.v1.RuntimeService/ListContainers
	May 01 02:49:03 ha-329926 crio[3818]: time="2024-05-01 02:49:03.111089498Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0b3a57c2195832ebe5e7032bc917987f944db828a8e5925d886ea10fb432f1ab,PodSandboxId:222f0baa90487bc1f8b94ec1b03db6568034bc14d984f1da1cd8b2ca7b480596,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714531488140417174,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 371423a6-a156-4e8d-bf66-812d606cc8d7,},Annotations:map[string]string{io.kubernetes.container.hash: 79218c39,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68c9b61e07b789bb1c441a33b66eeb07476719d85f4affe9c264e34bd73d8008,PodSandboxId:5c187637e4af957fa3481369cfb179dd7f043b24bd7436ca0fa0e524b347859a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714531481120651978,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-kcmp7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e15c166-9ba1-40c9-8f33-db7f83733932,},Annotations:map[string]string{io.kubernetes.container.hash: fdceac74,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8f3963bbca5a5d8ca96fd7bf715f2b551bcaf4380803b443a346bccff25655b,PodSandboxId:ec79d09460adcc3f5e74bae2063a9f639a550ab2e2282a391540873353174ac5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714531441122972737,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-329926,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c603c91fe09a36a9d3862475188142a,},Annotations:map[string]string{io.kubernetes.container.hash: 740b8b39,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edfe1ca8adcff5c57dd48a6e4e52f6129014ec43e797455a799c8abb8dddf9ad,PodSandboxId:639834849a63b0c5b6034ba67902819baf2c7bab7e147320753381a3fadfc9ef,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714531439122290070,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-329926,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d6c0ce9d370e02811c06c5c50fb7da1,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c768225fa391861ab12a75d643a4757892824fd20ef1294ba1e9b817cbe81f3,PodSandboxId:222f0baa90487bc1f8b94ec1b03db6568034bc14d984f1da1cd8b2ca7b480596,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714531436122032584,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 371423a6-a156-4e8d-bf66-812d606cc8d7,},Annotations:map[string]string{io.kubernetes.container.hash: 79218c39,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bb33341c42d847e3c54695cda601e3d3ee0fe3e95d113fabdb40a8b79ee00ac,PodSandboxId:7ec7a58f19ea4590ecf46d3c8faea8e7ab579c87ea9e31da82f9173c6e67e371,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714531429689100495,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-nwj5x,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0cfb5bda-fca7-479f-98d3-6be9bddf0e1c,},Annotations:map[string]string{io.kubernetes.container.hash: b50f62c1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45d5832e610c8d7398d46f4d0dc6929742b56ee579172c6289a7ebcedd229113,PodSandboxId:45ff3dc59db5afc538c97abf460cf706199fe452449ab01ed2f230cf7248cf45,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1714531411902616259,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-329926,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7e0afd8727d0417c20407d0e8880765,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:3b19253347b3c6a6483c37f96f8593931755f784d085f793c13e001ae0d76794,PodSandboxId:6bd635b17d9c0d981c6e2c3a943281df18c7780f9ff5380d3554dfb073340194,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714531396113172009,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-msshn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7575fbfc-11ce-4223-bd99-ff9cdddd3568,},Annotations:map[string]string{io.kubernetes.container.hash: 725b7ad5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4005dbd
94fbfed92cf6c4bb53e9b2208adb17302eabfa52ea663e83fa24fef7,PodSandboxId:15d9ac760b30773086acc72880e8a01cf304d780f29db72009315c976cb517ff,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714531396470001536,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cfdqc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a37e982e-9e4f-43bf-b957-0d6f082f4ec8,},Annotations:map[string]string{io.kubernetes.container.hash: d10dbf58,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d21d7f0022d1d1edaee4b7f45614bc8d98a407b0ba70c272d9fcdbc67fdba53,PodSandboxId:77261f211cf7416433533bfbdf670550fc76f5c15415fa7ad3d2c30a90d5c656,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714531396224192528,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2h8lc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 937e09f0-6a7d-4387-aa19-ee959eb5a2a5,},Annotations:map[string]string{io.kubernetes.container.hash: 64cd0f7d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a409ec7ab3a3389b868353ff5b180728bff4d9fd6e9ee235408658387a54e865,PodSandboxId:639834849a63b0c5b6034ba67902819baf2c7bab7e147320753381a3fadfc9ef,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714531396158062617,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-329926,io.kuber
netes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d6c0ce9d370e02811c06c5c50fb7da1,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc712ce5191365a1b74a4fb690a3fe1fca3ef109f0525a60d88ecff10b96a61b,PodSandboxId:33a2511848b79ed0b27f51b17c8e8d0380da02cb35f4dd0ab8930ed674b8a9e3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714531396049539667,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-329926,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 50544e7f95cae164184f9b27f78747c6,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:028fe64855c8942e704a66fc8d7d80db9662c05c5252b9ae01043eb95134a0a6,PodSandboxId:ec79d09460adcc3f5e74bae2063a9f639a550ab2e2282a391540873353174ac5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714531395925134025,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-329926,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 4c603c91fe09a36a9d3862475188142a,},Annotations:map[string]string{io.kubernetes.container.hash: 740b8b39,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4940e86ab3aeeda6fefe1a1c3eceee2908fbf5e3ebc1584761c2744b7a04e3e,PodSandboxId:27f30cfd71acf5bbb1ccf09c13482fbe21411ba6499ce9959099bd47c7ce537f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714531395866512593,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-329926,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 684463257510837a1c150a7df713bf62,},Ann
otations:map[string]string{io.kubernetes.container.hash: d6da2a03,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:042c499301cf07675ef97e386fb802a0684efc1ff197389bf8b7458ca853493f,PodSandboxId:5c187637e4af957fa3481369cfb179dd7f043b24bd7436ca0fa0e524b347859a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714531392201491789,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-kcmp7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e15c166-9ba1-40c9-8f33-db7f83733932,},Annotations:map[string]string{io.kuber
netes.container.hash: fdceac74,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d8c54a9eb6fd7f6b4514ddfa0201da9a28692806db71b7d277493e9f9b90233,PodSandboxId:abf4acd7dd09ff0cf728fe61b1bf3c73291eacc97a98f9f2d79b9fe49a629266,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714530889047946688,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-nwj5x,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0cfb5bda-fca7-479f-98d3-6be9bddf0e1c,},Annotations:map[string]string{io.kuberne
tes.container.hash: b50f62c1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:619f66869569c782fd59143653e79694cc74f9ae89927d4f16dfcb10f47d0e03,PodSandboxId:0fe93b95f6356e43c599c74942976a3c1c2025ad4e52377711767c38c88d3d63,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714530725115421538,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cfdqc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a37e982e-9e4f-43bf-b957-0d6f082f4ec8,},Annotations:map[string]string{io.kubernetes.container.hash: d10dbf58,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:693a12cd2b2c6de7a7c03e4d290e69cdcef9f4f17b75ff84f84fb81b8297cd63,PodSandboxId:1771f42c6abecb905d783dc2c43071807fc9138d9eae9432b2b132602a450cb2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714530725082845913,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-7db6d8ff4d-2h8lc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 937e09f0-6a7d-4387-aa19-ee959eb5a2a5,},Annotations:map[string]string{io.kubernetes.container.hash: 64cd0f7d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ab64850e34b66cbe91e099c91014051472b87888130297cd7c47a4a78992140,PodSandboxId:f6611da96d51a594f93865322c664539feca749ea5a59ee33b637aae006635ba,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf4
31fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714530722007160196,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-msshn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7575fbfc-11ce-4223-bd99-ff9cdddd3568,},Annotations:map[string]string{io.kubernetes.container.hash: 725b7ad5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3ffc6d046e214333ed85219819ad54bfa5c3ac0b10167beacf72edbd6035736,PodSandboxId:170d412885089b502fe3701809ec5f3271a9fd4f9181bf1c293cd527d144907b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929d
c8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714530701589505342,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-329926,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50544e7f95cae164184f9b27f78747c6,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f36a128ab65a069e1cafa5abf1b7774272b73428ea3826f595758d5e99b4e93,PodSandboxId:0c17dc8e917b3fee067acb3ff1b63beb8742f444a12d58ee96183b021052a9ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTA
INER_EXITED,CreatedAt:1714530701461361460,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-329926,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 684463257510837a1c150a7df713bf62,},Annotations:map[string]string{io.kubernetes.container.hash: d6da2a03,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=02f4579f-159e-4b6a-b537-e34732044ce4 name=/runtime.v1.RuntimeService/ListContainers
	May 01 02:49:03 ha-329926 crio[3818]: time="2024-05-01 02:49:03.161594064Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dcce19a6-a47e-4a41-aace-bdd0e2912616 name=/runtime.v1.RuntimeService/Version
	May 01 02:49:03 ha-329926 crio[3818]: time="2024-05-01 02:49:03.161890672Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dcce19a6-a47e-4a41-aace-bdd0e2912616 name=/runtime.v1.RuntimeService/Version
	May 01 02:49:03 ha-329926 crio[3818]: time="2024-05-01 02:49:03.162923046Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=76050429-8e68-4fb9-8c77-ebe5d91bf092 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 02:49:03 ha-329926 crio[3818]: time="2024-05-01 02:49:03.163532308Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714531743163507388,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=76050429-8e68-4fb9-8c77-ebe5d91bf092 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 02:49:03 ha-329926 crio[3818]: time="2024-05-01 02:49:03.164064463Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0465502b-cbeb-4bf6-8a59-b1ec8ff49f2d name=/runtime.v1.RuntimeService/ListContainers
	May 01 02:49:03 ha-329926 crio[3818]: time="2024-05-01 02:49:03.164194446Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0465502b-cbeb-4bf6-8a59-b1ec8ff49f2d name=/runtime.v1.RuntimeService/ListContainers
	May 01 02:49:03 ha-329926 crio[3818]: time="2024-05-01 02:49:03.164815030Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0b3a57c2195832ebe5e7032bc917987f944db828a8e5925d886ea10fb432f1ab,PodSandboxId:222f0baa90487bc1f8b94ec1b03db6568034bc14d984f1da1cd8b2ca7b480596,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714531488140417174,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 371423a6-a156-4e8d-bf66-812d606cc8d7,},Annotations:map[string]string{io.kubernetes.container.hash: 79218c39,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68c9b61e07b789bb1c441a33b66eeb07476719d85f4affe9c264e34bd73d8008,PodSandboxId:5c187637e4af957fa3481369cfb179dd7f043b24bd7436ca0fa0e524b347859a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714531481120651978,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-kcmp7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e15c166-9ba1-40c9-8f33-db7f83733932,},Annotations:map[string]string{io.kubernetes.container.hash: fdceac74,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8f3963bbca5a5d8ca96fd7bf715f2b551bcaf4380803b443a346bccff25655b,PodSandboxId:ec79d09460adcc3f5e74bae2063a9f639a550ab2e2282a391540873353174ac5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714531441122972737,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-329926,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c603c91fe09a36a9d3862475188142a,},Annotations:map[string]string{io.kubernetes.container.hash: 740b8b39,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edfe1ca8adcff5c57dd48a6e4e52f6129014ec43e797455a799c8abb8dddf9ad,PodSandboxId:639834849a63b0c5b6034ba67902819baf2c7bab7e147320753381a3fadfc9ef,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714531439122290070,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-329926,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d6c0ce9d370e02811c06c5c50fb7da1,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c768225fa391861ab12a75d643a4757892824fd20ef1294ba1e9b817cbe81f3,PodSandboxId:222f0baa90487bc1f8b94ec1b03db6568034bc14d984f1da1cd8b2ca7b480596,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714531436122032584,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 371423a6-a156-4e8d-bf66-812d606cc8d7,},Annotations:map[string]string{io.kubernetes.container.hash: 79218c39,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bb33341c42d847e3c54695cda601e3d3ee0fe3e95d113fabdb40a8b79ee00ac,PodSandboxId:7ec7a58f19ea4590ecf46d3c8faea8e7ab579c87ea9e31da82f9173c6e67e371,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714531429689100495,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-nwj5x,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0cfb5bda-fca7-479f-98d3-6be9bddf0e1c,},Annotations:map[string]string{io.kubernetes.container.hash: b50f62c1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45d5832e610c8d7398d46f4d0dc6929742b56ee579172c6289a7ebcedd229113,PodSandboxId:45ff3dc59db5afc538c97abf460cf706199fe452449ab01ed2f230cf7248cf45,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1714531411902616259,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-329926,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7e0afd8727d0417c20407d0e8880765,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:3b19253347b3c6a6483c37f96f8593931755f784d085f793c13e001ae0d76794,PodSandboxId:6bd635b17d9c0d981c6e2c3a943281df18c7780f9ff5380d3554dfb073340194,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714531396113172009,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-msshn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7575fbfc-11ce-4223-bd99-ff9cdddd3568,},Annotations:map[string]string{io.kubernetes.container.hash: 725b7ad5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4005dbd
94fbfed92cf6c4bb53e9b2208adb17302eabfa52ea663e83fa24fef7,PodSandboxId:15d9ac760b30773086acc72880e8a01cf304d780f29db72009315c976cb517ff,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714531396470001536,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cfdqc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a37e982e-9e4f-43bf-b957-0d6f082f4ec8,},Annotations:map[string]string{io.kubernetes.container.hash: d10dbf58,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d21d7f0022d1d1edaee4b7f45614bc8d98a407b0ba70c272d9fcdbc67fdba53,PodSandboxId:77261f211cf7416433533bfbdf670550fc76f5c15415fa7ad3d2c30a90d5c656,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714531396224192528,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2h8lc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 937e09f0-6a7d-4387-aa19-ee959eb5a2a5,},Annotations:map[string]string{io.kubernetes.container.hash: 64cd0f7d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a409ec7ab3a3389b868353ff5b180728bff4d9fd6e9ee235408658387a54e865,PodSandboxId:639834849a63b0c5b6034ba67902819baf2c7bab7e147320753381a3fadfc9ef,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714531396158062617,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-329926,io.kuber
netes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d6c0ce9d370e02811c06c5c50fb7da1,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc712ce5191365a1b74a4fb690a3fe1fca3ef109f0525a60d88ecff10b96a61b,PodSandboxId:33a2511848b79ed0b27f51b17c8e8d0380da02cb35f4dd0ab8930ed674b8a9e3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714531396049539667,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-329926,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 50544e7f95cae164184f9b27f78747c6,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:028fe64855c8942e704a66fc8d7d80db9662c05c5252b9ae01043eb95134a0a6,PodSandboxId:ec79d09460adcc3f5e74bae2063a9f639a550ab2e2282a391540873353174ac5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714531395925134025,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-329926,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 4c603c91fe09a36a9d3862475188142a,},Annotations:map[string]string{io.kubernetes.container.hash: 740b8b39,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4940e86ab3aeeda6fefe1a1c3eceee2908fbf5e3ebc1584761c2744b7a04e3e,PodSandboxId:27f30cfd71acf5bbb1ccf09c13482fbe21411ba6499ce9959099bd47c7ce537f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714531395866512593,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-329926,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 684463257510837a1c150a7df713bf62,},Ann
otations:map[string]string{io.kubernetes.container.hash: d6da2a03,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:042c499301cf07675ef97e386fb802a0684efc1ff197389bf8b7458ca853493f,PodSandboxId:5c187637e4af957fa3481369cfb179dd7f043b24bd7436ca0fa0e524b347859a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714531392201491789,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-kcmp7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e15c166-9ba1-40c9-8f33-db7f83733932,},Annotations:map[string]string{io.kuber
netes.container.hash: fdceac74,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d8c54a9eb6fd7f6b4514ddfa0201da9a28692806db71b7d277493e9f9b90233,PodSandboxId:abf4acd7dd09ff0cf728fe61b1bf3c73291eacc97a98f9f2d79b9fe49a629266,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714530889047946688,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-nwj5x,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0cfb5bda-fca7-479f-98d3-6be9bddf0e1c,},Annotations:map[string]string{io.kuberne
tes.container.hash: b50f62c1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:619f66869569c782fd59143653e79694cc74f9ae89927d4f16dfcb10f47d0e03,PodSandboxId:0fe93b95f6356e43c599c74942976a3c1c2025ad4e52377711767c38c88d3d63,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714530725115421538,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cfdqc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a37e982e-9e4f-43bf-b957-0d6f082f4ec8,},Annotations:map[string]string{io.kubernetes.container.hash: d10dbf58,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:693a12cd2b2c6de7a7c03e4d290e69cdcef9f4f17b75ff84f84fb81b8297cd63,PodSandboxId:1771f42c6abecb905d783dc2c43071807fc9138d9eae9432b2b132602a450cb2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714530725082845913,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-7db6d8ff4d-2h8lc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 937e09f0-6a7d-4387-aa19-ee959eb5a2a5,},Annotations:map[string]string{io.kubernetes.container.hash: 64cd0f7d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ab64850e34b66cbe91e099c91014051472b87888130297cd7c47a4a78992140,PodSandboxId:f6611da96d51a594f93865322c664539feca749ea5a59ee33b637aae006635ba,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf4
31fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714530722007160196,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-msshn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7575fbfc-11ce-4223-bd99-ff9cdddd3568,},Annotations:map[string]string{io.kubernetes.container.hash: 725b7ad5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3ffc6d046e214333ed85219819ad54bfa5c3ac0b10167beacf72edbd6035736,PodSandboxId:170d412885089b502fe3701809ec5f3271a9fd4f9181bf1c293cd527d144907b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929d
c8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714530701589505342,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-329926,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50544e7f95cae164184f9b27f78747c6,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f36a128ab65a069e1cafa5abf1b7774272b73428ea3826f595758d5e99b4e93,PodSandboxId:0c17dc8e917b3fee067acb3ff1b63beb8742f444a12d58ee96183b021052a9ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTA
INER_EXITED,CreatedAt:1714530701461361460,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-329926,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 684463257510837a1c150a7df713bf62,},Annotations:map[string]string{io.kubernetes.container.hash: d6da2a03,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0465502b-cbeb-4bf6-8a59-b1ec8ff49f2d name=/runtime.v1.RuntimeService/ListContainers
	May 01 02:49:03 ha-329926 crio[3818]: time="2024-05-01 02:49:03.216344303Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c346079d-75fa-44ea-8436-1a5ef2a0b010 name=/runtime.v1.RuntimeService/Version
	May 01 02:49:03 ha-329926 crio[3818]: time="2024-05-01 02:49:03.216465690Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c346079d-75fa-44ea-8436-1a5ef2a0b010 name=/runtime.v1.RuntimeService/Version
	May 01 02:49:03 ha-329926 crio[3818]: time="2024-05-01 02:49:03.218401555Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8861cd67-cff8-4a48-99fd-41839314cdad name=/runtime.v1.ImageService/ImageFsInfo
	May 01 02:49:03 ha-329926 crio[3818]: time="2024-05-01 02:49:03.219063488Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714531743219035794,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8861cd67-cff8-4a48-99fd-41839314cdad name=/runtime.v1.ImageService/ImageFsInfo
	May 01 02:49:03 ha-329926 crio[3818]: time="2024-05-01 02:49:03.220189824Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=abef4f8b-1ad8-4c1f-a3ed-63a2929b92a4 name=/runtime.v1.RuntimeService/ListContainers
	May 01 02:49:03 ha-329926 crio[3818]: time="2024-05-01 02:49:03.220249203Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=abef4f8b-1ad8-4c1f-a3ed-63a2929b92a4 name=/runtime.v1.RuntimeService/ListContainers
	May 01 02:49:03 ha-329926 crio[3818]: time="2024-05-01 02:49:03.220792314Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0b3a57c2195832ebe5e7032bc917987f944db828a8e5925d886ea10fb432f1ab,PodSandboxId:222f0baa90487bc1f8b94ec1b03db6568034bc14d984f1da1cd8b2ca7b480596,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714531488140417174,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 371423a6-a156-4e8d-bf66-812d606cc8d7,},Annotations:map[string]string{io.kubernetes.container.hash: 79218c39,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68c9b61e07b789bb1c441a33b66eeb07476719d85f4affe9c264e34bd73d8008,PodSandboxId:5c187637e4af957fa3481369cfb179dd7f043b24bd7436ca0fa0e524b347859a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714531481120651978,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-kcmp7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e15c166-9ba1-40c9-8f33-db7f83733932,},Annotations:map[string]string{io.kubernetes.container.hash: fdceac74,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8f3963bbca5a5d8ca96fd7bf715f2b551bcaf4380803b443a346bccff25655b,PodSandboxId:ec79d09460adcc3f5e74bae2063a9f639a550ab2e2282a391540873353174ac5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714531441122972737,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-329926,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c603c91fe09a36a9d3862475188142a,},Annotations:map[string]string{io.kubernetes.container.hash: 740b8b39,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edfe1ca8adcff5c57dd48a6e4e52f6129014ec43e797455a799c8abb8dddf9ad,PodSandboxId:639834849a63b0c5b6034ba67902819baf2c7bab7e147320753381a3fadfc9ef,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714531439122290070,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-329926,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d6c0ce9d370e02811c06c5c50fb7da1,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c768225fa391861ab12a75d643a4757892824fd20ef1294ba1e9b817cbe81f3,PodSandboxId:222f0baa90487bc1f8b94ec1b03db6568034bc14d984f1da1cd8b2ca7b480596,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714531436122032584,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 371423a6-a156-4e8d-bf66-812d606cc8d7,},Annotations:map[string]string{io.kubernetes.container.hash: 79218c39,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bb33341c42d847e3c54695cda601e3d3ee0fe3e95d113fabdb40a8b79ee00ac,PodSandboxId:7ec7a58f19ea4590ecf46d3c8faea8e7ab579c87ea9e31da82f9173c6e67e371,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714531429689100495,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-nwj5x,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0cfb5bda-fca7-479f-98d3-6be9bddf0e1c,},Annotations:map[string]string{io.kubernetes.container.hash: b50f62c1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45d5832e610c8d7398d46f4d0dc6929742b56ee579172c6289a7ebcedd229113,PodSandboxId:45ff3dc59db5afc538c97abf460cf706199fe452449ab01ed2f230cf7248cf45,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1714531411902616259,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-329926,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7e0afd8727d0417c20407d0e8880765,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:3b19253347b3c6a6483c37f96f8593931755f784d085f793c13e001ae0d76794,PodSandboxId:6bd635b17d9c0d981c6e2c3a943281df18c7780f9ff5380d3554dfb073340194,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714531396113172009,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-msshn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7575fbfc-11ce-4223-bd99-ff9cdddd3568,},Annotations:map[string]string{io.kubernetes.container.hash: 725b7ad5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4005dbd
94fbfed92cf6c4bb53e9b2208adb17302eabfa52ea663e83fa24fef7,PodSandboxId:15d9ac760b30773086acc72880e8a01cf304d780f29db72009315c976cb517ff,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714531396470001536,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cfdqc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a37e982e-9e4f-43bf-b957-0d6f082f4ec8,},Annotations:map[string]string{io.kubernetes.container.hash: d10dbf58,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d21d7f0022d1d1edaee4b7f45614bc8d98a407b0ba70c272d9fcdbc67fdba53,PodSandboxId:77261f211cf7416433533bfbdf670550fc76f5c15415fa7ad3d2c30a90d5c656,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714531396224192528,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2h8lc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 937e09f0-6a7d-4387-aa19-ee959eb5a2a5,},Annotations:map[string]string{io.kubernetes.container.hash: 64cd0f7d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a409ec7ab3a3389b868353ff5b180728bff4d9fd6e9ee235408658387a54e865,PodSandboxId:639834849a63b0c5b6034ba67902819baf2c7bab7e147320753381a3fadfc9ef,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714531396158062617,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-329926,io.kuber
netes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d6c0ce9d370e02811c06c5c50fb7da1,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc712ce5191365a1b74a4fb690a3fe1fca3ef109f0525a60d88ecff10b96a61b,PodSandboxId:33a2511848b79ed0b27f51b17c8e8d0380da02cb35f4dd0ab8930ed674b8a9e3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714531396049539667,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-329926,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 50544e7f95cae164184f9b27f78747c6,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:028fe64855c8942e704a66fc8d7d80db9662c05c5252b9ae01043eb95134a0a6,PodSandboxId:ec79d09460adcc3f5e74bae2063a9f639a550ab2e2282a391540873353174ac5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714531395925134025,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-329926,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 4c603c91fe09a36a9d3862475188142a,},Annotations:map[string]string{io.kubernetes.container.hash: 740b8b39,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4940e86ab3aeeda6fefe1a1c3eceee2908fbf5e3ebc1584761c2744b7a04e3e,PodSandboxId:27f30cfd71acf5bbb1ccf09c13482fbe21411ba6499ce9959099bd47c7ce537f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714531395866512593,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-329926,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 684463257510837a1c150a7df713bf62,},Ann
otations:map[string]string{io.kubernetes.container.hash: d6da2a03,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:042c499301cf07675ef97e386fb802a0684efc1ff197389bf8b7458ca853493f,PodSandboxId:5c187637e4af957fa3481369cfb179dd7f043b24bd7436ca0fa0e524b347859a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714531392201491789,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-kcmp7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e15c166-9ba1-40c9-8f33-db7f83733932,},Annotations:map[string]string{io.kuber
netes.container.hash: fdceac74,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d8c54a9eb6fd7f6b4514ddfa0201da9a28692806db71b7d277493e9f9b90233,PodSandboxId:abf4acd7dd09ff0cf728fe61b1bf3c73291eacc97a98f9f2d79b9fe49a629266,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714530889047946688,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-nwj5x,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0cfb5bda-fca7-479f-98d3-6be9bddf0e1c,},Annotations:map[string]string{io.kuberne
tes.container.hash: b50f62c1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:619f66869569c782fd59143653e79694cc74f9ae89927d4f16dfcb10f47d0e03,PodSandboxId:0fe93b95f6356e43c599c74942976a3c1c2025ad4e52377711767c38c88d3d63,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714530725115421538,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cfdqc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a37e982e-9e4f-43bf-b957-0d6f082f4ec8,},Annotations:map[string]string{io.kubernetes.container.hash: d10dbf58,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:693a12cd2b2c6de7a7c03e4d290e69cdcef9f4f17b75ff84f84fb81b8297cd63,PodSandboxId:1771f42c6abecb905d783dc2c43071807fc9138d9eae9432b2b132602a450cb2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714530725082845913,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-7db6d8ff4d-2h8lc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 937e09f0-6a7d-4387-aa19-ee959eb5a2a5,},Annotations:map[string]string{io.kubernetes.container.hash: 64cd0f7d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ab64850e34b66cbe91e099c91014051472b87888130297cd7c47a4a78992140,PodSandboxId:f6611da96d51a594f93865322c664539feca749ea5a59ee33b637aae006635ba,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf4
31fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714530722007160196,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-msshn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7575fbfc-11ce-4223-bd99-ff9cdddd3568,},Annotations:map[string]string{io.kubernetes.container.hash: 725b7ad5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3ffc6d046e214333ed85219819ad54bfa5c3ac0b10167beacf72edbd6035736,PodSandboxId:170d412885089b502fe3701809ec5f3271a9fd4f9181bf1c293cd527d144907b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929d
c8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714530701589505342,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-329926,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50544e7f95cae164184f9b27f78747c6,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f36a128ab65a069e1cafa5abf1b7774272b73428ea3826f595758d5e99b4e93,PodSandboxId:0c17dc8e917b3fee067acb3ff1b63beb8742f444a12d58ee96183b021052a9ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTA
INER_EXITED,CreatedAt:1714530701461361460,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-329926,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 684463257510837a1c150a7df713bf62,},Annotations:map[string]string{io.kubernetes.container.hash: d6da2a03,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=abef4f8b-1ad8-4c1f-a3ed-63a2929b92a4 name=/runtime.v1.RuntimeService/ListContainers
	May 01 02:49:03 ha-329926 crio[3818]: time="2024-05-01 02:49:03.272014031Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cf005815-aeda-430d-a0a0-5b778a1db271 name=/runtime.v1.RuntimeService/Version
	May 01 02:49:03 ha-329926 crio[3818]: time="2024-05-01 02:49:03.272083982Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cf005815-aeda-430d-a0a0-5b778a1db271 name=/runtime.v1.RuntimeService/Version
	May 01 02:49:03 ha-329926 crio[3818]: time="2024-05-01 02:49:03.273725045Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1d591069-d430-4ba4-a683-37f6c0914599 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 02:49:03 ha-329926 crio[3818]: time="2024-05-01 02:49:03.274148653Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714531743274127009,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1d591069-d430-4ba4-a683-37f6c0914599 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 02:49:03 ha-329926 crio[3818]: time="2024-05-01 02:49:03.274587750Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f3881df6-9eb1-445c-a09e-513bacc1598a name=/runtime.v1.RuntimeService/ListContainers
	May 01 02:49:03 ha-329926 crio[3818]: time="2024-05-01 02:49:03.274644954Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f3881df6-9eb1-445c-a09e-513bacc1598a name=/runtime.v1.RuntimeService/ListContainers
	May 01 02:49:03 ha-329926 crio[3818]: time="2024-05-01 02:49:03.275125962Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0b3a57c2195832ebe5e7032bc917987f944db828a8e5925d886ea10fb432f1ab,PodSandboxId:222f0baa90487bc1f8b94ec1b03db6568034bc14d984f1da1cd8b2ca7b480596,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714531488140417174,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 371423a6-a156-4e8d-bf66-812d606cc8d7,},Annotations:map[string]string{io.kubernetes.container.hash: 79218c39,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68c9b61e07b789bb1c441a33b66eeb07476719d85f4affe9c264e34bd73d8008,PodSandboxId:5c187637e4af957fa3481369cfb179dd7f043b24bd7436ca0fa0e524b347859a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714531481120651978,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-kcmp7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e15c166-9ba1-40c9-8f33-db7f83733932,},Annotations:map[string]string{io.kubernetes.container.hash: fdceac74,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8f3963bbca5a5d8ca96fd7bf715f2b551bcaf4380803b443a346bccff25655b,PodSandboxId:ec79d09460adcc3f5e74bae2063a9f639a550ab2e2282a391540873353174ac5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714531441122972737,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-329926,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c603c91fe09a36a9d3862475188142a,},Annotations:map[string]string{io.kubernetes.container.hash: 740b8b39,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edfe1ca8adcff5c57dd48a6e4e52f6129014ec43e797455a799c8abb8dddf9ad,PodSandboxId:639834849a63b0c5b6034ba67902819baf2c7bab7e147320753381a3fadfc9ef,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714531439122290070,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-329926,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d6c0ce9d370e02811c06c5c50fb7da1,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c768225fa391861ab12a75d643a4757892824fd20ef1294ba1e9b817cbe81f3,PodSandboxId:222f0baa90487bc1f8b94ec1b03db6568034bc14d984f1da1cd8b2ca7b480596,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714531436122032584,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 371423a6-a156-4e8d-bf66-812d606cc8d7,},Annotations:map[string]string{io.kubernetes.container.hash: 79218c39,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bb33341c42d847e3c54695cda601e3d3ee0fe3e95d113fabdb40a8b79ee00ac,PodSandboxId:7ec7a58f19ea4590ecf46d3c8faea8e7ab579c87ea9e31da82f9173c6e67e371,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714531429689100495,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-nwj5x,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0cfb5bda-fca7-479f-98d3-6be9bddf0e1c,},Annotations:map[string]string{io.kubernetes.container.hash: b50f62c1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45d5832e610c8d7398d46f4d0dc6929742b56ee579172c6289a7ebcedd229113,PodSandboxId:45ff3dc59db5afc538c97abf460cf706199fe452449ab01ed2f230cf7248cf45,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1714531411902616259,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-329926,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7e0afd8727d0417c20407d0e8880765,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:3b19253347b3c6a6483c37f96f8593931755f784d085f793c13e001ae0d76794,PodSandboxId:6bd635b17d9c0d981c6e2c3a943281df18c7780f9ff5380d3554dfb073340194,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714531396113172009,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-msshn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7575fbfc-11ce-4223-bd99-ff9cdddd3568,},Annotations:map[string]string{io.kubernetes.container.hash: 725b7ad5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4005dbd
94fbfed92cf6c4bb53e9b2208adb17302eabfa52ea663e83fa24fef7,PodSandboxId:15d9ac760b30773086acc72880e8a01cf304d780f29db72009315c976cb517ff,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714531396470001536,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cfdqc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a37e982e-9e4f-43bf-b957-0d6f082f4ec8,},Annotations:map[string]string{io.kubernetes.container.hash: d10dbf58,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d21d7f0022d1d1edaee4b7f45614bc8d98a407b0ba70c272d9fcdbc67fdba53,PodSandboxId:77261f211cf7416433533bfbdf670550fc76f5c15415fa7ad3d2c30a90d5c656,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714531396224192528,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2h8lc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 937e09f0-6a7d-4387-aa19-ee959eb5a2a5,},Annotations:map[string]string{io.kubernetes.container.hash: 64cd0f7d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a409ec7ab3a3389b868353ff5b180728bff4d9fd6e9ee235408658387a54e865,PodSandboxId:639834849a63b0c5b6034ba67902819baf2c7bab7e147320753381a3fadfc9ef,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714531396158062617,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-329926,io.kuber
netes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d6c0ce9d370e02811c06c5c50fb7da1,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc712ce5191365a1b74a4fb690a3fe1fca3ef109f0525a60d88ecff10b96a61b,PodSandboxId:33a2511848b79ed0b27f51b17c8e8d0380da02cb35f4dd0ab8930ed674b8a9e3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714531396049539667,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-329926,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 50544e7f95cae164184f9b27f78747c6,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:028fe64855c8942e704a66fc8d7d80db9662c05c5252b9ae01043eb95134a0a6,PodSandboxId:ec79d09460adcc3f5e74bae2063a9f639a550ab2e2282a391540873353174ac5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714531395925134025,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-329926,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 4c603c91fe09a36a9d3862475188142a,},Annotations:map[string]string{io.kubernetes.container.hash: 740b8b39,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4940e86ab3aeeda6fefe1a1c3eceee2908fbf5e3ebc1584761c2744b7a04e3e,PodSandboxId:27f30cfd71acf5bbb1ccf09c13482fbe21411ba6499ce9959099bd47c7ce537f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714531395866512593,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-329926,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 684463257510837a1c150a7df713bf62,},Ann
otations:map[string]string{io.kubernetes.container.hash: d6da2a03,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:042c499301cf07675ef97e386fb802a0684efc1ff197389bf8b7458ca853493f,PodSandboxId:5c187637e4af957fa3481369cfb179dd7f043b24bd7436ca0fa0e524b347859a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714531392201491789,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-kcmp7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e15c166-9ba1-40c9-8f33-db7f83733932,},Annotations:map[string]string{io.kuber
netes.container.hash: fdceac74,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d8c54a9eb6fd7f6b4514ddfa0201da9a28692806db71b7d277493e9f9b90233,PodSandboxId:abf4acd7dd09ff0cf728fe61b1bf3c73291eacc97a98f9f2d79b9fe49a629266,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714530889047946688,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-nwj5x,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0cfb5bda-fca7-479f-98d3-6be9bddf0e1c,},Annotations:map[string]string{io.kuberne
tes.container.hash: b50f62c1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:619f66869569c782fd59143653e79694cc74f9ae89927d4f16dfcb10f47d0e03,PodSandboxId:0fe93b95f6356e43c599c74942976a3c1c2025ad4e52377711767c38c88d3d63,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714530725115421538,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cfdqc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a37e982e-9e4f-43bf-b957-0d6f082f4ec8,},Annotations:map[string]string{io.kubernetes.container.hash: d10dbf58,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:693a12cd2b2c6de7a7c03e4d290e69cdcef9f4f17b75ff84f84fb81b8297cd63,PodSandboxId:1771f42c6abecb905d783dc2c43071807fc9138d9eae9432b2b132602a450cb2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714530725082845913,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-7db6d8ff4d-2h8lc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 937e09f0-6a7d-4387-aa19-ee959eb5a2a5,},Annotations:map[string]string{io.kubernetes.container.hash: 64cd0f7d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ab64850e34b66cbe91e099c91014051472b87888130297cd7c47a4a78992140,PodSandboxId:f6611da96d51a594f93865322c664539feca749ea5a59ee33b637aae006635ba,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf4
31fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714530722007160196,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-msshn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7575fbfc-11ce-4223-bd99-ff9cdddd3568,},Annotations:map[string]string{io.kubernetes.container.hash: 725b7ad5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3ffc6d046e214333ed85219819ad54bfa5c3ac0b10167beacf72edbd6035736,PodSandboxId:170d412885089b502fe3701809ec5f3271a9fd4f9181bf1c293cd527d144907b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929d
c8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714530701589505342,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-329926,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50544e7f95cae164184f9b27f78747c6,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f36a128ab65a069e1cafa5abf1b7774272b73428ea3826f595758d5e99b4e93,PodSandboxId:0c17dc8e917b3fee067acb3ff1b63beb8742f444a12d58ee96183b021052a9ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTA
INER_EXITED,CreatedAt:1714530701461361460,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-329926,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 684463257510837a1c150a7df713bf62,},Annotations:map[string]string{io.kubernetes.container.hash: d6da2a03,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f3881df6-9eb1-445c-a09e-513bacc1598a name=/runtime.v1.RuntimeService/ListContainers
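	The journal excerpt above is dominated by routine /runtime.v1.RuntimeService/ListContainers polling against CRI-O. For reference, a minimal Go sketch of that same RPC against the node's CRI socket (unix:///var/run/crio/crio.sock, as advertised in the node annotations further below) — the k8s.io/cri-api and google.golang.org/grpc modules and direct socket access are assumed; this is an illustration of the call being logged, not part of the test suite:

	// listcontainers.go - a minimal sketch of the ListContainers RPC that CRI-O
	// is answering in the journal above; assumes k8s.io/cri-api and
	// google.golang.org/grpc, plus access to the node's CRI socket.
	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Socket path taken from the kubeadm.alpha.kubernetes.io/cri-socket
		// annotation reported for node ha-329926.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// An empty filter returns every container, running or exited, which is
		// what the "No filters were applied" debug lines correspond to.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			log.Fatal(err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%-13.13s %-25s attempt=%d %s\n",
				c.Id, c.Metadata.Name, c.Metadata.Attempt, c.State)
		}
	}

	The truncated IDs and attempt counters printed this way line up with the "container status" table that follows.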
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0b3a57c219583       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       4                   222f0baa90487       storage-provisioner
	68c9b61e07b78       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      4 minutes ago       Running             kindnet-cni               3                   5c187637e4af9       kindnet-kcmp7
	c8f3963bbca5a       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      5 minutes ago       Running             kube-apiserver            3                   ec79d09460adc       kube-apiserver-ha-329926
	edfe1ca8adcff       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      5 minutes ago       Running             kube-controller-manager   2                   639834849a63b       kube-controller-manager-ha-329926
	6c768225fa391       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Exited              storage-provisioner       3                   222f0baa90487       storage-provisioner
	0bb33341c42d8       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      5 minutes ago       Running             busybox                   1                   7ec7a58f19ea4       busybox-fc5497c4f-nwj5x
	45d5832e610c8       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      5 minutes ago       Running             kube-vip                  0                   45ff3dc59db5a       kube-vip-ha-329926
	b4005dbd94fbf       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   15d9ac760b307       coredns-7db6d8ff4d-cfdqc
	2d21d7f0022d1       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   77261f211cf74       coredns-7db6d8ff4d-2h8lc
	a409ec7ab3a33       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      5 minutes ago       Exited              kube-controller-manager   1                   639834849a63b       kube-controller-manager-ha-329926
	3b19253347b3c       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      5 minutes ago       Running             kube-proxy                1                   6bd635b17d9c0       kube-proxy-msshn
	fc712ce519136       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      5 minutes ago       Running             kube-scheduler            1                   33a2511848b79       kube-scheduler-ha-329926
	028fe64855c89       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      5 minutes ago       Exited              kube-apiserver            2                   ec79d09460adc       kube-apiserver-ha-329926
	d4940e86ab3ae       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      5 minutes ago       Running             etcd                      1                   27f30cfd71acf       etcd-ha-329926
	042c499301cf0       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      5 minutes ago       Exited              kindnet-cni               2                   5c187637e4af9       kindnet-kcmp7
	4d8c54a9eb6fd       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   14 minutes ago      Exited              busybox                   0                   abf4acd7dd09f       busybox-fc5497c4f-nwj5x
	619f66869569c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      16 minutes ago      Exited              coredns                   0                   0fe93b95f6356       coredns-7db6d8ff4d-cfdqc
	693a12cd2b2c6       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      16 minutes ago      Exited              coredns                   0                   1771f42c6abec       coredns-7db6d8ff4d-2h8lc
	2ab64850e34b6       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      17 minutes ago      Exited              kube-proxy                0                   f6611da96d51a       kube-proxy-msshn
	e3ffc6d046e21       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      17 minutes ago      Exited              kube-scheduler            0                   170d412885089       kube-scheduler-ha-329926
	9f36a128ab65a       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      17 minutes ago      Exited              etcd                      0                   0c17dc8e917b3       etcd-ha-329926
	
	
	==> coredns [2d21d7f0022d1d1edaee4b7f45614bc8d98a407b0ba70c272d9fcdbc67fdba53] <==
	[INFO] plugin/kubernetes: Trace[1392783372]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (01-May-2024 02:43:28.125) (total time: 10185ms):
	Trace[1392783372]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:58212->10.96.0.1:443: read: connection reset by peer 10184ms (02:43:38.309)
	Trace[1392783372]: [10.185012457s] [10.185012457s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:58212->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
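	The failures above are one symptom repeated: this CoreDNS instance cannot reach the kubernetes Service VIP at 10.96.0.1:443, alternating between "connection refused" and "no route to host" while the kube-apiserver container is being restarted (compare the exited kube-apiserver attempts in the container list above). A small Go sketch of that reachability probe, run from a pod on the cluster network — the address and timeout come from the log lines, everything else is illustrative:

	// probe.go - a sketch of the check CoreDNS is effectively failing above:
	// a TCP dial to the kubernetes Service VIP, distinguishing the two error
	// modes seen in the log.
	package main

	import (
		"errors"
		"fmt"
		"net"
		"os"
		"syscall"
		"time"
	)

	func main() {
		const target = "10.96.0.1:443" // kubernetes Service VIP from the CoreDNS errors

		conn, err := net.DialTimeout("tcp", target, 3*time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("apiserver VIP reachable")
			return
		}

		// ECONNREFUSED: the VIP routes but nothing is listening (apiserver down).
		// EHOSTUNREACH ("no route to host"): typically kube-proxy/iptables or CNI.
		switch {
		case errors.Is(err, syscall.ECONNREFUSED):
			fmt.Println("connection refused:", err)
		case errors.Is(err, syscall.EHOSTUNREACH):
			fmt.Println("no route to host:", err)
		default:
			fmt.Println("dial failed:", err)
		}
		os.Exit(1)
	}

	Once the apiserver's third attempt comes up, the "Still waiting on: kubernetes" readiness messages stop and the plugin's watches recover, which matches the later CoreDNS blocks below.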
	
	
	==> coredns [619f66869569c782fd59143653e79694cc74f9ae89927d4f16dfcb10f47d0e03] <==
	[INFO] 10.244.1.2:38209 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000174169s
	[INFO] 10.244.1.2:49411 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000226927s
	[INFO] 10.244.0.4:36823 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000251251s
	[INFO] 10.244.0.4:50159 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001217267s
	[INFO] 10.244.0.4:40861 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000095644s
	[INFO] 10.244.0.4:39347 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000037736s
	[INFO] 10.244.2.2:41105 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000265426s
	[INFO] 10.244.2.2:60245 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000092358s
	[INFO] 10.244.2.2:33866 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00027339s
	[INFO] 10.244.2.2:40430 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000118178s
	[INFO] 10.244.2.2:34835 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000101675s
	[INFO] 10.244.1.2:50970 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000173405s
	[INFO] 10.244.1.2:45808 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000138806s
	[INFO] 10.244.0.4:35255 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000156547s
	[INFO] 10.244.0.4:41916 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000142712s
	[INFO] 10.244.0.4:47485 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000089433s
	[INFO] 10.244.2.2:53686 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000133335s
	[INFO] 10.244.2.2:36841 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000214942s
	[INFO] 10.244.2.2:60707 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000154s
	[INFO] 10.244.1.2:56577 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000484498s
	[INFO] 10.244.0.4:54313 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000184738s
	[INFO] 10.244.0.4:52463 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000369344s
	[INFO] 10.244.2.2:41039 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000224698s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [693a12cd2b2c6de7a7c03e4d290e69cdcef9f4f17b75ff84f84fb81b8297cd63] <==
	[INFO] 10.244.1.2:60518 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00017936s
	[INFO] 10.244.0.4:49957 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000203599s
	[INFO] 10.244.0.4:42538 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001710693s
	[INFO] 10.244.0.4:56099 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000083655s
	[INFO] 10.244.0.4:32984 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000156518s
	[INFO] 10.244.2.2:55668 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001793326s
	[INFO] 10.244.2.2:50808 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001174633s
	[INFO] 10.244.2.2:44291 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000119382s
	[INFO] 10.244.1.2:38278 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000204436s
	[INFO] 10.244.1.2:59141 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000117309s
	[INFO] 10.244.0.4:37516 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00005532s
	[INFO] 10.244.2.2:57332 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000189855s
	[INFO] 10.244.1.2:34171 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00024042s
	[INFO] 10.244.1.2:37491 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000234774s
	[INFO] 10.244.1.2:47588 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000815872s
	[INFO] 10.244.0.4:38552 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000135078s
	[INFO] 10.244.0.4:37827 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000154857s
	[INFO] 10.244.2.2:47767 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000154967s
	[INFO] 10.244.2.2:56393 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000156764s
	[INFO] 10.244.2.2:38616 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000127045s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	
	
	==> coredns [b4005dbd94fbfed92cf6c4bb53e9b2208adb17302eabfa52ea663e83fa24fef7] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:36528->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:49274->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:49274->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:49260->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1398911808]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (01-May-2024 02:43:28.488) (total time: 12236ms):
	Trace[1398911808]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:49260->10.96.0.1:443: read: connection reset by peer 12235ms (02:43:40.724)
	Trace[1398911808]: [12.2361642s] [12.2361642s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:49260->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-329926
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-329926
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e
	                    minikube.k8s.io/name=ha-329926
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_01T02_31_49_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 May 2024 02:31:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-329926
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 May 2024 02:48:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 May 2024 02:43:59 +0000   Wed, 01 May 2024 02:31:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 May 2024 02:43:59 +0000   Wed, 01 May 2024 02:31:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 May 2024 02:43:59 +0000   Wed, 01 May 2024 02:31:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 May 2024 02:43:59 +0000   Wed, 01 May 2024 02:32:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.5
	  Hostname:    ha-329926
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f2958e1e59474320901fe20ba723db00
	  System UUID:                f2958e1e-5947-4320-901f-e20ba723db00
	  Boot ID:                    29fc4c0c-83d6-4af9-8767-4e1b7b7102d4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-nwj5x              0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 coredns-7db6d8ff4d-2h8lc             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     17m
	  kube-system                 coredns-7db6d8ff4d-cfdqc             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     17m
	  kube-system                 etcd-ha-329926                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         17m
	  kube-system                 kindnet-kcmp7                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      17m
	  kube-system                 kube-apiserver-ha-329926             250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-ha-329926    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-msshn                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-ha-329926             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-vip-ha-329926                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 17m                    kube-proxy       
	  Normal   Starting                 5m4s                   kube-proxy       
	  Normal   NodeHasNoDiskPressure    17m                    kubelet          Node ha-329926 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 17m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  17m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  17m                    kubelet          Node ha-329926 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     17m                    kubelet          Node ha-329926 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           17m                    node-controller  Node ha-329926 event: Registered Node ha-329926 in Controller
	  Normal   NodeReady                17m                    kubelet          Node ha-329926 status is now: NodeReady
	  Normal   RegisteredNode           15m                    node-controller  Node ha-329926 event: Registered Node ha-329926 in Controller
	  Normal   RegisteredNode           14m                    node-controller  Node ha-329926 event: Registered Node ha-329926 in Controller
	  Warning  ContainerGCFailed        6m15s (x2 over 7m15s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           5m1s                   node-controller  Node ha-329926 event: Registered Node ha-329926 in Controller
	  Normal   RegisteredNode           4m48s                  node-controller  Node ha-329926 event: Registered Node ha-329926 in Controller
	  Normal   RegisteredNode           3m11s                  node-controller  Node ha-329926 event: Registered Node ha-329926 in Controller
	
	
	Name:               ha-329926-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-329926-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e
	                    minikube.k8s.io/name=ha-329926
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_01T02_33_11_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 May 2024 02:33:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-329926-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 May 2024 02:48:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 May 2024 02:44:41 +0000   Wed, 01 May 2024 02:44:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 May 2024 02:44:41 +0000   Wed, 01 May 2024 02:44:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 May 2024 02:44:41 +0000   Wed, 01 May 2024 02:44:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 May 2024 02:44:41 +0000   Wed, 01 May 2024 02:44:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.79
	  Hostname:    ha-329926-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 135aac161d694487846d436743753149
	  System UUID:                135aac16-1d69-4487-846d-436743753149
	  Boot ID:                    fcfabe85-9cad-4538-b8cf-2825508a7ab0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-h8dxv                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 etcd-ha-329926-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-9r8zn                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-329926-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-329926-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-rfsm8                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-329926-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-329926-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m37s                  kube-proxy       
	  Normal  Starting                 15m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)      kubelet          Node ha-329926-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)      kubelet          Node ha-329926-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)      kubelet          Node ha-329926-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m                    node-controller  Node ha-329926-m02 event: Registered Node ha-329926-m02 in Controller
	  Normal  RegisteredNode           15m                    node-controller  Node ha-329926-m02 event: Registered Node ha-329926-m02 in Controller
	  Normal  RegisteredNode           14m                    node-controller  Node ha-329926-m02 event: Registered Node ha-329926-m02 in Controller
	  Normal  NodeNotReady             12m                    node-controller  Node ha-329926-m02 status is now: NodeNotReady
	  Normal  Starting                 5m31s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m31s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m30s (x8 over 5m31s)  kubelet          Node ha-329926-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m30s (x8 over 5m31s)  kubelet          Node ha-329926-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m30s (x7 over 5m31s)  kubelet          Node ha-329926-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m1s                   node-controller  Node ha-329926-m02 event: Registered Node ha-329926-m02 in Controller
	  Normal  RegisteredNode           4m48s                  node-controller  Node ha-329926-m02 event: Registered Node ha-329926-m02 in Controller
	  Normal  RegisteredNode           3m11s                  node-controller  Node ha-329926-m02 event: Registered Node ha-329926-m02 in Controller
	
	
	Name:               ha-329926-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-329926-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e
	                    minikube.k8s.io/name=ha-329926
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_01T02_35_25_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 May 2024 02:35:24 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-329926-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 May 2024 02:46:34 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 01 May 2024 02:46:14 +0000   Wed, 01 May 2024 02:47:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 01 May 2024 02:46:14 +0000   Wed, 01 May 2024 02:47:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 01 May 2024 02:46:14 +0000   Wed, 01 May 2024 02:47:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 01 May 2024 02:46:14 +0000   Wed, 01 May 2024 02:47:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.84
	  Hostname:    ha-329926-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b19ce422aa224cda91e88f6cd8b003f9
	  System UUID:                b19ce422-aa22-4cda-91e8-8f6cd8b003f9
	  Boot ID:                    6ddbb389-e2ec-49d2-a7e2-c9728da82050
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-9f722    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 kindnet-86ngt              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-9492r           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   Starting                 2m45s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  13m (x2 over 13m)      kubelet          Node ha-329926-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x2 over 13m)      kubelet          Node ha-329926-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x2 over 13m)      kubelet          Node ha-329926-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           13m                    node-controller  Node ha-329926-m04 event: Registered Node ha-329926-m04 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-329926-m04 event: Registered Node ha-329926-m04 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-329926-m04 event: Registered Node ha-329926-m04 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-329926-m04 status is now: NodeReady
	  Normal   RegisteredNode           5m                     node-controller  Node ha-329926-m04 event: Registered Node ha-329926-m04 in Controller
	  Normal   RegisteredNode           4m48s                  node-controller  Node ha-329926-m04 event: Registered Node ha-329926-m04 in Controller
	  Normal   NodeNotReady             4m20s                  node-controller  Node ha-329926-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           3m11s                  node-controller  Node ha-329926-m04 event: Registered Node ha-329926-m04 in Controller
	  Normal   Starting                 2m49s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2m49s                  kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 2m49s (x2 over 2m49s)  kubelet          Node ha-329926-m04 has been rebooted, boot id: 6ddbb389-e2ec-49d2-a7e2-c9728da82050
	  Normal   NodeHasSufficientMemory  2m49s (x3 over 2m49s)  kubelet          Node ha-329926-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m49s (x3 over 2m49s)  kubelet          Node ha-329926-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m49s (x3 over 2m49s)  kubelet          Node ha-329926-m04 status is now: NodeHasSufficientPID
	  Normal   NodeNotReady             2m49s                  kubelet          Node ha-329926-m04 status is now: NodeNotReady
	  Normal   NodeReady                2m49s                  kubelet          Node ha-329926-m04 status is now: NodeReady
	  Normal   NodeNotReady             108s                   node-controller  Node ha-329926-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.059078] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.050190] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[  +0.172804] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.147592] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.297725] systemd-fstab-generator[671]: Ignoring "noauto" option for root device
	[  +4.784571] systemd-fstab-generator[771]: Ignoring "noauto" option for root device
	[  +0.063787] kauditd_printk_skb: 130 callbacks suppressed
	[  +5.533501] systemd-fstab-generator[961]: Ignoring "noauto" option for root device
	[  +0.060916] kauditd_printk_skb: 18 callbacks suppressed
	[  +7.479829] systemd-fstab-generator[1381]: Ignoring "noauto" option for root device
	[  +0.092024] kauditd_printk_skb: 79 callbacks suppressed
	[May 1 02:32] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.650154] kauditd_printk_skb: 74 callbacks suppressed
	[May 1 02:42] systemd-fstab-generator[3736]: Ignoring "noauto" option for root device
	[  +0.159126] systemd-fstab-generator[3748]: Ignoring "noauto" option for root device
	[  +0.187836] systemd-fstab-generator[3762]: Ignoring "noauto" option for root device
	[  +0.169864] systemd-fstab-generator[3774]: Ignoring "noauto" option for root device
	[  +0.322099] systemd-fstab-generator[3802]: Ignoring "noauto" option for root device
	[May 1 02:43] systemd-fstab-generator[3905]: Ignoring "noauto" option for root device
	[  +0.090449] kauditd_printk_skb: 100 callbacks suppressed
	[  +5.633568] kauditd_printk_skb: 22 callbacks suppressed
	[ +12.620460] kauditd_printk_skb: 75 callbacks suppressed
	[ +10.059902] kauditd_printk_skb: 1 callbacks suppressed
	[ +17.995655] kauditd_printk_skb: 5 callbacks suppressed
	[May 1 02:44] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [9f36a128ab65a069e1cafa5abf1b7774272b73428ea3826f595758d5e99b4e93] <==
	{"level":"info","ts":"2024-05-01T02:41:26.853995Z","caller":"traceutil/trace.go:171","msg":"trace[1798900376] range","detail":"{range_begin:/registry/ingressclasses/; range_end:/registry/ingressclasses0; }","duration":"7.944992517s","start":"2024-05-01T02:41:18.908998Z","end":"2024-05-01T02:41:26.853991Z","steps":["trace[1798900376] 'agreement among raft nodes before linearized reading'  (duration: 7.944988995s)"],"step_count":1}
	2024/05/01 02:41:26 WARNING: [core] [Server #5] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2024-05-01T02:41:26.854093Z","caller":"traceutil/trace.go:171","msg":"trace[184740758] range","detail":"{range_begin:/registry/horizontalpodautoscalers/; range_end:/registry/horizontalpodautoscalers0; }","duration":"8.03083736s","start":"2024-05-01T02:41:18.823253Z","end":"2024-05-01T02:41:26.85409Z","steps":["trace[184740758] 'agreement among raft nodes before linearized reading'  (duration: 8.008399377s)"],"step_count":1}
	2024/05/01 02:41:26 WARNING: [core] [Server #5] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-05-01T02:41:26.900352Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":154124257143701688,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-05-01T02:41:26.914809Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.5:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-01T02:41:26.914877Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.5:2379: use of closed network connection"}
	{"level":"info","ts":"2024-05-01T02:41:26.914964Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"c5263387c79c0223","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-05-01T02:41:26.915152Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"64dbb1bdcfddc92c"}
	{"level":"info","ts":"2024-05-01T02:41:26.915207Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"64dbb1bdcfddc92c"}
	{"level":"info","ts":"2024-05-01T02:41:26.91523Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"64dbb1bdcfddc92c"}
	{"level":"info","ts":"2024-05-01T02:41:26.915329Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"c5263387c79c0223","remote-peer-id":"64dbb1bdcfddc92c"}
	{"level":"info","ts":"2024-05-01T02:41:26.915396Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"c5263387c79c0223","remote-peer-id":"64dbb1bdcfddc92c"}
	{"level":"info","ts":"2024-05-01T02:41:26.915428Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"c5263387c79c0223","remote-peer-id":"64dbb1bdcfddc92c"}
	{"level":"info","ts":"2024-05-01T02:41:26.915438Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"64dbb1bdcfddc92c"}
	{"level":"info","ts":"2024-05-01T02:41:26.915444Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"d9d7aed2183d5ca6"}
	{"level":"info","ts":"2024-05-01T02:41:26.915452Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"d9d7aed2183d5ca6"}
	{"level":"info","ts":"2024-05-01T02:41:26.915494Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"d9d7aed2183d5ca6"}
	{"level":"info","ts":"2024-05-01T02:41:26.915565Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"c5263387c79c0223","remote-peer-id":"d9d7aed2183d5ca6"}
	{"level":"info","ts":"2024-05-01T02:41:26.915622Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"c5263387c79c0223","remote-peer-id":"d9d7aed2183d5ca6"}
	{"level":"info","ts":"2024-05-01T02:41:26.915651Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"c5263387c79c0223","remote-peer-id":"d9d7aed2183d5ca6"}
	{"level":"info","ts":"2024-05-01T02:41:26.915753Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"d9d7aed2183d5ca6"}
	{"level":"info","ts":"2024-05-01T02:41:26.919307Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.5:2380"}
	{"level":"info","ts":"2024-05-01T02:41:26.919485Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.5:2380"}
	{"level":"info","ts":"2024-05-01T02:41:26.919524Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-329926","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.5:2380"],"advertise-client-urls":["https://192.168.39.5:2379"]}
	
	
	==> etcd [d4940e86ab3aeeda6fefe1a1c3eceee2908fbf5e3ebc1584761c2744b7a04e3e] <==
	{"level":"info","ts":"2024-05-01T02:45:34.888247Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"c5263387c79c0223","remote-peer-id":"d9d7aed2183d5ca6"}
	{"level":"info","ts":"2024-05-01T02:45:34.915451Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"c5263387c79c0223","to":"d9d7aed2183d5ca6","stream-type":"stream Message"}
	{"level":"info","ts":"2024-05-01T02:45:34.915591Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"c5263387c79c0223","remote-peer-id":"d9d7aed2183d5ca6"}
	{"level":"info","ts":"2024-05-01T02:45:34.927127Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"c5263387c79c0223","remote-peer-id":"d9d7aed2183d5ca6"}
	{"level":"warn","ts":"2024-05-01T02:45:34.939281Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.115:43130","server-name":"","error":"EOF"}
	{"level":"info","ts":"2024-05-01T02:45:34.940791Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"c5263387c79c0223","remote-peer-id":"d9d7aed2183d5ca6"}
	{"level":"info","ts":"2024-05-01T02:45:45.085323Z","caller":"traceutil/trace.go:171","msg":"trace[1669874535] transaction","detail":"{read_only:false; response_revision:2526; number_of_response:1; }","duration":"155.393831ms","start":"2024-05-01T02:45:44.929886Z","end":"2024-05-01T02:45:45.085279Z","steps":["trace[1669874535] 'process raft request'  (duration: 155.246153ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-01T02:46:29.032236Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c5263387c79c0223 switched to configuration voters=(7267597852486781228 14206098732849300003)"}
	{"level":"info","ts":"2024-05-01T02:46:29.034617Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"436188ec3031a10e","local-member-id":"c5263387c79c0223","removed-remote-peer-id":"d9d7aed2183d5ca6","removed-remote-peer-urls":["https://192.168.39.115:2380"]}
	{"level":"info","ts":"2024-05-01T02:46:29.034774Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"d9d7aed2183d5ca6"}
	{"level":"warn","ts":"2024-05-01T02:46:29.036424Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"d9d7aed2183d5ca6"}
	{"level":"info","ts":"2024-05-01T02:46:29.036502Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"d9d7aed2183d5ca6"}
	{"level":"warn","ts":"2024-05-01T02:46:29.036634Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"d9d7aed2183d5ca6"}
	{"level":"info","ts":"2024-05-01T02:46:29.036752Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"d9d7aed2183d5ca6"}
	{"level":"info","ts":"2024-05-01T02:46:29.037061Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"c5263387c79c0223","remote-peer-id":"d9d7aed2183d5ca6"}
	{"level":"warn","ts":"2024-05-01T02:46:29.037323Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"c5263387c79c0223","remote-peer-id":"d9d7aed2183d5ca6","error":"context canceled"}
	{"level":"warn","ts":"2024-05-01T02:46:29.037439Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"d9d7aed2183d5ca6","error":"failed to read d9d7aed2183d5ca6 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-05-01T02:46:29.037479Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"c5263387c79c0223","remote-peer-id":"d9d7aed2183d5ca6"}
	{"level":"warn","ts":"2024-05-01T02:46:29.04345Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"c5263387c79c0223","remote-peer-id":"d9d7aed2183d5ca6","error":"http: read on closed response body"}
	{"level":"info","ts":"2024-05-01T02:46:29.043553Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"c5263387c79c0223","remote-peer-id":"d9d7aed2183d5ca6"}
	{"level":"info","ts":"2024-05-01T02:46:29.043575Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"d9d7aed2183d5ca6"}
	{"level":"info","ts":"2024-05-01T02:46:29.043589Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"c5263387c79c0223","removed-remote-peer-id":"d9d7aed2183d5ca6"}
	{"level":"info","ts":"2024-05-01T02:46:29.043641Z","caller":"etcdserver/server.go:1946","msg":"applied a configuration change through raft","local-member-id":"c5263387c79c0223","raft-conf-change":"ConfChangeRemoveNode","raft-conf-change-node-id":"d9d7aed2183d5ca6"}
	{"level":"warn","ts":"2024-05-01T02:46:29.052551Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"c5263387c79c0223","remote-peer-id-stream-handler":"c5263387c79c0223","remote-peer-id-from":"d9d7aed2183d5ca6"}
	{"level":"warn","ts":"2024-05-01T02:46:29.066451Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.115:56876","server-name":"","error":"read tcp 192.168.39.5:2380->192.168.39.115:56876: read: connection reset by peer"}
	
	
	==> kernel <==
	 02:49:04 up 17 min,  0 users,  load average: 0.18, 0.31, 0.31
	Linux ha-329926 5.10.207 #1 SMP Tue Apr 30 22:38:43 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [042c499301cf07675ef97e386fb802a0684efc1ff197389bf8b7458ca853493f] <==
	I0501 02:43:12.755398       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0501 02:43:12.755491       1 main.go:107] hostIP = 192.168.39.5
	podIP = 192.168.39.5
	I0501 02:43:12.755776       1 main.go:116] setting mtu 1500 for CNI 
	I0501 02:43:12.755825       1 main.go:146] kindnetd IP family: "ipv4"
	I0501 02:43:12.755847       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0501 02:43:16.150932       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0501 02:43:16.151255       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0501 02:43:27.155983       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": net/http: TLS handshake timeout
	I0501 02:43:40.724541       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 192.168.122.184:59676->10.96.0.1:443: read: connection reset by peer
	I0501 02:43:43.796893       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:195 +0xd3d
	
	
	==> kindnet [68c9b61e07b789bb1c441a33b66eeb07476719d85f4affe9c264e34bd73d8008] <==
	I0501 02:48:22.689874       1 main.go:250] Node ha-329926-m04 has CIDR [10.244.3.0/24] 
	I0501 02:48:32.699199       1 main.go:223] Handling node with IPs: map[192.168.39.5:{}]
	I0501 02:48:32.699282       1 main.go:227] handling current node
	I0501 02:48:32.699305       1 main.go:223] Handling node with IPs: map[192.168.39.79:{}]
	I0501 02:48:32.699354       1 main.go:250] Node ha-329926-m02 has CIDR [10.244.1.0/24] 
	I0501 02:48:32.699506       1 main.go:223] Handling node with IPs: map[192.168.39.84:{}]
	I0501 02:48:32.699531       1 main.go:250] Node ha-329926-m04 has CIDR [10.244.3.0/24] 
	I0501 02:48:42.708652       1 main.go:223] Handling node with IPs: map[192.168.39.5:{}]
	I0501 02:48:42.708796       1 main.go:227] handling current node
	I0501 02:48:42.708828       1 main.go:223] Handling node with IPs: map[192.168.39.79:{}]
	I0501 02:48:42.708846       1 main.go:250] Node ha-329926-m02 has CIDR [10.244.1.0/24] 
	I0501 02:48:42.709030       1 main.go:223] Handling node with IPs: map[192.168.39.84:{}]
	I0501 02:48:42.709073       1 main.go:250] Node ha-329926-m04 has CIDR [10.244.3.0/24] 
	I0501 02:48:52.723319       1 main.go:223] Handling node with IPs: map[192.168.39.5:{}]
	I0501 02:48:52.723409       1 main.go:227] handling current node
	I0501 02:48:52.723433       1 main.go:223] Handling node with IPs: map[192.168.39.79:{}]
	I0501 02:48:52.723455       1 main.go:250] Node ha-329926-m02 has CIDR [10.244.1.0/24] 
	I0501 02:48:52.723564       1 main.go:223] Handling node with IPs: map[192.168.39.84:{}]
	I0501 02:48:52.723584       1 main.go:250] Node ha-329926-m04 has CIDR [10.244.3.0/24] 
	I0501 02:49:02.732807       1 main.go:223] Handling node with IPs: map[192.168.39.5:{}]
	I0501 02:49:02.732898       1 main.go:227] handling current node
	I0501 02:49:02.732927       1 main.go:223] Handling node with IPs: map[192.168.39.79:{}]
	I0501 02:49:02.732950       1 main.go:250] Node ha-329926-m02 has CIDR [10.244.1.0/24] 
	I0501 02:49:02.733221       1 main.go:223] Handling node with IPs: map[192.168.39.84:{}]
	I0501 02:49:02.733253       1 main.go:250] Node ha-329926-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [028fe64855c8942e704a66fc8d7d80db9662c05c5252b9ae01043eb95134a0a6] <==
	I0501 02:43:16.759959       1 options.go:221] external host was not specified, using 192.168.39.5
	I0501 02:43:16.761150       1 server.go:148] Version: v1.30.0
	I0501 02:43:16.761205       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 02:43:17.277407       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0501 02:43:17.295422       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0501 02:43:17.297749       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0501 02:43:17.298565       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0501 02:43:17.298855       1 instance.go:299] Using reconciler: lease
	W0501 02:43:37.270800       1 logging.go:59] [core] [Channel #1 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0501 02:43:37.278266       1 logging.go:59] [core] [Channel #2 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0501 02:43:37.303106       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0501 02:43:37.303114       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [c8f3963bbca5a5d8ca96fd7bf715f2b551bcaf4380803b443a346bccff25655b] <==
	I0501 02:44:03.284302       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0501 02:44:03.284929       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0501 02:44:03.286205       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0501 02:44:03.276058       1 available_controller.go:423] Starting AvailableConditionController
	I0501 02:44:03.287970       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0501 02:44:03.379527       1 shared_informer.go:320] Caches are synced for configmaps
	I0501 02:44:03.384276       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0501 02:44:03.384504       1 policy_source.go:224] refreshing policies
	I0501 02:44:03.388182       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0501 02:44:03.389215       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0501 02:44:03.389973       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0501 02:44:03.393950       1 aggregator.go:165] initial CRD sync complete...
	I0501 02:44:03.394016       1 autoregister_controller.go:141] Starting autoregister controller
	I0501 02:44:03.394042       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0501 02:44:03.394067       1 cache.go:39] Caches are synced for autoregister controller
	I0501 02:44:03.472614       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0501 02:44:03.472761       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0501 02:44:03.473456       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0501 02:44:03.476144       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0501 02:44:03.478606       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0501 02:44:03.482541       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0501 02:44:04.284362       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0501 02:44:04.931098       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.5 192.168.39.79]
	I0501 02:44:04.932629       1 controller.go:615] quota admission added evaluator for: endpoints
	I0501 02:44:04.941518       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [a409ec7ab3a3389b868353ff5b180728bff4d9fd6e9ee235408658387a54e865] <==
	I0501 02:43:17.430540       1 serving.go:380] Generated self-signed cert in-memory
	I0501 02:43:17.668047       1 controllermanager.go:189] "Starting" version="v1.30.0"
	I0501 02:43:17.669744       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 02:43:17.671414       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0501 02:43:17.673261       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0501 02:43:17.673277       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0501 02:43:17.673289       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0501 02:43:38.309421       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.5:8443/healthz\": dial tcp 192.168.39.5:8443: connect: connection refused"
	
	
	==> kube-controller-manager [edfe1ca8adcff5c57dd48a6e4e52f6129014ec43e797455a799c8abb8dddf9ad] <==
	I0501 02:46:25.784756       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="128.555564ms"
	I0501 02:46:25.833279       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="47.642187ms"
	I0501 02:46:25.853242       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.900701ms"
	I0501 02:46:25.853604       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.498µs"
	I0501 02:46:25.913263       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.631028ms"
	I0501 02:46:25.913426       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.907µs"
	I0501 02:46:27.875321       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="66.789µs"
	I0501 02:46:28.222152       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="153.786µs"
	I0501 02:46:28.244267       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="67.349µs"
	I0501 02:46:28.250737       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="126.254µs"
	I0501 02:46:29.933372       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.752555ms"
	I0501 02:46:29.934030       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="365.578µs"
	I0501 02:46:40.921914       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-329926-m04"
	E0501 02:46:55.500732       1 gc_controller.go:153] "Failed to get node" err="node \"ha-329926-m03\" not found" logger="pod-garbage-collector-controller" node="ha-329926-m03"
	E0501 02:46:55.500763       1 gc_controller.go:153] "Failed to get node" err="node \"ha-329926-m03\" not found" logger="pod-garbage-collector-controller" node="ha-329926-m03"
	E0501 02:46:55.500770       1 gc_controller.go:153] "Failed to get node" err="node \"ha-329926-m03\" not found" logger="pod-garbage-collector-controller" node="ha-329926-m03"
	E0501 02:46:55.500775       1 gc_controller.go:153] "Failed to get node" err="node \"ha-329926-m03\" not found" logger="pod-garbage-collector-controller" node="ha-329926-m03"
	E0501 02:46:55.500781       1 gc_controller.go:153] "Failed to get node" err="node \"ha-329926-m03\" not found" logger="pod-garbage-collector-controller" node="ha-329926-m03"
	E0501 02:47:15.501906       1 gc_controller.go:153] "Failed to get node" err="node \"ha-329926-m03\" not found" logger="pod-garbage-collector-controller" node="ha-329926-m03"
	E0501 02:47:15.502047       1 gc_controller.go:153] "Failed to get node" err="node \"ha-329926-m03\" not found" logger="pod-garbage-collector-controller" node="ha-329926-m03"
	E0501 02:47:15.502075       1 gc_controller.go:153] "Failed to get node" err="node \"ha-329926-m03\" not found" logger="pod-garbage-collector-controller" node="ha-329926-m03"
	E0501 02:47:15.502114       1 gc_controller.go:153] "Failed to get node" err="node \"ha-329926-m03\" not found" logger="pod-garbage-collector-controller" node="ha-329926-m03"
	E0501 02:47:15.502138       1 gc_controller.go:153] "Failed to get node" err="node \"ha-329926-m03\" not found" logger="pod-garbage-collector-controller" node="ha-329926-m03"
	I0501 02:47:15.632207       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="92.719618ms"
	I0501 02:47:15.632346       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.89µs"
	
	
	==> kube-proxy [2ab64850e34b66cbe91e099c91014051472b87888130297cd7c47a4a78992140] <==
	E0501 02:40:21.302633       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1811": dial tcp 192.168.39.254:8443: connect: no route to host
	W0501 02:40:24.372540       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-329926&resourceVersion=1867": dial tcp 192.168.39.254:8443: connect: no route to host
	E0501 02:40:24.372608       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-329926&resourceVersion=1867": dial tcp 192.168.39.254:8443: connect: no route to host
	W0501 02:40:24.372747       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1811": dial tcp 192.168.39.254:8443: connect: no route to host
	E0501 02:40:24.372794       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1811": dial tcp 192.168.39.254:8443: connect: no route to host
	W0501 02:40:24.372851       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1834": dial tcp 192.168.39.254:8443: connect: no route to host
	E0501 02:40:24.372897       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1834": dial tcp 192.168.39.254:8443: connect: no route to host
	W0501 02:40:30.517127       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1834": dial tcp 192.168.39.254:8443: connect: no route to host
	E0501 02:40:30.517203       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1834": dial tcp 192.168.39.254:8443: connect: no route to host
	W0501 02:40:30.517286       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-329926&resourceVersion=1867": dial tcp 192.168.39.254:8443: connect: no route to host
	E0501 02:40:30.517336       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-329926&resourceVersion=1867": dial tcp 192.168.39.254:8443: connect: no route to host
	W0501 02:40:30.517308       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1811": dial tcp 192.168.39.254:8443: connect: no route to host
	E0501 02:40:30.517404       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1811": dial tcp 192.168.39.254:8443: connect: no route to host
	W0501 02:40:39.732596       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-329926&resourceVersion=1867": dial tcp 192.168.39.254:8443: connect: no route to host
	E0501 02:40:39.732718       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-329926&resourceVersion=1867": dial tcp 192.168.39.254:8443: connect: no route to host
	W0501 02:40:42.806445       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1811": dial tcp 192.168.39.254:8443: connect: no route to host
	W0501 02:40:42.806483       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1834": dial tcp 192.168.39.254:8443: connect: no route to host
	E0501 02:40:42.806845       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1834": dial tcp 192.168.39.254:8443: connect: no route to host
	E0501 02:40:42.806890       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1811": dial tcp 192.168.39.254:8443: connect: no route to host
	W0501 02:40:58.166492       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1834": dial tcp 192.168.39.254:8443: connect: no route to host
	E0501 02:40:58.166767       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1834": dial tcp 192.168.39.254:8443: connect: no route to host
	W0501 02:41:01.237978       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-329926&resourceVersion=1867": dial tcp 192.168.39.254:8443: connect: no route to host
	E0501 02:41:01.238302       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-329926&resourceVersion=1867": dial tcp 192.168.39.254:8443: connect: no route to host
	W0501 02:41:04.309884       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1811": dial tcp 192.168.39.254:8443: connect: no route to host
	E0501 02:41:04.310043       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1811": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-proxy [3b19253347b3c6a6483c37f96f8593931755f784d085f793c13e001ae0d76794] <==
	I0501 02:43:17.726855       1 server_linux.go:69] "Using iptables proxy"
	E0501 02:43:19.476429       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-329926\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0501 02:43:22.549893       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-329926\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0501 02:43:25.620569       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-329926\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0501 02:43:31.764380       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-329926\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0501 02:43:40.980955       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-329926\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0501 02:43:59.449793       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.5"]
	I0501 02:43:59.554344       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0501 02:43:59.554410       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0501 02:43:59.554429       1 server_linux.go:165] "Using iptables Proxier"
	I0501 02:43:59.561077       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0501 02:43:59.561313       1 server.go:872] "Version info" version="v1.30.0"
	I0501 02:43:59.561356       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 02:43:59.565389       1 config.go:192] "Starting service config controller"
	I0501 02:43:59.565443       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0501 02:43:59.565533       1 config.go:101] "Starting endpoint slice config controller"
	I0501 02:43:59.565564       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0501 02:43:59.566188       1 config.go:319] "Starting node config controller"
	I0501 02:43:59.566223       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0501 02:43:59.666555       1 shared_informer.go:320] Caches are synced for node config
	I0501 02:43:59.667097       1 shared_informer.go:320] Caches are synced for service config
	I0501 02:43:59.667187       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [e3ffc6d046e214333ed85219819ad54bfa5c3ac0b10167beacf72edbd6035736] <==
	W0501 02:41:23.464064       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0501 02:41:23.464138       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0501 02:41:23.504867       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0501 02:41:23.504954       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0501 02:41:23.536950       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0501 02:41:23.537066       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0501 02:41:23.706312       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0501 02:41:23.706435       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0501 02:41:23.856853       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0501 02:41:23.856901       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0501 02:41:23.965118       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0501 02:41:23.965176       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0501 02:41:23.986437       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0501 02:41:23.986560       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0501 02:41:24.051323       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0501 02:41:24.051517       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0501 02:41:24.101844       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0501 02:41:24.102022       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0501 02:41:24.190866       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0501 02:41:24.190896       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0501 02:41:24.261737       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0501 02:41:24.261829       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0501 02:41:24.303498       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0501 02:41:24.303726       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0501 02:41:26.815795       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [fc712ce5191365a1b74a4fb690a3fe1fca3ef109f0525a60d88ecff10b96a61b] <==
	W0501 02:43:55.950147       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.5:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	E0501 02:43:55.950214       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.5:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	W0501 02:43:56.010081       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.5:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	E0501 02:43:56.010168       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.5:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	W0501 02:43:56.208294       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.5:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	E0501 02:43:56.208365       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.5:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	W0501 02:43:57.327171       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.5:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	E0501 02:43:57.327253       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.5:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	W0501 02:43:57.551219       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.5:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	E0501 02:43:57.551293       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.5:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	W0501 02:43:57.836838       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.5:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	E0501 02:43:57.836907       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.5:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	W0501 02:43:58.059423       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.5:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	E0501 02:43:58.059539       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.5:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	W0501 02:43:58.245087       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.5:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	E0501 02:43:58.245213       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.5:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	W0501 02:43:58.448578       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.5:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	E0501 02:43:58.448814       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.5:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	W0501 02:43:58.604044       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.5:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	E0501 02:43:58.604164       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.5:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	W0501 02:43:59.228847       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.5:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	E0501 02:43:59.228949       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.5:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	W0501 02:43:59.997177       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.5:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	E0501 02:43:59.997262       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.5:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.5:8443: connect: connection refused
	I0501 02:44:19.614216       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 01 02:44:48 ha-329926 kubelet[1388]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 01 02:44:55 ha-329926 kubelet[1388]: I0501 02:44:55.105344    1388 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-vip-ha-329926" podUID="0fbbb815-441d-48d0-b0cf-1bb57ff6d993"
	May 01 02:44:55 ha-329926 kubelet[1388]: I0501 02:44:55.145938    1388 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-329926"
	May 01 02:44:56 ha-329926 kubelet[1388]: I0501 02:44:56.143566    1388 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-vip-ha-329926" podUID="0fbbb815-441d-48d0-b0cf-1bb57ff6d993"
	May 01 02:44:58 ha-329926 kubelet[1388]: I0501 02:44:58.128956    1388 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-329926" podStartSLOduration=3.128924506 podStartE2EDuration="3.128924506s" podCreationTimestamp="2024-05-01 02:44:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-01 02:44:58.12715467 +0000 UTC m=+790.180502315" watchObservedRunningTime="2024-05-01 02:44:58.128924506 +0000 UTC m=+790.182272176"
	May 01 02:45:48 ha-329926 kubelet[1388]: E0501 02:45:48.135237    1388 iptables.go:577] "Could not set up iptables canary" err=<
	May 01 02:45:48 ha-329926 kubelet[1388]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 01 02:45:48 ha-329926 kubelet[1388]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 01 02:45:48 ha-329926 kubelet[1388]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 01 02:45:48 ha-329926 kubelet[1388]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 01 02:46:48 ha-329926 kubelet[1388]: E0501 02:46:48.133068    1388 iptables.go:577] "Could not set up iptables canary" err=<
	May 01 02:46:48 ha-329926 kubelet[1388]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 01 02:46:48 ha-329926 kubelet[1388]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 01 02:46:48 ha-329926 kubelet[1388]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 01 02:46:48 ha-329926 kubelet[1388]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 01 02:47:48 ha-329926 kubelet[1388]: E0501 02:47:48.133854    1388 iptables.go:577] "Could not set up iptables canary" err=<
	May 01 02:47:48 ha-329926 kubelet[1388]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 01 02:47:48 ha-329926 kubelet[1388]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 01 02:47:48 ha-329926 kubelet[1388]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 01 02:47:48 ha-329926 kubelet[1388]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 01 02:48:48 ha-329926 kubelet[1388]: E0501 02:48:48.146392    1388 iptables.go:577] "Could not set up iptables canary" err=<
	May 01 02:48:48 ha-329926 kubelet[1388]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 01 02:48:48 ha-329926 kubelet[1388]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 01 02:48:48 ha-329926 kubelet[1388]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 01 02:48:48 ha-329926 kubelet[1388]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0501 02:49:02.805417   41782 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18779-13391/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-329926 -n ha-329926
helpers_test.go:261: (dbg) Run:  kubectl --context ha-329926 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (142.00s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (310.29s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-282238
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-282238
E0501 03:06:24.421967   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/client.crt: no such file or directory
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-282238: exit status 82 (2m2.699630598s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-282238-m03"  ...
	* Stopping node "multinode-282238-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-282238" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-282238 --wait=true -v=8 --alsologtostderr
E0501 03:07:59.248534   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/functional-960026/client.crt: no such file or directory
E0501 03:09:56.198147   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/functional-960026/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-282238 --wait=true -v=8 --alsologtostderr: (3m5.128148267s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-282238
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-282238 -n multinode-282238
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-282238 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-282238 logs -n 25: (1.654446501s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-282238 ssh -n                                                                 | multinode-282238 | jenkins | v1.33.0 | 01 May 24 03:04 UTC | 01 May 24 03:04 UTC |
	|         | multinode-282238-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-282238 cp multinode-282238-m02:/home/docker/cp-test.txt                       | multinode-282238 | jenkins | v1.33.0 | 01 May 24 03:04 UTC | 01 May 24 03:04 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2058267319/001/cp-test_multinode-282238-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-282238 ssh -n                                                                 | multinode-282238 | jenkins | v1.33.0 | 01 May 24 03:04 UTC | 01 May 24 03:04 UTC |
	|         | multinode-282238-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-282238 cp multinode-282238-m02:/home/docker/cp-test.txt                       | multinode-282238 | jenkins | v1.33.0 | 01 May 24 03:04 UTC | 01 May 24 03:04 UTC |
	|         | multinode-282238:/home/docker/cp-test_multinode-282238-m02_multinode-282238.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-282238 ssh -n                                                                 | multinode-282238 | jenkins | v1.33.0 | 01 May 24 03:04 UTC | 01 May 24 03:04 UTC |
	|         | multinode-282238-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-282238 ssh -n multinode-282238 sudo cat                                       | multinode-282238 | jenkins | v1.33.0 | 01 May 24 03:04 UTC | 01 May 24 03:04 UTC |
	|         | /home/docker/cp-test_multinode-282238-m02_multinode-282238.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-282238 cp multinode-282238-m02:/home/docker/cp-test.txt                       | multinode-282238 | jenkins | v1.33.0 | 01 May 24 03:04 UTC | 01 May 24 03:04 UTC |
	|         | multinode-282238-m03:/home/docker/cp-test_multinode-282238-m02_multinode-282238-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-282238 ssh -n                                                                 | multinode-282238 | jenkins | v1.33.0 | 01 May 24 03:04 UTC | 01 May 24 03:04 UTC |
	|         | multinode-282238-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-282238 ssh -n multinode-282238-m03 sudo cat                                   | multinode-282238 | jenkins | v1.33.0 | 01 May 24 03:04 UTC | 01 May 24 03:04 UTC |
	|         | /home/docker/cp-test_multinode-282238-m02_multinode-282238-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-282238 cp testdata/cp-test.txt                                                | multinode-282238 | jenkins | v1.33.0 | 01 May 24 03:04 UTC | 01 May 24 03:04 UTC |
	|         | multinode-282238-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-282238 ssh -n                                                                 | multinode-282238 | jenkins | v1.33.0 | 01 May 24 03:04 UTC | 01 May 24 03:04 UTC |
	|         | multinode-282238-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-282238 cp multinode-282238-m03:/home/docker/cp-test.txt                       | multinode-282238 | jenkins | v1.33.0 | 01 May 24 03:04 UTC | 01 May 24 03:04 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2058267319/001/cp-test_multinode-282238-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-282238 ssh -n                                                                 | multinode-282238 | jenkins | v1.33.0 | 01 May 24 03:04 UTC | 01 May 24 03:04 UTC |
	|         | multinode-282238-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-282238 cp multinode-282238-m03:/home/docker/cp-test.txt                       | multinode-282238 | jenkins | v1.33.0 | 01 May 24 03:04 UTC | 01 May 24 03:04 UTC |
	|         | multinode-282238:/home/docker/cp-test_multinode-282238-m03_multinode-282238.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-282238 ssh -n                                                                 | multinode-282238 | jenkins | v1.33.0 | 01 May 24 03:04 UTC | 01 May 24 03:04 UTC |
	|         | multinode-282238-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-282238 ssh -n multinode-282238 sudo cat                                       | multinode-282238 | jenkins | v1.33.0 | 01 May 24 03:04 UTC | 01 May 24 03:04 UTC |
	|         | /home/docker/cp-test_multinode-282238-m03_multinode-282238.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-282238 cp multinode-282238-m03:/home/docker/cp-test.txt                       | multinode-282238 | jenkins | v1.33.0 | 01 May 24 03:04 UTC | 01 May 24 03:04 UTC |
	|         | multinode-282238-m02:/home/docker/cp-test_multinode-282238-m03_multinode-282238-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-282238 ssh -n                                                                 | multinode-282238 | jenkins | v1.33.0 | 01 May 24 03:04 UTC | 01 May 24 03:04 UTC |
	|         | multinode-282238-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-282238 ssh -n multinode-282238-m02 sudo cat                                   | multinode-282238 | jenkins | v1.33.0 | 01 May 24 03:04 UTC | 01 May 24 03:04 UTC |
	|         | /home/docker/cp-test_multinode-282238-m03_multinode-282238-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-282238 node stop m03                                                          | multinode-282238 | jenkins | v1.33.0 | 01 May 24 03:04 UTC | 01 May 24 03:04 UTC |
	| node    | multinode-282238 node start                                                             | multinode-282238 | jenkins | v1.33.0 | 01 May 24 03:04 UTC | 01 May 24 03:05 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-282238                                                                | multinode-282238 | jenkins | v1.33.0 | 01 May 24 03:05 UTC |                     |
	| stop    | -p multinode-282238                                                                     | multinode-282238 | jenkins | v1.33.0 | 01 May 24 03:05 UTC |                     |
	| start   | -p multinode-282238                                                                     | multinode-282238 | jenkins | v1.33.0 | 01 May 24 03:07 UTC | 01 May 24 03:10 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-282238                                                                | multinode-282238 | jenkins | v1.33.0 | 01 May 24 03:10 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/01 03:07:25
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0501 03:07:25.032643   51401 out.go:291] Setting OutFile to fd 1 ...
	I0501 03:07:25.032903   51401 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 03:07:25.032912   51401 out.go:304] Setting ErrFile to fd 2...
	I0501 03:07:25.032916   51401 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 03:07:25.033101   51401 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18779-13391/.minikube/bin
	I0501 03:07:25.033643   51401 out.go:298] Setting JSON to false
	I0501 03:07:25.034574   51401 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6588,"bootTime":1714526257,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0501 03:07:25.034635   51401 start.go:139] virtualization: kvm guest
	I0501 03:07:25.036751   51401 out.go:177] * [multinode-282238] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0501 03:07:25.038279   51401 out.go:177]   - MINIKUBE_LOCATION=18779
	I0501 03:07:25.039543   51401 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0501 03:07:25.038290   51401 notify.go:220] Checking for updates...
	I0501 03:07:25.040849   51401 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18779-13391/kubeconfig
	I0501 03:07:25.042262   51401 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18779-13391/.minikube
	I0501 03:07:25.043550   51401 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0501 03:07:25.044963   51401 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0501 03:07:25.046788   51401 config.go:182] Loaded profile config "multinode-282238": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 03:07:25.046868   51401 driver.go:392] Setting default libvirt URI to qemu:///system
	I0501 03:07:25.047338   51401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:07:25.047373   51401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:07:25.063587   51401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43787
	I0501 03:07:25.063954   51401 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:07:25.064478   51401 main.go:141] libmachine: Using API Version  1
	I0501 03:07:25.064497   51401 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:07:25.064861   51401 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:07:25.065037   51401 main.go:141] libmachine: (multinode-282238) Calling .DriverName
	I0501 03:07:25.101733   51401 out.go:177] * Using the kvm2 driver based on existing profile
	I0501 03:07:25.103071   51401 start.go:297] selected driver: kvm2
	I0501 03:07:25.103084   51401 start.go:901] validating driver "kvm2" against &{Name:multinode-282238 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-282238 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.139 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.29 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.220 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 03:07:25.103254   51401 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0501 03:07:25.103601   51401 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0501 03:07:25.103672   51401 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18779-13391/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0501 03:07:25.118248   51401 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0501 03:07:25.118983   51401 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0501 03:07:25.119055   51401 cni.go:84] Creating CNI manager for ""
	I0501 03:07:25.119067   51401 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0501 03:07:25.119129   51401 start.go:340] cluster config:
	{Name:multinode-282238 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-282238 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.139 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.29 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.220 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 03:07:25.119266   51401 iso.go:125] acquiring lock: {Name:mkcd2496aadb29931e179193d707c731bfda98b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0501 03:07:25.121032   51401 out.go:177] * Starting "multinode-282238" primary control-plane node in "multinode-282238" cluster
	I0501 03:07:25.122115   51401 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0501 03:07:25.122148   51401 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0501 03:07:25.122163   51401 cache.go:56] Caching tarball of preloaded images
	I0501 03:07:25.122250   51401 preload.go:173] Found /home/jenkins/minikube-integration/18779-13391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0501 03:07:25.122264   51401 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0501 03:07:25.122453   51401 profile.go:143] Saving config to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/multinode-282238/config.json ...
	I0501 03:07:25.122668   51401 start.go:360] acquireMachinesLock for multinode-282238: {Name:mkc4548e5afe1d4b0a833f6c522103562b0cefff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0501 03:07:25.122712   51401 start.go:364] duration metric: took 25.941µs to acquireMachinesLock for "multinode-282238"
	I0501 03:07:25.122732   51401 start.go:96] Skipping create...Using existing machine configuration
	I0501 03:07:25.122738   51401 fix.go:54] fixHost starting: 
	I0501 03:07:25.123048   51401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:07:25.123081   51401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:07:25.137262   51401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41195
	I0501 03:07:25.137682   51401 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:07:25.138160   51401 main.go:141] libmachine: Using API Version  1
	I0501 03:07:25.138185   51401 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:07:25.138497   51401 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:07:25.138735   51401 main.go:141] libmachine: (multinode-282238) Calling .DriverName
	I0501 03:07:25.138873   51401 main.go:141] libmachine: (multinode-282238) Calling .GetState
	I0501 03:07:25.140461   51401 fix.go:112] recreateIfNeeded on multinode-282238: state=Running err=<nil>
	W0501 03:07:25.140489   51401 fix.go:138] unexpected machine state, will restart: <nil>
	I0501 03:07:25.142331   51401 out.go:177] * Updating the running kvm2 "multinode-282238" VM ...
	I0501 03:07:25.143539   51401 machine.go:94] provisionDockerMachine start ...
	I0501 03:07:25.143564   51401 main.go:141] libmachine: (multinode-282238) Calling .DriverName
	I0501 03:07:25.143791   51401 main.go:141] libmachine: (multinode-282238) Calling .GetSSHHostname
	I0501 03:07:25.146289   51401 main.go:141] libmachine: (multinode-282238) DBG | domain multinode-282238 has defined MAC address 52:54:00:3c:33:06 in network mk-multinode-282238
	I0501 03:07:25.146760   51401 main.go:141] libmachine: (multinode-282238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:33:06", ip: ""} in network mk-multinode-282238: {Iface:virbr1 ExpiryTime:2024-05-01 04:01:56 +0000 UTC Type:0 Mac:52:54:00:3c:33:06 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:multinode-282238 Clientid:01:52:54:00:3c:33:06}
	I0501 03:07:25.146782   51401 main.go:141] libmachine: (multinode-282238) DBG | domain multinode-282238 has defined IP address 192.168.39.139 and MAC address 52:54:00:3c:33:06 in network mk-multinode-282238
	I0501 03:07:25.146932   51401 main.go:141] libmachine: (multinode-282238) Calling .GetSSHPort
	I0501 03:07:25.147099   51401 main.go:141] libmachine: (multinode-282238) Calling .GetSSHKeyPath
	I0501 03:07:25.147235   51401 main.go:141] libmachine: (multinode-282238) Calling .GetSSHKeyPath
	I0501 03:07:25.147376   51401 main.go:141] libmachine: (multinode-282238) Calling .GetSSHUsername
	I0501 03:07:25.147576   51401 main.go:141] libmachine: Using SSH client type: native
	I0501 03:07:25.147741   51401 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.139 22 <nil> <nil>}
	I0501 03:07:25.147752   51401 main.go:141] libmachine: About to run SSH command:
	hostname
	I0501 03:07:25.260618   51401 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-282238
	
	I0501 03:07:25.260653   51401 main.go:141] libmachine: (multinode-282238) Calling .GetMachineName
	I0501 03:07:25.260900   51401 buildroot.go:166] provisioning hostname "multinode-282238"
	I0501 03:07:25.260922   51401 main.go:141] libmachine: (multinode-282238) Calling .GetMachineName
	I0501 03:07:25.261065   51401 main.go:141] libmachine: (multinode-282238) Calling .GetSSHHostname
	I0501 03:07:25.264045   51401 main.go:141] libmachine: (multinode-282238) DBG | domain multinode-282238 has defined MAC address 52:54:00:3c:33:06 in network mk-multinode-282238
	I0501 03:07:25.264448   51401 main.go:141] libmachine: (multinode-282238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:33:06", ip: ""} in network mk-multinode-282238: {Iface:virbr1 ExpiryTime:2024-05-01 04:01:56 +0000 UTC Type:0 Mac:52:54:00:3c:33:06 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:multinode-282238 Clientid:01:52:54:00:3c:33:06}
	I0501 03:07:25.264478   51401 main.go:141] libmachine: (multinode-282238) DBG | domain multinode-282238 has defined IP address 192.168.39.139 and MAC address 52:54:00:3c:33:06 in network mk-multinode-282238
	I0501 03:07:25.264713   51401 main.go:141] libmachine: (multinode-282238) Calling .GetSSHPort
	I0501 03:07:25.264900   51401 main.go:141] libmachine: (multinode-282238) Calling .GetSSHKeyPath
	I0501 03:07:25.265034   51401 main.go:141] libmachine: (multinode-282238) Calling .GetSSHKeyPath
	I0501 03:07:25.265129   51401 main.go:141] libmachine: (multinode-282238) Calling .GetSSHUsername
	I0501 03:07:25.265257   51401 main.go:141] libmachine: Using SSH client type: native
	I0501 03:07:25.265405   51401 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.139 22 <nil> <nil>}
	I0501 03:07:25.265418   51401 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-282238 && echo "multinode-282238" | sudo tee /etc/hostname
	I0501 03:07:25.390587   51401 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-282238
	
	I0501 03:07:25.390621   51401 main.go:141] libmachine: (multinode-282238) Calling .GetSSHHostname
	I0501 03:07:25.393307   51401 main.go:141] libmachine: (multinode-282238) DBG | domain multinode-282238 has defined MAC address 52:54:00:3c:33:06 in network mk-multinode-282238
	I0501 03:07:25.393641   51401 main.go:141] libmachine: (multinode-282238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:33:06", ip: ""} in network mk-multinode-282238: {Iface:virbr1 ExpiryTime:2024-05-01 04:01:56 +0000 UTC Type:0 Mac:52:54:00:3c:33:06 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:multinode-282238 Clientid:01:52:54:00:3c:33:06}
	I0501 03:07:25.393670   51401 main.go:141] libmachine: (multinode-282238) DBG | domain multinode-282238 has defined IP address 192.168.39.139 and MAC address 52:54:00:3c:33:06 in network mk-multinode-282238
	I0501 03:07:25.393868   51401 main.go:141] libmachine: (multinode-282238) Calling .GetSSHPort
	I0501 03:07:25.394065   51401 main.go:141] libmachine: (multinode-282238) Calling .GetSSHKeyPath
	I0501 03:07:25.394211   51401 main.go:141] libmachine: (multinode-282238) Calling .GetSSHKeyPath
	I0501 03:07:25.394335   51401 main.go:141] libmachine: (multinode-282238) Calling .GetSSHUsername
	I0501 03:07:25.394485   51401 main.go:141] libmachine: Using SSH client type: native
	I0501 03:07:25.394678   51401 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.139 22 <nil> <nil>}
	I0501 03:07:25.394703   51401 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-282238' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-282238/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-282238' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0501 03:07:25.503494   51401 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0501 03:07:25.503530   51401 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18779-13391/.minikube CaCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18779-13391/.minikube}
	I0501 03:07:25.503556   51401 buildroot.go:174] setting up certificates
	I0501 03:07:25.503567   51401 provision.go:84] configureAuth start
	I0501 03:07:25.503580   51401 main.go:141] libmachine: (multinode-282238) Calling .GetMachineName
	I0501 03:07:25.503864   51401 main.go:141] libmachine: (multinode-282238) Calling .GetIP
	I0501 03:07:25.506274   51401 main.go:141] libmachine: (multinode-282238) DBG | domain multinode-282238 has defined MAC address 52:54:00:3c:33:06 in network mk-multinode-282238
	I0501 03:07:25.506622   51401 main.go:141] libmachine: (multinode-282238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:33:06", ip: ""} in network mk-multinode-282238: {Iface:virbr1 ExpiryTime:2024-05-01 04:01:56 +0000 UTC Type:0 Mac:52:54:00:3c:33:06 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:multinode-282238 Clientid:01:52:54:00:3c:33:06}
	I0501 03:07:25.506656   51401 main.go:141] libmachine: (multinode-282238) DBG | domain multinode-282238 has defined IP address 192.168.39.139 and MAC address 52:54:00:3c:33:06 in network mk-multinode-282238
	I0501 03:07:25.506763   51401 main.go:141] libmachine: (multinode-282238) Calling .GetSSHHostname
	I0501 03:07:25.508928   51401 main.go:141] libmachine: (multinode-282238) DBG | domain multinode-282238 has defined MAC address 52:54:00:3c:33:06 in network mk-multinode-282238
	I0501 03:07:25.509281   51401 main.go:141] libmachine: (multinode-282238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:33:06", ip: ""} in network mk-multinode-282238: {Iface:virbr1 ExpiryTime:2024-05-01 04:01:56 +0000 UTC Type:0 Mac:52:54:00:3c:33:06 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:multinode-282238 Clientid:01:52:54:00:3c:33:06}
	I0501 03:07:25.509308   51401 main.go:141] libmachine: (multinode-282238) DBG | domain multinode-282238 has defined IP address 192.168.39.139 and MAC address 52:54:00:3c:33:06 in network mk-multinode-282238
	I0501 03:07:25.509439   51401 provision.go:143] copyHostCerts
	I0501 03:07:25.509471   51401 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem
	I0501 03:07:25.509502   51401 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem, removing ...
	I0501 03:07:25.509510   51401 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem
	I0501 03:07:25.509577   51401 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem (1078 bytes)
	I0501 03:07:25.509655   51401 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem
	I0501 03:07:25.509682   51401 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem, removing ...
	I0501 03:07:25.509689   51401 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem
	I0501 03:07:25.509720   51401 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem (1123 bytes)
	I0501 03:07:25.509769   51401 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem
	I0501 03:07:25.509785   51401 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem, removing ...
	I0501 03:07:25.509792   51401 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem
	I0501 03:07:25.509811   51401 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem (1675 bytes)
	I0501 03:07:25.509862   51401 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem org=jenkins.multinode-282238 san=[127.0.0.1 192.168.39.139 localhost minikube multinode-282238]
	I0501 03:07:25.741904   51401 provision.go:177] copyRemoteCerts
	I0501 03:07:25.741971   51401 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0501 03:07:25.741996   51401 main.go:141] libmachine: (multinode-282238) Calling .GetSSHHostname
	I0501 03:07:25.744626   51401 main.go:141] libmachine: (multinode-282238) DBG | domain multinode-282238 has defined MAC address 52:54:00:3c:33:06 in network mk-multinode-282238
	I0501 03:07:25.744995   51401 main.go:141] libmachine: (multinode-282238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:33:06", ip: ""} in network mk-multinode-282238: {Iface:virbr1 ExpiryTime:2024-05-01 04:01:56 +0000 UTC Type:0 Mac:52:54:00:3c:33:06 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:multinode-282238 Clientid:01:52:54:00:3c:33:06}
	I0501 03:07:25.745025   51401 main.go:141] libmachine: (multinode-282238) DBG | domain multinode-282238 has defined IP address 192.168.39.139 and MAC address 52:54:00:3c:33:06 in network mk-multinode-282238
	I0501 03:07:25.745137   51401 main.go:141] libmachine: (multinode-282238) Calling .GetSSHPort
	I0501 03:07:25.745342   51401 main.go:141] libmachine: (multinode-282238) Calling .GetSSHKeyPath
	I0501 03:07:25.745515   51401 main.go:141] libmachine: (multinode-282238) Calling .GetSSHUsername
	I0501 03:07:25.745665   51401 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/multinode-282238/id_rsa Username:docker}
	I0501 03:07:25.826537   51401 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0501 03:07:25.826604   51401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0501 03:07:25.859022   51401 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0501 03:07:25.859094   51401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0501 03:07:25.887664   51401 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0501 03:07:25.887727   51401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0501 03:07:25.917389   51401 provision.go:87] duration metric: took 413.792925ms to configureAuth
	I0501 03:07:25.917417   51401 buildroot.go:189] setting minikube options for container-runtime
	I0501 03:07:25.917637   51401 config.go:182] Loaded profile config "multinode-282238": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 03:07:25.917716   51401 main.go:141] libmachine: (multinode-282238) Calling .GetSSHHostname
	I0501 03:07:25.920388   51401 main.go:141] libmachine: (multinode-282238) DBG | domain multinode-282238 has defined MAC address 52:54:00:3c:33:06 in network mk-multinode-282238
	I0501 03:07:25.920817   51401 main.go:141] libmachine: (multinode-282238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:33:06", ip: ""} in network mk-multinode-282238: {Iface:virbr1 ExpiryTime:2024-05-01 04:01:56 +0000 UTC Type:0 Mac:52:54:00:3c:33:06 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:multinode-282238 Clientid:01:52:54:00:3c:33:06}
	I0501 03:07:25.920845   51401 main.go:141] libmachine: (multinode-282238) DBG | domain multinode-282238 has defined IP address 192.168.39.139 and MAC address 52:54:00:3c:33:06 in network mk-multinode-282238
	I0501 03:07:25.920969   51401 main.go:141] libmachine: (multinode-282238) Calling .GetSSHPort
	I0501 03:07:25.921144   51401 main.go:141] libmachine: (multinode-282238) Calling .GetSSHKeyPath
	I0501 03:07:25.921297   51401 main.go:141] libmachine: (multinode-282238) Calling .GetSSHKeyPath
	I0501 03:07:25.921419   51401 main.go:141] libmachine: (multinode-282238) Calling .GetSSHUsername
	I0501 03:07:25.921573   51401 main.go:141] libmachine: Using SSH client type: native
	I0501 03:07:25.921728   51401 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.139 22 <nil> <nil>}
	I0501 03:07:25.921744   51401 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0501 03:08:56.675142   51401 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0501 03:08:56.675175   51401 machine.go:97] duration metric: took 1m31.531621429s to provisionDockerMachine
	I0501 03:08:56.675191   51401 start.go:293] postStartSetup for "multinode-282238" (driver="kvm2")
	I0501 03:08:56.675206   51401 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0501 03:08:56.675253   51401 main.go:141] libmachine: (multinode-282238) Calling .DriverName
	I0501 03:08:56.675579   51401 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0501 03:08:56.675612   51401 main.go:141] libmachine: (multinode-282238) Calling .GetSSHHostname
	I0501 03:08:56.678601   51401 main.go:141] libmachine: (multinode-282238) DBG | domain multinode-282238 has defined MAC address 52:54:00:3c:33:06 in network mk-multinode-282238
	I0501 03:08:56.679020   51401 main.go:141] libmachine: (multinode-282238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:33:06", ip: ""} in network mk-multinode-282238: {Iface:virbr1 ExpiryTime:2024-05-01 04:01:56 +0000 UTC Type:0 Mac:52:54:00:3c:33:06 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:multinode-282238 Clientid:01:52:54:00:3c:33:06}
	I0501 03:08:56.679047   51401 main.go:141] libmachine: (multinode-282238) DBG | domain multinode-282238 has defined IP address 192.168.39.139 and MAC address 52:54:00:3c:33:06 in network mk-multinode-282238
	I0501 03:08:56.679200   51401 main.go:141] libmachine: (multinode-282238) Calling .GetSSHPort
	I0501 03:08:56.679381   51401 main.go:141] libmachine: (multinode-282238) Calling .GetSSHKeyPath
	I0501 03:08:56.679535   51401 main.go:141] libmachine: (multinode-282238) Calling .GetSSHUsername
	I0501 03:08:56.679660   51401 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/multinode-282238/id_rsa Username:docker}
	I0501 03:08:56.763350   51401 ssh_runner.go:195] Run: cat /etc/os-release
	I0501 03:08:56.768099   51401 command_runner.go:130] > NAME=Buildroot
	I0501 03:08:56.768111   51401 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0501 03:08:56.768116   51401 command_runner.go:130] > ID=buildroot
	I0501 03:08:56.768120   51401 command_runner.go:130] > VERSION_ID=2023.02.9
	I0501 03:08:56.768126   51401 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0501 03:08:56.768150   51401 info.go:137] Remote host: Buildroot 2023.02.9
	I0501 03:08:56.768169   51401 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13391/.minikube/addons for local assets ...
	I0501 03:08:56.768255   51401 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13391/.minikube/files for local assets ...
	I0501 03:08:56.768326   51401 filesync.go:149] local asset: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem -> 207242.pem in /etc/ssl/certs
	I0501 03:08:56.768334   51401 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem -> /etc/ssl/certs/207242.pem
	I0501 03:08:56.768411   51401 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0501 03:08:56.778385   51401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem --> /etc/ssl/certs/207242.pem (1708 bytes)
	I0501 03:08:56.805585   51401 start.go:296] duration metric: took 130.38193ms for postStartSetup
	I0501 03:08:56.805619   51401 fix.go:56] duration metric: took 1m31.682880587s for fixHost
	I0501 03:08:56.805637   51401 main.go:141] libmachine: (multinode-282238) Calling .GetSSHHostname
	I0501 03:08:56.808651   51401 main.go:141] libmachine: (multinode-282238) DBG | domain multinode-282238 has defined MAC address 52:54:00:3c:33:06 in network mk-multinode-282238
	I0501 03:08:56.809077   51401 main.go:141] libmachine: (multinode-282238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:33:06", ip: ""} in network mk-multinode-282238: {Iface:virbr1 ExpiryTime:2024-05-01 04:01:56 +0000 UTC Type:0 Mac:52:54:00:3c:33:06 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:multinode-282238 Clientid:01:52:54:00:3c:33:06}
	I0501 03:08:56.809104   51401 main.go:141] libmachine: (multinode-282238) DBG | domain multinode-282238 has defined IP address 192.168.39.139 and MAC address 52:54:00:3c:33:06 in network mk-multinode-282238
	I0501 03:08:56.809273   51401 main.go:141] libmachine: (multinode-282238) Calling .GetSSHPort
	I0501 03:08:56.809456   51401 main.go:141] libmachine: (multinode-282238) Calling .GetSSHKeyPath
	I0501 03:08:56.809613   51401 main.go:141] libmachine: (multinode-282238) Calling .GetSSHKeyPath
	I0501 03:08:56.809772   51401 main.go:141] libmachine: (multinode-282238) Calling .GetSSHUsername
	I0501 03:08:56.809913   51401 main.go:141] libmachine: Using SSH client type: native
	I0501 03:08:56.810118   51401 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.139 22 <nil> <nil>}
	I0501 03:08:56.810134   51401 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0501 03:08:56.911693   51401 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714532936.883050544
	
	I0501 03:08:56.911710   51401 fix.go:216] guest clock: 1714532936.883050544
	I0501 03:08:56.911717   51401 fix.go:229] Guest: 2024-05-01 03:08:56.883050544 +0000 UTC Remote: 2024-05-01 03:08:56.805622688 +0000 UTC m=+91.819245890 (delta=77.427856ms)
	I0501 03:08:56.911746   51401 fix.go:200] guest clock delta is within tolerance: 77.427856ms
	I0501 03:08:56.911753   51401 start.go:83] releasing machines lock for "multinode-282238", held for 1m31.789028144s
	I0501 03:08:56.911776   51401 main.go:141] libmachine: (multinode-282238) Calling .DriverName
	I0501 03:08:56.912048   51401 main.go:141] libmachine: (multinode-282238) Calling .GetIP
	I0501 03:08:56.914352   51401 main.go:141] libmachine: (multinode-282238) DBG | domain multinode-282238 has defined MAC address 52:54:00:3c:33:06 in network mk-multinode-282238
	I0501 03:08:56.914696   51401 main.go:141] libmachine: (multinode-282238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:33:06", ip: ""} in network mk-multinode-282238: {Iface:virbr1 ExpiryTime:2024-05-01 04:01:56 +0000 UTC Type:0 Mac:52:54:00:3c:33:06 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:multinode-282238 Clientid:01:52:54:00:3c:33:06}
	I0501 03:08:56.914726   51401 main.go:141] libmachine: (multinode-282238) DBG | domain multinode-282238 has defined IP address 192.168.39.139 and MAC address 52:54:00:3c:33:06 in network mk-multinode-282238
	I0501 03:08:56.914896   51401 main.go:141] libmachine: (multinode-282238) Calling .DriverName
	I0501 03:08:56.915412   51401 main.go:141] libmachine: (multinode-282238) Calling .DriverName
	I0501 03:08:56.915602   51401 main.go:141] libmachine: (multinode-282238) Calling .DriverName
	I0501 03:08:56.915664   51401 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0501 03:08:56.915711   51401 main.go:141] libmachine: (multinode-282238) Calling .GetSSHHostname
	I0501 03:08:56.915822   51401 ssh_runner.go:195] Run: cat /version.json
	I0501 03:08:56.915845   51401 main.go:141] libmachine: (multinode-282238) Calling .GetSSHHostname
	I0501 03:08:56.918273   51401 main.go:141] libmachine: (multinode-282238) DBG | domain multinode-282238 has defined MAC address 52:54:00:3c:33:06 in network mk-multinode-282238
	I0501 03:08:56.918297   51401 main.go:141] libmachine: (multinode-282238) DBG | domain multinode-282238 has defined MAC address 52:54:00:3c:33:06 in network mk-multinode-282238
	I0501 03:08:56.918656   51401 main.go:141] libmachine: (multinode-282238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:33:06", ip: ""} in network mk-multinode-282238: {Iface:virbr1 ExpiryTime:2024-05-01 04:01:56 +0000 UTC Type:0 Mac:52:54:00:3c:33:06 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:multinode-282238 Clientid:01:52:54:00:3c:33:06}
	I0501 03:08:56.918686   51401 main.go:141] libmachine: (multinode-282238) DBG | domain multinode-282238 has defined IP address 192.168.39.139 and MAC address 52:54:00:3c:33:06 in network mk-multinode-282238
	I0501 03:08:56.918717   51401 main.go:141] libmachine: (multinode-282238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:33:06", ip: ""} in network mk-multinode-282238: {Iface:virbr1 ExpiryTime:2024-05-01 04:01:56 +0000 UTC Type:0 Mac:52:54:00:3c:33:06 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:multinode-282238 Clientid:01:52:54:00:3c:33:06}
	I0501 03:08:56.918733   51401 main.go:141] libmachine: (multinode-282238) DBG | domain multinode-282238 has defined IP address 192.168.39.139 and MAC address 52:54:00:3c:33:06 in network mk-multinode-282238
	I0501 03:08:56.918919   51401 main.go:141] libmachine: (multinode-282238) Calling .GetSSHPort
	I0501 03:08:56.918995   51401 main.go:141] libmachine: (multinode-282238) Calling .GetSSHPort
	I0501 03:08:56.919063   51401 main.go:141] libmachine: (multinode-282238) Calling .GetSSHKeyPath
	I0501 03:08:56.919194   51401 main.go:141] libmachine: (multinode-282238) Calling .GetSSHUsername
	I0501 03:08:56.919199   51401 main.go:141] libmachine: (multinode-282238) Calling .GetSSHKeyPath
	I0501 03:08:56.919370   51401 main.go:141] libmachine: (multinode-282238) Calling .GetSSHUsername
	I0501 03:08:56.919376   51401 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/multinode-282238/id_rsa Username:docker}
	I0501 03:08:56.919511   51401 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/multinode-282238/id_rsa Username:docker}
	I0501 03:08:56.995861   51401 command_runner.go:130] > {"iso_version": "v1.33.0-1714498396-18779", "kicbase_version": "v0.0.43-1714386659-18769", "minikube_version": "v1.33.0", "commit": "0c7995ab2d4914d5c74027eee5f5d102e19316f2"}
	I0501 03:08:56.995977   51401 ssh_runner.go:195] Run: systemctl --version
	I0501 03:08:57.021726   51401 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0501 03:08:57.021771   51401 command_runner.go:130] > systemd 252 (252)
	I0501 03:08:57.021788   51401 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0501 03:08:57.021850   51401 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0501 03:08:57.188063   51401 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0501 03:08:57.197104   51401 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0501 03:08:57.197518   51401 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0501 03:08:57.197578   51401 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0501 03:08:57.208117   51401 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0501 03:08:57.208144   51401 start.go:494] detecting cgroup driver to use...
	I0501 03:08:57.208223   51401 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0501 03:08:57.225667   51401 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 03:08:57.240786   51401 docker.go:217] disabling cri-docker service (if available) ...
	I0501 03:08:57.240851   51401 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0501 03:08:57.255199   51401 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0501 03:08:57.269525   51401 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0501 03:08:57.418649   51401 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0501 03:08:57.561947   51401 docker.go:233] disabling docker service ...
	I0501 03:08:57.562030   51401 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0501 03:08:57.581248   51401 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0501 03:08:57.596573   51401 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0501 03:08:57.740231   51401 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0501 03:08:57.885508   51401 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0501 03:08:57.902165   51401 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 03:08:57.924050   51401 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0501 03:08:57.924547   51401 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0501 03:08:57.924599   51401 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:08:57.936403   51401 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0501 03:08:57.936476   51401 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:08:57.948396   51401 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:08:57.960143   51401 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:08:57.971708   51401 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0501 03:08:57.983774   51401 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:08:57.995642   51401 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:08:58.010171   51401 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:08:58.022409   51401 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0501 03:08:58.033294   51401 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0501 03:08:58.033394   51401 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0501 03:08:58.043731   51401 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:08:58.187907   51401 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0501 03:08:58.451531   51401 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0501 03:08:58.451590   51401 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0501 03:08:58.458018   51401 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0501 03:08:58.458032   51401 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0501 03:08:58.458038   51401 command_runner.go:130] > Device: 0,22	Inode: 1319        Links: 1
	I0501 03:08:58.458045   51401 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0501 03:08:58.458051   51401 command_runner.go:130] > Access: 2024-05-01 03:08:58.315900439 +0000
	I0501 03:08:58.458057   51401 command_runner.go:130] > Modify: 2024-05-01 03:08:58.315900439 +0000
	I0501 03:08:58.458062   51401 command_runner.go:130] > Change: 2024-05-01 03:08:58.315900439 +0000
	I0501 03:08:58.458080   51401 command_runner.go:130] >  Birth: -
	I0501 03:08:58.458355   51401 start.go:562] Will wait 60s for crictl version
	I0501 03:08:58.458389   51401 ssh_runner.go:195] Run: which crictl
	I0501 03:08:58.462686   51401 command_runner.go:130] > /usr/bin/crictl
	I0501 03:08:58.462978   51401 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0501 03:08:58.513282   51401 command_runner.go:130] > Version:  0.1.0
	I0501 03:08:58.513301   51401 command_runner.go:130] > RuntimeName:  cri-o
	I0501 03:08:58.513305   51401 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0501 03:08:58.513312   51401 command_runner.go:130] > RuntimeApiVersion:  v1
	I0501 03:08:58.513529   51401 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0501 03:08:58.513588   51401 ssh_runner.go:195] Run: crio --version
	I0501 03:08:58.548493   51401 command_runner.go:130] > crio version 1.29.1
	I0501 03:08:58.548512   51401 command_runner.go:130] > Version:        1.29.1
	I0501 03:08:58.548518   51401 command_runner.go:130] > GitCommit:      unknown
	I0501 03:08:58.548522   51401 command_runner.go:130] > GitCommitDate:  unknown
	I0501 03:08:58.548526   51401 command_runner.go:130] > GitTreeState:   clean
	I0501 03:08:58.548532   51401 command_runner.go:130] > BuildDate:      2024-04-30T23:23:49Z
	I0501 03:08:58.548537   51401 command_runner.go:130] > GoVersion:      go1.21.6
	I0501 03:08:58.548541   51401 command_runner.go:130] > Compiler:       gc
	I0501 03:08:58.548546   51401 command_runner.go:130] > Platform:       linux/amd64
	I0501 03:08:58.548550   51401 command_runner.go:130] > Linkmode:       dynamic
	I0501 03:08:58.548566   51401 command_runner.go:130] > BuildTags:      
	I0501 03:08:58.548574   51401 command_runner.go:130] >   containers_image_ostree_stub
	I0501 03:08:58.548578   51401 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0501 03:08:58.548582   51401 command_runner.go:130] >   btrfs_noversion
	I0501 03:08:58.548587   51401 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0501 03:08:58.548594   51401 command_runner.go:130] >   libdm_no_deferred_remove
	I0501 03:08:58.548597   51401 command_runner.go:130] >   seccomp
	I0501 03:08:58.548601   51401 command_runner.go:130] > LDFlags:          unknown
	I0501 03:08:58.548607   51401 command_runner.go:130] > SeccompEnabled:   true
	I0501 03:08:58.548611   51401 command_runner.go:130] > AppArmorEnabled:  false
	I0501 03:08:58.550047   51401 ssh_runner.go:195] Run: crio --version
	I0501 03:08:58.588796   51401 command_runner.go:130] > crio version 1.29.1
	I0501 03:08:58.588819   51401 command_runner.go:130] > Version:        1.29.1
	I0501 03:08:58.588837   51401 command_runner.go:130] > GitCommit:      unknown
	I0501 03:08:58.588841   51401 command_runner.go:130] > GitCommitDate:  unknown
	I0501 03:08:58.588845   51401 command_runner.go:130] > GitTreeState:   clean
	I0501 03:08:58.588851   51401 command_runner.go:130] > BuildDate:      2024-04-30T23:23:49Z
	I0501 03:08:58.588858   51401 command_runner.go:130] > GoVersion:      go1.21.6
	I0501 03:08:58.588865   51401 command_runner.go:130] > Compiler:       gc
	I0501 03:08:58.588872   51401 command_runner.go:130] > Platform:       linux/amd64
	I0501 03:08:58.588880   51401 command_runner.go:130] > Linkmode:       dynamic
	I0501 03:08:58.588888   51401 command_runner.go:130] > BuildTags:      
	I0501 03:08:58.588901   51401 command_runner.go:130] >   containers_image_ostree_stub
	I0501 03:08:58.588905   51401 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0501 03:08:58.588909   51401 command_runner.go:130] >   btrfs_noversion
	I0501 03:08:58.588913   51401 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0501 03:08:58.588919   51401 command_runner.go:130] >   libdm_no_deferred_remove
	I0501 03:08:58.588923   51401 command_runner.go:130] >   seccomp
	I0501 03:08:58.588927   51401 command_runner.go:130] > LDFlags:          unknown
	I0501 03:08:58.588932   51401 command_runner.go:130] > SeccompEnabled:   true
	I0501 03:08:58.588937   51401 command_runner.go:130] > AppArmorEnabled:  false
	I0501 03:08:58.592169   51401 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0501 03:08:58.593781   51401 main.go:141] libmachine: (multinode-282238) Calling .GetIP
	I0501 03:08:58.596558   51401 main.go:141] libmachine: (multinode-282238) DBG | domain multinode-282238 has defined MAC address 52:54:00:3c:33:06 in network mk-multinode-282238
	I0501 03:08:58.596907   51401 main.go:141] libmachine: (multinode-282238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:33:06", ip: ""} in network mk-multinode-282238: {Iface:virbr1 ExpiryTime:2024-05-01 04:01:56 +0000 UTC Type:0 Mac:52:54:00:3c:33:06 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:multinode-282238 Clientid:01:52:54:00:3c:33:06}
	I0501 03:08:58.596930   51401 main.go:141] libmachine: (multinode-282238) DBG | domain multinode-282238 has defined IP address 192.168.39.139 and MAC address 52:54:00:3c:33:06 in network mk-multinode-282238
	I0501 03:08:58.597185   51401 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0501 03:08:58.602113   51401 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0501 03:08:58.602335   51401 kubeadm.go:877] updating cluster {Name:multinode-282238 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-282238 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.139 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.29 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.220 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0501 03:08:58.602509   51401 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0501 03:08:58.602567   51401 ssh_runner.go:195] Run: sudo crictl images --output json
	I0501 03:08:58.653378   51401 command_runner.go:130] > {
	I0501 03:08:58.653404   51401 command_runner.go:130] >   "images": [
	I0501 03:08:58.653410   51401 command_runner.go:130] >     {
	I0501 03:08:58.653422   51401 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0501 03:08:58.653429   51401 command_runner.go:130] >       "repoTags": [
	I0501 03:08:58.653438   51401 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0501 03:08:58.653453   51401 command_runner.go:130] >       ],
	I0501 03:08:58.653469   51401 command_runner.go:130] >       "repoDigests": [
	I0501 03:08:58.653483   51401 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0501 03:08:58.653494   51401 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0501 03:08:58.653499   51401 command_runner.go:130] >       ],
	I0501 03:08:58.653503   51401 command_runner.go:130] >       "size": "65291810",
	I0501 03:08:58.653507   51401 command_runner.go:130] >       "uid": null,
	I0501 03:08:58.653512   51401 command_runner.go:130] >       "username": "",
	I0501 03:08:58.653518   51401 command_runner.go:130] >       "spec": null,
	I0501 03:08:58.653525   51401 command_runner.go:130] >       "pinned": false
	I0501 03:08:58.653528   51401 command_runner.go:130] >     },
	I0501 03:08:58.653531   51401 command_runner.go:130] >     {
	I0501 03:08:58.653539   51401 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0501 03:08:58.653543   51401 command_runner.go:130] >       "repoTags": [
	I0501 03:08:58.653548   51401 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0501 03:08:58.653553   51401 command_runner.go:130] >       ],
	I0501 03:08:58.653557   51401 command_runner.go:130] >       "repoDigests": [
	I0501 03:08:58.653566   51401 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0501 03:08:58.653573   51401 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0501 03:08:58.653580   51401 command_runner.go:130] >       ],
	I0501 03:08:58.653584   51401 command_runner.go:130] >       "size": "1363676",
	I0501 03:08:58.653590   51401 command_runner.go:130] >       "uid": null,
	I0501 03:08:58.653599   51401 command_runner.go:130] >       "username": "",
	I0501 03:08:58.653605   51401 command_runner.go:130] >       "spec": null,
	I0501 03:08:58.653609   51401 command_runner.go:130] >       "pinned": false
	I0501 03:08:58.653615   51401 command_runner.go:130] >     },
	I0501 03:08:58.653618   51401 command_runner.go:130] >     {
	I0501 03:08:58.653627   51401 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0501 03:08:58.653635   51401 command_runner.go:130] >       "repoTags": [
	I0501 03:08:58.653641   51401 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0501 03:08:58.653646   51401 command_runner.go:130] >       ],
	I0501 03:08:58.653651   51401 command_runner.go:130] >       "repoDigests": [
	I0501 03:08:58.653660   51401 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0501 03:08:58.653670   51401 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0501 03:08:58.653676   51401 command_runner.go:130] >       ],
	I0501 03:08:58.653680   51401 command_runner.go:130] >       "size": "31470524",
	I0501 03:08:58.653695   51401 command_runner.go:130] >       "uid": null,
	I0501 03:08:58.653705   51401 command_runner.go:130] >       "username": "",
	I0501 03:08:58.653715   51401 command_runner.go:130] >       "spec": null,
	I0501 03:08:58.653725   51401 command_runner.go:130] >       "pinned": false
	I0501 03:08:58.653733   51401 command_runner.go:130] >     },
	I0501 03:08:58.653742   51401 command_runner.go:130] >     {
	I0501 03:08:58.653755   51401 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0501 03:08:58.653764   51401 command_runner.go:130] >       "repoTags": [
	I0501 03:08:58.653775   51401 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0501 03:08:58.653783   51401 command_runner.go:130] >       ],
	I0501 03:08:58.653792   51401 command_runner.go:130] >       "repoDigests": [
	I0501 03:08:58.653807   51401 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0501 03:08:58.653826   51401 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0501 03:08:58.653832   51401 command_runner.go:130] >       ],
	I0501 03:08:58.653836   51401 command_runner.go:130] >       "size": "61245718",
	I0501 03:08:58.653842   51401 command_runner.go:130] >       "uid": null,
	I0501 03:08:58.653847   51401 command_runner.go:130] >       "username": "nonroot",
	I0501 03:08:58.653853   51401 command_runner.go:130] >       "spec": null,
	I0501 03:08:58.653857   51401 command_runner.go:130] >       "pinned": false
	I0501 03:08:58.653863   51401 command_runner.go:130] >     },
	I0501 03:08:58.653867   51401 command_runner.go:130] >     {
	I0501 03:08:58.653875   51401 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0501 03:08:58.653881   51401 command_runner.go:130] >       "repoTags": [
	I0501 03:08:58.653886   51401 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0501 03:08:58.653892   51401 command_runner.go:130] >       ],
	I0501 03:08:58.653896   51401 command_runner.go:130] >       "repoDigests": [
	I0501 03:08:58.653903   51401 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0501 03:08:58.653911   51401 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0501 03:08:58.653917   51401 command_runner.go:130] >       ],
	I0501 03:08:58.653921   51401 command_runner.go:130] >       "size": "150779692",
	I0501 03:08:58.653927   51401 command_runner.go:130] >       "uid": {
	I0501 03:08:58.653931   51401 command_runner.go:130] >         "value": "0"
	I0501 03:08:58.653937   51401 command_runner.go:130] >       },
	I0501 03:08:58.653941   51401 command_runner.go:130] >       "username": "",
	I0501 03:08:58.653947   51401 command_runner.go:130] >       "spec": null,
	I0501 03:08:58.653951   51401 command_runner.go:130] >       "pinned": false
	I0501 03:08:58.653961   51401 command_runner.go:130] >     },
	I0501 03:08:58.653966   51401 command_runner.go:130] >     {
	I0501 03:08:58.653972   51401 command_runner.go:130] >       "id": "c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0",
	I0501 03:08:58.653978   51401 command_runner.go:130] >       "repoTags": [
	I0501 03:08:58.653983   51401 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.0"
	I0501 03:08:58.653989   51401 command_runner.go:130] >       ],
	I0501 03:08:58.653992   51401 command_runner.go:130] >       "repoDigests": [
	I0501 03:08:58.654001   51401 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:31282cf15b67192cd35f847715a9571f5dd4ac0e130290a408a866bd040bcd81",
	I0501 03:08:58.654010   51401 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3"
	I0501 03:08:58.654016   51401 command_runner.go:130] >       ],
	I0501 03:08:58.654022   51401 command_runner.go:130] >       "size": "117609952",
	I0501 03:08:58.654028   51401 command_runner.go:130] >       "uid": {
	I0501 03:08:58.654033   51401 command_runner.go:130] >         "value": "0"
	I0501 03:08:58.654039   51401 command_runner.go:130] >       },
	I0501 03:08:58.654043   51401 command_runner.go:130] >       "username": "",
	I0501 03:08:58.654051   51401 command_runner.go:130] >       "spec": null,
	I0501 03:08:58.654057   51401 command_runner.go:130] >       "pinned": false
	I0501 03:08:58.654060   51401 command_runner.go:130] >     },
	I0501 03:08:58.654064   51401 command_runner.go:130] >     {
	I0501 03:08:58.654072   51401 command_runner.go:130] >       "id": "c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b",
	I0501 03:08:58.654077   51401 command_runner.go:130] >       "repoTags": [
	I0501 03:08:58.654082   51401 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.0"
	I0501 03:08:58.654088   51401 command_runner.go:130] >       ],
	I0501 03:08:58.654092   51401 command_runner.go:130] >       "repoDigests": [
	I0501 03:08:58.654102   51401 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe",
	I0501 03:08:58.654117   51401 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:b7622a0826b7690a307eea994e2abc918f35a27a08e30c37b58c9e3f8336a450"
	I0501 03:08:58.654123   51401 command_runner.go:130] >       ],
	I0501 03:08:58.654128   51401 command_runner.go:130] >       "size": "112170310",
	I0501 03:08:58.654133   51401 command_runner.go:130] >       "uid": {
	I0501 03:08:58.654137   51401 command_runner.go:130] >         "value": "0"
	I0501 03:08:58.654143   51401 command_runner.go:130] >       },
	I0501 03:08:58.654147   51401 command_runner.go:130] >       "username": "",
	I0501 03:08:58.654153   51401 command_runner.go:130] >       "spec": null,
	I0501 03:08:58.654157   51401 command_runner.go:130] >       "pinned": false
	I0501 03:08:58.654162   51401 command_runner.go:130] >     },
	I0501 03:08:58.654167   51401 command_runner.go:130] >     {
	I0501 03:08:58.654179   51401 command_runner.go:130] >       "id": "a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b",
	I0501 03:08:58.654185   51401 command_runner.go:130] >       "repoTags": [
	I0501 03:08:58.654191   51401 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.0"
	I0501 03:08:58.654197   51401 command_runner.go:130] >       ],
	I0501 03:08:58.654201   51401 command_runner.go:130] >       "repoDigests": [
	I0501 03:08:58.654224   51401 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:880f26b53295d384d2f1fed06aa4d58567e3038157f70a1151a7dd8ef8afaa68",
	I0501 03:08:58.654241   51401 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210"
	I0501 03:08:58.654245   51401 command_runner.go:130] >       ],
	I0501 03:08:58.654249   51401 command_runner.go:130] >       "size": "85932953",
	I0501 03:08:58.654253   51401 command_runner.go:130] >       "uid": null,
	I0501 03:08:58.654263   51401 command_runner.go:130] >       "username": "",
	I0501 03:08:58.654269   51401 command_runner.go:130] >       "spec": null,
	I0501 03:08:58.654273   51401 command_runner.go:130] >       "pinned": false
	I0501 03:08:58.654276   51401 command_runner.go:130] >     },
	I0501 03:08:58.654279   51401 command_runner.go:130] >     {
	I0501 03:08:58.654285   51401 command_runner.go:130] >       "id": "259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced",
	I0501 03:08:58.654288   51401 command_runner.go:130] >       "repoTags": [
	I0501 03:08:58.654293   51401 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.0"
	I0501 03:08:58.654296   51401 command_runner.go:130] >       ],
	I0501 03:08:58.654300   51401 command_runner.go:130] >       "repoDigests": [
	I0501 03:08:58.654307   51401 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67",
	I0501 03:08:58.654314   51401 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d2c2a1d9de7a42d91bfedba5ed4f58126f9cff702d35419d78ce4e7cb07f3b7a"
	I0501 03:08:58.654317   51401 command_runner.go:130] >       ],
	I0501 03:08:58.654321   51401 command_runner.go:130] >       "size": "63026502",
	I0501 03:08:58.654325   51401 command_runner.go:130] >       "uid": {
	I0501 03:08:58.654328   51401 command_runner.go:130] >         "value": "0"
	I0501 03:08:58.654332   51401 command_runner.go:130] >       },
	I0501 03:08:58.654335   51401 command_runner.go:130] >       "username": "",
	I0501 03:08:58.654339   51401 command_runner.go:130] >       "spec": null,
	I0501 03:08:58.654342   51401 command_runner.go:130] >       "pinned": false
	I0501 03:08:58.654345   51401 command_runner.go:130] >     },
	I0501 03:08:58.654348   51401 command_runner.go:130] >     {
	I0501 03:08:58.654356   51401 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0501 03:08:58.654360   51401 command_runner.go:130] >       "repoTags": [
	I0501 03:08:58.654364   51401 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0501 03:08:58.654367   51401 command_runner.go:130] >       ],
	I0501 03:08:58.654382   51401 command_runner.go:130] >       "repoDigests": [
	I0501 03:08:58.654392   51401 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0501 03:08:58.654415   51401 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0501 03:08:58.654424   51401 command_runner.go:130] >       ],
	I0501 03:08:58.654430   51401 command_runner.go:130] >       "size": "750414",
	I0501 03:08:58.654437   51401 command_runner.go:130] >       "uid": {
	I0501 03:08:58.654442   51401 command_runner.go:130] >         "value": "65535"
	I0501 03:08:58.654445   51401 command_runner.go:130] >       },
	I0501 03:08:58.654450   51401 command_runner.go:130] >       "username": "",
	I0501 03:08:58.654456   51401 command_runner.go:130] >       "spec": null,
	I0501 03:08:58.654459   51401 command_runner.go:130] >       "pinned": true
	I0501 03:08:58.654465   51401 command_runner.go:130] >     }
	I0501 03:08:58.654469   51401 command_runner.go:130] >   ]
	I0501 03:08:58.654474   51401 command_runner.go:130] > }
	I0501 03:08:58.654659   51401 crio.go:514] all images are preloaded for cri-o runtime.
	I0501 03:08:58.654674   51401 crio.go:433] Images already preloaded, skipping extraction
	I0501 03:08:58.654732   51401 ssh_runner.go:195] Run: sudo crictl images --output json
	I0501 03:08:58.690809   51401 command_runner.go:130] > {
	I0501 03:08:58.690838   51401 command_runner.go:130] >   "images": [
	I0501 03:08:58.690844   51401 command_runner.go:130] >     {
	I0501 03:08:58.690857   51401 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0501 03:08:58.690864   51401 command_runner.go:130] >       "repoTags": [
	I0501 03:08:58.690879   51401 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0501 03:08:58.690888   51401 command_runner.go:130] >       ],
	I0501 03:08:58.690895   51401 command_runner.go:130] >       "repoDigests": [
	I0501 03:08:58.690914   51401 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0501 03:08:58.690929   51401 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0501 03:08:58.690938   51401 command_runner.go:130] >       ],
	I0501 03:08:58.690949   51401 command_runner.go:130] >       "size": "65291810",
	I0501 03:08:58.690959   51401 command_runner.go:130] >       "uid": null,
	I0501 03:08:58.690969   51401 command_runner.go:130] >       "username": "",
	I0501 03:08:58.690982   51401 command_runner.go:130] >       "spec": null,
	I0501 03:08:58.690992   51401 command_runner.go:130] >       "pinned": false
	I0501 03:08:58.691001   51401 command_runner.go:130] >     },
	I0501 03:08:58.691010   51401 command_runner.go:130] >     {
	I0501 03:08:58.691023   51401 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0501 03:08:58.691033   51401 command_runner.go:130] >       "repoTags": [
	I0501 03:08:58.691044   51401 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0501 03:08:58.691053   51401 command_runner.go:130] >       ],
	I0501 03:08:58.691064   51401 command_runner.go:130] >       "repoDigests": [
	I0501 03:08:58.691076   51401 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0501 03:08:58.691092   51401 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0501 03:08:58.691100   51401 command_runner.go:130] >       ],
	I0501 03:08:58.691110   51401 command_runner.go:130] >       "size": "1363676",
	I0501 03:08:58.691119   51401 command_runner.go:130] >       "uid": null,
	I0501 03:08:58.691137   51401 command_runner.go:130] >       "username": "",
	I0501 03:08:58.691146   51401 command_runner.go:130] >       "spec": null,
	I0501 03:08:58.691156   51401 command_runner.go:130] >       "pinned": false
	I0501 03:08:58.691164   51401 command_runner.go:130] >     },
	I0501 03:08:58.691173   51401 command_runner.go:130] >     {
	I0501 03:08:58.691187   51401 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0501 03:08:58.691197   51401 command_runner.go:130] >       "repoTags": [
	I0501 03:08:58.691211   51401 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0501 03:08:58.691219   51401 command_runner.go:130] >       ],
	I0501 03:08:58.691226   51401 command_runner.go:130] >       "repoDigests": [
	I0501 03:08:58.691241   51401 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0501 03:08:58.691255   51401 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0501 03:08:58.691269   51401 command_runner.go:130] >       ],
	I0501 03:08:58.691278   51401 command_runner.go:130] >       "size": "31470524",
	I0501 03:08:58.691287   51401 command_runner.go:130] >       "uid": null,
	I0501 03:08:58.691295   51401 command_runner.go:130] >       "username": "",
	I0501 03:08:58.691303   51401 command_runner.go:130] >       "spec": null,
	I0501 03:08:58.691311   51401 command_runner.go:130] >       "pinned": false
	I0501 03:08:58.691316   51401 command_runner.go:130] >     },
	I0501 03:08:58.691323   51401 command_runner.go:130] >     {
	I0501 03:08:58.691332   51401 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0501 03:08:58.691341   51401 command_runner.go:130] >       "repoTags": [
	I0501 03:08:58.691350   51401 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0501 03:08:58.691358   51401 command_runner.go:130] >       ],
	I0501 03:08:58.691364   51401 command_runner.go:130] >       "repoDigests": [
	I0501 03:08:58.691378   51401 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0501 03:08:58.691394   51401 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0501 03:08:58.691401   51401 command_runner.go:130] >       ],
	I0501 03:08:58.691407   51401 command_runner.go:130] >       "size": "61245718",
	I0501 03:08:58.691415   51401 command_runner.go:130] >       "uid": null,
	I0501 03:08:58.691424   51401 command_runner.go:130] >       "username": "nonroot",
	I0501 03:08:58.691433   51401 command_runner.go:130] >       "spec": null,
	I0501 03:08:58.691443   51401 command_runner.go:130] >       "pinned": false
	I0501 03:08:58.691450   51401 command_runner.go:130] >     },
	I0501 03:08:58.691458   51401 command_runner.go:130] >     {
	I0501 03:08:58.691466   51401 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0501 03:08:58.691475   51401 command_runner.go:130] >       "repoTags": [
	I0501 03:08:58.691485   51401 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0501 03:08:58.691493   51401 command_runner.go:130] >       ],
	I0501 03:08:58.691499   51401 command_runner.go:130] >       "repoDigests": [
	I0501 03:08:58.691512   51401 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0501 03:08:58.691526   51401 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0501 03:08:58.691536   51401 command_runner.go:130] >       ],
	I0501 03:08:58.691546   51401 command_runner.go:130] >       "size": "150779692",
	I0501 03:08:58.691555   51401 command_runner.go:130] >       "uid": {
	I0501 03:08:58.691564   51401 command_runner.go:130] >         "value": "0"
	I0501 03:08:58.691572   51401 command_runner.go:130] >       },
	I0501 03:08:58.691577   51401 command_runner.go:130] >       "username": "",
	I0501 03:08:58.691583   51401 command_runner.go:130] >       "spec": null,
	I0501 03:08:58.691592   51401 command_runner.go:130] >       "pinned": false
	I0501 03:08:58.691600   51401 command_runner.go:130] >     },
	I0501 03:08:58.691609   51401 command_runner.go:130] >     {
	I0501 03:08:58.691619   51401 command_runner.go:130] >       "id": "c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0",
	I0501 03:08:58.691629   51401 command_runner.go:130] >       "repoTags": [
	I0501 03:08:58.691641   51401 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.0"
	I0501 03:08:58.691650   51401 command_runner.go:130] >       ],
	I0501 03:08:58.691659   51401 command_runner.go:130] >       "repoDigests": [
	I0501 03:08:58.691675   51401 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:31282cf15b67192cd35f847715a9571f5dd4ac0e130290a408a866bd040bcd81",
	I0501 03:08:58.691689   51401 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3"
	I0501 03:08:58.691697   51401 command_runner.go:130] >       ],
	I0501 03:08:58.691707   51401 command_runner.go:130] >       "size": "117609952",
	I0501 03:08:58.691716   51401 command_runner.go:130] >       "uid": {
	I0501 03:08:58.691725   51401 command_runner.go:130] >         "value": "0"
	I0501 03:08:58.691734   51401 command_runner.go:130] >       },
	I0501 03:08:58.691742   51401 command_runner.go:130] >       "username": "",
	I0501 03:08:58.691752   51401 command_runner.go:130] >       "spec": null,
	I0501 03:08:58.691760   51401 command_runner.go:130] >       "pinned": false
	I0501 03:08:58.691768   51401 command_runner.go:130] >     },
	I0501 03:08:58.691776   51401 command_runner.go:130] >     {
	I0501 03:08:58.691790   51401 command_runner.go:130] >       "id": "c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b",
	I0501 03:08:58.691800   51401 command_runner.go:130] >       "repoTags": [
	I0501 03:08:58.691812   51401 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.0"
	I0501 03:08:58.691819   51401 command_runner.go:130] >       ],
	I0501 03:08:58.691825   51401 command_runner.go:130] >       "repoDigests": [
	I0501 03:08:58.691838   51401 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe",
	I0501 03:08:58.691850   51401 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:b7622a0826b7690a307eea994e2abc918f35a27a08e30c37b58c9e3f8336a450"
	I0501 03:08:58.691859   51401 command_runner.go:130] >       ],
	I0501 03:08:58.691870   51401 command_runner.go:130] >       "size": "112170310",
	I0501 03:08:58.691879   51401 command_runner.go:130] >       "uid": {
	I0501 03:08:58.691890   51401 command_runner.go:130] >         "value": "0"
	I0501 03:08:58.691898   51401 command_runner.go:130] >       },
	I0501 03:08:58.691904   51401 command_runner.go:130] >       "username": "",
	I0501 03:08:58.691912   51401 command_runner.go:130] >       "spec": null,
	I0501 03:08:58.691921   51401 command_runner.go:130] >       "pinned": false
	I0501 03:08:58.691928   51401 command_runner.go:130] >     },
	I0501 03:08:58.691936   51401 command_runner.go:130] >     {
	I0501 03:08:58.691944   51401 command_runner.go:130] >       "id": "a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b",
	I0501 03:08:58.691953   51401 command_runner.go:130] >       "repoTags": [
	I0501 03:08:58.691964   51401 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.0"
	I0501 03:08:58.691974   51401 command_runner.go:130] >       ],
	I0501 03:08:58.691984   51401 command_runner.go:130] >       "repoDigests": [
	I0501 03:08:58.692003   51401 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:880f26b53295d384d2f1fed06aa4d58567e3038157f70a1151a7dd8ef8afaa68",
	I0501 03:08:58.692018   51401 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210"
	I0501 03:08:58.692024   51401 command_runner.go:130] >       ],
	I0501 03:08:58.692032   51401 command_runner.go:130] >       "size": "85932953",
	I0501 03:08:58.692040   51401 command_runner.go:130] >       "uid": null,
	I0501 03:08:58.692050   51401 command_runner.go:130] >       "username": "",
	I0501 03:08:58.692058   51401 command_runner.go:130] >       "spec": null,
	I0501 03:08:58.692066   51401 command_runner.go:130] >       "pinned": false
	I0501 03:08:58.692071   51401 command_runner.go:130] >     },
	I0501 03:08:58.692079   51401 command_runner.go:130] >     {
	I0501 03:08:58.692088   51401 command_runner.go:130] >       "id": "259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced",
	I0501 03:08:58.692096   51401 command_runner.go:130] >       "repoTags": [
	I0501 03:08:58.692107   51401 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.0"
	I0501 03:08:58.692115   51401 command_runner.go:130] >       ],
	I0501 03:08:58.692124   51401 command_runner.go:130] >       "repoDigests": [
	I0501 03:08:58.692137   51401 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67",
	I0501 03:08:58.692151   51401 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d2c2a1d9de7a42d91bfedba5ed4f58126f9cff702d35419d78ce4e7cb07f3b7a"
	I0501 03:08:58.692160   51401 command_runner.go:130] >       ],
	I0501 03:08:58.692166   51401 command_runner.go:130] >       "size": "63026502",
	I0501 03:08:58.692176   51401 command_runner.go:130] >       "uid": {
	I0501 03:08:58.692185   51401 command_runner.go:130] >         "value": "0"
	I0501 03:08:58.692193   51401 command_runner.go:130] >       },
	I0501 03:08:58.692201   51401 command_runner.go:130] >       "username": "",
	I0501 03:08:58.692209   51401 command_runner.go:130] >       "spec": null,
	I0501 03:08:58.692218   51401 command_runner.go:130] >       "pinned": false
	I0501 03:08:58.692226   51401 command_runner.go:130] >     },
	I0501 03:08:58.692234   51401 command_runner.go:130] >     {
	I0501 03:08:58.692248   51401 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0501 03:08:58.692257   51401 command_runner.go:130] >       "repoTags": [
	I0501 03:08:58.692274   51401 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0501 03:08:58.692282   51401 command_runner.go:130] >       ],
	I0501 03:08:58.692291   51401 command_runner.go:130] >       "repoDigests": [
	I0501 03:08:58.692319   51401 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0501 03:08:58.692334   51401 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0501 03:08:58.692345   51401 command_runner.go:130] >       ],
	I0501 03:08:58.692354   51401 command_runner.go:130] >       "size": "750414",
	I0501 03:08:58.692363   51401 command_runner.go:130] >       "uid": {
	I0501 03:08:58.692371   51401 command_runner.go:130] >         "value": "65535"
	I0501 03:08:58.692379   51401 command_runner.go:130] >       },
	I0501 03:08:58.692388   51401 command_runner.go:130] >       "username": "",
	I0501 03:08:58.692397   51401 command_runner.go:130] >       "spec": null,
	I0501 03:08:58.692406   51401 command_runner.go:130] >       "pinned": true
	I0501 03:08:58.692414   51401 command_runner.go:130] >     }
	I0501 03:08:58.692420   51401 command_runner.go:130] >   ]
	I0501 03:08:58.692428   51401 command_runner.go:130] > }
	I0501 03:08:58.692767   51401 crio.go:514] all images are preloaded for cri-o runtime.
	I0501 03:08:58.692794   51401 cache_images.go:84] Images are preloaded, skipping loading
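The two listings above are the input to minikube's preload check: it shells out to sudo crictl images --output json and compares the returned repoTags against the images required for Kubernetes v1.30.0 before concluding that extraction and cache loading can be skipped. A minimal sketch of that kind of check in Go follows; it only assumes that crictl is installed and runnable via sudo, and it is not minikube's actual implementation.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// crictlListing mirrors the fields visible in the JSON output logged above.
type crictlListing struct {
	Images []struct {
		RepoTags    []string `json:"repoTags"`
		RepoDigests []string `json:"repoDigests"`
		Pinned      bool     `json:"pinned"`
	} `json:"images"`
}

// imagesPreloaded reports whether every required tag is already present in the
// CRI-O image store, according to crictl.
func imagesPreloaded(required []string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var listing crictlListing
	if err := json.Unmarshal(out, &listing); err != nil {
		return false, err
	}
	have := make(map[string]bool)
	for _, img := range listing.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	for _, want := range required {
		if !have[want] {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	ok, err := imagesPreloaded([]string{
		"registry.k8s.io/kube-apiserver:v1.30.0",
		"registry.k8s.io/kube-scheduler:v1.30.0",
		"registry.k8s.io/pause:3.9",
	})
	fmt.Println(ok, err)
}

When every required tag is found, the check succeeds, which is what the "all images are preloaded for cri-o runtime" and "Images are preloaded, skipping loading" lines above report.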
	I0501 03:08:58.692808   51401 kubeadm.go:928] updating node { 192.168.39.139 8443 v1.30.0 crio true true} ...
	I0501 03:08:58.692923   51401 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-282238 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.139
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:multinode-282238 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
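The kubelet override above is rendered from the per-node values in the cluster config that follows it (KubernetesVersion v1.30.0, ClusterName multinode-282238, ContainerRuntime crio, node IP 192.168.39.139); only the hostname override and --node-ip change from node to node. A hedged sketch of producing such an override with Go's text/template is shown below; the template text and field names are illustrative stand-ins, not minikube's actual assets.

package main

import (
	"os"
	"text/template"
)

// kubeletUnit is an illustrative systemd override matching the shape logged by
// kubeadm.go:940 above; it is not copied from minikube's template assets.
const kubeletUnit = `[Unit]
Wants={{.ContainerRuntime}}.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	tmpl := template.Must(template.New("kubelet").Parse(kubeletUnit))
	err := tmpl.Execute(os.Stdout, map[string]string{
		"ContainerRuntime":  "crio",
		"KubernetesVersion": "v1.30.0",
		"NodeName":          "multinode-282238",
		"NodeIP":            "192.168.39.139",
	})
	if err != nil {
		panic(err)
	}
}

Executing it prints the same [Unit]/[Service]/[Install] block that kubeadm.go:940 logged above for multinode-282238, after which the runner moves on to dumping the effective CRI-O configuration with crio config.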
	I0501 03:08:58.692997   51401 ssh_runner.go:195] Run: crio config
	I0501 03:08:58.741538   51401 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0501 03:08:58.741570   51401 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0501 03:08:58.741580   51401 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0501 03:08:58.741584   51401 command_runner.go:130] > #
	I0501 03:08:58.741595   51401 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0501 03:08:58.741604   51401 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0501 03:08:58.741614   51401 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0501 03:08:58.741629   51401 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0501 03:08:58.741642   51401 command_runner.go:130] > # reload'.
	I0501 03:08:58.741654   51401 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0501 03:08:58.741668   51401 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0501 03:08:58.741682   51401 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0501 03:08:58.741695   51401 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0501 03:08:58.741704   51401 command_runner.go:130] > [crio]
	I0501 03:08:58.741715   51401 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0501 03:08:58.741727   51401 command_runner.go:130] > # containers images, in this directory.
	I0501 03:08:58.741941   51401 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0501 03:08:58.741973   51401 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0501 03:08:58.742110   51401 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0501 03:08:58.742127   51401 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0501 03:08:58.742439   51401 command_runner.go:130] > # imagestore = ""
	I0501 03:08:58.742454   51401 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0501 03:08:58.742465   51401 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0501 03:08:58.742476   51401 command_runner.go:130] > storage_driver = "overlay"
	I0501 03:08:58.742489   51401 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0501 03:08:58.742503   51401 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0501 03:08:58.742510   51401 command_runner.go:130] > storage_option = [
	I0501 03:08:58.742685   51401 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0501 03:08:58.742755   51401 command_runner.go:130] > ]
	I0501 03:08:58.742771   51401 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0501 03:08:58.742781   51401 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0501 03:08:58.743264   51401 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0501 03:08:58.743280   51401 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0501 03:08:58.743291   51401 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0501 03:08:58.743298   51401 command_runner.go:130] > # always happen on a node reboot
	I0501 03:08:58.743621   51401 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0501 03:08:58.743642   51401 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0501 03:08:58.743652   51401 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0501 03:08:58.743664   51401 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0501 03:08:58.743922   51401 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0501 03:08:58.743937   51401 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0501 03:08:58.743953   51401 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0501 03:08:58.744308   51401 command_runner.go:130] > # internal_wipe = true
	I0501 03:08:58.744325   51401 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0501 03:08:58.744333   51401 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0501 03:08:58.744625   51401 command_runner.go:130] > # internal_repair = false
	I0501 03:08:58.744637   51401 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0501 03:08:58.744647   51401 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0501 03:08:58.744656   51401 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0501 03:08:58.745081   51401 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0501 03:08:58.745094   51401 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0501 03:08:58.745100   51401 command_runner.go:130] > [crio.api]
	I0501 03:08:58.745108   51401 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0501 03:08:58.745453   51401 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0501 03:08:58.745465   51401 command_runner.go:130] > # IP address on which the stream server will listen.
	I0501 03:08:58.745957   51401 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0501 03:08:58.745971   51401 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0501 03:08:58.745980   51401 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0501 03:08:58.746317   51401 command_runner.go:130] > # stream_port = "0"
	I0501 03:08:58.746337   51401 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0501 03:08:58.746678   51401 command_runner.go:130] > # stream_enable_tls = false
	I0501 03:08:58.746693   51401 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0501 03:08:58.746947   51401 command_runner.go:130] > # stream_idle_timeout = ""
	I0501 03:08:58.746966   51401 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0501 03:08:58.746976   51401 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0501 03:08:58.746983   51401 command_runner.go:130] > # minutes.
	I0501 03:08:58.747056   51401 command_runner.go:130] > # stream_tls_cert = ""
	I0501 03:08:58.747071   51401 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0501 03:08:58.747081   51401 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0501 03:08:58.747088   51401 command_runner.go:130] > # stream_tls_key = ""
	I0501 03:08:58.747099   51401 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0501 03:08:58.747115   51401 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0501 03:08:58.747147   51401 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0501 03:08:58.747157   51401 command_runner.go:130] > # stream_tls_ca = ""
	I0501 03:08:58.747170   51401 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0501 03:08:58.747183   51401 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0501 03:08:58.747205   51401 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0501 03:08:58.747217   51401 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0501 03:08:58.747231   51401 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0501 03:08:58.747245   51401 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0501 03:08:58.747254   51401 command_runner.go:130] > [crio.runtime]
	I0501 03:08:58.747264   51401 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0501 03:08:58.747283   51401 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0501 03:08:58.747299   51401 command_runner.go:130] > # "nofile=1024:2048"
	I0501 03:08:58.747313   51401 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0501 03:08:58.747323   51401 command_runner.go:130] > # default_ulimits = [
	I0501 03:08:58.747331   51401 command_runner.go:130] > # ]
	I0501 03:08:58.747342   51401 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0501 03:08:58.747354   51401 command_runner.go:130] > # no_pivot = false
	I0501 03:08:58.747367   51401 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0501 03:08:58.747380   51401 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0501 03:08:58.747392   51401 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0501 03:08:58.747406   51401 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0501 03:08:58.747417   51401 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0501 03:08:58.747429   51401 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0501 03:08:58.747440   51401 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0501 03:08:58.747448   51401 command_runner.go:130] > # Cgroup setting for conmon
	I0501 03:08:58.747464   51401 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0501 03:08:58.747474   51401 command_runner.go:130] > conmon_cgroup = "pod"
	I0501 03:08:58.747488   51401 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0501 03:08:58.747497   51401 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0501 03:08:58.747511   51401 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0501 03:08:58.747521   51401 command_runner.go:130] > conmon_env = [
	I0501 03:08:58.747532   51401 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0501 03:08:58.747540   51401 command_runner.go:130] > ]
	I0501 03:08:58.747549   51401 command_runner.go:130] > # Additional environment variables to set for all the
	I0501 03:08:58.747561   51401 command_runner.go:130] > # containers. These are overridden if set in the
	I0501 03:08:58.747574   51401 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0501 03:08:58.747584   51401 command_runner.go:130] > # default_env = [
	I0501 03:08:58.747590   51401 command_runner.go:130] > # ]
	I0501 03:08:58.747603   51401 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0501 03:08:58.747619   51401 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0501 03:08:58.747641   51401 command_runner.go:130] > # selinux = false
	I0501 03:08:58.747656   51401 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0501 03:08:58.747670   51401 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0501 03:08:58.747686   51401 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0501 03:08:58.747694   51401 command_runner.go:130] > # seccomp_profile = ""
	I0501 03:08:58.747707   51401 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0501 03:08:58.747721   51401 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0501 03:08:58.747735   51401 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0501 03:08:58.747746   51401 command_runner.go:130] > # which might increase security.
	I0501 03:08:58.747754   51401 command_runner.go:130] > # This option is currently deprecated,
	I0501 03:08:58.747768   51401 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0501 03:08:58.747778   51401 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0501 03:08:58.747788   51401 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0501 03:08:58.747802   51401 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0501 03:08:58.747815   51401 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0501 03:08:58.747829   51401 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0501 03:08:58.747840   51401 command_runner.go:130] > # This option supports live configuration reload.
	I0501 03:08:58.747849   51401 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0501 03:08:58.747861   51401 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0501 03:08:58.747873   51401 command_runner.go:130] > # the cgroup blockio controller.
	I0501 03:08:58.747882   51401 command_runner.go:130] > # blockio_config_file = ""
	I0501 03:08:58.747894   51401 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0501 03:08:58.747904   51401 command_runner.go:130] > # blockio parameters.
	I0501 03:08:58.747912   51401 command_runner.go:130] > # blockio_reload = false
	I0501 03:08:58.747923   51401 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0501 03:08:58.747930   51401 command_runner.go:130] > # irqbalance daemon.
	I0501 03:08:58.747937   51401 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0501 03:08:58.747946   51401 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0501 03:08:58.747953   51401 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0501 03:08:58.747962   51401 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0501 03:08:58.747967   51401 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0501 03:08:58.747976   51401 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0501 03:08:58.747987   51401 command_runner.go:130] > # This option supports live configuration reload.
	I0501 03:08:58.747997   51401 command_runner.go:130] > # rdt_config_file = ""
	I0501 03:08:58.748009   51401 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0501 03:08:58.748019   51401 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0501 03:08:58.748046   51401 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0501 03:08:58.748054   51401 command_runner.go:130] > # separate_pull_cgroup = ""
	I0501 03:08:58.748060   51401 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0501 03:08:58.748068   51401 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0501 03:08:58.748077   51401 command_runner.go:130] > # will be added.
	I0501 03:08:58.748084   51401 command_runner.go:130] > # default_capabilities = [
	I0501 03:08:58.748094   51401 command_runner.go:130] > # 	"CHOWN",
	I0501 03:08:58.748101   51401 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0501 03:08:58.748110   51401 command_runner.go:130] > # 	"FSETID",
	I0501 03:08:58.748116   51401 command_runner.go:130] > # 	"FOWNER",
	I0501 03:08:58.748123   51401 command_runner.go:130] > # 	"SETGID",
	I0501 03:08:58.748129   51401 command_runner.go:130] > # 	"SETUID",
	I0501 03:08:58.748138   51401 command_runner.go:130] > # 	"SETPCAP",
	I0501 03:08:58.748144   51401 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0501 03:08:58.748157   51401 command_runner.go:130] > # 	"KILL",
	I0501 03:08:58.748166   51401 command_runner.go:130] > # ]
	I0501 03:08:58.748179   51401 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0501 03:08:58.748192   51401 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0501 03:08:58.748203   51401 command_runner.go:130] > # add_inheritable_capabilities = false
	I0501 03:08:58.748213   51401 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0501 03:08:58.748226   51401 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0501 03:08:58.748232   51401 command_runner.go:130] > default_sysctls = [
	I0501 03:08:58.748237   51401 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0501 03:08:58.748241   51401 command_runner.go:130] > ]
	I0501 03:08:58.748249   51401 command_runner.go:130] > # List of devices on the host that a
	I0501 03:08:58.748263   51401 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0501 03:08:58.748277   51401 command_runner.go:130] > # allowed_devices = [
	I0501 03:08:58.748283   51401 command_runner.go:130] > # 	"/dev/fuse",
	I0501 03:08:58.748291   51401 command_runner.go:130] > # ]
	I0501 03:08:58.748300   51401 command_runner.go:130] > # List of additional devices. specified as
	I0501 03:08:58.748314   51401 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0501 03:08:58.748323   51401 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0501 03:08:58.748331   51401 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0501 03:08:58.748341   51401 command_runner.go:130] > # additional_devices = [
	I0501 03:08:58.748351   51401 command_runner.go:130] > # ]
	I0501 03:08:58.748360   51401 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0501 03:08:58.748374   51401 command_runner.go:130] > # cdi_spec_dirs = [
	I0501 03:08:58.748384   51401 command_runner.go:130] > # 	"/etc/cdi",
	I0501 03:08:58.748391   51401 command_runner.go:130] > # 	"/var/run/cdi",
	I0501 03:08:58.748398   51401 command_runner.go:130] > # ]
	I0501 03:08:58.748406   51401 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0501 03:08:58.748416   51401 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0501 03:08:58.748422   51401 command_runner.go:130] > # Defaults to false.
	I0501 03:08:58.748434   51401 command_runner.go:130] > # device_ownership_from_security_context = false
	I0501 03:08:58.748449   51401 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0501 03:08:58.748461   51401 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0501 03:08:58.748468   51401 command_runner.go:130] > # hooks_dir = [
	I0501 03:08:58.748477   51401 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0501 03:08:58.748483   51401 command_runner.go:130] > # ]
	I0501 03:08:58.748492   51401 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0501 03:08:58.748502   51401 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0501 03:08:58.748512   51401 command_runner.go:130] > # its default mounts from the following two files:
	I0501 03:08:58.748520   51401 command_runner.go:130] > #
	I0501 03:08:58.748531   51401 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0501 03:08:58.748544   51401 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0501 03:08:58.748554   51401 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0501 03:08:58.748560   51401 command_runner.go:130] > #
	I0501 03:08:58.748573   51401 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0501 03:08:58.748583   51401 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0501 03:08:58.748592   51401 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0501 03:08:58.748603   51401 command_runner.go:130] > #      only add mounts it finds in this file.
	I0501 03:08:58.748612   51401 command_runner.go:130] > #
	I0501 03:08:58.748621   51401 command_runner.go:130] > # default_mounts_file = ""
	I0501 03:08:58.748632   51401 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0501 03:08:58.748646   51401 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0501 03:08:58.748657   51401 command_runner.go:130] > pids_limit = 1024
	I0501 03:08:58.748664   51401 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0501 03:08:58.748676   51401 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0501 03:08:58.748689   51401 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0501 03:08:58.748706   51401 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0501 03:08:58.748715   51401 command_runner.go:130] > # log_size_max = -1
	I0501 03:08:58.748727   51401 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0501 03:08:58.748738   51401 command_runner.go:130] > # log_to_journald = false
	I0501 03:08:58.748749   51401 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0501 03:08:58.748755   51401 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0501 03:08:58.748766   51401 command_runner.go:130] > # Path to directory for container attach sockets.
	I0501 03:08:58.748775   51401 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0501 03:08:58.748787   51401 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0501 03:08:58.748797   51401 command_runner.go:130] > # bind_mount_prefix = ""
	I0501 03:08:58.748810   51401 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0501 03:08:58.748819   51401 command_runner.go:130] > # read_only = false
	I0501 03:08:58.748832   51401 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0501 03:08:58.748842   51401 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0501 03:08:58.748847   51401 command_runner.go:130] > # live configuration reload.
	I0501 03:08:58.748857   51401 command_runner.go:130] > # log_level = "info"
	I0501 03:08:58.748866   51401 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0501 03:08:58.748878   51401 command_runner.go:130] > # This option supports live configuration reload.
	I0501 03:08:58.748884   51401 command_runner.go:130] > # log_filter = ""
	I0501 03:08:58.748897   51401 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0501 03:08:58.748910   51401 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0501 03:08:58.748919   51401 command_runner.go:130] > # separated by comma.
	I0501 03:08:58.748927   51401 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0501 03:08:58.748935   51401 command_runner.go:130] > # uid_mappings = ""
	I0501 03:08:58.748944   51401 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0501 03:08:58.748958   51401 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0501 03:08:58.748968   51401 command_runner.go:130] > # separated by comma.
	I0501 03:08:58.748983   51401 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0501 03:08:58.748993   51401 command_runner.go:130] > # gid_mappings = ""
	I0501 03:08:58.749005   51401 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0501 03:08:58.749015   51401 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0501 03:08:58.749027   51401 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0501 03:08:58.749043   51401 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0501 03:08:58.749053   51401 command_runner.go:130] > # minimum_mappable_uid = -1
	I0501 03:08:58.749066   51401 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0501 03:08:58.749079   51401 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0501 03:08:58.749091   51401 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0501 03:08:58.749099   51401 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0501 03:08:58.749105   51401 command_runner.go:130] > # minimum_mappable_gid = -1
	I0501 03:08:58.749116   51401 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0501 03:08:58.749130   51401 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0501 03:08:58.749138   51401 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0501 03:08:58.749148   51401 command_runner.go:130] > # ctr_stop_timeout = 30
	I0501 03:08:58.749158   51401 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0501 03:08:58.749170   51401 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0501 03:08:58.749185   51401 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0501 03:08:58.749196   51401 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0501 03:08:58.749211   51401 command_runner.go:130] > drop_infra_ctr = false
	I0501 03:08:58.749222   51401 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0501 03:08:58.749234   51401 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0501 03:08:58.749246   51401 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0501 03:08:58.749256   51401 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0501 03:08:58.749267   51401 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0501 03:08:58.749279   51401 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0501 03:08:58.749288   51401 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0501 03:08:58.749300   51401 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0501 03:08:58.749310   51401 command_runner.go:130] > # shared_cpuset = ""
	I0501 03:08:58.749320   51401 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0501 03:08:58.749332   51401 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0501 03:08:58.749342   51401 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0501 03:08:58.749355   51401 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0501 03:08:58.749366   51401 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0501 03:08:58.749379   51401 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0501 03:08:58.749392   51401 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0501 03:08:58.749402   51401 command_runner.go:130] > # enable_criu_support = false
	I0501 03:08:58.749413   51401 command_runner.go:130] > # Enable/disable the generation of the container,
	I0501 03:08:58.749425   51401 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0501 03:08:58.749435   51401 command_runner.go:130] > # enable_pod_events = false
	I0501 03:08:58.749443   51401 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0501 03:08:58.749453   51401 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0501 03:08:58.749462   51401 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0501 03:08:58.749472   51401 command_runner.go:130] > # default_runtime = "runc"
	I0501 03:08:58.749481   51401 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0501 03:08:58.749496   51401 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0501 03:08:58.749513   51401 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0501 03:08:58.749530   51401 command_runner.go:130] > # creation as a file is not desired either.
	I0501 03:08:58.749544   51401 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0501 03:08:58.749556   51401 command_runner.go:130] > # the hostname is being managed dynamically.
	I0501 03:08:58.749566   51401 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0501 03:08:58.749572   51401 command_runner.go:130] > # ]
	I0501 03:08:58.749584   51401 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0501 03:08:58.749597   51401 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0501 03:08:58.749610   51401 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0501 03:08:58.749618   51401 command_runner.go:130] > # Each entry in the table should follow the format:
	I0501 03:08:58.749621   51401 command_runner.go:130] > #
	I0501 03:08:58.749628   51401 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0501 03:08:58.749639   51401 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0501 03:08:58.749698   51401 command_runner.go:130] > # runtime_type = "oci"
	I0501 03:08:58.749705   51401 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0501 03:08:58.749711   51401 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0501 03:08:58.749718   51401 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0501 03:08:58.749729   51401 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0501 03:08:58.749735   51401 command_runner.go:130] > # monitor_env = []
	I0501 03:08:58.749746   51401 command_runner.go:130] > # privileged_without_host_devices = false
	I0501 03:08:58.749754   51401 command_runner.go:130] > # allowed_annotations = []
	I0501 03:08:58.749766   51401 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0501 03:08:58.749775   51401 command_runner.go:130] > # Where:
	I0501 03:08:58.749784   51401 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0501 03:08:58.749793   51401 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0501 03:08:58.749802   51401 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0501 03:08:58.749816   51401 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0501 03:08:58.749823   51401 command_runner.go:130] > #   in $PATH.
	I0501 03:08:58.749836   51401 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0501 03:08:58.749845   51401 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0501 03:08:58.749858   51401 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0501 03:08:58.749865   51401 command_runner.go:130] > #   state.
	I0501 03:08:58.749875   51401 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0501 03:08:58.749883   51401 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0501 03:08:58.749893   51401 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0501 03:08:58.749905   51401 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0501 03:08:58.749919   51401 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0501 03:08:58.749935   51401 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0501 03:08:58.749945   51401 command_runner.go:130] > #   The currently recognized values are:
	I0501 03:08:58.749956   51401 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0501 03:08:58.749965   51401 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0501 03:08:58.749973   51401 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0501 03:08:58.749985   51401 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0501 03:08:58.750001   51401 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0501 03:08:58.750014   51401 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0501 03:08:58.750028   51401 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0501 03:08:58.750041   51401 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0501 03:08:58.750049   51401 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0501 03:08:58.750057   51401 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0501 03:08:58.750066   51401 command_runner.go:130] > #   deprecated option "conmon".
	I0501 03:08:58.750078   51401 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0501 03:08:58.750089   51401 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0501 03:08:58.750102   51401 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0501 03:08:58.750113   51401 command_runner.go:130] > #   should be moved to the container's cgroup
	I0501 03:08:58.750124   51401 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the montior.
	I0501 03:08:58.750134   51401 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0501 03:08:58.750142   51401 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0501 03:08:58.750153   51401 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0501 03:08:58.750159   51401 command_runner.go:130] > #
	I0501 03:08:58.750166   51401 command_runner.go:130] > # Using the seccomp notifier feature:
	I0501 03:08:58.750174   51401 command_runner.go:130] > #
	I0501 03:08:58.750184   51401 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0501 03:08:58.750197   51401 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0501 03:08:58.750204   51401 command_runner.go:130] > #
	I0501 03:08:58.750215   51401 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0501 03:08:58.750224   51401 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0501 03:08:58.750228   51401 command_runner.go:130] > #
	I0501 03:08:58.750238   51401 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0501 03:08:58.750248   51401 command_runner.go:130] > # feature.
	I0501 03:08:58.750253   51401 command_runner.go:130] > #
	I0501 03:08:58.750266   51401 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0501 03:08:58.750282   51401 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0501 03:08:58.750295   51401 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0501 03:08:58.750310   51401 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0501 03:08:58.750323   51401 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0501 03:08:58.750331   51401 command_runner.go:130] > #
	I0501 03:08:58.750342   51401 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0501 03:08:58.750354   51401 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0501 03:08:58.750362   51401 command_runner.go:130] > #
	I0501 03:08:58.750372   51401 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0501 03:08:58.750384   51401 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0501 03:08:58.750388   51401 command_runner.go:130] > #
	I0501 03:08:58.750409   51401 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0501 03:08:58.750424   51401 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0501 03:08:58.750433   51401 command_runner.go:130] > # limitation.
	I0501 03:08:58.750441   51401 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0501 03:08:58.750451   51401 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0501 03:08:58.750460   51401 command_runner.go:130] > runtime_type = "oci"
	I0501 03:08:58.750468   51401 command_runner.go:130] > runtime_root = "/run/runc"
	I0501 03:08:58.750478   51401 command_runner.go:130] > runtime_config_path = ""
	I0501 03:08:58.750485   51401 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0501 03:08:58.750496   51401 command_runner.go:130] > monitor_cgroup = "pod"
	I0501 03:08:58.750503   51401 command_runner.go:130] > monitor_exec_cgroup = ""
	I0501 03:08:58.750512   51401 command_runner.go:130] > monitor_env = [
	I0501 03:08:58.750521   51401 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0501 03:08:58.750529   51401 command_runner.go:130] > ]
	I0501 03:08:58.750537   51401 command_runner.go:130] > privileged_without_host_devices = false
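
The seccomp-notifier comments above boil down to two pieces: a runtime handler whose allowed_annotations includes "io.kubernetes.cri-o.seccompNotifierAction", and a pod that carries that annotation (with restartPolicy Never). As a minimal sketch only, assuming root access on the node and a CRI-O/runc combination at least as new as the versions the comments require, a drop-in that layers such a handler next to the runc settings shown above could look like this (the drop-in filename and handler name are arbitrary choices for illustration, not values from this run):

#!/usr/bin/env bash
# Sketch: add a runtime handler that permits the seccomp notifier annotation.
set -euo pipefail

cat >/etc/crio/crio.conf.d/20-seccomp-notifier.conf <<'EOF'
[crio.runtime.runtimes.runc-notify]
runtime_path = "/usr/bin/runc"
runtime_type = "oci"
runtime_root = "/run/runc-notify"
# Allow the per-pod annotation described in the comments above.
allowed_annotations = ["io.kubernetes.cri-o.seccompNotifierAction"]
EOF

systemctl restart crio

# Pods then select this handler via a Kubernetes RuntimeClass whose handler is
# "runc-notify", set io.kubernetes.cri-o.seccompNotifierAction=stop in the pod
# spec, and use restartPolicy: Never so the kubelet does not restart the
# container the notifier terminates.
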
	I0501 03:08:58.750549   51401 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0501 03:08:58.750558   51401 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0501 03:08:58.750566   51401 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0501 03:08:58.750582   51401 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0501 03:08:58.750595   51401 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0501 03:08:58.750607   51401 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0501 03:08:58.750622   51401 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0501 03:08:58.750637   51401 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0501 03:08:58.750645   51401 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0501 03:08:58.750657   51401 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0501 03:08:58.750666   51401 command_runner.go:130] > # Example:
	I0501 03:08:58.750674   51401 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0501 03:08:58.750692   51401 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0501 03:08:58.750703   51401 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0501 03:08:58.750715   51401 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0501 03:08:58.750723   51401 command_runner.go:130] > # cpuset = 0
	I0501 03:08:58.750728   51401 command_runner.go:130] > # cpushares = "0-1"
	I0501 03:08:58.750735   51401 command_runner.go:130] > # Where:
	I0501 03:08:58.750743   51401 command_runner.go:130] > # The workload name is workload-type.
	I0501 03:08:58.750758   51401 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0501 03:08:58.750770   51401 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0501 03:08:58.750780   51401 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0501 03:08:58.750796   51401 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0501 03:08:58.750808   51401 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
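
Turning the EXPERIMENTAL workloads example from the comments above into an actual drop-in is mostly a matter of copying it into crio.conf.d and restarting CRI-O. The sketch below does exactly that; the table name, annotations and resource defaults are copied verbatim from the example in the comments, and the drop-in path is an assumption for illustration:

#!/usr/bin/env bash
# Sketch: install the workloads example shown in the config comments above.
set -euo pipefail

cat >/etc/crio/crio.conf.d/30-workloads.conf <<'EOF'
[crio.runtime.workloads.workload-type]
activation_annotation = "io.crio/workload"
annotation_prefix = "io.crio.workload-type"
[crio.runtime.workloads.workload-type.resources]
# Values copied from the example above.
cpuset = 0
cpushares = "0-1"
EOF

systemctl restart crio

# A pod opts in by carrying the activation annotation "io.crio/workload"
# in its spec (key only, the value is ignored).
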
	I0501 03:08:58.750816   51401 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0501 03:08:58.750824   51401 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0501 03:08:58.750835   51401 command_runner.go:130] > # Default value is set to true
	I0501 03:08:58.750845   51401 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0501 03:08:58.750863   51401 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0501 03:08:58.750874   51401 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0501 03:08:58.750880   51401 command_runner.go:130] > # Default value is set to 'false'
	I0501 03:08:58.750890   51401 command_runner.go:130] > # disable_hostport_mapping = false
	I0501 03:08:58.750898   51401 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0501 03:08:58.750904   51401 command_runner.go:130] > #
	I0501 03:08:58.750914   51401 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0501 03:08:58.750928   51401 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0501 03:08:58.750943   51401 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0501 03:08:58.750953   51401 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0501 03:08:58.750962   51401 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0501 03:08:58.750967   51401 command_runner.go:130] > [crio.image]
	I0501 03:08:58.750977   51401 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0501 03:08:58.750983   51401 command_runner.go:130] > # default_transport = "docker://"
	I0501 03:08:58.750989   51401 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0501 03:08:58.750999   51401 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0501 03:08:58.751005   51401 command_runner.go:130] > # global_auth_file = ""
	I0501 03:08:58.751013   51401 command_runner.go:130] > # The image used to instantiate infra containers.
	I0501 03:08:58.751022   51401 command_runner.go:130] > # This option supports live configuration reload.
	I0501 03:08:58.751030   51401 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0501 03:08:58.751047   51401 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0501 03:08:58.751057   51401 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0501 03:08:58.751065   51401 command_runner.go:130] > # This option supports live configuration reload.
	I0501 03:08:58.751070   51401 command_runner.go:130] > # pause_image_auth_file = ""
	I0501 03:08:58.751076   51401 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0501 03:08:58.751085   51401 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0501 03:08:58.751095   51401 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0501 03:08:58.751104   51401 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0501 03:08:58.751110   51401 command_runner.go:130] > # pause_command = "/pause"
	I0501 03:08:58.751120   51401 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0501 03:08:58.751130   51401 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0501 03:08:58.751138   51401 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0501 03:08:58.751148   51401 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0501 03:08:58.751155   51401 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0501 03:08:58.751160   51401 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0501 03:08:58.751165   51401 command_runner.go:130] > # pinned_images = [
	I0501 03:08:58.751173   51401 command_runner.go:130] > # ]
	I0501 03:08:58.751183   51401 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0501 03:08:58.751198   51401 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0501 03:08:58.751211   51401 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0501 03:08:58.751223   51401 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0501 03:08:58.751235   51401 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0501 03:08:58.751240   51401 command_runner.go:130] > # signature_policy = ""
	I0501 03:08:58.751245   51401 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0501 03:08:58.751258   51401 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0501 03:08:58.751276   51401 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0501 03:08:58.751289   51401 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0501 03:08:58.751301   51401 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0501 03:08:58.751312   51401 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0501 03:08:58.751324   51401 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0501 03:08:58.751332   51401 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0501 03:08:58.751337   51401 command_runner.go:130] > # changing them here.
	I0501 03:08:58.751347   51401 command_runner.go:130] > # insecure_registries = [
	I0501 03:08:58.751352   51401 command_runner.go:130] > # ]
	I0501 03:08:58.751365   51401 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0501 03:08:58.751377   51401 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0501 03:08:58.751392   51401 command_runner.go:130] > # image_volumes = "mkdir"
	I0501 03:08:58.751404   51401 command_runner.go:130] > # Temporary directory to use for storing big files
	I0501 03:08:58.751410   51401 command_runner.go:130] > # big_files_temporary_dir = ""
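
The crio.image comments above note that registry defaults come from /etc/containers/registries.conf, so a CRI-O-side override is usually limited to a handful of image settings. A minimal sketch, using only keys that appear in the config above (pause_image, pinned_images, insecure_registries); the registry host is a placeholder, not a value from this run:

#!/usr/bin/env bash
# Sketch: override a few crio.image settings via a drop-in.
set -euo pipefail

cat >/etc/crio/crio.conf.d/40-images.conf <<'EOF'
[crio.image]
# Keep the pause image pinned so kubelet image GC never removes it.
pause_image = "registry.k8s.io/pause:3.9"
pinned_images = ["registry.k8s.io/pause:3.9"]
# Registries listed here skip TLS verification; as the comments above say,
# prefer configuring /etc/containers/registries.conf instead.
insecure_registries = ["registry.local:5000"]
EOF

systemctl restart crio
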
	I0501 03:08:58.751419   51401 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0501 03:08:58.751423   51401 command_runner.go:130] > # CNI plugins.
	I0501 03:08:58.751429   51401 command_runner.go:130] > [crio.network]
	I0501 03:08:58.751434   51401 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0501 03:08:58.751442   51401 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0501 03:08:58.751449   51401 command_runner.go:130] > # cni_default_network = ""
	I0501 03:08:58.751462   51401 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0501 03:08:58.751472   51401 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0501 03:08:58.751484   51401 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0501 03:08:58.751493   51401 command_runner.go:130] > # plugin_dirs = [
	I0501 03:08:58.751500   51401 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0501 03:08:58.751508   51401 command_runner.go:130] > # ]
	I0501 03:08:58.751518   51401 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0501 03:08:58.751524   51401 command_runner.go:130] > [crio.metrics]
	I0501 03:08:58.751528   51401 command_runner.go:130] > # Globally enable or disable metrics support.
	I0501 03:08:58.751534   51401 command_runner.go:130] > enable_metrics = true
	I0501 03:08:58.751538   51401 command_runner.go:130] > # Specify enabled metrics collectors.
	I0501 03:08:58.751545   51401 command_runner.go:130] > # Per default all metrics are enabled.
	I0501 03:08:58.751551   51401 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0501 03:08:58.751559   51401 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0501 03:08:58.751565   51401 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0501 03:08:58.751571   51401 command_runner.go:130] > # metrics_collectors = [
	I0501 03:08:58.751574   51401 command_runner.go:130] > # 	"operations",
	I0501 03:08:58.751579   51401 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0501 03:08:58.751585   51401 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0501 03:08:58.751589   51401 command_runner.go:130] > # 	"operations_errors",
	I0501 03:08:58.751595   51401 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0501 03:08:58.751605   51401 command_runner.go:130] > # 	"image_pulls_by_name",
	I0501 03:08:58.751612   51401 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0501 03:08:58.751624   51401 command_runner.go:130] > # 	"image_pulls_failures",
	I0501 03:08:58.751632   51401 command_runner.go:130] > # 	"image_pulls_successes",
	I0501 03:08:58.751639   51401 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0501 03:08:58.751648   51401 command_runner.go:130] > # 	"image_layer_reuse",
	I0501 03:08:58.751662   51401 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0501 03:08:58.751669   51401 command_runner.go:130] > # 	"containers_oom_total",
	I0501 03:08:58.751673   51401 command_runner.go:130] > # 	"containers_oom",
	I0501 03:08:58.751679   51401 command_runner.go:130] > # 	"processes_defunct",
	I0501 03:08:58.751683   51401 command_runner.go:130] > # 	"operations_total",
	I0501 03:08:58.751687   51401 command_runner.go:130] > # 	"operations_latency_seconds",
	I0501 03:08:58.751694   51401 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0501 03:08:58.751698   51401 command_runner.go:130] > # 	"operations_errors_total",
	I0501 03:08:58.751703   51401 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0501 03:08:58.751708   51401 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0501 03:08:58.751712   51401 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0501 03:08:58.751716   51401 command_runner.go:130] > # 	"image_pulls_success_total",
	I0501 03:08:58.751723   51401 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0501 03:08:58.751727   51401 command_runner.go:130] > # 	"containers_oom_count_total",
	I0501 03:08:58.751732   51401 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0501 03:08:58.751738   51401 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0501 03:08:58.751741   51401 command_runner.go:130] > # ]
	I0501 03:08:58.751746   51401 command_runner.go:130] > # The port on which the metrics server will listen.
	I0501 03:08:58.751752   51401 command_runner.go:130] > # metrics_port = 9090
	I0501 03:08:58.751757   51401 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0501 03:08:58.751763   51401 command_runner.go:130] > # metrics_socket = ""
	I0501 03:08:58.751767   51401 command_runner.go:130] > # The certificate for the secure metrics server.
	I0501 03:08:58.751776   51401 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0501 03:08:58.751782   51401 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0501 03:08:58.751787   51401 command_runner.go:130] > # certificate on any modification event.
	I0501 03:08:58.751793   51401 command_runner.go:130] > # metrics_cert = ""
	I0501 03:08:58.751797   51401 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0501 03:08:58.751802   51401 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0501 03:08:58.751806   51401 command_runner.go:130] > # metrics_key = ""
	I0501 03:08:58.751811   51401 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0501 03:08:58.751816   51401 command_runner.go:130] > [crio.tracing]
	I0501 03:08:58.751825   51401 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0501 03:08:58.751836   51401 command_runner.go:130] > # enable_tracing = false
	I0501 03:08:58.751844   51401 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0501 03:08:58.751850   51401 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0501 03:08:58.751856   51401 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0501 03:08:58.751867   51401 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0501 03:08:58.751874   51401 command_runner.go:130] > # CRI-O NRI configuration.
	I0501 03:08:58.751878   51401 command_runner.go:130] > [crio.nri]
	I0501 03:08:58.751881   51401 command_runner.go:130] > # Globally enable or disable NRI.
	I0501 03:08:58.751885   51401 command_runner.go:130] > # enable_nri = false
	I0501 03:08:58.751889   51401 command_runner.go:130] > # NRI socket to listen on.
	I0501 03:08:58.751894   51401 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0501 03:08:58.751900   51401 command_runner.go:130] > # NRI plugin directory to use.
	I0501 03:08:58.751904   51401 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0501 03:08:58.751909   51401 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0501 03:08:58.751914   51401 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0501 03:08:58.751920   51401 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0501 03:08:58.751924   51401 command_runner.go:130] > # nri_disable_connections = false
	I0501 03:08:58.751929   51401 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0501 03:08:58.751936   51401 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0501 03:08:58.751942   51401 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0501 03:08:58.751948   51401 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0501 03:08:58.751954   51401 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0501 03:08:58.751959   51401 command_runner.go:130] > [crio.stats]
	I0501 03:08:58.751965   51401 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0501 03:08:58.751975   51401 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0501 03:08:58.751979   51401 command_runner.go:130] > # stats_collection_period = 0
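
The crio.tracing keys above are all commented out in this run. For completeness, a hedged sketch of what enabling trace export would involve, using only the keys shown (enable_tracing, tracing_endpoint, tracing_sampling_rate_per_million); the collector address is an assumption and must point at a real OTLP gRPC endpoint:

#!/usr/bin/env bash
# Sketch: turn on OpenTelemetry trace export with the keys shown above.
set -euo pipefail

cat >/etc/crio/crio.conf.d/60-tracing.conf <<'EOF'
[crio.tracing]
enable_tracing = true
tracing_endpoint = "127.0.0.1:4317"
# 1000000 == always sample (per the comment above); lower it in production.
tracing_sampling_rate_per_million = 1000000
EOF

systemctl restart crio
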
	I0501 03:08:58.752000   51401 command_runner.go:130] ! time="2024-05-01 03:08:58.704459703Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0501 03:08:58.752017   51401 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0501 03:08:58.752156   51401 cni.go:84] Creating CNI manager for ""
	I0501 03:08:58.752167   51401 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0501 03:08:58.752175   51401 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0501 03:08:58.752201   51401 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.139 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-282238 NodeName:multinode-282238 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.139"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.139 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0501 03:08:58.752372   51401 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.139
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-282238"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.139
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.139"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0501 03:08:58.752432   51401 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0501 03:08:58.763902   51401 command_runner.go:130] > kubeadm
	I0501 03:08:58.763924   51401 command_runner.go:130] > kubectl
	I0501 03:08:58.763931   51401 command_runner.go:130] > kubelet
	I0501 03:08:58.763960   51401 binaries.go:44] Found k8s binaries, skipping transfer
	I0501 03:08:58.764007   51401 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0501 03:08:58.774652   51401 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0501 03:08:58.794100   51401 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0501 03:08:58.814078   51401 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0501 03:08:58.833561   51401 ssh_runner.go:195] Run: grep 192.168.39.139	control-plane.minikube.internal$ /etc/hosts
	I0501 03:08:58.838098   51401 command_runner.go:130] > 192.168.39.139	control-plane.minikube.internal
	I0501 03:08:58.838174   51401 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:08:58.981244   51401 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 03:08:58.997269   51401 certs.go:68] Setting up /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/multinode-282238 for IP: 192.168.39.139
	I0501 03:08:58.997288   51401 certs.go:194] generating shared ca certs ...
	I0501 03:08:58.997321   51401 certs.go:226] acquiring lock for ca certs: {Name:mk85a75d14c00780d4bf09cd9bdaf0f96f1b8fce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:08:58.997459   51401 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key
	I0501 03:08:58.997516   51401 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key
	I0501 03:08:58.997531   51401 certs.go:256] generating profile certs ...
	I0501 03:08:58.997612   51401 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/multinode-282238/client.key
	I0501 03:08:58.997715   51401 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/multinode-282238/apiserver.key.0a59ce72
	I0501 03:08:58.997776   51401 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/multinode-282238/proxy-client.key
	I0501 03:08:58.997791   51401 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0501 03:08:58.997812   51401 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0501 03:08:58.997831   51401 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0501 03:08:58.997861   51401 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0501 03:08:58.997879   51401 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/multinode-282238/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0501 03:08:58.997897   51401 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/multinode-282238/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0501 03:08:58.997916   51401 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/multinode-282238/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0501 03:08:58.997936   51401 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/multinode-282238/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0501 03:08:58.998007   51401 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem (1338 bytes)
	W0501 03:08:58.998050   51401 certs.go:480] ignoring /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724_empty.pem, impossibly tiny 0 bytes
	I0501 03:08:58.998064   51401 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem (1679 bytes)
	I0501 03:08:58.998103   51401 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem (1078 bytes)
	I0501 03:08:58.998138   51401 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem (1123 bytes)
	I0501 03:08:58.998170   51401 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem (1675 bytes)
	I0501 03:08:58.998222   51401 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem (1708 bytes)
	I0501 03:08:58.998271   51401 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem -> /usr/share/ca-certificates/207242.pem
	I0501 03:08:58.998291   51401 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:08:58.998309   51401 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem -> /usr/share/ca-certificates/20724.pem
	I0501 03:08:58.998929   51401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0501 03:08:59.028281   51401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0501 03:08:59.055386   51401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0501 03:08:59.081449   51401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0501 03:08:59.107554   51401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/multinode-282238/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0501 03:08:59.134226   51401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/multinode-282238/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0501 03:08:59.161017   51401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/multinode-282238/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0501 03:08:59.188800   51401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/multinode-282238/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0501 03:08:59.216208   51401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem --> /usr/share/ca-certificates/207242.pem (1708 bytes)
	I0501 03:08:59.242429   51401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0501 03:08:59.268406   51401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem --> /usr/share/ca-certificates/20724.pem (1338 bytes)
	I0501 03:08:59.294188   51401 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0501 03:08:59.337102   51401 ssh_runner.go:195] Run: openssl version
	I0501 03:08:59.345257   51401 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0501 03:08:59.345765   51401 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/207242.pem && ln -fs /usr/share/ca-certificates/207242.pem /etc/ssl/certs/207242.pem"
	I0501 03:08:59.357996   51401 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/207242.pem
	I0501 03:08:59.363159   51401 command_runner.go:130] > -rw-r--r-- 1 root root 1708 May  1 02:20 /usr/share/ca-certificates/207242.pem
	I0501 03:08:59.363185   51401 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  1 02:20 /usr/share/ca-certificates/207242.pem
	I0501 03:08:59.363228   51401 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/207242.pem
	I0501 03:08:59.369429   51401 command_runner.go:130] > 3ec20f2e
	I0501 03:08:59.369495   51401 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/207242.pem /etc/ssl/certs/3ec20f2e.0"
	I0501 03:08:59.379776   51401 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0501 03:08:59.391951   51401 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:08:59.397315   51401 command_runner.go:130] > -rw-r--r-- 1 root root 1111 May  1 02:08 /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:08:59.397354   51401 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  1 02:08 /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:08:59.397413   51401 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:08:59.403679   51401 command_runner.go:130] > b5213941
	I0501 03:08:59.404021   51401 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0501 03:08:59.414483   51401 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20724.pem && ln -fs /usr/share/ca-certificates/20724.pem /etc/ssl/certs/20724.pem"
	I0501 03:08:59.427042   51401 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20724.pem
	I0501 03:08:59.432006   51401 command_runner.go:130] > -rw-r--r-- 1 root root 1338 May  1 02:20 /usr/share/ca-certificates/20724.pem
	I0501 03:08:59.432230   51401 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  1 02:20 /usr/share/ca-certificates/20724.pem
	I0501 03:08:59.432294   51401 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20724.pem
	I0501 03:08:59.438518   51401 command_runner.go:130] > 51391683
	I0501 03:08:59.438626   51401 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20724.pem /etc/ssl/certs/51391683.0"
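
The three test/ls/hash/ln sequences above install each extra CA certificate into /etc/ssl/certs under its OpenSSL subject hash. A short loop that generalizes those per-file commands (a sketch only, assuming the same directory layout as in this log and root on the node):

#!/usr/bin/env bash
# Sketch: hash-link every CA certificate into the system trust store,
# generalizing the per-certificate steps shown in the log above.
set -euo pipefail

for cert in /usr/share/ca-certificates/*.pem; do
  [ -s "$cert" ] || continue                     # skip empty files
  hash=$(openssl x509 -hash -noout -in "$cert")  # e.g. 3ec20f2e
  ln -fs "$cert" "/etc/ssl/certs/${hash}.0"      # trust-store symlink
done
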
	I0501 03:08:59.448906   51401 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0501 03:08:59.453718   51401 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0501 03:08:59.453742   51401 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0501 03:08:59.453750   51401 command_runner.go:130] > Device: 253,1	Inode: 533782      Links: 1
	I0501 03:08:59.453766   51401 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0501 03:08:59.453776   51401 command_runner.go:130] > Access: 2024-05-01 03:02:13.652300468 +0000
	I0501 03:08:59.453793   51401 command_runner.go:130] > Modify: 2024-05-01 03:02:13.652300468 +0000
	I0501 03:08:59.453805   51401 command_runner.go:130] > Change: 2024-05-01 03:02:13.652300468 +0000
	I0501 03:08:59.453812   51401 command_runner.go:130] >  Birth: 2024-05-01 03:02:13.652300468 +0000
	I0501 03:08:59.454017   51401 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0501 03:08:59.460048   51401 command_runner.go:130] > Certificate will not expire
	I0501 03:08:59.460252   51401 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0501 03:08:59.466464   51401 command_runner.go:130] > Certificate will not expire
	I0501 03:08:59.466721   51401 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0501 03:08:59.473593   51401 command_runner.go:130] > Certificate will not expire
	I0501 03:08:59.473633   51401 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0501 03:08:59.480332   51401 command_runner.go:130] > Certificate will not expire
	I0501 03:08:59.480376   51401 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0501 03:08:59.487005   51401 command_runner.go:130] > Certificate will not expire
	I0501 03:08:59.487058   51401 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0501 03:08:59.493475   51401 command_runner.go:130] > Certificate will not expire
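
Each of the checks above runs openssl x509 -checkend 86400 against one certificate and reports "Certificate will not expire". The same 24-hour check can be applied to every certificate under the profile directory with a small loop; this is a sketch under the same path assumptions as the log:

#!/usr/bin/env bash
# Sketch: apply the log's 24h expiry check to every cert in one pass.
set -euo pipefail

find /var/lib/minikube/certs -name '*.crt' | while read -r crt; do
  if openssl x509 -noout -checkend 86400 -in "$crt" >/dev/null; then
    echo "ok:       $crt"
  else
    echo "expiring: $crt"   # expires within 24h (or already expired)
  fi
done
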
	I0501 03:08:59.493731   51401 kubeadm.go:391] StartCluster: {Name:multinode-282238 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
0 ClusterName:multinode-282238 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.139 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.29 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.220 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:f
alse DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 03:08:59.493816   51401 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0501 03:08:59.493861   51401 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0501 03:08:59.539427   51401 command_runner.go:130] > 8d816c0bbdea7d46440e648d984d842c5ae02201b1d5cd0bd5201c544dfda5e0
	I0501 03:08:59.539452   51401 command_runner.go:130] > bf5e18923ea3471ed4501abb095806293ca3da0eae78afe97a349f8747346b9b
	I0501 03:08:59.539458   51401 command_runner.go:130] > fcab67c5c49010d68dd607692c8a87af7ad0be31da8b639dd24377f122a4e4d2
	I0501 03:08:59.539465   51401 command_runner.go:130] > be40d7b3a3ded9d9e8420ed6594c65c831f2543e4a62eaa9eff0e1d4b5922c1e
	I0501 03:08:59.539470   51401 command_runner.go:130] > 0338a9652764e389d91bc7c406553a4271aaf3b47de0bef43e26752ddb86033f
	I0501 03:08:59.539475   51401 command_runner.go:130] > 15b3a41e9b9b60d6c65946591a4b9d001a896ae747addb52cb7f2d0945f41fb6
	I0501 03:08:59.539481   51401 command_runner.go:130] > 0bbe01883646dd171b19f4e453c14175e649929473ee82dae03eb7c7bce9b04c
	I0501 03:08:59.539490   51401 command_runner.go:130] > 648ac51c97cf05ff6096b7f920d602e441f2909db28005c9395bbed15cf2716e
	I0501 03:08:59.539512   51401 cri.go:89] found id: "8d816c0bbdea7d46440e648d984d842c5ae02201b1d5cd0bd5201c544dfda5e0"
	I0501 03:08:59.539524   51401 cri.go:89] found id: "bf5e18923ea3471ed4501abb095806293ca3da0eae78afe97a349f8747346b9b"
	I0501 03:08:59.539531   51401 cri.go:89] found id: "fcab67c5c49010d68dd607692c8a87af7ad0be31da8b639dd24377f122a4e4d2"
	I0501 03:08:59.539536   51401 cri.go:89] found id: "be40d7b3a3ded9d9e8420ed6594c65c831f2543e4a62eaa9eff0e1d4b5922c1e"
	I0501 03:08:59.539541   51401 cri.go:89] found id: "0338a9652764e389d91bc7c406553a4271aaf3b47de0bef43e26752ddb86033f"
	I0501 03:08:59.539555   51401 cri.go:89] found id: "15b3a41e9b9b60d6c65946591a4b9d001a896ae747addb52cb7f2d0945f41fb6"
	I0501 03:08:59.539563   51401 cri.go:89] found id: "0bbe01883646dd171b19f4e453c14175e649929473ee82dae03eb7c7bce9b04c"
	I0501 03:08:59.539568   51401 cri.go:89] found id: "648ac51c97cf05ff6096b7f920d602e441f2909db28005c9395bbed15cf2716e"
	I0501 03:08:59.539572   51401 cri.go:89] found id: ""
	I0501 03:08:59.539610   51401 ssh_runner.go:195] Run: sudo runc list -f json
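
The cri.go lines above collect only bare container IDs because the crictl call uses --quiet. To see the same kube-system containers with names and states, the identical label filter works without that flag; the inspect line below reuses the first ID from the list above purely as an example:

#!/usr/bin/env bash
# Sketch: same filter as the log's command, minus --quiet, so names,
# states and pod names are visible instead of bare IDs.
sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system

# Inspect one container from the ID list above (full JSON, truncated here).
sudo crictl inspect 8d816c0bbdea7d46440e648d984d842c5ae02201b1d5cd0bd5201c544dfda5e0 | head -n 20
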
	
	
	==> CRI-O <==
	May 01 03:10:30 multinode-282238 crio[2849]: time="2024-05-01 03:10:30.822547428Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714533030822499581,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133243,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=25776cf9-83b2-41f1-8745-eaa1e4b76a71 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 03:10:30 multinode-282238 crio[2849]: time="2024-05-01 03:10:30.823045725Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ee2f4a74-9c39-4089-9173-11d6d52e0123 name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:10:30 multinode-282238 crio[2849]: time="2024-05-01 03:10:30.823129188Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ee2f4a74-9c39-4089-9173-11d6d52e0123 name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:10:30 multinode-282238 crio[2849]: time="2024-05-01 03:10:30.825754605Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ca7d4a2291b8008cfc87a2a7a5d4cf0c8a6e669f1ee014c86468b717378c4b2b,PodSandboxId:267b67cd8e9aec3f447b68d71fc5eb8e141345fb7b842519ad433030f85b0e9f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714532979318228186,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-dpfrf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 00cc3b07-24df-4bef-ba3f-b94a8c0cee87,},Annotations:map[string]string{io.kubernetes.container.hash: b1a3b5af,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:622e78c84bf099cdfac90652360cd9687ba7c350f0987f85a311200a32222190,PodSandboxId:08243fbb491296ecab007610cf4ccf95ac72d53f773dc944788f0a6a73eaac26,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714532945820330367,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hl7zh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd0cbe33-025e-4a86-af98-8571c8f3340c,},Annotations:map[string]string{io.kubernetes.container.hash: 8129d9dd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebdfee87078676a4391b0ef7c57976ea25bca9367b33f20c56bdcb4233d1cd89,PodSandboxId:85780ac0c333442c21ced239eb561039c3b04a203f4434fa715d4f2d2a6e3731,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714532945747685214,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-pq89m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cb009de-6a0c-47b9-b6a9-5da24ed79f03,},Annotations:map[string]string{io.kubernetes.container.hash: 8ae1ce26,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58136eaaeacc9dbd72f4c4277026813eb767e299885032dbbb476301df4752f8,PodSandboxId:9c6ad18fe2eada8be65551384789eeb57f735b475bf67391bcb4783f7275d144,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714532945560331855,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2rmjj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d33bb084-3ce9-4fa9-8703-b6bec622abac,},Annotations:map[string]
string{io.kubernetes.container.hash: 28f6cbf6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b3c0442530e9983851afe3d169ed4a274795a2286ebfca3103f85f523883d22,PodSandboxId:8547df9332914e6c38cb8cab5d43db58589eaa99f005db71df08ab4bc6b7648e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714532945557883042,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71ce398a-00b1-4aca-87ba-78b64361ed9d,},Annotations:map[string]string{io.ku
bernetes.container.hash: a91656d3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8778d60a896a8cf1338ee26951ff0c6bd7cc9899d8164db111249b76cd20b5c1,PodSandboxId:40cd32c06dc51aee52d568510435bc404498ba920cab07ecacccea061a3da55f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714532941777552486,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-282238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f907d837c32ea71bc11fb00ea245331,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f48f5b734c7eaf5babad6a7bd38e9f26e8d2c8f3b507d0eec92fc34dce752934,PodSandboxId:db1198f918a56cbc9fb24d6ca0f44c0e8c5a872ba5be28700a0748d75b1a8fdd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714532941719029370,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-282238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29ab78d11237a7f5525934b54837aa37,},Annotations:map[string]string{io.kubernetes.container.hash: bb56aa0,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f13ad7828f3b9e47354e8b7246e161db5528f5a208d0a771ee742358bb8a80ac,PodSandboxId:dced73734ef8e274e7401316d6e87d73307602cdf12eb3eeb95170669709509e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714532941790217260,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-282238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9a1a37448d80a6171236c69ab0170a9,},Annotations:map[string]string{io.kubernetes.container.hash: c7e3da59,io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15cc964ddd2cbb7aeaf774665a85e7d70e1a039125b8a3ccb7187eae1b9acb1d,PodSandboxId:0a1604e1df4b5063f217fcd0922064b1ede7a7a7717952e80e80edcc53bfd012,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714532941720747981,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-282238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79340a67faa633be7e3979355e36a28d,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:601633c5701193e7c25d13b66c9ba48678106c948d479514bd1a335978bb232d,PodSandboxId:d0b7f0f8a027c07631c29c6f64a50ff65b53fb0efe3befffdee3ed16d8d69a33,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714532636030480407,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-dpfrf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 00cc3b07-24df-4bef-ba3f-b94a8c0cee87,},Annotations:map[string]string{io.kubernetes.container.hash: b1a3b5af,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d816c0bbdea7d46440e648d984d842c5ae02201b1d5cd0bd5201c544dfda5e0,PodSandboxId:d8211874c627fa99ac5b154c3e365bbf270492c48671b0065f5a65145e408766,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714532588538185911,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-pq89m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cb009de-6a0c-47b9-b6a9-5da24ed79f03,},Annotations:map[string]string{io.kubernetes.container.hash: 8ae1ce26,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf5e18923ea3471ed4501abb095806293ca3da0eae78afe97a349f8747346b9b,PodSandboxId:ccd4646808c1ec640dfd982c5725de9482cbe9a08b729a209b509eb6fb39a0be,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714532588476758295,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 71ce398a-00b1-4aca-87ba-78b64361ed9d,},Annotations:map[string]string{io.kubernetes.container.hash: a91656d3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcab67c5c49010d68dd607692c8a87af7ad0be31da8b639dd24377f122a4e4d2,PodSandboxId:c9f27cd653d1ac17d946a88eaf2d554d4f915c565df269a4cf12750f437ed0e2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714532556726871327,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hl7zh,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: fd0cbe33-025e-4a86-af98-8571c8f3340c,},Annotations:map[string]string{io.kubernetes.container.hash: 8129d9dd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be40d7b3a3ded9d9e8420ed6594c65c831f2543e4a62eaa9eff0e1d4b5922c1e,PodSandboxId:9e8fc276935812799c155d1dce8ea68c5a989b9e99762fdcd2b4155a38e76649,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714532556632680326,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2rmjj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d33bb084-3ce9-4fa9-8703-
b6bec622abac,},Annotations:map[string]string{io.kubernetes.container.hash: 28f6cbf6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0338a9652764e389d91bc7c406553a4271aaf3b47de0bef43e26752ddb86033f,PodSandboxId:4967e3688b6353284da03ee8da5f159d0991064029ae317efc177e7530e3e659,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714532537276922056,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-282238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29ab78d11237a7f5525934b54837aa37,},Annotations:map[string]string{
io.kubernetes.container.hash: bb56aa0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15b3a41e9b9b60d6c65946591a4b9d001a896ae747addb52cb7f2d0945f41fb6,PodSandboxId:24e5dd5fe8240df051208753ab2af06a002da8c9d72fe7e3e6765b7ea0933a46,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714532537247256083,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-282238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9a1a37448d80a6171236c69ab0170a9,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: c7e3da59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bbe01883646dd171b19f4e453c14175e649929473ee82dae03eb7c7bce9b04c,PodSandboxId:f60182d3a6d766d6c12a4ee997df3d3b9d01d4940479ab0014410f5556848ec2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714532537226298789,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-282238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f907d837c32ea71bc11fb00ea245331,},Annotations:map[string]string{io.kubernetes.container.hash: d
e199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:648ac51c97cf05ff6096b7f920d602e441f2909db28005c9395bbed15cf2716e,PodSandboxId:252d7ef8f1bfd8e50ef4cce4f12d70526cd0a401d98a33056cd9fcd26d02136e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714532537219728283,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-282238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79340a67faa633be7e3979355e36a28d,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ee2f4a74-9c39-4089-9173-11d6d52e0123 name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:10:30 multinode-282238 crio[2849]: time="2024-05-01 03:10:30.877561214Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8b68eb8a-b514-4bb8-9d79-94c2cac1e42d name=/runtime.v1.RuntimeService/Version
	May 01 03:10:30 multinode-282238 crio[2849]: time="2024-05-01 03:10:30.877666983Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8b68eb8a-b514-4bb8-9d79-94c2cac1e42d name=/runtime.v1.RuntimeService/Version
	May 01 03:10:30 multinode-282238 crio[2849]: time="2024-05-01 03:10:30.878855194Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=add9a0dc-227b-4716-aaa6-94ef1a06d1f9 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 03:10:30 multinode-282238 crio[2849]: time="2024-05-01 03:10:30.879478594Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714533030879375057,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133243,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=add9a0dc-227b-4716-aaa6-94ef1a06d1f9 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 03:10:30 multinode-282238 crio[2849]: time="2024-05-01 03:10:30.880226320Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b57b5383-d143-4bba-97bd-453d55513370 name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:10:30 multinode-282238 crio[2849]: time="2024-05-01 03:10:30.880291301Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b57b5383-d143-4bba-97bd-453d55513370 name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:10:30 multinode-282238 crio[2849]: time="2024-05-01 03:10:30.880786554Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ca7d4a2291b8008cfc87a2a7a5d4cf0c8a6e669f1ee014c86468b717378c4b2b,PodSandboxId:267b67cd8e9aec3f447b68d71fc5eb8e141345fb7b842519ad433030f85b0e9f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714532979318228186,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-dpfrf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 00cc3b07-24df-4bef-ba3f-b94a8c0cee87,},Annotations:map[string]string{io.kubernetes.container.hash: b1a3b5af,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:622e78c84bf099cdfac90652360cd9687ba7c350f0987f85a311200a32222190,PodSandboxId:08243fbb491296ecab007610cf4ccf95ac72d53f773dc944788f0a6a73eaac26,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714532945820330367,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hl7zh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd0cbe33-025e-4a86-af98-8571c8f3340c,},Annotations:map[string]string{io.kubernetes.container.hash: 8129d9dd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebdfee87078676a4391b0ef7c57976ea25bca9367b33f20c56bdcb4233d1cd89,PodSandboxId:85780ac0c333442c21ced239eb561039c3b04a203f4434fa715d4f2d2a6e3731,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714532945747685214,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-pq89m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cb009de-6a0c-47b9-b6a9-5da24ed79f03,},Annotations:map[string]string{io.kubernetes.container.hash: 8ae1ce26,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58136eaaeacc9dbd72f4c4277026813eb767e299885032dbbb476301df4752f8,PodSandboxId:9c6ad18fe2eada8be65551384789eeb57f735b475bf67391bcb4783f7275d144,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714532945560331855,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2rmjj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d33bb084-3ce9-4fa9-8703-b6bec622abac,},Annotations:map[string]
string{io.kubernetes.container.hash: 28f6cbf6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b3c0442530e9983851afe3d169ed4a274795a2286ebfca3103f85f523883d22,PodSandboxId:8547df9332914e6c38cb8cab5d43db58589eaa99f005db71df08ab4bc6b7648e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714532945557883042,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71ce398a-00b1-4aca-87ba-78b64361ed9d,},Annotations:map[string]string{io.ku
bernetes.container.hash: a91656d3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8778d60a896a8cf1338ee26951ff0c6bd7cc9899d8164db111249b76cd20b5c1,PodSandboxId:40cd32c06dc51aee52d568510435bc404498ba920cab07ecacccea061a3da55f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714532941777552486,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-282238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f907d837c32ea71bc11fb00ea245331,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f48f5b734c7eaf5babad6a7bd38e9f26e8d2c8f3b507d0eec92fc34dce752934,PodSandboxId:db1198f918a56cbc9fb24d6ca0f44c0e8c5a872ba5be28700a0748d75b1a8fdd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714532941719029370,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-282238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29ab78d11237a7f5525934b54837aa37,},Annotations:map[string]string{io.kubernetes.container.hash: bb56aa0,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f13ad7828f3b9e47354e8b7246e161db5528f5a208d0a771ee742358bb8a80ac,PodSandboxId:dced73734ef8e274e7401316d6e87d73307602cdf12eb3eeb95170669709509e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714532941790217260,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-282238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9a1a37448d80a6171236c69ab0170a9,},Annotations:map[string]string{io.kubernetes.container.hash: c7e3da59,io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15cc964ddd2cbb7aeaf774665a85e7d70e1a039125b8a3ccb7187eae1b9acb1d,PodSandboxId:0a1604e1df4b5063f217fcd0922064b1ede7a7a7717952e80e80edcc53bfd012,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714532941720747981,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-282238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79340a67faa633be7e3979355e36a28d,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:601633c5701193e7c25d13b66c9ba48678106c948d479514bd1a335978bb232d,PodSandboxId:d0b7f0f8a027c07631c29c6f64a50ff65b53fb0efe3befffdee3ed16d8d69a33,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714532636030480407,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-dpfrf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 00cc3b07-24df-4bef-ba3f-b94a8c0cee87,},Annotations:map[string]string{io.kubernetes.container.hash: b1a3b5af,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d816c0bbdea7d46440e648d984d842c5ae02201b1d5cd0bd5201c544dfda5e0,PodSandboxId:d8211874c627fa99ac5b154c3e365bbf270492c48671b0065f5a65145e408766,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714532588538185911,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-pq89m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cb009de-6a0c-47b9-b6a9-5da24ed79f03,},Annotations:map[string]string{io.kubernetes.container.hash: 8ae1ce26,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf5e18923ea3471ed4501abb095806293ca3da0eae78afe97a349f8747346b9b,PodSandboxId:ccd4646808c1ec640dfd982c5725de9482cbe9a08b729a209b509eb6fb39a0be,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714532588476758295,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 71ce398a-00b1-4aca-87ba-78b64361ed9d,},Annotations:map[string]string{io.kubernetes.container.hash: a91656d3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcab67c5c49010d68dd607692c8a87af7ad0be31da8b639dd24377f122a4e4d2,PodSandboxId:c9f27cd653d1ac17d946a88eaf2d554d4f915c565df269a4cf12750f437ed0e2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714532556726871327,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hl7zh,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: fd0cbe33-025e-4a86-af98-8571c8f3340c,},Annotations:map[string]string{io.kubernetes.container.hash: 8129d9dd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be40d7b3a3ded9d9e8420ed6594c65c831f2543e4a62eaa9eff0e1d4b5922c1e,PodSandboxId:9e8fc276935812799c155d1dce8ea68c5a989b9e99762fdcd2b4155a38e76649,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714532556632680326,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2rmjj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d33bb084-3ce9-4fa9-8703-
b6bec622abac,},Annotations:map[string]string{io.kubernetes.container.hash: 28f6cbf6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0338a9652764e389d91bc7c406553a4271aaf3b47de0bef43e26752ddb86033f,PodSandboxId:4967e3688b6353284da03ee8da5f159d0991064029ae317efc177e7530e3e659,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714532537276922056,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-282238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29ab78d11237a7f5525934b54837aa37,},Annotations:map[string]string{
io.kubernetes.container.hash: bb56aa0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15b3a41e9b9b60d6c65946591a4b9d001a896ae747addb52cb7f2d0945f41fb6,PodSandboxId:24e5dd5fe8240df051208753ab2af06a002da8c9d72fe7e3e6765b7ea0933a46,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714532537247256083,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-282238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9a1a37448d80a6171236c69ab0170a9,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: c7e3da59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bbe01883646dd171b19f4e453c14175e649929473ee82dae03eb7c7bce9b04c,PodSandboxId:f60182d3a6d766d6c12a4ee997df3d3b9d01d4940479ab0014410f5556848ec2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714532537226298789,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-282238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f907d837c32ea71bc11fb00ea245331,},Annotations:map[string]string{io.kubernetes.container.hash: d
e199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:648ac51c97cf05ff6096b7f920d602e441f2909db28005c9395bbed15cf2716e,PodSandboxId:252d7ef8f1bfd8e50ef4cce4f12d70526cd0a401d98a33056cd9fcd26d02136e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714532537219728283,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-282238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79340a67faa633be7e3979355e36a28d,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b57b5383-d143-4bba-97bd-453d55513370 name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:10:30 multinode-282238 crio[2849]: time="2024-05-01 03:10:30.932189875Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ed31ff8d-5dca-467b-8998-1df72a0f4b0c name=/runtime.v1.RuntimeService/Version
	May 01 03:10:30 multinode-282238 crio[2849]: time="2024-05-01 03:10:30.932264538Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ed31ff8d-5dca-467b-8998-1df72a0f4b0c name=/runtime.v1.RuntimeService/Version
	May 01 03:10:30 multinode-282238 crio[2849]: time="2024-05-01 03:10:30.933922053Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2169caf1-3baf-4c55-8dbe-9336b42b1518 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 03:10:30 multinode-282238 crio[2849]: time="2024-05-01 03:10:30.934297830Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714533030934277334,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133243,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2169caf1-3baf-4c55-8dbe-9336b42b1518 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 03:10:30 multinode-282238 crio[2849]: time="2024-05-01 03:10:30.935136632Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bb237a98-26d0-4bf8-9a26-36c59ff34f37 name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:10:30 multinode-282238 crio[2849]: time="2024-05-01 03:10:30.935193890Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bb237a98-26d0-4bf8-9a26-36c59ff34f37 name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:10:30 multinode-282238 crio[2849]: time="2024-05-01 03:10:30.935635311Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ca7d4a2291b8008cfc87a2a7a5d4cf0c8a6e669f1ee014c86468b717378c4b2b,PodSandboxId:267b67cd8e9aec3f447b68d71fc5eb8e141345fb7b842519ad433030f85b0e9f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714532979318228186,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-dpfrf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 00cc3b07-24df-4bef-ba3f-b94a8c0cee87,},Annotations:map[string]string{io.kubernetes.container.hash: b1a3b5af,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:622e78c84bf099cdfac90652360cd9687ba7c350f0987f85a311200a32222190,PodSandboxId:08243fbb491296ecab007610cf4ccf95ac72d53f773dc944788f0a6a73eaac26,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714532945820330367,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hl7zh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd0cbe33-025e-4a86-af98-8571c8f3340c,},Annotations:map[string]string{io.kubernetes.container.hash: 8129d9dd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebdfee87078676a4391b0ef7c57976ea25bca9367b33f20c56bdcb4233d1cd89,PodSandboxId:85780ac0c333442c21ced239eb561039c3b04a203f4434fa715d4f2d2a6e3731,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714532945747685214,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-pq89m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cb009de-6a0c-47b9-b6a9-5da24ed79f03,},Annotations:map[string]string{io.kubernetes.container.hash: 8ae1ce26,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58136eaaeacc9dbd72f4c4277026813eb767e299885032dbbb476301df4752f8,PodSandboxId:9c6ad18fe2eada8be65551384789eeb57f735b475bf67391bcb4783f7275d144,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714532945560331855,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2rmjj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d33bb084-3ce9-4fa9-8703-b6bec622abac,},Annotations:map[string]
string{io.kubernetes.container.hash: 28f6cbf6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b3c0442530e9983851afe3d169ed4a274795a2286ebfca3103f85f523883d22,PodSandboxId:8547df9332914e6c38cb8cab5d43db58589eaa99f005db71df08ab4bc6b7648e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714532945557883042,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71ce398a-00b1-4aca-87ba-78b64361ed9d,},Annotations:map[string]string{io.ku
bernetes.container.hash: a91656d3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8778d60a896a8cf1338ee26951ff0c6bd7cc9899d8164db111249b76cd20b5c1,PodSandboxId:40cd32c06dc51aee52d568510435bc404498ba920cab07ecacccea061a3da55f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714532941777552486,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-282238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f907d837c32ea71bc11fb00ea245331,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f48f5b734c7eaf5babad6a7bd38e9f26e8d2c8f3b507d0eec92fc34dce752934,PodSandboxId:db1198f918a56cbc9fb24d6ca0f44c0e8c5a872ba5be28700a0748d75b1a8fdd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714532941719029370,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-282238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29ab78d11237a7f5525934b54837aa37,},Annotations:map[string]string{io.kubernetes.container.hash: bb56aa0,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f13ad7828f3b9e47354e8b7246e161db5528f5a208d0a771ee742358bb8a80ac,PodSandboxId:dced73734ef8e274e7401316d6e87d73307602cdf12eb3eeb95170669709509e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714532941790217260,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-282238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9a1a37448d80a6171236c69ab0170a9,},Annotations:map[string]string{io.kubernetes.container.hash: c7e3da59,io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15cc964ddd2cbb7aeaf774665a85e7d70e1a039125b8a3ccb7187eae1b9acb1d,PodSandboxId:0a1604e1df4b5063f217fcd0922064b1ede7a7a7717952e80e80edcc53bfd012,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714532941720747981,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-282238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79340a67faa633be7e3979355e36a28d,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:601633c5701193e7c25d13b66c9ba48678106c948d479514bd1a335978bb232d,PodSandboxId:d0b7f0f8a027c07631c29c6f64a50ff65b53fb0efe3befffdee3ed16d8d69a33,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714532636030480407,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-dpfrf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 00cc3b07-24df-4bef-ba3f-b94a8c0cee87,},Annotations:map[string]string{io.kubernetes.container.hash: b1a3b5af,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d816c0bbdea7d46440e648d984d842c5ae02201b1d5cd0bd5201c544dfda5e0,PodSandboxId:d8211874c627fa99ac5b154c3e365bbf270492c48671b0065f5a65145e408766,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714532588538185911,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-pq89m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cb009de-6a0c-47b9-b6a9-5da24ed79f03,},Annotations:map[string]string{io.kubernetes.container.hash: 8ae1ce26,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf5e18923ea3471ed4501abb095806293ca3da0eae78afe97a349f8747346b9b,PodSandboxId:ccd4646808c1ec640dfd982c5725de9482cbe9a08b729a209b509eb6fb39a0be,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714532588476758295,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 71ce398a-00b1-4aca-87ba-78b64361ed9d,},Annotations:map[string]string{io.kubernetes.container.hash: a91656d3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcab67c5c49010d68dd607692c8a87af7ad0be31da8b639dd24377f122a4e4d2,PodSandboxId:c9f27cd653d1ac17d946a88eaf2d554d4f915c565df269a4cf12750f437ed0e2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714532556726871327,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hl7zh,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: fd0cbe33-025e-4a86-af98-8571c8f3340c,},Annotations:map[string]string{io.kubernetes.container.hash: 8129d9dd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be40d7b3a3ded9d9e8420ed6594c65c831f2543e4a62eaa9eff0e1d4b5922c1e,PodSandboxId:9e8fc276935812799c155d1dce8ea68c5a989b9e99762fdcd2b4155a38e76649,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714532556632680326,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2rmjj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d33bb084-3ce9-4fa9-8703-
b6bec622abac,},Annotations:map[string]string{io.kubernetes.container.hash: 28f6cbf6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0338a9652764e389d91bc7c406553a4271aaf3b47de0bef43e26752ddb86033f,PodSandboxId:4967e3688b6353284da03ee8da5f159d0991064029ae317efc177e7530e3e659,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714532537276922056,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-282238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29ab78d11237a7f5525934b54837aa37,},Annotations:map[string]string{
io.kubernetes.container.hash: bb56aa0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15b3a41e9b9b60d6c65946591a4b9d001a896ae747addb52cb7f2d0945f41fb6,PodSandboxId:24e5dd5fe8240df051208753ab2af06a002da8c9d72fe7e3e6765b7ea0933a46,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714532537247256083,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-282238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9a1a37448d80a6171236c69ab0170a9,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: c7e3da59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bbe01883646dd171b19f4e453c14175e649929473ee82dae03eb7c7bce9b04c,PodSandboxId:f60182d3a6d766d6c12a4ee997df3d3b9d01d4940479ab0014410f5556848ec2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714532537226298789,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-282238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f907d837c32ea71bc11fb00ea245331,},Annotations:map[string]string{io.kubernetes.container.hash: d
e199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:648ac51c97cf05ff6096b7f920d602e441f2909db28005c9395bbed15cf2716e,PodSandboxId:252d7ef8f1bfd8e50ef4cce4f12d70526cd0a401d98a33056cd9fcd26d02136e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714532537219728283,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-282238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79340a67faa633be7e3979355e36a28d,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bb237a98-26d0-4bf8-9a26-36c59ff34f37 name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:10:30 multinode-282238 crio[2849]: time="2024-05-01 03:10:30.985140000Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=779f67ba-a048-4150-ae00-ae79153236e1 name=/runtime.v1.RuntimeService/Version
	May 01 03:10:30 multinode-282238 crio[2849]: time="2024-05-01 03:10:30.985212960Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=779f67ba-a048-4150-ae00-ae79153236e1 name=/runtime.v1.RuntimeService/Version
	May 01 03:10:30 multinode-282238 crio[2849]: time="2024-05-01 03:10:30.986856839Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=56ddb30d-8c8d-4cda-b5d6-b73794939329 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 03:10:30 multinode-282238 crio[2849]: time="2024-05-01 03:10:30.987226115Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714533030987203364,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133243,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=56ddb30d-8c8d-4cda-b5d6-b73794939329 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 03:10:30 multinode-282238 crio[2849]: time="2024-05-01 03:10:30.987895734Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3efc526a-2acb-4187-bb75-37d9d7e28d06 name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:10:30 multinode-282238 crio[2849]: time="2024-05-01 03:10:30.987951187Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3efc526a-2acb-4187-bb75-37d9d7e28d06 name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:10:30 multinode-282238 crio[2849]: time="2024-05-01 03:10:30.988322624Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ca7d4a2291b8008cfc87a2a7a5d4cf0c8a6e669f1ee014c86468b717378c4b2b,PodSandboxId:267b67cd8e9aec3f447b68d71fc5eb8e141345fb7b842519ad433030f85b0e9f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714532979318228186,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-dpfrf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 00cc3b07-24df-4bef-ba3f-b94a8c0cee87,},Annotations:map[string]string{io.kubernetes.container.hash: b1a3b5af,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:622e78c84bf099cdfac90652360cd9687ba7c350f0987f85a311200a32222190,PodSandboxId:08243fbb491296ecab007610cf4ccf95ac72d53f773dc944788f0a6a73eaac26,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714532945820330367,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hl7zh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd0cbe33-025e-4a86-af98-8571c8f3340c,},Annotations:map[string]string{io.kubernetes.container.hash: 8129d9dd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebdfee87078676a4391b0ef7c57976ea25bca9367b33f20c56bdcb4233d1cd89,PodSandboxId:85780ac0c333442c21ced239eb561039c3b04a203f4434fa715d4f2d2a6e3731,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714532945747685214,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-pq89m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cb009de-6a0c-47b9-b6a9-5da24ed79f03,},Annotations:map[string]string{io.kubernetes.container.hash: 8ae1ce26,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58136eaaeacc9dbd72f4c4277026813eb767e299885032dbbb476301df4752f8,PodSandboxId:9c6ad18fe2eada8be65551384789eeb57f735b475bf67391bcb4783f7275d144,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714532945560331855,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2rmjj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d33bb084-3ce9-4fa9-8703-b6bec622abac,},Annotations:map[string]
string{io.kubernetes.container.hash: 28f6cbf6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b3c0442530e9983851afe3d169ed4a274795a2286ebfca3103f85f523883d22,PodSandboxId:8547df9332914e6c38cb8cab5d43db58589eaa99f005db71df08ab4bc6b7648e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714532945557883042,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71ce398a-00b1-4aca-87ba-78b64361ed9d,},Annotations:map[string]string{io.ku
bernetes.container.hash: a91656d3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8778d60a896a8cf1338ee26951ff0c6bd7cc9899d8164db111249b76cd20b5c1,PodSandboxId:40cd32c06dc51aee52d568510435bc404498ba920cab07ecacccea061a3da55f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714532941777552486,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-282238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f907d837c32ea71bc11fb00ea245331,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f48f5b734c7eaf5babad6a7bd38e9f26e8d2c8f3b507d0eec92fc34dce752934,PodSandboxId:db1198f918a56cbc9fb24d6ca0f44c0e8c5a872ba5be28700a0748d75b1a8fdd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714532941719029370,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-282238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29ab78d11237a7f5525934b54837aa37,},Annotations:map[string]string{io.kubernetes.container.hash: bb56aa0,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f13ad7828f3b9e47354e8b7246e161db5528f5a208d0a771ee742358bb8a80ac,PodSandboxId:dced73734ef8e274e7401316d6e87d73307602cdf12eb3eeb95170669709509e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714532941790217260,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-282238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9a1a37448d80a6171236c69ab0170a9,},Annotations:map[string]string{io.kubernetes.container.hash: c7e3da59,io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15cc964ddd2cbb7aeaf774665a85e7d70e1a039125b8a3ccb7187eae1b9acb1d,PodSandboxId:0a1604e1df4b5063f217fcd0922064b1ede7a7a7717952e80e80edcc53bfd012,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714532941720747981,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-282238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79340a67faa633be7e3979355e36a28d,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:601633c5701193e7c25d13b66c9ba48678106c948d479514bd1a335978bb232d,PodSandboxId:d0b7f0f8a027c07631c29c6f64a50ff65b53fb0efe3befffdee3ed16d8d69a33,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714532636030480407,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-dpfrf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 00cc3b07-24df-4bef-ba3f-b94a8c0cee87,},Annotations:map[string]string{io.kubernetes.container.hash: b1a3b5af,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d816c0bbdea7d46440e648d984d842c5ae02201b1d5cd0bd5201c544dfda5e0,PodSandboxId:d8211874c627fa99ac5b154c3e365bbf270492c48671b0065f5a65145e408766,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714532588538185911,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-pq89m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cb009de-6a0c-47b9-b6a9-5da24ed79f03,},Annotations:map[string]string{io.kubernetes.container.hash: 8ae1ce26,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf5e18923ea3471ed4501abb095806293ca3da0eae78afe97a349f8747346b9b,PodSandboxId:ccd4646808c1ec640dfd982c5725de9482cbe9a08b729a209b509eb6fb39a0be,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714532588476758295,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 71ce398a-00b1-4aca-87ba-78b64361ed9d,},Annotations:map[string]string{io.kubernetes.container.hash: a91656d3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcab67c5c49010d68dd607692c8a87af7ad0be31da8b639dd24377f122a4e4d2,PodSandboxId:c9f27cd653d1ac17d946a88eaf2d554d4f915c565df269a4cf12750f437ed0e2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714532556726871327,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hl7zh,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: fd0cbe33-025e-4a86-af98-8571c8f3340c,},Annotations:map[string]string{io.kubernetes.container.hash: 8129d9dd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be40d7b3a3ded9d9e8420ed6594c65c831f2543e4a62eaa9eff0e1d4b5922c1e,PodSandboxId:9e8fc276935812799c155d1dce8ea68c5a989b9e99762fdcd2b4155a38e76649,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714532556632680326,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2rmjj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d33bb084-3ce9-4fa9-8703-
b6bec622abac,},Annotations:map[string]string{io.kubernetes.container.hash: 28f6cbf6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0338a9652764e389d91bc7c406553a4271aaf3b47de0bef43e26752ddb86033f,PodSandboxId:4967e3688b6353284da03ee8da5f159d0991064029ae317efc177e7530e3e659,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714532537276922056,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-282238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29ab78d11237a7f5525934b54837aa37,},Annotations:map[string]string{
io.kubernetes.container.hash: bb56aa0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15b3a41e9b9b60d6c65946591a4b9d001a896ae747addb52cb7f2d0945f41fb6,PodSandboxId:24e5dd5fe8240df051208753ab2af06a002da8c9d72fe7e3e6765b7ea0933a46,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714532537247256083,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-282238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9a1a37448d80a6171236c69ab0170a9,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: c7e3da59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bbe01883646dd171b19f4e453c14175e649929473ee82dae03eb7c7bce9b04c,PodSandboxId:f60182d3a6d766d6c12a4ee997df3d3b9d01d4940479ab0014410f5556848ec2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714532537226298789,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-282238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f907d837c32ea71bc11fb00ea245331,},Annotations:map[string]string{io.kubernetes.container.hash: d
e199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:648ac51c97cf05ff6096b7f920d602e441f2909db28005c9395bbed15cf2716e,PodSandboxId:252d7ef8f1bfd8e50ef4cce4f12d70526cd0a401d98a33056cd9fcd26d02136e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714532537219728283,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-282238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79340a67faa633be7e3979355e36a28d,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3efc526a-2acb-4187-bb75-37d9d7e28d06 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	ca7d4a2291b80       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      51 seconds ago       Running             busybox                   1                   267b67cd8e9ae       busybox-fc5497c4f-dpfrf
	622e78c84bf09       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      About a minute ago   Running             kindnet-cni               1                   08243fbb49129       kindnet-hl7zh
	ebdfee8707867       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   1                   85780ac0c3334       coredns-7db6d8ff4d-pq89m
	58136eaaeacc9       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      About a minute ago   Running             kube-proxy                1                   9c6ad18fe2ead       kube-proxy-2rmjj
	3b3c0442530e9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   8547df9332914       storage-provisioner
	f13ad7828f3b9       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      About a minute ago   Running             kube-apiserver            1                   dced73734ef8e       kube-apiserver-multinode-282238
	8778d60a896a8       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      About a minute ago   Running             kube-scheduler            1                   40cd32c06dc51       kube-scheduler-multinode-282238
	15cc964ddd2cb       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      About a minute ago   Running             kube-controller-manager   1                   0a1604e1df4b5       kube-controller-manager-multinode-282238
	f48f5b734c7ea       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      About a minute ago   Running             etcd                      1                   db1198f918a56       etcd-multinode-282238
	601633c570119       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   6 minutes ago        Exited              busybox                   0                   d0b7f0f8a027c       busybox-fc5497c4f-dpfrf
	8d816c0bbdea7       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago        Exited              coredns                   0                   d8211874c627f       coredns-7db6d8ff4d-pq89m
	bf5e18923ea34       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago        Exited              storage-provisioner       0                   ccd4646808c1e       storage-provisioner
	fcab67c5c4901       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      7 minutes ago        Exited              kindnet-cni               0                   c9f27cd653d1a       kindnet-hl7zh
	be40d7b3a3ded       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      7 minutes ago        Exited              kube-proxy                0                   9e8fc27693581       kube-proxy-2rmjj
	0338a9652764e       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      8 minutes ago        Exited              etcd                      0                   4967e3688b635       etcd-multinode-282238
	15b3a41e9b9b6       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      8 minutes ago        Exited              kube-apiserver            0                   24e5dd5fe8240       kube-apiserver-multinode-282238
	0bbe01883646d       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      8 minutes ago        Exited              kube-scheduler            0                   f60182d3a6d76       kube-scheduler-multinode-282238
	648ac51c97cf0       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      8 minutes ago        Exited              kube-controller-manager   0                   252d7ef8f1bfd       kube-controller-manager-multinode-282238
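	For reference, a container listing like the one above can usually be reproduced by hand against this profile's primary node; the exact crictl invocation below is an illustrative assumption (the captured log does not show the command that produced this table):
	
	    # List all CRI-O containers on the node, including exited ones
	    out/minikube-linux-amd64 -p multinode-282238 ssh "sudo crictl ps -a"
	    # Fetch the logs of a specific container (substitute an ID from the table above)
	    out/minikube-linux-amd64 -p multinode-282238 ssh "sudo crictl logs <CONTAINER_ID>"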
	
	
	==> coredns [8d816c0bbdea7d46440e648d984d842c5ae02201b1d5cd0bd5201c544dfda5e0] <==
	[INFO] 10.244.1.2:59618 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001510725s
	[INFO] 10.244.1.2:60456 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000156562s
	[INFO] 10.244.1.2:36252 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000077922s
	[INFO] 10.244.1.2:47181 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001366745s
	[INFO] 10.244.1.2:37037 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000071647s
	[INFO] 10.244.1.2:36317 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000181442s
	[INFO] 10.244.1.2:38996 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000086856s
	[INFO] 10.244.0.3:37679 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000148366s
	[INFO] 10.244.0.3:53590 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000090081s
	[INFO] 10.244.0.3:39061 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000045676s
	[INFO] 10.244.0.3:51107 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000027055s
	[INFO] 10.244.1.2:39063 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000121533s
	[INFO] 10.244.1.2:46771 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000200225s
	[INFO] 10.244.1.2:41167 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000090026s
	[INFO] 10.244.1.2:33744 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000134405s
	[INFO] 10.244.0.3:54357 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000120434s
	[INFO] 10.244.0.3:37819 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000158211s
	[INFO] 10.244.0.3:53355 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000195347s
	[INFO] 10.244.0.3:59846 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000102906s
	[INFO] 10.244.1.2:48867 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000166885s
	[INFO] 10.244.1.2:33516 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000095903s
	[INFO] 10.244.1.2:33876 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000097965s
	[INFO] 10.244.1.2:51976 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000082889s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [ebdfee87078676a4391b0ef7c57976ea25bca9367b33f20c56bdcb4233d1cd89] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:44128 - 20402 "HINFO IN 8562177580602459877.2340631428550283688. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.022513189s
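	The lookups recorded in these CoreDNS logs come from in-cluster clients resolving service and host names. A minimal way to generate a comparable query by hand, assuming the kubeconfig context created for this profile and the busybox deployment exercised by the test, would be:
	
	    # Resolve the kubernetes service and the minikube host alias from inside the cluster
	    kubectl --context multinode-282238 exec deploy/busybox -- nslookup kubernetes.default
	    kubectl --context multinode-282238 exec deploy/busybox -- nslookup host.minikube.internal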
	
	
	==> describe nodes <==
	Name:               multinode-282238
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-282238
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e
	                    minikube.k8s.io/name=multinode-282238
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_01T03_02_23_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 May 2024 03:02:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-282238
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 May 2024 03:10:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 May 2024 03:09:04 +0000   Wed, 01 May 2024 03:02:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 May 2024 03:09:04 +0000   Wed, 01 May 2024 03:02:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 May 2024 03:09:04 +0000   Wed, 01 May 2024 03:02:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 May 2024 03:09:04 +0000   Wed, 01 May 2024 03:03:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.139
	  Hostname:    multinode-282238
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ac8b8e4c2ce042738c18c8a843898f22
	  System UUID:                ac8b8e4c-2ce0-4273-8c18-c8a843898f22
	  Boot ID:                    8ab7d952-245f-482d-8568-788991e02aaa
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-dpfrf                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m38s
	  kube-system                 coredns-7db6d8ff4d-pq89m                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m55s
	  kube-system                 etcd-multinode-282238                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m9s
	  kube-system                 kindnet-hl7zh                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m56s
	  kube-system                 kube-apiserver-multinode-282238             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m10s
	  kube-system                 kube-controller-manager-multinode-282238    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m9s
	  kube-system                 kube-proxy-2rmjj                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m56s
	  kube-system                 kube-scheduler-multinode-282238             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m9s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m53s                  kube-proxy       
	  Normal  Starting                 85s                    kube-proxy       
	  Normal  Starting                 8m15s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m15s (x8 over 8m15s)  kubelet          Node multinode-282238 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m15s (x8 over 8m15s)  kubelet          Node multinode-282238 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m15s (x7 over 8m15s)  kubelet          Node multinode-282238 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m15s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 8m9s                   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m9s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    8m8s                   kubelet          Node multinode-282238 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m8s                   kubelet          Node multinode-282238 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  8m8s                   kubelet          Node multinode-282238 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           7m56s                  node-controller  Node multinode-282238 event: Registered Node multinode-282238 in Controller
	  Normal  NodeReady                7m24s                  kubelet          Node multinode-282238 status is now: NodeReady
	  Normal  Starting                 90s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  90s (x8 over 90s)      kubelet          Node multinode-282238 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    90s (x8 over 90s)      kubelet          Node multinode-282238 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     90s (x7 over 90s)      kubelet          Node multinode-282238 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  90s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           74s                    node-controller  Node multinode-282238 event: Registered Node multinode-282238 in Controller
	
	
	Name:               multinode-282238-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-282238-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e
	                    minikube.k8s.io/name=multinode-282238
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_01T03_09_47_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 May 2024 03:09:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-282238-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 May 2024 03:10:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 May 2024 03:10:18 +0000   Wed, 01 May 2024 03:09:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 May 2024 03:10:18 +0000   Wed, 01 May 2024 03:09:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 May 2024 03:10:18 +0000   Wed, 01 May 2024 03:09:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 May 2024 03:10:18 +0000   Wed, 01 May 2024 03:09:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.29
	  Hostname:    multinode-282238-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f5a400bf04a54c3c826bef4e8e41d9b6
	  System UUID:                f5a400bf-04a5-4c3c-826b-ef4e8e41d9b6
	  Boot ID:                    6d2f4221-f3cd-4281-b31f-9ca638e646c8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-j8jhq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kube-system                 kindnet-rxg49              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m51s
	  kube-system                 kube-proxy-66kjs           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 6m45s                  kube-proxy  
	  Normal  Starting                 39s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  6m51s (x2 over 6m51s)  kubelet     Node multinode-282238-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m51s (x2 over 6m51s)  kubelet     Node multinode-282238-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m51s (x2 over 6m51s)  kubelet     Node multinode-282238-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m51s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m41s                  kubelet     Node multinode-282238-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  44s (x2 over 44s)      kubelet     Node multinode-282238-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    44s (x2 over 44s)      kubelet     Node multinode-282238-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     44s (x2 over 44s)      kubelet     Node multinode-282238-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  44s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                35s                    kubelet     Node multinode-282238-m02 status is now: NodeReady
	
	
	Name:               multinode-282238-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-282238-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e
	                    minikube.k8s.io/name=multinode-282238
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_01T03_10_18_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 May 2024 03:10:18 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-282238-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 May 2024 03:10:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 May 2024 03:10:27 +0000   Wed, 01 May 2024 03:10:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 May 2024 03:10:27 +0000   Wed, 01 May 2024 03:10:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 May 2024 03:10:27 +0000   Wed, 01 May 2024 03:10:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 May 2024 03:10:27 +0000   Wed, 01 May 2024 03:10:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.220
	  Hostname:    multinode-282238-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5c4e5e998c99409e8e34c87a931315c8
	  System UUID:                5c4e5e99-8c99-409e-8e34-c87a931315c8
	  Boot ID:                    25f99958-904f-47da-b80b-59aef7a0e7ad
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-lwglr       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m4s
	  kube-system                 kube-proxy-z96xb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m16s                  kube-proxy       
	  Normal  Starting                 5m59s                  kube-proxy       
	  Normal  Starting                 8s                     kube-proxy       
	  Normal  NodeHasSufficientMemory  6m5s (x2 over 6m5s)    kubelet          Node multinode-282238-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m5s (x2 over 6m5s)    kubelet          Node multinode-282238-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m5s (x2 over 6m5s)    kubelet          Node multinode-282238-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m4s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                5m54s                  kubelet          Node multinode-282238-m03 status is now: NodeReady
	  Normal  NodeHasNoDiskPressure    5m21s (x2 over 5m22s)  kubelet          Node multinode-282238-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m21s (x2 over 5m22s)  kubelet          Node multinode-282238-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m21s (x2 over 5m22s)  kubelet          Node multinode-282238-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m12s                  kubelet          Node multinode-282238-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  13s (x2 over 13s)      kubelet          Node multinode-282238-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13s (x2 over 13s)      kubelet          Node multinode-282238-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13s (x2 over 13s)      kubelet          Node multinode-282238-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9s                     node-controller  Node multinode-282238-m03 event: Registered Node multinode-282238-m03 in Controller
	  Normal  NodeReady                4s                     kubelet          Node multinode-282238-m03 status is now: NodeReady
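	The three node descriptions above can be regenerated against the same cluster with kubectl, assuming the kubeconfig context carries the profile name (minikube's default):
	
	    # Re-emit the node details and a quick status summary
	    kubectl --context multinode-282238 describe nodes
	    kubectl --context multinode-282238 get nodes -o wide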
	
	
	==> dmesg <==
	[  +0.072848] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.201477] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.140585] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.321892] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +4.611878] systemd-fstab-generator[764]: Ignoring "noauto" option for root device
	[  +0.067544] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.131398] systemd-fstab-generator[947]: Ignoring "noauto" option for root device
	[  +1.079333] kauditd_printk_skb: 57 callbacks suppressed
	[  +5.467486] systemd-fstab-generator[1280]: Ignoring "noauto" option for root device
	[  +0.094157] kauditd_printk_skb: 30 callbacks suppressed
	[ +13.724442] kauditd_printk_skb: 21 callbacks suppressed
	[  +0.058462] systemd-fstab-generator[1517]: Ignoring "noauto" option for root device
	[May 1 03:03] kauditd_printk_skb: 60 callbacks suppressed
	[ +45.076888] kauditd_printk_skb: 14 callbacks suppressed
	[May 1 03:08] systemd-fstab-generator[2769]: Ignoring "noauto" option for root device
	[  +0.147830] systemd-fstab-generator[2782]: Ignoring "noauto" option for root device
	[  +0.175865] systemd-fstab-generator[2796]: Ignoring "noauto" option for root device
	[  +0.151955] systemd-fstab-generator[2808]: Ignoring "noauto" option for root device
	[  +0.297260] systemd-fstab-generator[2836]: Ignoring "noauto" option for root device
	[  +0.795652] systemd-fstab-generator[2932]: Ignoring "noauto" option for root device
	[May 1 03:09] systemd-fstab-generator[3057]: Ignoring "noauto" option for root device
	[  +4.617919] kauditd_printk_skb: 184 callbacks suppressed
	[ +11.908207] kauditd_printk_skb: 32 callbacks suppressed
	[  +4.232000] systemd-fstab-generator[3894]: Ignoring "noauto" option for root device
	[ +17.670652] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [0338a9652764e389d91bc7c406553a4271aaf3b47de0bef43e26752ddb86033f] <==
	{"level":"info","ts":"2024-05-01T03:02:18.108243Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-01T03:02:18.108506Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"warn","ts":"2024-05-01T03:03:40.358763Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"169.078835ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15793436913611130661 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:3660-second id:5b2d8f321a554724>","response":"size:40"}
	{"level":"info","ts":"2024-05-01T03:03:40.359118Z","caller":"traceutil/trace.go:171","msg":"trace[356406267] transaction","detail":"{read_only:false; response_revision:486; number_of_response:1; }","duration":"174.873603ms","start":"2024-05-01T03:03:40.184228Z","end":"2024-05-01T03:03:40.359102Z","steps":["trace[356406267] 'process raft request'  (duration: 174.785494ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-01T03:03:40.359288Z","caller":"traceutil/trace.go:171","msg":"trace[315325814] linearizableReadLoop","detail":"{readStateIndex:511; appliedIndex:510; }","duration":"223.249847ms","start":"2024-05-01T03:03:40.136027Z","end":"2024-05-01T03:03:40.359277Z","steps":["trace[315325814] 'read index received'  (duration: 53.539797ms)","trace[315325814] 'applied index is now lower than readState.Index'  (duration: 169.709134ms)"],"step_count":2}
	{"level":"warn","ts":"2024-05-01T03:03:40.359566Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"171.407877ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-282238-m02\" ","response":"range_response_count:1 size:1925"}
	{"level":"info","ts":"2024-05-01T03:03:40.359644Z","caller":"traceutil/trace.go:171","msg":"trace[65675065] range","detail":"{range_begin:/registry/minions/multinode-282238-m02; range_end:; response_count:1; response_revision:486; }","duration":"171.49882ms","start":"2024-05-01T03:03:40.188134Z","end":"2024-05-01T03:03:40.359633Z","steps":["trace[65675065] 'agreement among raft nodes before linearized reading'  (duration: 171.383754ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-01T03:03:40.359565Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"223.522226ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-05-01T03:03:40.359812Z","caller":"traceutil/trace.go:171","msg":"trace[1585261514] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:486; }","duration":"223.797988ms","start":"2024-05-01T03:03:40.136004Z","end":"2024-05-01T03:03:40.359802Z","steps":["trace[1585261514] 'agreement among raft nodes before linearized reading'  (duration: 223.381212ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-01T03:04:27.133348Z","caller":"traceutil/trace.go:171","msg":"trace[85424170] transaction","detail":"{read_only:false; response_revision:615; number_of_response:1; }","duration":"248.119873ms","start":"2024-05-01T03:04:26.88518Z","end":"2024-05-01T03:04:27.1333Z","steps":["trace[85424170] 'process raft request'  (duration: 208.513234ms)","trace[85424170] 'compare'  (duration: 39.517932ms)"],"step_count":2}
	{"level":"info","ts":"2024-05-01T03:04:27.140519Z","caller":"traceutil/trace.go:171","msg":"trace[373897446] transaction","detail":"{read_only:false; response_revision:616; number_of_response:1; }","duration":"208.475179ms","start":"2024-05-01T03:04:26.932031Z","end":"2024-05-01T03:04:27.140506Z","steps":["trace[373897446] 'process raft request'  (duration: 207.678956ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-01T03:04:30.529679Z","caller":"traceutil/trace.go:171","msg":"trace[52786717] linearizableReadLoop","detail":"{readStateIndex:696; appliedIndex:695; }","duration":"209.792007ms","start":"2024-05-01T03:04:30.319868Z","end":"2024-05-01T03:04:30.52966Z","steps":["trace[52786717] 'read index received'  (duration: 147.884295ms)","trace[52786717] 'applied index is now lower than readState.Index'  (duration: 61.907017ms)"],"step_count":2}
	{"level":"warn","ts":"2024-05-01T03:04:30.53001Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"210.052329ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-282238-m03\" ","response":"range_response_count:1 size:3229"}
	{"level":"info","ts":"2024-05-01T03:04:30.53018Z","caller":"traceutil/trace.go:171","msg":"trace[2146336304] range","detail":"{range_begin:/registry/minions/multinode-282238-m03; range_end:; response_count:1; response_revision:649; }","duration":"210.323107ms","start":"2024-05-01T03:04:30.319835Z","end":"2024-05-01T03:04:30.530158Z","steps":["trace[2146336304] 'agreement among raft nodes before linearized reading'  (duration: 209.975113ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-01T03:04:30.530003Z","caller":"traceutil/trace.go:171","msg":"trace[282043103] transaction","detail":"{read_only:false; response_revision:649; number_of_response:1; }","duration":"255.241718ms","start":"2024-05-01T03:04:30.274743Z","end":"2024-05-01T03:04:30.529985Z","steps":["trace[282043103] 'process raft request'  (duration: 193.059812ms)","trace[282043103] 'compare'  (duration: 61.745625ms)"],"step_count":2}
	{"level":"info","ts":"2024-05-01T03:07:26.037635Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-05-01T03:07:26.037764Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-282238","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.139:2380"],"advertise-client-urls":["https://192.168.39.139:2379"]}
	{"level":"warn","ts":"2024-05-01T03:07:26.037921Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-01T03:07:26.038006Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-01T03:07:26.115154Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.139:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-01T03:07:26.115218Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.139:2379: use of closed network connection"}
	{"level":"info","ts":"2024-05-01T03:07:26.115313Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"3cbdd43a8949db2d","current-leader-member-id":"3cbdd43a8949db2d"}
	{"level":"info","ts":"2024-05-01T03:07:26.118018Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.139:2380"}
	{"level":"info","ts":"2024-05-01T03:07:26.118133Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.139:2380"}
	{"level":"info","ts":"2024-05-01T03:07:26.118142Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-282238","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.139:2380"],"advertise-client-urls":["https://192.168.39.139:2379"]}
	
	
	==> etcd [f48f5b734c7eaf5babad6a7bd38e9f26e8d2c8f3b507d0eec92fc34dce752934] <==
	{"level":"info","ts":"2024-05-01T03:09:02.231691Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3cbdd43a8949db2d switched to configuration voters=(4376887760750500653)"}
	{"level":"info","ts":"2024-05-01T03:09:02.232022Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"4af51893258ecb17","local-member-id":"3cbdd43a8949db2d","added-peer-id":"3cbdd43a8949db2d","added-peer-peer-urls":["https://192.168.39.139:2380"]}
	{"level":"info","ts":"2024-05-01T03:09:02.233558Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"4af51893258ecb17","local-member-id":"3cbdd43a8949db2d","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-01T03:09:02.23362Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-01T03:09:02.240663Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-05-01T03:09:02.240934Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"3cbdd43a8949db2d","initial-advertise-peer-urls":["https://192.168.39.139:2380"],"listen-peer-urls":["https://192.168.39.139:2380"],"advertise-client-urls":["https://192.168.39.139:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.139:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-05-01T03:09:02.242591Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-05-01T03:09:02.243618Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.139:2380"}
	{"level":"info","ts":"2024-05-01T03:09:02.251297Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.139:2380"}
	{"level":"info","ts":"2024-05-01T03:09:03.377504Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3cbdd43a8949db2d is starting a new election at term 2"}
	{"level":"info","ts":"2024-05-01T03:09:03.377577Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3cbdd43a8949db2d became pre-candidate at term 2"}
	{"level":"info","ts":"2024-05-01T03:09:03.37761Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3cbdd43a8949db2d received MsgPreVoteResp from 3cbdd43a8949db2d at term 2"}
	{"level":"info","ts":"2024-05-01T03:09:03.377623Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3cbdd43a8949db2d became candidate at term 3"}
	{"level":"info","ts":"2024-05-01T03:09:03.37764Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3cbdd43a8949db2d received MsgVoteResp from 3cbdd43a8949db2d at term 3"}
	{"level":"info","ts":"2024-05-01T03:09:03.377649Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3cbdd43a8949db2d became leader at term 3"}
	{"level":"info","ts":"2024-05-01T03:09:03.377659Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 3cbdd43a8949db2d elected leader 3cbdd43a8949db2d at term 3"}
	{"level":"info","ts":"2024-05-01T03:09:03.386632Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"3cbdd43a8949db2d","local-member-attributes":"{Name:multinode-282238 ClientURLs:[https://192.168.39.139:2379]}","request-path":"/0/members/3cbdd43a8949db2d/attributes","cluster-id":"4af51893258ecb17","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-01T03:09:03.386692Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-01T03:09:03.387131Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-01T03:09:03.39082Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-05-01T03:09:03.397899Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.139:2379"}
	{"level":"info","ts":"2024-05-01T03:09:03.397999Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-01T03:09:03.398034Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-01T03:10:22.538993Z","caller":"traceutil/trace.go:171","msg":"trace[1615329494] transaction","detail":"{read_only:false; response_revision:1147; number_of_response:1; }","duration":"159.125983ms","start":"2024-05-01T03:10:22.379826Z","end":"2024-05-01T03:10:22.538952Z","steps":["trace[1615329494] 'process raft request'  (duration: 158.998728ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-01T03:10:22.576394Z","caller":"traceutil/trace.go:171","msg":"trace[1303630415] transaction","detail":"{read_only:false; response_revision:1148; number_of_response:1; }","duration":"170.713237ms","start":"2024-05-01T03:10:22.405658Z","end":"2024-05-01T03:10:22.576371Z","steps":["trace[1303630415] 'process raft request'  (duration: 169.475774ms)"],"step_count":1}
	
	
	==> kernel <==
	 03:10:31 up 8 min,  0 users,  load average: 0.25, 0.18, 0.10
	Linux multinode-282238 5.10.207 #1 SMP Tue Apr 30 22:38:43 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [622e78c84bf099cdfac90652360cd9687ba7c350f0987f85a311200a32222190] <==
	I0501 03:09:46.804054       1 main.go:250] Node multinode-282238-m03 has CIDR [10.244.3.0/24] 
	I0501 03:09:56.818073       1 main.go:223] Handling node with IPs: map[192.168.39.139:{}]
	I0501 03:09:56.818117       1 main.go:227] handling current node
	I0501 03:09:56.818128       1 main.go:223] Handling node with IPs: map[192.168.39.29:{}]
	I0501 03:09:56.818134       1 main.go:250] Node multinode-282238-m02 has CIDR [10.244.1.0/24] 
	I0501 03:09:56.818240       1 main.go:223] Handling node with IPs: map[192.168.39.220:{}]
	I0501 03:09:56.818245       1 main.go:250] Node multinode-282238-m03 has CIDR [10.244.3.0/24] 
	I0501 03:10:06.824614       1 main.go:223] Handling node with IPs: map[192.168.39.139:{}]
	I0501 03:10:06.824678       1 main.go:227] handling current node
	I0501 03:10:06.824693       1 main.go:223] Handling node with IPs: map[192.168.39.29:{}]
	I0501 03:10:06.824701       1 main.go:250] Node multinode-282238-m02 has CIDR [10.244.1.0/24] 
	I0501 03:10:06.824830       1 main.go:223] Handling node with IPs: map[192.168.39.220:{}]
	I0501 03:10:06.824837       1 main.go:250] Node multinode-282238-m03 has CIDR [10.244.3.0/24] 
	I0501 03:10:16.849367       1 main.go:223] Handling node with IPs: map[192.168.39.139:{}]
	I0501 03:10:16.849591       1 main.go:227] handling current node
	I0501 03:10:16.849640       1 main.go:223] Handling node with IPs: map[192.168.39.29:{}]
	I0501 03:10:16.849653       1 main.go:250] Node multinode-282238-m02 has CIDR [10.244.1.0/24] 
	I0501 03:10:16.849959       1 main.go:223] Handling node with IPs: map[192.168.39.220:{}]
	I0501 03:10:16.849994       1 main.go:250] Node multinode-282238-m03 has CIDR [10.244.3.0/24] 
	I0501 03:10:26.855467       1 main.go:223] Handling node with IPs: map[192.168.39.139:{}]
	I0501 03:10:26.855620       1 main.go:227] handling current node
	I0501 03:10:26.855656       1 main.go:223] Handling node with IPs: map[192.168.39.29:{}]
	I0501 03:10:26.855677       1 main.go:250] Node multinode-282238-m02 has CIDR [10.244.1.0/24] 
	I0501 03:10:26.855789       1 main.go:223] Handling node with IPs: map[192.168.39.220:{}]
	I0501 03:10:26.855809       1 main.go:250] Node multinode-282238-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kindnet [fcab67c5c49010d68dd607692c8a87af7ad0be31da8b639dd24377f122a4e4d2] <==
	I0501 03:06:37.721882       1 main.go:250] Node multinode-282238-m03 has CIDR [10.244.3.0/24] 
	I0501 03:06:47.731165       1 main.go:223] Handling node with IPs: map[192.168.39.139:{}]
	I0501 03:06:47.731246       1 main.go:227] handling current node
	I0501 03:06:47.731268       1 main.go:223] Handling node with IPs: map[192.168.39.29:{}]
	I0501 03:06:47.731291       1 main.go:250] Node multinode-282238-m02 has CIDR [10.244.1.0/24] 
	I0501 03:06:47.731510       1 main.go:223] Handling node with IPs: map[192.168.39.220:{}]
	I0501 03:06:47.731552       1 main.go:250] Node multinode-282238-m03 has CIDR [10.244.3.0/24] 
	I0501 03:06:57.745376       1 main.go:223] Handling node with IPs: map[192.168.39.139:{}]
	I0501 03:06:57.745507       1 main.go:227] handling current node
	I0501 03:06:57.745519       1 main.go:223] Handling node with IPs: map[192.168.39.29:{}]
	I0501 03:06:57.745525       1 main.go:250] Node multinode-282238-m02 has CIDR [10.244.1.0/24] 
	I0501 03:06:57.745976       1 main.go:223] Handling node with IPs: map[192.168.39.220:{}]
	I0501 03:06:57.745988       1 main.go:250] Node multinode-282238-m03 has CIDR [10.244.3.0/24] 
	I0501 03:07:07.759493       1 main.go:223] Handling node with IPs: map[192.168.39.139:{}]
	I0501 03:07:07.759705       1 main.go:227] handling current node
	I0501 03:07:07.759755       1 main.go:223] Handling node with IPs: map[192.168.39.29:{}]
	I0501 03:07:07.759779       1 main.go:250] Node multinode-282238-m02 has CIDR [10.244.1.0/24] 
	I0501 03:07:07.759917       1 main.go:223] Handling node with IPs: map[192.168.39.220:{}]
	I0501 03:07:07.759945       1 main.go:250] Node multinode-282238-m03 has CIDR [10.244.3.0/24] 
	I0501 03:07:17.765694       1 main.go:223] Handling node with IPs: map[192.168.39.139:{}]
	I0501 03:07:17.765745       1 main.go:227] handling current node
	I0501 03:07:17.765756       1 main.go:223] Handling node with IPs: map[192.168.39.29:{}]
	I0501 03:07:17.765762       1 main.go:250] Node multinode-282238-m02 has CIDR [10.244.1.0/24] 
	I0501 03:07:17.765865       1 main.go:223] Handling node with IPs: map[192.168.39.220:{}]
	I0501 03:07:17.765962       1 main.go:250] Node multinode-282238-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [15b3a41e9b9b60d6c65946591a4b9d001a896ae747addb52cb7f2d0945f41fb6] <==
	W0501 03:07:26.064528       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0501 03:07:26.064811       1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0501 03:07:26.065128       1 logging.go:59] [core] [Channel #166 SubChannel #167] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0501 03:07:26.065214       1 logging.go:59] [core] [Channel #73 SubChannel #74] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0501 03:07:26.065270       1 logging.go:59] [core] [Channel #163 SubChannel #164] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0501 03:07:26.065320       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0501 03:07:26.065371       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0501 03:07:26.065526       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0501 03:07:26.065589       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0501 03:07:26.065642       1 logging.go:59] [core] [Channel #82 SubChannel #83] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0501 03:07:26.065913       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0501 03:07:26.067239       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0501 03:07:26.069591       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0501 03:07:26.069745       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0501 03:07:26.069845       1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0501 03:07:26.070053       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0501 03:07:26.070128       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0501 03:07:26.070186       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0501 03:07:26.070239       1 logging.go:59] [core] [Channel #79 SubChannel #80] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0501 03:07:26.070302       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0501 03:07:26.070392       1 logging.go:59] [core] [Channel #55 SubChannel #56] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0501 03:07:26.070662       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0501 03:07:26.070704       1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0501 03:07:26.071865       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0501 03:07:26.071961       1 logging.go:59] [core] [Channel #91 SubChannel #92] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [f13ad7828f3b9e47354e8b7246e161db5528f5a208d0a771ee742358bb8a80ac] <==
	I0501 03:09:04.814606       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0501 03:09:04.815268       1 aggregator.go:165] initial CRD sync complete...
	I0501 03:09:04.815305       1 autoregister_controller.go:141] Starting autoregister controller
	I0501 03:09:04.815312       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0501 03:09:04.875083       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0501 03:09:04.876153       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0501 03:09:04.877357       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0501 03:09:04.877554       1 shared_informer.go:320] Caches are synced for configmaps
	I0501 03:09:04.877685       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0501 03:09:04.879207       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0501 03:09:04.884184       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	E0501 03:09:04.886930       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0501 03:09:04.898309       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0501 03:09:04.898495       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0501 03:09:04.898534       1 policy_source.go:224] refreshing policies
	I0501 03:09:04.921807       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0501 03:09:04.924730       1 cache.go:39] Caches are synced for autoregister controller
	I0501 03:09:05.796633       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0501 03:09:07.268259       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0501 03:09:07.405566       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0501 03:09:07.421362       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0501 03:09:07.487115       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0501 03:09:07.493280       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0501 03:09:17.319984       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0501 03:09:17.322099       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [15cc964ddd2cbb7aeaf774665a85e7d70e1a039125b8a3ccb7187eae1b9acb1d] <==
	I0501 03:09:17.946815       1 shared_informer.go:320] Caches are synced for garbage collector
	I0501 03:09:17.946900       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0501 03:09:17.972237       1 shared_informer.go:320] Caches are synced for garbage collector
	I0501 03:09:42.987361       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.629555ms"
	I0501 03:09:42.998897       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.301384ms"
	I0501 03:09:42.999163       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="142.694µs"
	I0501 03:09:43.176343       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="120.215µs"
	I0501 03:09:47.266865       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-282238-m02\" does not exist"
	I0501 03:09:47.277257       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-282238-m02" podCIDRs=["10.244.1.0/24"]
	I0501 03:09:49.153775       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.693µs"
	I0501 03:09:49.196949       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.145µs"
	I0501 03:09:49.208338       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="62.636µs"
	I0501 03:09:49.229079       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="73.351µs"
	I0501 03:09:49.238321       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="64.598µs"
	I0501 03:09:49.242992       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="101.173µs"
	I0501 03:09:56.412800       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-282238-m02"
	I0501 03:09:56.430876       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.956µs"
	I0501 03:09:56.448246       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.368µs"
	I0501 03:10:00.673596       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.611674ms"
	I0501 03:10:00.673704       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.505µs"
	I0501 03:10:17.101031       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-282238-m02"
	I0501 03:10:18.271127       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-282238-m03\" does not exist"
	I0501 03:10:18.271701       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-282238-m02"
	I0501 03:10:18.281031       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-282238-m03" podCIDRs=["10.244.2.0/24"]
	I0501 03:10:27.816170       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-282238-m02"
	
	
	==> kube-controller-manager [648ac51c97cf05ff6096b7f920d602e441f2909db28005c9395bbed15cf2716e] <==
	I0501 03:03:40.363271       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-282238-m02\" does not exist"
	I0501 03:03:40.381121       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-282238-m02" podCIDRs=["10.244.1.0/24"]
	I0501 03:03:45.162814       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-282238-m02"
	I0501 03:03:50.462851       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-282238-m02"
	I0501 03:03:53.049818       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="56.675509ms"
	I0501 03:03:53.089808       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.895739ms"
	I0501 03:03:53.118718       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="28.778738ms"
	I0501 03:03:53.118848       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="61.562µs"
	I0501 03:03:56.227545       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="5.329766ms"
	I0501 03:03:56.227755       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="82.954µs"
	I0501 03:03:56.378750       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="4.961427ms"
	I0501 03:03:56.378841       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.415µs"
	I0501 03:04:27.144571       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-282238-m02"
	I0501 03:04:27.143395       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-282238-m03\" does not exist"
	I0501 03:04:27.177748       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-282238-m03" podCIDRs=["10.244.2.0/24"]
	I0501 03:04:30.179200       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-282238-m03"
	I0501 03:04:37.435214       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-282238-m03"
	I0501 03:05:08.634653       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-282238-m02"
	I0501 03:05:10.085252       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-282238-m03\" does not exist"
	I0501 03:05:10.085369       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-282238-m02"
	I0501 03:05:10.095757       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-282238-m03" podCIDRs=["10.244.3.0/24"]
	I0501 03:05:19.331864       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-282238-m02"
	I0501 03:06:05.230059       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-282238-m03"
	I0501 03:06:05.289526       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.704875ms"
	I0501 03:06:05.289959       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="24.996µs"
	
	
	==> kube-proxy [58136eaaeacc9dbd72f4c4277026813eb767e299885032dbbb476301df4752f8] <==
	I0501 03:09:05.859123       1 server_linux.go:69] "Using iptables proxy"
	I0501 03:09:05.883148       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.139"]
	I0501 03:09:05.965512       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0501 03:09:05.965578       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0501 03:09:05.965595       1 server_linux.go:165] "Using iptables Proxier"
	I0501 03:09:05.983662       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0501 03:09:05.984099       1 server.go:872] "Version info" version="v1.30.0"
	I0501 03:09:05.984192       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 03:09:05.986388       1 config.go:192] "Starting service config controller"
	I0501 03:09:05.987667       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0501 03:09:05.987822       1 config.go:101] "Starting endpoint slice config controller"
	I0501 03:09:05.993528       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0501 03:09:05.993271       1 config.go:319] "Starting node config controller"
	I0501 03:09:05.993977       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0501 03:09:06.088722       1 shared_informer.go:320] Caches are synced for service config
	I0501 03:09:06.094528       1 shared_informer.go:320] Caches are synced for node config
	I0501 03:09:06.094812       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [be40d7b3a3ded9d9e8420ed6594c65c831f2543e4a62eaa9eff0e1d4b5922c1e] <==
	I0501 03:02:37.007610       1 server_linux.go:69] "Using iptables proxy"
	I0501 03:02:37.031243       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.139"]
	I0501 03:02:37.311584       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0501 03:02:37.311628       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0501 03:02:37.311740       1 server_linux.go:165] "Using iptables Proxier"
	I0501 03:02:37.351754       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0501 03:02:37.353329       1 server.go:872] "Version info" version="v1.30.0"
	I0501 03:02:37.353372       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 03:02:37.356201       1 config.go:192] "Starting service config controller"
	I0501 03:02:37.356308       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0501 03:02:37.356339       1 config.go:101] "Starting endpoint slice config controller"
	I0501 03:02:37.356343       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0501 03:02:37.357733       1 config.go:319] "Starting node config controller"
	I0501 03:02:37.357743       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0501 03:02:37.457382       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0501 03:02:37.457521       1 shared_informer.go:320] Caches are synced for service config
	I0501 03:02:37.460715       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [0bbe01883646dd171b19f4e453c14175e649929473ee82dae03eb7c7bce9b04c] <==
	E0501 03:02:19.911591       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0501 03:02:19.911314       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0501 03:02:19.911653       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0501 03:02:20.719629       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0501 03:02:20.719688       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0501 03:02:20.739056       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0501 03:02:20.739114       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0501 03:02:20.790877       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0501 03:02:20.790991       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0501 03:02:20.936066       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0501 03:02:20.936213       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0501 03:02:21.018286       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0501 03:02:21.018463       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0501 03:02:21.026991       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0501 03:02:21.027253       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0501 03:02:21.124675       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0501 03:02:21.124811       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0501 03:02:21.156207       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0501 03:02:21.156877       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0501 03:02:21.205544       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0501 03:02:21.205665       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0501 03:02:21.239614       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0501 03:02:21.239706       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0501 03:02:23.704341       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0501 03:07:26.047759       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [8778d60a896a8cf1338ee26951ff0c6bd7cc9899d8164db111249b76cd20b5c1] <==
	I0501 03:09:03.057377       1 serving.go:380] Generated self-signed cert in-memory
	W0501 03:09:04.835045       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0501 03:09:04.835223       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0501 03:09:04.835269       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0501 03:09:04.835293       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0501 03:09:04.850986       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0501 03:09:04.851131       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 03:09:04.854966       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0501 03:09:04.855026       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0501 03:09:04.855318       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0501 03:09:04.855450       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0501 03:09:04.956171       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 01 03:09:02 multinode-282238 kubelet[3064]: E0501 03:09:02.080852    3064 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.139:8443: connect: connection refused
	May 01 03:09:02 multinode-282238 kubelet[3064]: I0501 03:09:02.551964    3064 kubelet_node_status.go:73] "Attempting to register node" node="multinode-282238"
	May 01 03:09:04 multinode-282238 kubelet[3064]: I0501 03:09:04.932569    3064 kubelet_node_status.go:112] "Node was previously registered" node="multinode-282238"
	May 01 03:09:04 multinode-282238 kubelet[3064]: I0501 03:09:04.932680    3064 kubelet_node_status.go:76] "Successfully registered node" node="multinode-282238"
	May 01 03:09:04 multinode-282238 kubelet[3064]: I0501 03:09:04.935088    3064 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	May 01 03:09:04 multinode-282238 kubelet[3064]: I0501 03:09:04.936363    3064 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	May 01 03:09:05 multinode-282238 kubelet[3064]: I0501 03:09:05.024809    3064 apiserver.go:52] "Watching apiserver"
	May 01 03:09:05 multinode-282238 kubelet[3064]: I0501 03:09:05.029361    3064 topology_manager.go:215] "Topology Admit Handler" podUID="fd0cbe33-025e-4a86-af98-8571c8f3340c" podNamespace="kube-system" podName="kindnet-hl7zh"
	May 01 03:09:05 multinode-282238 kubelet[3064]: I0501 03:09:05.031885    3064 topology_manager.go:215] "Topology Admit Handler" podUID="d33bb084-3ce9-4fa9-8703-b6bec622abac" podNamespace="kube-system" podName="kube-proxy-2rmjj"
	May 01 03:09:05 multinode-282238 kubelet[3064]: I0501 03:09:05.033118    3064 topology_manager.go:215] "Topology Admit Handler" podUID="2cb009de-6a0c-47b9-b6a9-5da24ed79f03" podNamespace="kube-system" podName="coredns-7db6d8ff4d-pq89m"
	May 01 03:09:05 multinode-282238 kubelet[3064]: I0501 03:09:05.033950    3064 topology_manager.go:215] "Topology Admit Handler" podUID="71ce398a-00b1-4aca-87ba-78b64361ed9d" podNamespace="kube-system" podName="storage-provisioner"
	May 01 03:09:05 multinode-282238 kubelet[3064]: I0501 03:09:05.034865    3064 topology_manager.go:215] "Topology Admit Handler" podUID="00cc3b07-24df-4bef-ba3f-b94a8c0cee87" podNamespace="default" podName="busybox-fc5497c4f-dpfrf"
	May 01 03:09:05 multinode-282238 kubelet[3064]: I0501 03:09:05.043775    3064 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	May 01 03:09:05 multinode-282238 kubelet[3064]: I0501 03:09:05.130859    3064 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/71ce398a-00b1-4aca-87ba-78b64361ed9d-tmp\") pod \"storage-provisioner\" (UID: \"71ce398a-00b1-4aca-87ba-78b64361ed9d\") " pod="kube-system/storage-provisioner"
	May 01 03:09:05 multinode-282238 kubelet[3064]: I0501 03:09:05.131849    3064 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/fd0cbe33-025e-4a86-af98-8571c8f3340c-cni-cfg\") pod \"kindnet-hl7zh\" (UID: \"fd0cbe33-025e-4a86-af98-8571c8f3340c\") " pod="kube-system/kindnet-hl7zh"
	May 01 03:09:05 multinode-282238 kubelet[3064]: I0501 03:09:05.131916    3064 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fd0cbe33-025e-4a86-af98-8571c8f3340c-lib-modules\") pod \"kindnet-hl7zh\" (UID: \"fd0cbe33-025e-4a86-af98-8571c8f3340c\") " pod="kube-system/kindnet-hl7zh"
	May 01 03:09:05 multinode-282238 kubelet[3064]: I0501 03:09:05.131936    3064 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d33bb084-3ce9-4fa9-8703-b6bec622abac-xtables-lock\") pod \"kube-proxy-2rmjj\" (UID: \"d33bb084-3ce9-4fa9-8703-b6bec622abac\") " pod="kube-system/kube-proxy-2rmjj"
	May 01 03:09:05 multinode-282238 kubelet[3064]: I0501 03:09:05.131950    3064 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d33bb084-3ce9-4fa9-8703-b6bec622abac-lib-modules\") pod \"kube-proxy-2rmjj\" (UID: \"d33bb084-3ce9-4fa9-8703-b6bec622abac\") " pod="kube-system/kube-proxy-2rmjj"
	May 01 03:09:05 multinode-282238 kubelet[3064]: I0501 03:09:05.131974    3064 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fd0cbe33-025e-4a86-af98-8571c8f3340c-xtables-lock\") pod \"kindnet-hl7zh\" (UID: \"fd0cbe33-025e-4a86-af98-8571c8f3340c\") " pod="kube-system/kindnet-hl7zh"
	May 01 03:09:13 multinode-282238 kubelet[3064]: I0501 03:09:13.732317    3064 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	May 01 03:10:01 multinode-282238 kubelet[3064]: E0501 03:10:01.113212    3064 iptables.go:577] "Could not set up iptables canary" err=<
	May 01 03:10:01 multinode-282238 kubelet[3064]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 01 03:10:01 multinode-282238 kubelet[3064]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 01 03:10:01 multinode-282238 kubelet[3064]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 01 03:10:01 multinode-282238 kubelet[3064]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0501 03:10:30.529290   52467 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18779-13391/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-282238 -n multinode-282238
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-282238 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (310.29s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (141.48s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-282238 stop
E0501 03:11:07.469384   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/client.crt: no such file or directory
E0501 03:11:24.421849   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/client.crt: no such file or directory
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-282238 stop: exit status 82 (2m0.477855197s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-282238-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-282238 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-282238 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-282238 status: exit status 3 (18.797323927s)

                                                
                                                
-- stdout --
	multinode-282238
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-282238-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0501 03:12:54.246686   53120 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.29:22: connect: no route to host
	E0501 03:12:54.246733   53120 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.29:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-282238 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-282238 -n multinode-282238
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-282238 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-282238 logs -n 25: (1.529327597s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-282238 ssh -n                                                                 | multinode-282238 | jenkins | v1.33.0 | 01 May 24 03:04 UTC | 01 May 24 03:04 UTC |
	|         | multinode-282238-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-282238 cp multinode-282238-m02:/home/docker/cp-test.txt                       | multinode-282238 | jenkins | v1.33.0 | 01 May 24 03:04 UTC | 01 May 24 03:04 UTC |
	|         | multinode-282238:/home/docker/cp-test_multinode-282238-m02_multinode-282238.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-282238 ssh -n                                                                 | multinode-282238 | jenkins | v1.33.0 | 01 May 24 03:04 UTC | 01 May 24 03:04 UTC |
	|         | multinode-282238-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-282238 ssh -n multinode-282238 sudo cat                                       | multinode-282238 | jenkins | v1.33.0 | 01 May 24 03:04 UTC | 01 May 24 03:04 UTC |
	|         | /home/docker/cp-test_multinode-282238-m02_multinode-282238.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-282238 cp multinode-282238-m02:/home/docker/cp-test.txt                       | multinode-282238 | jenkins | v1.33.0 | 01 May 24 03:04 UTC | 01 May 24 03:04 UTC |
	|         | multinode-282238-m03:/home/docker/cp-test_multinode-282238-m02_multinode-282238-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-282238 ssh -n                                                                 | multinode-282238 | jenkins | v1.33.0 | 01 May 24 03:04 UTC | 01 May 24 03:04 UTC |
	|         | multinode-282238-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-282238 ssh -n multinode-282238-m03 sudo cat                                   | multinode-282238 | jenkins | v1.33.0 | 01 May 24 03:04 UTC | 01 May 24 03:04 UTC |
	|         | /home/docker/cp-test_multinode-282238-m02_multinode-282238-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-282238 cp testdata/cp-test.txt                                                | multinode-282238 | jenkins | v1.33.0 | 01 May 24 03:04 UTC | 01 May 24 03:04 UTC |
	|         | multinode-282238-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-282238 ssh -n                                                                 | multinode-282238 | jenkins | v1.33.0 | 01 May 24 03:04 UTC | 01 May 24 03:04 UTC |
	|         | multinode-282238-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-282238 cp multinode-282238-m03:/home/docker/cp-test.txt                       | multinode-282238 | jenkins | v1.33.0 | 01 May 24 03:04 UTC | 01 May 24 03:04 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2058267319/001/cp-test_multinode-282238-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-282238 ssh -n                                                                 | multinode-282238 | jenkins | v1.33.0 | 01 May 24 03:04 UTC | 01 May 24 03:04 UTC |
	|         | multinode-282238-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-282238 cp multinode-282238-m03:/home/docker/cp-test.txt                       | multinode-282238 | jenkins | v1.33.0 | 01 May 24 03:04 UTC | 01 May 24 03:04 UTC |
	|         | multinode-282238:/home/docker/cp-test_multinode-282238-m03_multinode-282238.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-282238 ssh -n                                                                 | multinode-282238 | jenkins | v1.33.0 | 01 May 24 03:04 UTC | 01 May 24 03:04 UTC |
	|         | multinode-282238-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-282238 ssh -n multinode-282238 sudo cat                                       | multinode-282238 | jenkins | v1.33.0 | 01 May 24 03:04 UTC | 01 May 24 03:04 UTC |
	|         | /home/docker/cp-test_multinode-282238-m03_multinode-282238.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-282238 cp multinode-282238-m03:/home/docker/cp-test.txt                       | multinode-282238 | jenkins | v1.33.0 | 01 May 24 03:04 UTC | 01 May 24 03:04 UTC |
	|         | multinode-282238-m02:/home/docker/cp-test_multinode-282238-m03_multinode-282238-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-282238 ssh -n                                                                 | multinode-282238 | jenkins | v1.33.0 | 01 May 24 03:04 UTC | 01 May 24 03:04 UTC |
	|         | multinode-282238-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-282238 ssh -n multinode-282238-m02 sudo cat                                   | multinode-282238 | jenkins | v1.33.0 | 01 May 24 03:04 UTC | 01 May 24 03:04 UTC |
	|         | /home/docker/cp-test_multinode-282238-m03_multinode-282238-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-282238 node stop m03                                                          | multinode-282238 | jenkins | v1.33.0 | 01 May 24 03:04 UTC | 01 May 24 03:04 UTC |
	| node    | multinode-282238 node start                                                             | multinode-282238 | jenkins | v1.33.0 | 01 May 24 03:04 UTC | 01 May 24 03:05 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-282238                                                                | multinode-282238 | jenkins | v1.33.0 | 01 May 24 03:05 UTC |                     |
	| stop    | -p multinode-282238                                                                     | multinode-282238 | jenkins | v1.33.0 | 01 May 24 03:05 UTC |                     |
	| start   | -p multinode-282238                                                                     | multinode-282238 | jenkins | v1.33.0 | 01 May 24 03:07 UTC | 01 May 24 03:10 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-282238                                                                | multinode-282238 | jenkins | v1.33.0 | 01 May 24 03:10 UTC |                     |
	| node    | multinode-282238 node delete                                                            | multinode-282238 | jenkins | v1.33.0 | 01 May 24 03:10 UTC | 01 May 24 03:10 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-282238 stop                                                                   | multinode-282238 | jenkins | v1.33.0 | 01 May 24 03:10 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/01 03:07:25
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0501 03:07:25.032643   51401 out.go:291] Setting OutFile to fd 1 ...
	I0501 03:07:25.032903   51401 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 03:07:25.032912   51401 out.go:304] Setting ErrFile to fd 2...
	I0501 03:07:25.032916   51401 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 03:07:25.033101   51401 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18779-13391/.minikube/bin
	I0501 03:07:25.033643   51401 out.go:298] Setting JSON to false
	I0501 03:07:25.034574   51401 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6588,"bootTime":1714526257,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0501 03:07:25.034635   51401 start.go:139] virtualization: kvm guest
	I0501 03:07:25.036751   51401 out.go:177] * [multinode-282238] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0501 03:07:25.038279   51401 out.go:177]   - MINIKUBE_LOCATION=18779
	I0501 03:07:25.039543   51401 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0501 03:07:25.038290   51401 notify.go:220] Checking for updates...
	I0501 03:07:25.040849   51401 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18779-13391/kubeconfig
	I0501 03:07:25.042262   51401 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18779-13391/.minikube
	I0501 03:07:25.043550   51401 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0501 03:07:25.044963   51401 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0501 03:07:25.046788   51401 config.go:182] Loaded profile config "multinode-282238": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 03:07:25.046868   51401 driver.go:392] Setting default libvirt URI to qemu:///system
	I0501 03:07:25.047338   51401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:07:25.047373   51401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:07:25.063587   51401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43787
	I0501 03:07:25.063954   51401 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:07:25.064478   51401 main.go:141] libmachine: Using API Version  1
	I0501 03:07:25.064497   51401 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:07:25.064861   51401 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:07:25.065037   51401 main.go:141] libmachine: (multinode-282238) Calling .DriverName
	I0501 03:07:25.101733   51401 out.go:177] * Using the kvm2 driver based on existing profile
	I0501 03:07:25.103071   51401 start.go:297] selected driver: kvm2
	I0501 03:07:25.103084   51401 start.go:901] validating driver "kvm2" against &{Name:multinode-282238 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-282238 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.139 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.29 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.220 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 03:07:25.103254   51401 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0501 03:07:25.103601   51401 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0501 03:07:25.103672   51401 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18779-13391/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0501 03:07:25.118248   51401 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0501 03:07:25.118983   51401 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0501 03:07:25.119055   51401 cni.go:84] Creating CNI manager for ""
	I0501 03:07:25.119067   51401 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0501 03:07:25.119129   51401 start.go:340] cluster config:
	{Name:multinode-282238 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-282238 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.139 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.29 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.220 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 03:07:25.119266   51401 iso.go:125] acquiring lock: {Name:mkcd2496aadb29931e179193d707c731bfda98b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0501 03:07:25.121032   51401 out.go:177] * Starting "multinode-282238" primary control-plane node in "multinode-282238" cluster
	I0501 03:07:25.122115   51401 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0501 03:07:25.122148   51401 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0501 03:07:25.122163   51401 cache.go:56] Caching tarball of preloaded images
	I0501 03:07:25.122250   51401 preload.go:173] Found /home/jenkins/minikube-integration/18779-13391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0501 03:07:25.122264   51401 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0501 03:07:25.122453   51401 profile.go:143] Saving config to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/multinode-282238/config.json ...
	I0501 03:07:25.122668   51401 start.go:360] acquireMachinesLock for multinode-282238: {Name:mkc4548e5afe1d4b0a833f6c522103562b0cefff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0501 03:07:25.122712   51401 start.go:364] duration metric: took 25.941µs to acquireMachinesLock for "multinode-282238"
	I0501 03:07:25.122732   51401 start.go:96] Skipping create...Using existing machine configuration
	I0501 03:07:25.122738   51401 fix.go:54] fixHost starting: 
	I0501 03:07:25.123048   51401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:07:25.123081   51401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:07:25.137262   51401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41195
	I0501 03:07:25.137682   51401 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:07:25.138160   51401 main.go:141] libmachine: Using API Version  1
	I0501 03:07:25.138185   51401 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:07:25.138497   51401 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:07:25.138735   51401 main.go:141] libmachine: (multinode-282238) Calling .DriverName
	I0501 03:07:25.138873   51401 main.go:141] libmachine: (multinode-282238) Calling .GetState
	I0501 03:07:25.140461   51401 fix.go:112] recreateIfNeeded on multinode-282238: state=Running err=<nil>
	W0501 03:07:25.140489   51401 fix.go:138] unexpected machine state, will restart: <nil>
	I0501 03:07:25.142331   51401 out.go:177] * Updating the running kvm2 "multinode-282238" VM ...
	I0501 03:07:25.143539   51401 machine.go:94] provisionDockerMachine start ...
	I0501 03:07:25.143564   51401 main.go:141] libmachine: (multinode-282238) Calling .DriverName
	I0501 03:07:25.143791   51401 main.go:141] libmachine: (multinode-282238) Calling .GetSSHHostname
	I0501 03:07:25.146289   51401 main.go:141] libmachine: (multinode-282238) DBG | domain multinode-282238 has defined MAC address 52:54:00:3c:33:06 in network mk-multinode-282238
	I0501 03:07:25.146760   51401 main.go:141] libmachine: (multinode-282238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:33:06", ip: ""} in network mk-multinode-282238: {Iface:virbr1 ExpiryTime:2024-05-01 04:01:56 +0000 UTC Type:0 Mac:52:54:00:3c:33:06 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:multinode-282238 Clientid:01:52:54:00:3c:33:06}
	I0501 03:07:25.146782   51401 main.go:141] libmachine: (multinode-282238) DBG | domain multinode-282238 has defined IP address 192.168.39.139 and MAC address 52:54:00:3c:33:06 in network mk-multinode-282238
	I0501 03:07:25.146932   51401 main.go:141] libmachine: (multinode-282238) Calling .GetSSHPort
	I0501 03:07:25.147099   51401 main.go:141] libmachine: (multinode-282238) Calling .GetSSHKeyPath
	I0501 03:07:25.147235   51401 main.go:141] libmachine: (multinode-282238) Calling .GetSSHKeyPath
	I0501 03:07:25.147376   51401 main.go:141] libmachine: (multinode-282238) Calling .GetSSHUsername
	I0501 03:07:25.147576   51401 main.go:141] libmachine: Using SSH client type: native
	I0501 03:07:25.147741   51401 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.139 22 <nil> <nil>}
	I0501 03:07:25.147752   51401 main.go:141] libmachine: About to run SSH command:
	hostname
	I0501 03:07:25.260618   51401 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-282238
	
	I0501 03:07:25.260653   51401 main.go:141] libmachine: (multinode-282238) Calling .GetMachineName
	I0501 03:07:25.260900   51401 buildroot.go:166] provisioning hostname "multinode-282238"
	I0501 03:07:25.260922   51401 main.go:141] libmachine: (multinode-282238) Calling .GetMachineName
	I0501 03:07:25.261065   51401 main.go:141] libmachine: (multinode-282238) Calling .GetSSHHostname
	I0501 03:07:25.264045   51401 main.go:141] libmachine: (multinode-282238) DBG | domain multinode-282238 has defined MAC address 52:54:00:3c:33:06 in network mk-multinode-282238
	I0501 03:07:25.264448   51401 main.go:141] libmachine: (multinode-282238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:33:06", ip: ""} in network mk-multinode-282238: {Iface:virbr1 ExpiryTime:2024-05-01 04:01:56 +0000 UTC Type:0 Mac:52:54:00:3c:33:06 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:multinode-282238 Clientid:01:52:54:00:3c:33:06}
	I0501 03:07:25.264478   51401 main.go:141] libmachine: (multinode-282238) DBG | domain multinode-282238 has defined IP address 192.168.39.139 and MAC address 52:54:00:3c:33:06 in network mk-multinode-282238
	I0501 03:07:25.264713   51401 main.go:141] libmachine: (multinode-282238) Calling .GetSSHPort
	I0501 03:07:25.264900   51401 main.go:141] libmachine: (multinode-282238) Calling .GetSSHKeyPath
	I0501 03:07:25.265034   51401 main.go:141] libmachine: (multinode-282238) Calling .GetSSHKeyPath
	I0501 03:07:25.265129   51401 main.go:141] libmachine: (multinode-282238) Calling .GetSSHUsername
	I0501 03:07:25.265257   51401 main.go:141] libmachine: Using SSH client type: native
	I0501 03:07:25.265405   51401 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.139 22 <nil> <nil>}
	I0501 03:07:25.265418   51401 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-282238 && echo "multinode-282238" | sudo tee /etc/hostname
	I0501 03:07:25.390587   51401 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-282238
	
	I0501 03:07:25.390621   51401 main.go:141] libmachine: (multinode-282238) Calling .GetSSHHostname
	I0501 03:07:25.393307   51401 main.go:141] libmachine: (multinode-282238) DBG | domain multinode-282238 has defined MAC address 52:54:00:3c:33:06 in network mk-multinode-282238
	I0501 03:07:25.393641   51401 main.go:141] libmachine: (multinode-282238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:33:06", ip: ""} in network mk-multinode-282238: {Iface:virbr1 ExpiryTime:2024-05-01 04:01:56 +0000 UTC Type:0 Mac:52:54:00:3c:33:06 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:multinode-282238 Clientid:01:52:54:00:3c:33:06}
	I0501 03:07:25.393670   51401 main.go:141] libmachine: (multinode-282238) DBG | domain multinode-282238 has defined IP address 192.168.39.139 and MAC address 52:54:00:3c:33:06 in network mk-multinode-282238
	I0501 03:07:25.393868   51401 main.go:141] libmachine: (multinode-282238) Calling .GetSSHPort
	I0501 03:07:25.394065   51401 main.go:141] libmachine: (multinode-282238) Calling .GetSSHKeyPath
	I0501 03:07:25.394211   51401 main.go:141] libmachine: (multinode-282238) Calling .GetSSHKeyPath
	I0501 03:07:25.394335   51401 main.go:141] libmachine: (multinode-282238) Calling .GetSSHUsername
	I0501 03:07:25.394485   51401 main.go:141] libmachine: Using SSH client type: native
	I0501 03:07:25.394678   51401 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.139 22 <nil> <nil>}
	I0501 03:07:25.394703   51401 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-282238' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-282238/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-282238' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0501 03:07:25.503494   51401 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0501 03:07:25.503530   51401 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18779-13391/.minikube CaCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18779-13391/.minikube}
	I0501 03:07:25.503556   51401 buildroot.go:174] setting up certificates
	I0501 03:07:25.503567   51401 provision.go:84] configureAuth start
	I0501 03:07:25.503580   51401 main.go:141] libmachine: (multinode-282238) Calling .GetMachineName
	I0501 03:07:25.503864   51401 main.go:141] libmachine: (multinode-282238) Calling .GetIP
	I0501 03:07:25.506274   51401 main.go:141] libmachine: (multinode-282238) DBG | domain multinode-282238 has defined MAC address 52:54:00:3c:33:06 in network mk-multinode-282238
	I0501 03:07:25.506622   51401 main.go:141] libmachine: (multinode-282238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:33:06", ip: ""} in network mk-multinode-282238: {Iface:virbr1 ExpiryTime:2024-05-01 04:01:56 +0000 UTC Type:0 Mac:52:54:00:3c:33:06 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:multinode-282238 Clientid:01:52:54:00:3c:33:06}
	I0501 03:07:25.506656   51401 main.go:141] libmachine: (multinode-282238) DBG | domain multinode-282238 has defined IP address 192.168.39.139 and MAC address 52:54:00:3c:33:06 in network mk-multinode-282238
	I0501 03:07:25.506763   51401 main.go:141] libmachine: (multinode-282238) Calling .GetSSHHostname
	I0501 03:07:25.508928   51401 main.go:141] libmachine: (multinode-282238) DBG | domain multinode-282238 has defined MAC address 52:54:00:3c:33:06 in network mk-multinode-282238
	I0501 03:07:25.509281   51401 main.go:141] libmachine: (multinode-282238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:33:06", ip: ""} in network mk-multinode-282238: {Iface:virbr1 ExpiryTime:2024-05-01 04:01:56 +0000 UTC Type:0 Mac:52:54:00:3c:33:06 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:multinode-282238 Clientid:01:52:54:00:3c:33:06}
	I0501 03:07:25.509308   51401 main.go:141] libmachine: (multinode-282238) DBG | domain multinode-282238 has defined IP address 192.168.39.139 and MAC address 52:54:00:3c:33:06 in network mk-multinode-282238
	I0501 03:07:25.509439   51401 provision.go:143] copyHostCerts
	I0501 03:07:25.509471   51401 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem
	I0501 03:07:25.509502   51401 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem, removing ...
	I0501 03:07:25.509510   51401 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem
	I0501 03:07:25.509577   51401 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem (1078 bytes)
	I0501 03:07:25.509655   51401 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem
	I0501 03:07:25.509682   51401 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem, removing ...
	I0501 03:07:25.509689   51401 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem
	I0501 03:07:25.509720   51401 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem (1123 bytes)
	I0501 03:07:25.509769   51401 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem
	I0501 03:07:25.509785   51401 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem, removing ...
	I0501 03:07:25.509792   51401 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem
	I0501 03:07:25.509811   51401 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem (1675 bytes)
	I0501 03:07:25.509862   51401 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem org=jenkins.multinode-282238 san=[127.0.0.1 192.168.39.139 localhost minikube multinode-282238]
	I0501 03:07:25.741904   51401 provision.go:177] copyRemoteCerts
	I0501 03:07:25.741971   51401 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0501 03:07:25.741996   51401 main.go:141] libmachine: (multinode-282238) Calling .GetSSHHostname
	I0501 03:07:25.744626   51401 main.go:141] libmachine: (multinode-282238) DBG | domain multinode-282238 has defined MAC address 52:54:00:3c:33:06 in network mk-multinode-282238
	I0501 03:07:25.744995   51401 main.go:141] libmachine: (multinode-282238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:33:06", ip: ""} in network mk-multinode-282238: {Iface:virbr1 ExpiryTime:2024-05-01 04:01:56 +0000 UTC Type:0 Mac:52:54:00:3c:33:06 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:multinode-282238 Clientid:01:52:54:00:3c:33:06}
	I0501 03:07:25.745025   51401 main.go:141] libmachine: (multinode-282238) DBG | domain multinode-282238 has defined IP address 192.168.39.139 and MAC address 52:54:00:3c:33:06 in network mk-multinode-282238
	I0501 03:07:25.745137   51401 main.go:141] libmachine: (multinode-282238) Calling .GetSSHPort
	I0501 03:07:25.745342   51401 main.go:141] libmachine: (multinode-282238) Calling .GetSSHKeyPath
	I0501 03:07:25.745515   51401 main.go:141] libmachine: (multinode-282238) Calling .GetSSHUsername
	I0501 03:07:25.745665   51401 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/multinode-282238/id_rsa Username:docker}
	I0501 03:07:25.826537   51401 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0501 03:07:25.826604   51401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0501 03:07:25.859022   51401 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0501 03:07:25.859094   51401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0501 03:07:25.887664   51401 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0501 03:07:25.887727   51401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0501 03:07:25.917389   51401 provision.go:87] duration metric: took 413.792925ms to configureAuth
	I0501 03:07:25.917417   51401 buildroot.go:189] setting minikube options for container-runtime
	I0501 03:07:25.917637   51401 config.go:182] Loaded profile config "multinode-282238": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 03:07:25.917716   51401 main.go:141] libmachine: (multinode-282238) Calling .GetSSHHostname
	I0501 03:07:25.920388   51401 main.go:141] libmachine: (multinode-282238) DBG | domain multinode-282238 has defined MAC address 52:54:00:3c:33:06 in network mk-multinode-282238
	I0501 03:07:25.920817   51401 main.go:141] libmachine: (multinode-282238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:33:06", ip: ""} in network mk-multinode-282238: {Iface:virbr1 ExpiryTime:2024-05-01 04:01:56 +0000 UTC Type:0 Mac:52:54:00:3c:33:06 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:multinode-282238 Clientid:01:52:54:00:3c:33:06}
	I0501 03:07:25.920845   51401 main.go:141] libmachine: (multinode-282238) DBG | domain multinode-282238 has defined IP address 192.168.39.139 and MAC address 52:54:00:3c:33:06 in network mk-multinode-282238
	I0501 03:07:25.920969   51401 main.go:141] libmachine: (multinode-282238) Calling .GetSSHPort
	I0501 03:07:25.921144   51401 main.go:141] libmachine: (multinode-282238) Calling .GetSSHKeyPath
	I0501 03:07:25.921297   51401 main.go:141] libmachine: (multinode-282238) Calling .GetSSHKeyPath
	I0501 03:07:25.921419   51401 main.go:141] libmachine: (multinode-282238) Calling .GetSSHUsername
	I0501 03:07:25.921573   51401 main.go:141] libmachine: Using SSH client type: native
	I0501 03:07:25.921728   51401 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.139 22 <nil> <nil>}
	I0501 03:07:25.921744   51401 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0501 03:08:56.675142   51401 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0501 03:08:56.675175   51401 machine.go:97] duration metric: took 1m31.531621429s to provisionDockerMachine
	I0501 03:08:56.675191   51401 start.go:293] postStartSetup for "multinode-282238" (driver="kvm2")
	I0501 03:08:56.675206   51401 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0501 03:08:56.675253   51401 main.go:141] libmachine: (multinode-282238) Calling .DriverName
	I0501 03:08:56.675579   51401 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0501 03:08:56.675612   51401 main.go:141] libmachine: (multinode-282238) Calling .GetSSHHostname
	I0501 03:08:56.678601   51401 main.go:141] libmachine: (multinode-282238) DBG | domain multinode-282238 has defined MAC address 52:54:00:3c:33:06 in network mk-multinode-282238
	I0501 03:08:56.679020   51401 main.go:141] libmachine: (multinode-282238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:33:06", ip: ""} in network mk-multinode-282238: {Iface:virbr1 ExpiryTime:2024-05-01 04:01:56 +0000 UTC Type:0 Mac:52:54:00:3c:33:06 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:multinode-282238 Clientid:01:52:54:00:3c:33:06}
	I0501 03:08:56.679047   51401 main.go:141] libmachine: (multinode-282238) DBG | domain multinode-282238 has defined IP address 192.168.39.139 and MAC address 52:54:00:3c:33:06 in network mk-multinode-282238
	I0501 03:08:56.679200   51401 main.go:141] libmachine: (multinode-282238) Calling .GetSSHPort
	I0501 03:08:56.679381   51401 main.go:141] libmachine: (multinode-282238) Calling .GetSSHKeyPath
	I0501 03:08:56.679535   51401 main.go:141] libmachine: (multinode-282238) Calling .GetSSHUsername
	I0501 03:08:56.679660   51401 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/multinode-282238/id_rsa Username:docker}
	I0501 03:08:56.763350   51401 ssh_runner.go:195] Run: cat /etc/os-release
	I0501 03:08:56.768099   51401 command_runner.go:130] > NAME=Buildroot
	I0501 03:08:56.768111   51401 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0501 03:08:56.768116   51401 command_runner.go:130] > ID=buildroot
	I0501 03:08:56.768120   51401 command_runner.go:130] > VERSION_ID=2023.02.9
	I0501 03:08:56.768126   51401 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0501 03:08:56.768150   51401 info.go:137] Remote host: Buildroot 2023.02.9
	I0501 03:08:56.768169   51401 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13391/.minikube/addons for local assets ...
	I0501 03:08:56.768255   51401 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13391/.minikube/files for local assets ...
	I0501 03:08:56.768326   51401 filesync.go:149] local asset: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem -> 207242.pem in /etc/ssl/certs
	I0501 03:08:56.768334   51401 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem -> /etc/ssl/certs/207242.pem
	I0501 03:08:56.768411   51401 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0501 03:08:56.778385   51401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem --> /etc/ssl/certs/207242.pem (1708 bytes)
	I0501 03:08:56.805585   51401 start.go:296] duration metric: took 130.38193ms for postStartSetup
	I0501 03:08:56.805619   51401 fix.go:56] duration metric: took 1m31.682880587s for fixHost
	I0501 03:08:56.805637   51401 main.go:141] libmachine: (multinode-282238) Calling .GetSSHHostname
	I0501 03:08:56.808651   51401 main.go:141] libmachine: (multinode-282238) DBG | domain multinode-282238 has defined MAC address 52:54:00:3c:33:06 in network mk-multinode-282238
	I0501 03:08:56.809077   51401 main.go:141] libmachine: (multinode-282238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:33:06", ip: ""} in network mk-multinode-282238: {Iface:virbr1 ExpiryTime:2024-05-01 04:01:56 +0000 UTC Type:0 Mac:52:54:00:3c:33:06 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:multinode-282238 Clientid:01:52:54:00:3c:33:06}
	I0501 03:08:56.809104   51401 main.go:141] libmachine: (multinode-282238) DBG | domain multinode-282238 has defined IP address 192.168.39.139 and MAC address 52:54:00:3c:33:06 in network mk-multinode-282238
	I0501 03:08:56.809273   51401 main.go:141] libmachine: (multinode-282238) Calling .GetSSHPort
	I0501 03:08:56.809456   51401 main.go:141] libmachine: (multinode-282238) Calling .GetSSHKeyPath
	I0501 03:08:56.809613   51401 main.go:141] libmachine: (multinode-282238) Calling .GetSSHKeyPath
	I0501 03:08:56.809772   51401 main.go:141] libmachine: (multinode-282238) Calling .GetSSHUsername
	I0501 03:08:56.809913   51401 main.go:141] libmachine: Using SSH client type: native
	I0501 03:08:56.810118   51401 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.139 22 <nil> <nil>}
	I0501 03:08:56.810134   51401 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0501 03:08:56.911693   51401 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714532936.883050544
	
	I0501 03:08:56.911710   51401 fix.go:216] guest clock: 1714532936.883050544
	I0501 03:08:56.911717   51401 fix.go:229] Guest: 2024-05-01 03:08:56.883050544 +0000 UTC Remote: 2024-05-01 03:08:56.805622688 +0000 UTC m=+91.819245890 (delta=77.427856ms)
	I0501 03:08:56.911746   51401 fix.go:200] guest clock delta is within tolerance: 77.427856ms
	I0501 03:08:56.911753   51401 start.go:83] releasing machines lock for "multinode-282238", held for 1m31.789028144s
	I0501 03:08:56.911776   51401 main.go:141] libmachine: (multinode-282238) Calling .DriverName
	I0501 03:08:56.912048   51401 main.go:141] libmachine: (multinode-282238) Calling .GetIP
	I0501 03:08:56.914352   51401 main.go:141] libmachine: (multinode-282238) DBG | domain multinode-282238 has defined MAC address 52:54:00:3c:33:06 in network mk-multinode-282238
	I0501 03:08:56.914696   51401 main.go:141] libmachine: (multinode-282238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:33:06", ip: ""} in network mk-multinode-282238: {Iface:virbr1 ExpiryTime:2024-05-01 04:01:56 +0000 UTC Type:0 Mac:52:54:00:3c:33:06 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:multinode-282238 Clientid:01:52:54:00:3c:33:06}
	I0501 03:08:56.914726   51401 main.go:141] libmachine: (multinode-282238) DBG | domain multinode-282238 has defined IP address 192.168.39.139 and MAC address 52:54:00:3c:33:06 in network mk-multinode-282238
	I0501 03:08:56.914896   51401 main.go:141] libmachine: (multinode-282238) Calling .DriverName
	I0501 03:08:56.915412   51401 main.go:141] libmachine: (multinode-282238) Calling .DriverName
	I0501 03:08:56.915602   51401 main.go:141] libmachine: (multinode-282238) Calling .DriverName
	I0501 03:08:56.915664   51401 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0501 03:08:56.915711   51401 main.go:141] libmachine: (multinode-282238) Calling .GetSSHHostname
	I0501 03:08:56.915822   51401 ssh_runner.go:195] Run: cat /version.json
	I0501 03:08:56.915845   51401 main.go:141] libmachine: (multinode-282238) Calling .GetSSHHostname
	I0501 03:08:56.918273   51401 main.go:141] libmachine: (multinode-282238) DBG | domain multinode-282238 has defined MAC address 52:54:00:3c:33:06 in network mk-multinode-282238
	I0501 03:08:56.918297   51401 main.go:141] libmachine: (multinode-282238) DBG | domain multinode-282238 has defined MAC address 52:54:00:3c:33:06 in network mk-multinode-282238
	I0501 03:08:56.918656   51401 main.go:141] libmachine: (multinode-282238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:33:06", ip: ""} in network mk-multinode-282238: {Iface:virbr1 ExpiryTime:2024-05-01 04:01:56 +0000 UTC Type:0 Mac:52:54:00:3c:33:06 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:multinode-282238 Clientid:01:52:54:00:3c:33:06}
	I0501 03:08:56.918686   51401 main.go:141] libmachine: (multinode-282238) DBG | domain multinode-282238 has defined IP address 192.168.39.139 and MAC address 52:54:00:3c:33:06 in network mk-multinode-282238
	I0501 03:08:56.918717   51401 main.go:141] libmachine: (multinode-282238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:33:06", ip: ""} in network mk-multinode-282238: {Iface:virbr1 ExpiryTime:2024-05-01 04:01:56 +0000 UTC Type:0 Mac:52:54:00:3c:33:06 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:multinode-282238 Clientid:01:52:54:00:3c:33:06}
	I0501 03:08:56.918733   51401 main.go:141] libmachine: (multinode-282238) DBG | domain multinode-282238 has defined IP address 192.168.39.139 and MAC address 52:54:00:3c:33:06 in network mk-multinode-282238
	I0501 03:08:56.918919   51401 main.go:141] libmachine: (multinode-282238) Calling .GetSSHPort
	I0501 03:08:56.918995   51401 main.go:141] libmachine: (multinode-282238) Calling .GetSSHPort
	I0501 03:08:56.919063   51401 main.go:141] libmachine: (multinode-282238) Calling .GetSSHKeyPath
	I0501 03:08:56.919194   51401 main.go:141] libmachine: (multinode-282238) Calling .GetSSHUsername
	I0501 03:08:56.919199   51401 main.go:141] libmachine: (multinode-282238) Calling .GetSSHKeyPath
	I0501 03:08:56.919370   51401 main.go:141] libmachine: (multinode-282238) Calling .GetSSHUsername
	I0501 03:08:56.919376   51401 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/multinode-282238/id_rsa Username:docker}
	I0501 03:08:56.919511   51401 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/multinode-282238/id_rsa Username:docker}
	I0501 03:08:56.995861   51401 command_runner.go:130] > {"iso_version": "v1.33.0-1714498396-18779", "kicbase_version": "v0.0.43-1714386659-18769", "minikube_version": "v1.33.0", "commit": "0c7995ab2d4914d5c74027eee5f5d102e19316f2"}
	I0501 03:08:56.995977   51401 ssh_runner.go:195] Run: systemctl --version
	I0501 03:08:57.021726   51401 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0501 03:08:57.021771   51401 command_runner.go:130] > systemd 252 (252)
	I0501 03:08:57.021788   51401 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0501 03:08:57.021850   51401 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0501 03:08:57.188063   51401 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0501 03:08:57.197104   51401 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0501 03:08:57.197518   51401 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0501 03:08:57.197578   51401 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0501 03:08:57.208117   51401 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0501 03:08:57.208144   51401 start.go:494] detecting cgroup driver to use...
	I0501 03:08:57.208223   51401 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0501 03:08:57.225667   51401 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 03:08:57.240786   51401 docker.go:217] disabling cri-docker service (if available) ...
	I0501 03:08:57.240851   51401 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0501 03:08:57.255199   51401 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0501 03:08:57.269525   51401 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0501 03:08:57.418649   51401 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0501 03:08:57.561947   51401 docker.go:233] disabling docker service ...
	I0501 03:08:57.562030   51401 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0501 03:08:57.581248   51401 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0501 03:08:57.596573   51401 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0501 03:08:57.740231   51401 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0501 03:08:57.885508   51401 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0501 03:08:57.902165   51401 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 03:08:57.924050   51401 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0501 03:08:57.924547   51401 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0501 03:08:57.924599   51401 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:08:57.936403   51401 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0501 03:08:57.936476   51401 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:08:57.948396   51401 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:08:57.960143   51401 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:08:57.971708   51401 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0501 03:08:57.983774   51401 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:08:57.995642   51401 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:08:58.010171   51401 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:08:58.022409   51401 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0501 03:08:58.033294   51401 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0501 03:08:58.033394   51401 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0501 03:08:58.043731   51401 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:08:58.187907   51401 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0501 03:08:58.451531   51401 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0501 03:08:58.451590   51401 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0501 03:08:58.458018   51401 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0501 03:08:58.458032   51401 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0501 03:08:58.458038   51401 command_runner.go:130] > Device: 0,22	Inode: 1319        Links: 1
	I0501 03:08:58.458045   51401 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0501 03:08:58.458051   51401 command_runner.go:130] > Access: 2024-05-01 03:08:58.315900439 +0000
	I0501 03:08:58.458057   51401 command_runner.go:130] > Modify: 2024-05-01 03:08:58.315900439 +0000
	I0501 03:08:58.458062   51401 command_runner.go:130] > Change: 2024-05-01 03:08:58.315900439 +0000
	I0501 03:08:58.458080   51401 command_runner.go:130] >  Birth: -
	I0501 03:08:58.458355   51401 start.go:562] Will wait 60s for crictl version
	I0501 03:08:58.458389   51401 ssh_runner.go:195] Run: which crictl
	I0501 03:08:58.462686   51401 command_runner.go:130] > /usr/bin/crictl
	I0501 03:08:58.462978   51401 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0501 03:08:58.513282   51401 command_runner.go:130] > Version:  0.1.0
	I0501 03:08:58.513301   51401 command_runner.go:130] > RuntimeName:  cri-o
	I0501 03:08:58.513305   51401 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0501 03:08:58.513312   51401 command_runner.go:130] > RuntimeApiVersion:  v1
	I0501 03:08:58.513529   51401 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0501 03:08:58.513588   51401 ssh_runner.go:195] Run: crio --version
	I0501 03:08:58.548493   51401 command_runner.go:130] > crio version 1.29.1
	I0501 03:08:58.548512   51401 command_runner.go:130] > Version:        1.29.1
	I0501 03:08:58.548518   51401 command_runner.go:130] > GitCommit:      unknown
	I0501 03:08:58.548522   51401 command_runner.go:130] > GitCommitDate:  unknown
	I0501 03:08:58.548526   51401 command_runner.go:130] > GitTreeState:   clean
	I0501 03:08:58.548532   51401 command_runner.go:130] > BuildDate:      2024-04-30T23:23:49Z
	I0501 03:08:58.548537   51401 command_runner.go:130] > GoVersion:      go1.21.6
	I0501 03:08:58.548541   51401 command_runner.go:130] > Compiler:       gc
	I0501 03:08:58.548546   51401 command_runner.go:130] > Platform:       linux/amd64
	I0501 03:08:58.548550   51401 command_runner.go:130] > Linkmode:       dynamic
	I0501 03:08:58.548566   51401 command_runner.go:130] > BuildTags:      
	I0501 03:08:58.548574   51401 command_runner.go:130] >   containers_image_ostree_stub
	I0501 03:08:58.548578   51401 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0501 03:08:58.548582   51401 command_runner.go:130] >   btrfs_noversion
	I0501 03:08:58.548587   51401 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0501 03:08:58.548594   51401 command_runner.go:130] >   libdm_no_deferred_remove
	I0501 03:08:58.548597   51401 command_runner.go:130] >   seccomp
	I0501 03:08:58.548601   51401 command_runner.go:130] > LDFlags:          unknown
	I0501 03:08:58.548607   51401 command_runner.go:130] > SeccompEnabled:   true
	I0501 03:08:58.548611   51401 command_runner.go:130] > AppArmorEnabled:  false
	I0501 03:08:58.550047   51401 ssh_runner.go:195] Run: crio --version
	I0501 03:08:58.588796   51401 command_runner.go:130] > crio version 1.29.1
	I0501 03:08:58.588819   51401 command_runner.go:130] > Version:        1.29.1
	I0501 03:08:58.588837   51401 command_runner.go:130] > GitCommit:      unknown
	I0501 03:08:58.588841   51401 command_runner.go:130] > GitCommitDate:  unknown
	I0501 03:08:58.588845   51401 command_runner.go:130] > GitTreeState:   clean
	I0501 03:08:58.588851   51401 command_runner.go:130] > BuildDate:      2024-04-30T23:23:49Z
	I0501 03:08:58.588858   51401 command_runner.go:130] > GoVersion:      go1.21.6
	I0501 03:08:58.588865   51401 command_runner.go:130] > Compiler:       gc
	I0501 03:08:58.588872   51401 command_runner.go:130] > Platform:       linux/amd64
	I0501 03:08:58.588880   51401 command_runner.go:130] > Linkmode:       dynamic
	I0501 03:08:58.588888   51401 command_runner.go:130] > BuildTags:      
	I0501 03:08:58.588901   51401 command_runner.go:130] >   containers_image_ostree_stub
	I0501 03:08:58.588905   51401 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0501 03:08:58.588909   51401 command_runner.go:130] >   btrfs_noversion
	I0501 03:08:58.588913   51401 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0501 03:08:58.588919   51401 command_runner.go:130] >   libdm_no_deferred_remove
	I0501 03:08:58.588923   51401 command_runner.go:130] >   seccomp
	I0501 03:08:58.588927   51401 command_runner.go:130] > LDFlags:          unknown
	I0501 03:08:58.588932   51401 command_runner.go:130] > SeccompEnabled:   true
	I0501 03:08:58.588937   51401 command_runner.go:130] > AppArmorEnabled:  false
	I0501 03:08:58.592169   51401 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0501 03:08:58.593781   51401 main.go:141] libmachine: (multinode-282238) Calling .GetIP
	I0501 03:08:58.596558   51401 main.go:141] libmachine: (multinode-282238) DBG | domain multinode-282238 has defined MAC address 52:54:00:3c:33:06 in network mk-multinode-282238
	I0501 03:08:58.596907   51401 main.go:141] libmachine: (multinode-282238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:33:06", ip: ""} in network mk-multinode-282238: {Iface:virbr1 ExpiryTime:2024-05-01 04:01:56 +0000 UTC Type:0 Mac:52:54:00:3c:33:06 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:multinode-282238 Clientid:01:52:54:00:3c:33:06}
	I0501 03:08:58.596930   51401 main.go:141] libmachine: (multinode-282238) DBG | domain multinode-282238 has defined IP address 192.168.39.139 and MAC address 52:54:00:3c:33:06 in network mk-multinode-282238
	I0501 03:08:58.597185   51401 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0501 03:08:58.602113   51401 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0501 03:08:58.602335   51401 kubeadm.go:877] updating cluster {Name:multinode-282238 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-282238 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.139 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.29 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.220 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0501 03:08:58.602509   51401 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0501 03:08:58.602567   51401 ssh_runner.go:195] Run: sudo crictl images --output json
	I0501 03:08:58.653378   51401 command_runner.go:130] > {
	I0501 03:08:58.653404   51401 command_runner.go:130] >   "images": [
	I0501 03:08:58.653410   51401 command_runner.go:130] >     {
	I0501 03:08:58.653422   51401 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0501 03:08:58.653429   51401 command_runner.go:130] >       "repoTags": [
	I0501 03:08:58.653438   51401 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0501 03:08:58.653453   51401 command_runner.go:130] >       ],
	I0501 03:08:58.653469   51401 command_runner.go:130] >       "repoDigests": [
	I0501 03:08:58.653483   51401 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0501 03:08:58.653494   51401 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0501 03:08:58.653499   51401 command_runner.go:130] >       ],
	I0501 03:08:58.653503   51401 command_runner.go:130] >       "size": "65291810",
	I0501 03:08:58.653507   51401 command_runner.go:130] >       "uid": null,
	I0501 03:08:58.653512   51401 command_runner.go:130] >       "username": "",
	I0501 03:08:58.653518   51401 command_runner.go:130] >       "spec": null,
	I0501 03:08:58.653525   51401 command_runner.go:130] >       "pinned": false
	I0501 03:08:58.653528   51401 command_runner.go:130] >     },
	I0501 03:08:58.653531   51401 command_runner.go:130] >     {
	I0501 03:08:58.653539   51401 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0501 03:08:58.653543   51401 command_runner.go:130] >       "repoTags": [
	I0501 03:08:58.653548   51401 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0501 03:08:58.653553   51401 command_runner.go:130] >       ],
	I0501 03:08:58.653557   51401 command_runner.go:130] >       "repoDigests": [
	I0501 03:08:58.653566   51401 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0501 03:08:58.653573   51401 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0501 03:08:58.653580   51401 command_runner.go:130] >       ],
	I0501 03:08:58.653584   51401 command_runner.go:130] >       "size": "1363676",
	I0501 03:08:58.653590   51401 command_runner.go:130] >       "uid": null,
	I0501 03:08:58.653599   51401 command_runner.go:130] >       "username": "",
	I0501 03:08:58.653605   51401 command_runner.go:130] >       "spec": null,
	I0501 03:08:58.653609   51401 command_runner.go:130] >       "pinned": false
	I0501 03:08:58.653615   51401 command_runner.go:130] >     },
	I0501 03:08:58.653618   51401 command_runner.go:130] >     {
	I0501 03:08:58.653627   51401 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0501 03:08:58.653635   51401 command_runner.go:130] >       "repoTags": [
	I0501 03:08:58.653641   51401 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0501 03:08:58.653646   51401 command_runner.go:130] >       ],
	I0501 03:08:58.653651   51401 command_runner.go:130] >       "repoDigests": [
	I0501 03:08:58.653660   51401 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0501 03:08:58.653670   51401 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0501 03:08:58.653676   51401 command_runner.go:130] >       ],
	I0501 03:08:58.653680   51401 command_runner.go:130] >       "size": "31470524",
	I0501 03:08:58.653695   51401 command_runner.go:130] >       "uid": null,
	I0501 03:08:58.653705   51401 command_runner.go:130] >       "username": "",
	I0501 03:08:58.653715   51401 command_runner.go:130] >       "spec": null,
	I0501 03:08:58.653725   51401 command_runner.go:130] >       "pinned": false
	I0501 03:08:58.653733   51401 command_runner.go:130] >     },
	I0501 03:08:58.653742   51401 command_runner.go:130] >     {
	I0501 03:08:58.653755   51401 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0501 03:08:58.653764   51401 command_runner.go:130] >       "repoTags": [
	I0501 03:08:58.653775   51401 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0501 03:08:58.653783   51401 command_runner.go:130] >       ],
	I0501 03:08:58.653792   51401 command_runner.go:130] >       "repoDigests": [
	I0501 03:08:58.653807   51401 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0501 03:08:58.653826   51401 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0501 03:08:58.653832   51401 command_runner.go:130] >       ],
	I0501 03:08:58.653836   51401 command_runner.go:130] >       "size": "61245718",
	I0501 03:08:58.653842   51401 command_runner.go:130] >       "uid": null,
	I0501 03:08:58.653847   51401 command_runner.go:130] >       "username": "nonroot",
	I0501 03:08:58.653853   51401 command_runner.go:130] >       "spec": null,
	I0501 03:08:58.653857   51401 command_runner.go:130] >       "pinned": false
	I0501 03:08:58.653863   51401 command_runner.go:130] >     },
	I0501 03:08:58.653867   51401 command_runner.go:130] >     {
	I0501 03:08:58.653875   51401 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0501 03:08:58.653881   51401 command_runner.go:130] >       "repoTags": [
	I0501 03:08:58.653886   51401 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0501 03:08:58.653892   51401 command_runner.go:130] >       ],
	I0501 03:08:58.653896   51401 command_runner.go:130] >       "repoDigests": [
	I0501 03:08:58.653903   51401 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0501 03:08:58.653911   51401 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0501 03:08:58.653917   51401 command_runner.go:130] >       ],
	I0501 03:08:58.653921   51401 command_runner.go:130] >       "size": "150779692",
	I0501 03:08:58.653927   51401 command_runner.go:130] >       "uid": {
	I0501 03:08:58.653931   51401 command_runner.go:130] >         "value": "0"
	I0501 03:08:58.653937   51401 command_runner.go:130] >       },
	I0501 03:08:58.653941   51401 command_runner.go:130] >       "username": "",
	I0501 03:08:58.653947   51401 command_runner.go:130] >       "spec": null,
	I0501 03:08:58.653951   51401 command_runner.go:130] >       "pinned": false
	I0501 03:08:58.653961   51401 command_runner.go:130] >     },
	I0501 03:08:58.653966   51401 command_runner.go:130] >     {
	I0501 03:08:58.653972   51401 command_runner.go:130] >       "id": "c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0",
	I0501 03:08:58.653978   51401 command_runner.go:130] >       "repoTags": [
	I0501 03:08:58.653983   51401 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.0"
	I0501 03:08:58.653989   51401 command_runner.go:130] >       ],
	I0501 03:08:58.653992   51401 command_runner.go:130] >       "repoDigests": [
	I0501 03:08:58.654001   51401 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:31282cf15b67192cd35f847715a9571f5dd4ac0e130290a408a866bd040bcd81",
	I0501 03:08:58.654010   51401 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3"
	I0501 03:08:58.654016   51401 command_runner.go:130] >       ],
	I0501 03:08:58.654022   51401 command_runner.go:130] >       "size": "117609952",
	I0501 03:08:58.654028   51401 command_runner.go:130] >       "uid": {
	I0501 03:08:58.654033   51401 command_runner.go:130] >         "value": "0"
	I0501 03:08:58.654039   51401 command_runner.go:130] >       },
	I0501 03:08:58.654043   51401 command_runner.go:130] >       "username": "",
	I0501 03:08:58.654051   51401 command_runner.go:130] >       "spec": null,
	I0501 03:08:58.654057   51401 command_runner.go:130] >       "pinned": false
	I0501 03:08:58.654060   51401 command_runner.go:130] >     },
	I0501 03:08:58.654064   51401 command_runner.go:130] >     {
	I0501 03:08:58.654072   51401 command_runner.go:130] >       "id": "c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b",
	I0501 03:08:58.654077   51401 command_runner.go:130] >       "repoTags": [
	I0501 03:08:58.654082   51401 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.0"
	I0501 03:08:58.654088   51401 command_runner.go:130] >       ],
	I0501 03:08:58.654092   51401 command_runner.go:130] >       "repoDigests": [
	I0501 03:08:58.654102   51401 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe",
	I0501 03:08:58.654117   51401 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:b7622a0826b7690a307eea994e2abc918f35a27a08e30c37b58c9e3f8336a450"
	I0501 03:08:58.654123   51401 command_runner.go:130] >       ],
	I0501 03:08:58.654128   51401 command_runner.go:130] >       "size": "112170310",
	I0501 03:08:58.654133   51401 command_runner.go:130] >       "uid": {
	I0501 03:08:58.654137   51401 command_runner.go:130] >         "value": "0"
	I0501 03:08:58.654143   51401 command_runner.go:130] >       },
	I0501 03:08:58.654147   51401 command_runner.go:130] >       "username": "",
	I0501 03:08:58.654153   51401 command_runner.go:130] >       "spec": null,
	I0501 03:08:58.654157   51401 command_runner.go:130] >       "pinned": false
	I0501 03:08:58.654162   51401 command_runner.go:130] >     },
	I0501 03:08:58.654167   51401 command_runner.go:130] >     {
	I0501 03:08:58.654179   51401 command_runner.go:130] >       "id": "a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b",
	I0501 03:08:58.654185   51401 command_runner.go:130] >       "repoTags": [
	I0501 03:08:58.654191   51401 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.0"
	I0501 03:08:58.654197   51401 command_runner.go:130] >       ],
	I0501 03:08:58.654201   51401 command_runner.go:130] >       "repoDigests": [
	I0501 03:08:58.654224   51401 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:880f26b53295d384d2f1fed06aa4d58567e3038157f70a1151a7dd8ef8afaa68",
	I0501 03:08:58.654241   51401 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210"
	I0501 03:08:58.654245   51401 command_runner.go:130] >       ],
	I0501 03:08:58.654249   51401 command_runner.go:130] >       "size": "85932953",
	I0501 03:08:58.654253   51401 command_runner.go:130] >       "uid": null,
	I0501 03:08:58.654263   51401 command_runner.go:130] >       "username": "",
	I0501 03:08:58.654269   51401 command_runner.go:130] >       "spec": null,
	I0501 03:08:58.654273   51401 command_runner.go:130] >       "pinned": false
	I0501 03:08:58.654276   51401 command_runner.go:130] >     },
	I0501 03:08:58.654279   51401 command_runner.go:130] >     {
	I0501 03:08:58.654285   51401 command_runner.go:130] >       "id": "259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced",
	I0501 03:08:58.654288   51401 command_runner.go:130] >       "repoTags": [
	I0501 03:08:58.654293   51401 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.0"
	I0501 03:08:58.654296   51401 command_runner.go:130] >       ],
	I0501 03:08:58.654300   51401 command_runner.go:130] >       "repoDigests": [
	I0501 03:08:58.654307   51401 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67",
	I0501 03:08:58.654314   51401 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d2c2a1d9de7a42d91bfedba5ed4f58126f9cff702d35419d78ce4e7cb07f3b7a"
	I0501 03:08:58.654317   51401 command_runner.go:130] >       ],
	I0501 03:08:58.654321   51401 command_runner.go:130] >       "size": "63026502",
	I0501 03:08:58.654325   51401 command_runner.go:130] >       "uid": {
	I0501 03:08:58.654328   51401 command_runner.go:130] >         "value": "0"
	I0501 03:08:58.654332   51401 command_runner.go:130] >       },
	I0501 03:08:58.654335   51401 command_runner.go:130] >       "username": "",
	I0501 03:08:58.654339   51401 command_runner.go:130] >       "spec": null,
	I0501 03:08:58.654342   51401 command_runner.go:130] >       "pinned": false
	I0501 03:08:58.654345   51401 command_runner.go:130] >     },
	I0501 03:08:58.654348   51401 command_runner.go:130] >     {
	I0501 03:08:58.654356   51401 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0501 03:08:58.654360   51401 command_runner.go:130] >       "repoTags": [
	I0501 03:08:58.654364   51401 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0501 03:08:58.654367   51401 command_runner.go:130] >       ],
	I0501 03:08:58.654382   51401 command_runner.go:130] >       "repoDigests": [
	I0501 03:08:58.654392   51401 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0501 03:08:58.654415   51401 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0501 03:08:58.654424   51401 command_runner.go:130] >       ],
	I0501 03:08:58.654430   51401 command_runner.go:130] >       "size": "750414",
	I0501 03:08:58.654437   51401 command_runner.go:130] >       "uid": {
	I0501 03:08:58.654442   51401 command_runner.go:130] >         "value": "65535"
	I0501 03:08:58.654445   51401 command_runner.go:130] >       },
	I0501 03:08:58.654450   51401 command_runner.go:130] >       "username": "",
	I0501 03:08:58.654456   51401 command_runner.go:130] >       "spec": null,
	I0501 03:08:58.654459   51401 command_runner.go:130] >       "pinned": true
	I0501 03:08:58.654465   51401 command_runner.go:130] >     }
	I0501 03:08:58.654469   51401 command_runner.go:130] >   ]
	I0501 03:08:58.654474   51401 command_runner.go:130] > }
	I0501 03:08:58.654659   51401 crio.go:514] all images are preloaded for cri-o runtime.
	I0501 03:08:58.654674   51401 crio.go:433] Images already preloaded, skipping extraction
	I0501 03:08:58.654732   51401 ssh_runner.go:195] Run: sudo crictl images --output json
	I0501 03:08:58.690809   51401 command_runner.go:130] > {
	I0501 03:08:58.690838   51401 command_runner.go:130] >   "images": [
	I0501 03:08:58.690844   51401 command_runner.go:130] >     {
	I0501 03:08:58.690857   51401 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0501 03:08:58.690864   51401 command_runner.go:130] >       "repoTags": [
	I0501 03:08:58.690879   51401 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0501 03:08:58.690888   51401 command_runner.go:130] >       ],
	I0501 03:08:58.690895   51401 command_runner.go:130] >       "repoDigests": [
	I0501 03:08:58.690914   51401 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0501 03:08:58.690929   51401 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0501 03:08:58.690938   51401 command_runner.go:130] >       ],
	I0501 03:08:58.690949   51401 command_runner.go:130] >       "size": "65291810",
	I0501 03:08:58.690959   51401 command_runner.go:130] >       "uid": null,
	I0501 03:08:58.690969   51401 command_runner.go:130] >       "username": "",
	I0501 03:08:58.690982   51401 command_runner.go:130] >       "spec": null,
	I0501 03:08:58.690992   51401 command_runner.go:130] >       "pinned": false
	I0501 03:08:58.691001   51401 command_runner.go:130] >     },
	I0501 03:08:58.691010   51401 command_runner.go:130] >     {
	I0501 03:08:58.691023   51401 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0501 03:08:58.691033   51401 command_runner.go:130] >       "repoTags": [
	I0501 03:08:58.691044   51401 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0501 03:08:58.691053   51401 command_runner.go:130] >       ],
	I0501 03:08:58.691064   51401 command_runner.go:130] >       "repoDigests": [
	I0501 03:08:58.691076   51401 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0501 03:08:58.691092   51401 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0501 03:08:58.691100   51401 command_runner.go:130] >       ],
	I0501 03:08:58.691110   51401 command_runner.go:130] >       "size": "1363676",
	I0501 03:08:58.691119   51401 command_runner.go:130] >       "uid": null,
	I0501 03:08:58.691137   51401 command_runner.go:130] >       "username": "",
	I0501 03:08:58.691146   51401 command_runner.go:130] >       "spec": null,
	I0501 03:08:58.691156   51401 command_runner.go:130] >       "pinned": false
	I0501 03:08:58.691164   51401 command_runner.go:130] >     },
	I0501 03:08:58.691173   51401 command_runner.go:130] >     {
	I0501 03:08:58.691187   51401 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0501 03:08:58.691197   51401 command_runner.go:130] >       "repoTags": [
	I0501 03:08:58.691211   51401 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0501 03:08:58.691219   51401 command_runner.go:130] >       ],
	I0501 03:08:58.691226   51401 command_runner.go:130] >       "repoDigests": [
	I0501 03:08:58.691241   51401 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0501 03:08:58.691255   51401 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0501 03:08:58.691269   51401 command_runner.go:130] >       ],
	I0501 03:08:58.691278   51401 command_runner.go:130] >       "size": "31470524",
	I0501 03:08:58.691287   51401 command_runner.go:130] >       "uid": null,
	I0501 03:08:58.691295   51401 command_runner.go:130] >       "username": "",
	I0501 03:08:58.691303   51401 command_runner.go:130] >       "spec": null,
	I0501 03:08:58.691311   51401 command_runner.go:130] >       "pinned": false
	I0501 03:08:58.691316   51401 command_runner.go:130] >     },
	I0501 03:08:58.691323   51401 command_runner.go:130] >     {
	I0501 03:08:58.691332   51401 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0501 03:08:58.691341   51401 command_runner.go:130] >       "repoTags": [
	I0501 03:08:58.691350   51401 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0501 03:08:58.691358   51401 command_runner.go:130] >       ],
	I0501 03:08:58.691364   51401 command_runner.go:130] >       "repoDigests": [
	I0501 03:08:58.691378   51401 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0501 03:08:58.691394   51401 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0501 03:08:58.691401   51401 command_runner.go:130] >       ],
	I0501 03:08:58.691407   51401 command_runner.go:130] >       "size": "61245718",
	I0501 03:08:58.691415   51401 command_runner.go:130] >       "uid": null,
	I0501 03:08:58.691424   51401 command_runner.go:130] >       "username": "nonroot",
	I0501 03:08:58.691433   51401 command_runner.go:130] >       "spec": null,
	I0501 03:08:58.691443   51401 command_runner.go:130] >       "pinned": false
	I0501 03:08:58.691450   51401 command_runner.go:130] >     },
	I0501 03:08:58.691458   51401 command_runner.go:130] >     {
	I0501 03:08:58.691466   51401 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0501 03:08:58.691475   51401 command_runner.go:130] >       "repoTags": [
	I0501 03:08:58.691485   51401 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0501 03:08:58.691493   51401 command_runner.go:130] >       ],
	I0501 03:08:58.691499   51401 command_runner.go:130] >       "repoDigests": [
	I0501 03:08:58.691512   51401 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0501 03:08:58.691526   51401 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0501 03:08:58.691536   51401 command_runner.go:130] >       ],
	I0501 03:08:58.691546   51401 command_runner.go:130] >       "size": "150779692",
	I0501 03:08:58.691555   51401 command_runner.go:130] >       "uid": {
	I0501 03:08:58.691564   51401 command_runner.go:130] >         "value": "0"
	I0501 03:08:58.691572   51401 command_runner.go:130] >       },
	I0501 03:08:58.691577   51401 command_runner.go:130] >       "username": "",
	I0501 03:08:58.691583   51401 command_runner.go:130] >       "spec": null,
	I0501 03:08:58.691592   51401 command_runner.go:130] >       "pinned": false
	I0501 03:08:58.691600   51401 command_runner.go:130] >     },
	I0501 03:08:58.691609   51401 command_runner.go:130] >     {
	I0501 03:08:58.691619   51401 command_runner.go:130] >       "id": "c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0",
	I0501 03:08:58.691629   51401 command_runner.go:130] >       "repoTags": [
	I0501 03:08:58.691641   51401 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.0"
	I0501 03:08:58.691650   51401 command_runner.go:130] >       ],
	I0501 03:08:58.691659   51401 command_runner.go:130] >       "repoDigests": [
	I0501 03:08:58.691675   51401 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:31282cf15b67192cd35f847715a9571f5dd4ac0e130290a408a866bd040bcd81",
	I0501 03:08:58.691689   51401 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3"
	I0501 03:08:58.691697   51401 command_runner.go:130] >       ],
	I0501 03:08:58.691707   51401 command_runner.go:130] >       "size": "117609952",
	I0501 03:08:58.691716   51401 command_runner.go:130] >       "uid": {
	I0501 03:08:58.691725   51401 command_runner.go:130] >         "value": "0"
	I0501 03:08:58.691734   51401 command_runner.go:130] >       },
	I0501 03:08:58.691742   51401 command_runner.go:130] >       "username": "",
	I0501 03:08:58.691752   51401 command_runner.go:130] >       "spec": null,
	I0501 03:08:58.691760   51401 command_runner.go:130] >       "pinned": false
	I0501 03:08:58.691768   51401 command_runner.go:130] >     },
	I0501 03:08:58.691776   51401 command_runner.go:130] >     {
	I0501 03:08:58.691790   51401 command_runner.go:130] >       "id": "c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b",
	I0501 03:08:58.691800   51401 command_runner.go:130] >       "repoTags": [
	I0501 03:08:58.691812   51401 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.0"
	I0501 03:08:58.691819   51401 command_runner.go:130] >       ],
	I0501 03:08:58.691825   51401 command_runner.go:130] >       "repoDigests": [
	I0501 03:08:58.691838   51401 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe",
	I0501 03:08:58.691850   51401 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:b7622a0826b7690a307eea994e2abc918f35a27a08e30c37b58c9e3f8336a450"
	I0501 03:08:58.691859   51401 command_runner.go:130] >       ],
	I0501 03:08:58.691870   51401 command_runner.go:130] >       "size": "112170310",
	I0501 03:08:58.691879   51401 command_runner.go:130] >       "uid": {
	I0501 03:08:58.691890   51401 command_runner.go:130] >         "value": "0"
	I0501 03:08:58.691898   51401 command_runner.go:130] >       },
	I0501 03:08:58.691904   51401 command_runner.go:130] >       "username": "",
	I0501 03:08:58.691912   51401 command_runner.go:130] >       "spec": null,
	I0501 03:08:58.691921   51401 command_runner.go:130] >       "pinned": false
	I0501 03:08:58.691928   51401 command_runner.go:130] >     },
	I0501 03:08:58.691936   51401 command_runner.go:130] >     {
	I0501 03:08:58.691944   51401 command_runner.go:130] >       "id": "a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b",
	I0501 03:08:58.691953   51401 command_runner.go:130] >       "repoTags": [
	I0501 03:08:58.691964   51401 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.0"
	I0501 03:08:58.691974   51401 command_runner.go:130] >       ],
	I0501 03:08:58.691984   51401 command_runner.go:130] >       "repoDigests": [
	I0501 03:08:58.692003   51401 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:880f26b53295d384d2f1fed06aa4d58567e3038157f70a1151a7dd8ef8afaa68",
	I0501 03:08:58.692018   51401 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210"
	I0501 03:08:58.692024   51401 command_runner.go:130] >       ],
	I0501 03:08:58.692032   51401 command_runner.go:130] >       "size": "85932953",
	I0501 03:08:58.692040   51401 command_runner.go:130] >       "uid": null,
	I0501 03:08:58.692050   51401 command_runner.go:130] >       "username": "",
	I0501 03:08:58.692058   51401 command_runner.go:130] >       "spec": null,
	I0501 03:08:58.692066   51401 command_runner.go:130] >       "pinned": false
	I0501 03:08:58.692071   51401 command_runner.go:130] >     },
	I0501 03:08:58.692079   51401 command_runner.go:130] >     {
	I0501 03:08:58.692088   51401 command_runner.go:130] >       "id": "259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced",
	I0501 03:08:58.692096   51401 command_runner.go:130] >       "repoTags": [
	I0501 03:08:58.692107   51401 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.0"
	I0501 03:08:58.692115   51401 command_runner.go:130] >       ],
	I0501 03:08:58.692124   51401 command_runner.go:130] >       "repoDigests": [
	I0501 03:08:58.692137   51401 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67",
	I0501 03:08:58.692151   51401 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d2c2a1d9de7a42d91bfedba5ed4f58126f9cff702d35419d78ce4e7cb07f3b7a"
	I0501 03:08:58.692160   51401 command_runner.go:130] >       ],
	I0501 03:08:58.692166   51401 command_runner.go:130] >       "size": "63026502",
	I0501 03:08:58.692176   51401 command_runner.go:130] >       "uid": {
	I0501 03:08:58.692185   51401 command_runner.go:130] >         "value": "0"
	I0501 03:08:58.692193   51401 command_runner.go:130] >       },
	I0501 03:08:58.692201   51401 command_runner.go:130] >       "username": "",
	I0501 03:08:58.692209   51401 command_runner.go:130] >       "spec": null,
	I0501 03:08:58.692218   51401 command_runner.go:130] >       "pinned": false
	I0501 03:08:58.692226   51401 command_runner.go:130] >     },
	I0501 03:08:58.692234   51401 command_runner.go:130] >     {
	I0501 03:08:58.692248   51401 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0501 03:08:58.692257   51401 command_runner.go:130] >       "repoTags": [
	I0501 03:08:58.692274   51401 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0501 03:08:58.692282   51401 command_runner.go:130] >       ],
	I0501 03:08:58.692291   51401 command_runner.go:130] >       "repoDigests": [
	I0501 03:08:58.692319   51401 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0501 03:08:58.692334   51401 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0501 03:08:58.692345   51401 command_runner.go:130] >       ],
	I0501 03:08:58.692354   51401 command_runner.go:130] >       "size": "750414",
	I0501 03:08:58.692363   51401 command_runner.go:130] >       "uid": {
	I0501 03:08:58.692371   51401 command_runner.go:130] >         "value": "65535"
	I0501 03:08:58.692379   51401 command_runner.go:130] >       },
	I0501 03:08:58.692388   51401 command_runner.go:130] >       "username": "",
	I0501 03:08:58.692397   51401 command_runner.go:130] >       "spec": null,
	I0501 03:08:58.692406   51401 command_runner.go:130] >       "pinned": true
	I0501 03:08:58.692414   51401 command_runner.go:130] >     }
	I0501 03:08:58.692420   51401 command_runner.go:130] >   ]
	I0501 03:08:58.692428   51401 command_runner.go:130] > }
	I0501 03:08:58.692767   51401 crio.go:514] all images are preloaded for cri-o runtime.
	I0501 03:08:58.692794   51401 cache_images.go:84] Images are preloaded, skipping loading
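	The two `crictl images --output json` listings above are what crio.go:514 parses before concluding that every image needed for Kubernetes v1.30.0 is already present, so no preload tarball has to be extracted. Below is a minimal Go sketch of that kind of check, assuming only the JSON schema visible in the log; the struct, the file name, and the expected-tag list are illustrative, not minikube's actual code.

	// checkpreload.go - illustrative sketch of an "are all images preloaded?" check,
	// modelled on the log output above. It decodes the JSON emitted by
	// `sudo crictl images --output json` and verifies that every expected repo tag
	// is present. Not minikube's implementation; the schema is taken from the log.
	package main

	import (
		"encoding/json"
		"fmt"
		"os"
		"os/exec"
	)

	type imageList struct {
		Images []struct {
			ID       string   `json:"id"`
			RepoTags []string `json:"repoTags"`
			Pinned   bool     `json:"pinned"`
		} `json:"images"`
	}

	func main() {
		// The same command the test runs over SSH on the node.
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			fmt.Fprintln(os.Stderr, "crictl failed:", err)
			os.Exit(1)
		}

		var list imageList
		if err := json.Unmarshal(out, &list); err != nil {
			fmt.Fprintln(os.Stderr, "bad JSON:", err)
			os.Exit(1)
		}

		have := make(map[string]bool)
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				have[tag] = true
			}
		}

		// Expected tags taken from the listing above (Kubernetes v1.30.0 on crio).
		expected := []string{
			"registry.k8s.io/kube-apiserver:v1.30.0",
			"registry.k8s.io/kube-controller-manager:v1.30.0",
			"registry.k8s.io/kube-scheduler:v1.30.0",
			"registry.k8s.io/kube-proxy:v1.30.0",
			"registry.k8s.io/etcd:3.5.12-0",
			"registry.k8s.io/coredns/coredns:v1.11.1",
			"registry.k8s.io/pause:3.9",
			"gcr.io/k8s-minikube/storage-provisioner:v5",
		}

		for _, tag := range expected {
			if !have[tag] {
				fmt.Println("missing image:", tag)
				os.Exit(1)
			}
		}
		fmt.Println("all images are preloaded for cri-o runtime.")
	}

	Run on the node, such a check would print either the first missing tag or the same "all images are preloaded" conclusion the log records, after which cache extraction is skipped.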
	I0501 03:08:58.692808   51401 kubeadm.go:928] updating node { 192.168.39.139 8443 v1.30.0 crio true true} ...
	I0501 03:08:58.692923   51401 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-282238 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.139
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:multinode-282238 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
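	After the image check, kubeadm.go:940 emits the kubelet systemd drop-in shown above, substituting the node's values (the v1.30.0 binary path, hostname-override multinode-282238, node IP 192.168.39.139) into a unit template. The following is a rough Go sketch of that rendering step using text/template; the template string and field names are illustrative assumptions, not minikube's actual kubeadm template.

	// kubeletunit.go - illustrative sketch of rendering a kubelet systemd drop-in
	// like the one logged above from a node's config values. Not minikube's
	// actual template; the flag set mirrors the ExecStart line in the log.
	package main

	import (
		"os"
		"text/template"
	)

	type kubeletOpts struct {
		KubernetesVersion string // e.g. "v1.30.0"
		NodeName          string // e.g. "multinode-282238"
		NodeIP            string // e.g. "192.168.39.139"
		ContainerRuntime  string // "crio" -> kubelet gets Wants=crio.service
	}

	const unitTmpl = `[Unit]
	Wants={{.ContainerRuntime}}.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

	[Install]
	`

	func main() {
		opts := kubeletOpts{
			KubernetesVersion: "v1.30.0",
			NodeName:          "multinode-282238",
			NodeIP:            "192.168.39.139",
			ContainerRuntime:  "crio",
		}
		t := template.Must(template.New("kubelet").Parse(unitTmpl))
		if err := t.Execute(os.Stdout, opts); err != nil {
			panic(err)
		}
	}

	The rendered unit matches the [Unit]/[Service]/[Install] fragment in the log; minikube then runs `crio config` (below) to validate the runtime side of the same node configuration.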
	I0501 03:08:58.692997   51401 ssh_runner.go:195] Run: crio config
	I0501 03:08:58.741538   51401 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0501 03:08:58.741570   51401 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0501 03:08:58.741580   51401 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0501 03:08:58.741584   51401 command_runner.go:130] > #
	I0501 03:08:58.741595   51401 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0501 03:08:58.741604   51401 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0501 03:08:58.741614   51401 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0501 03:08:58.741629   51401 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0501 03:08:58.741642   51401 command_runner.go:130] > # reload'.
	I0501 03:08:58.741654   51401 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0501 03:08:58.741668   51401 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0501 03:08:58.741682   51401 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0501 03:08:58.741695   51401 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0501 03:08:58.741704   51401 command_runner.go:130] > [crio]
	I0501 03:08:58.741715   51401 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0501 03:08:58.741727   51401 command_runner.go:130] > # containers images, in this directory.
	I0501 03:08:58.741941   51401 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0501 03:08:58.741973   51401 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0501 03:08:58.742110   51401 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0501 03:08:58.742127   51401 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0501 03:08:58.742439   51401 command_runner.go:130] > # imagestore = ""
	I0501 03:08:58.742454   51401 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0501 03:08:58.742465   51401 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0501 03:08:58.742476   51401 command_runner.go:130] > storage_driver = "overlay"
	I0501 03:08:58.742489   51401 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0501 03:08:58.742503   51401 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0501 03:08:58.742510   51401 command_runner.go:130] > storage_option = [
	I0501 03:08:58.742685   51401 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0501 03:08:58.742755   51401 command_runner.go:130] > ]
	I0501 03:08:58.742771   51401 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0501 03:08:58.742781   51401 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0501 03:08:58.743264   51401 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0501 03:08:58.743280   51401 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0501 03:08:58.743291   51401 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0501 03:08:58.743298   51401 command_runner.go:130] > # always happen on a node reboot
	I0501 03:08:58.743621   51401 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0501 03:08:58.743642   51401 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0501 03:08:58.743652   51401 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0501 03:08:58.743664   51401 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0501 03:08:58.743922   51401 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0501 03:08:58.743937   51401 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0501 03:08:58.743953   51401 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0501 03:08:58.744308   51401 command_runner.go:130] > # internal_wipe = true
	I0501 03:08:58.744325   51401 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0501 03:08:58.744333   51401 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0501 03:08:58.744625   51401 command_runner.go:130] > # internal_repair = false
	I0501 03:08:58.744637   51401 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0501 03:08:58.744647   51401 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0501 03:08:58.744656   51401 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0501 03:08:58.745081   51401 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0501 03:08:58.745094   51401 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0501 03:08:58.745100   51401 command_runner.go:130] > [crio.api]
	I0501 03:08:58.745108   51401 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0501 03:08:58.745453   51401 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0501 03:08:58.745465   51401 command_runner.go:130] > # IP address on which the stream server will listen.
	I0501 03:08:58.745957   51401 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0501 03:08:58.745971   51401 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0501 03:08:58.745980   51401 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0501 03:08:58.746317   51401 command_runner.go:130] > # stream_port = "0"
	I0501 03:08:58.746337   51401 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0501 03:08:58.746678   51401 command_runner.go:130] > # stream_enable_tls = false
	I0501 03:08:58.746693   51401 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0501 03:08:58.746947   51401 command_runner.go:130] > # stream_idle_timeout = ""
	I0501 03:08:58.746966   51401 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0501 03:08:58.746976   51401 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0501 03:08:58.746983   51401 command_runner.go:130] > # minutes.
	I0501 03:08:58.747056   51401 command_runner.go:130] > # stream_tls_cert = ""
	I0501 03:08:58.747071   51401 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0501 03:08:58.747081   51401 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0501 03:08:58.747088   51401 command_runner.go:130] > # stream_tls_key = ""
	I0501 03:08:58.747099   51401 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0501 03:08:58.747115   51401 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0501 03:08:58.747147   51401 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0501 03:08:58.747157   51401 command_runner.go:130] > # stream_tls_ca = ""
	I0501 03:08:58.747170   51401 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0501 03:08:58.747183   51401 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0501 03:08:58.747205   51401 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0501 03:08:58.747217   51401 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0501 03:08:58.747231   51401 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0501 03:08:58.747245   51401 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0501 03:08:58.747254   51401 command_runner.go:130] > [crio.runtime]
	I0501 03:08:58.747264   51401 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0501 03:08:58.747283   51401 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0501 03:08:58.747299   51401 command_runner.go:130] > # "nofile=1024:2048"
	I0501 03:08:58.747313   51401 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0501 03:08:58.747323   51401 command_runner.go:130] > # default_ulimits = [
	I0501 03:08:58.747331   51401 command_runner.go:130] > # ]
	I0501 03:08:58.747342   51401 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0501 03:08:58.747354   51401 command_runner.go:130] > # no_pivot = false
	I0501 03:08:58.747367   51401 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0501 03:08:58.747380   51401 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0501 03:08:58.747392   51401 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0501 03:08:58.747406   51401 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0501 03:08:58.747417   51401 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0501 03:08:58.747429   51401 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0501 03:08:58.747440   51401 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0501 03:08:58.747448   51401 command_runner.go:130] > # Cgroup setting for conmon
	I0501 03:08:58.747464   51401 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0501 03:08:58.747474   51401 command_runner.go:130] > conmon_cgroup = "pod"
	I0501 03:08:58.747488   51401 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0501 03:08:58.747497   51401 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0501 03:08:58.747511   51401 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0501 03:08:58.747521   51401 command_runner.go:130] > conmon_env = [
	I0501 03:08:58.747532   51401 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0501 03:08:58.747540   51401 command_runner.go:130] > ]
	I0501 03:08:58.747549   51401 command_runner.go:130] > # Additional environment variables to set for all the
	I0501 03:08:58.747561   51401 command_runner.go:130] > # containers. These are overridden if set in the
	I0501 03:08:58.747574   51401 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0501 03:08:58.747584   51401 command_runner.go:130] > # default_env = [
	I0501 03:08:58.747590   51401 command_runner.go:130] > # ]
	I0501 03:08:58.747603   51401 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0501 03:08:58.747619   51401 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0501 03:08:58.747641   51401 command_runner.go:130] > # selinux = false
	I0501 03:08:58.747656   51401 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0501 03:08:58.747670   51401 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0501 03:08:58.747686   51401 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0501 03:08:58.747694   51401 command_runner.go:130] > # seccomp_profile = ""
	I0501 03:08:58.747707   51401 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0501 03:08:58.747721   51401 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0501 03:08:58.747735   51401 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0501 03:08:58.747746   51401 command_runner.go:130] > # which might increase security.
	I0501 03:08:58.747754   51401 command_runner.go:130] > # This option is currently deprecated,
	I0501 03:08:58.747768   51401 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0501 03:08:58.747778   51401 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0501 03:08:58.747788   51401 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0501 03:08:58.747802   51401 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0501 03:08:58.747815   51401 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0501 03:08:58.747829   51401 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0501 03:08:58.747840   51401 command_runner.go:130] > # This option supports live configuration reload.
	I0501 03:08:58.747849   51401 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0501 03:08:58.747861   51401 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0501 03:08:58.747873   51401 command_runner.go:130] > # the cgroup blockio controller.
	I0501 03:08:58.747882   51401 command_runner.go:130] > # blockio_config_file = ""
	I0501 03:08:58.747894   51401 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0501 03:08:58.747904   51401 command_runner.go:130] > # blockio parameters.
	I0501 03:08:58.747912   51401 command_runner.go:130] > # blockio_reload = false
	I0501 03:08:58.747923   51401 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0501 03:08:58.747930   51401 command_runner.go:130] > # irqbalance daemon.
	I0501 03:08:58.747937   51401 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0501 03:08:58.747946   51401 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0501 03:08:58.747953   51401 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0501 03:08:58.747962   51401 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0501 03:08:58.747967   51401 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0501 03:08:58.747976   51401 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0501 03:08:58.747987   51401 command_runner.go:130] > # This option supports live configuration reload.
	I0501 03:08:58.747997   51401 command_runner.go:130] > # rdt_config_file = ""
	I0501 03:08:58.748009   51401 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0501 03:08:58.748019   51401 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0501 03:08:58.748046   51401 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0501 03:08:58.748054   51401 command_runner.go:130] > # separate_pull_cgroup = ""
	I0501 03:08:58.748060   51401 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0501 03:08:58.748068   51401 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0501 03:08:58.748077   51401 command_runner.go:130] > # will be added.
	I0501 03:08:58.748084   51401 command_runner.go:130] > # default_capabilities = [
	I0501 03:08:58.748094   51401 command_runner.go:130] > # 	"CHOWN",
	I0501 03:08:58.748101   51401 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0501 03:08:58.748110   51401 command_runner.go:130] > # 	"FSETID",
	I0501 03:08:58.748116   51401 command_runner.go:130] > # 	"FOWNER",
	I0501 03:08:58.748123   51401 command_runner.go:130] > # 	"SETGID",
	I0501 03:08:58.748129   51401 command_runner.go:130] > # 	"SETUID",
	I0501 03:08:58.748138   51401 command_runner.go:130] > # 	"SETPCAP",
	I0501 03:08:58.748144   51401 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0501 03:08:58.748157   51401 command_runner.go:130] > # 	"KILL",
	I0501 03:08:58.748166   51401 command_runner.go:130] > # ]
	I0501 03:08:58.748179   51401 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0501 03:08:58.748192   51401 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0501 03:08:58.748203   51401 command_runner.go:130] > # add_inheritable_capabilities = false
	I0501 03:08:58.748213   51401 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0501 03:08:58.748226   51401 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0501 03:08:58.748232   51401 command_runner.go:130] > default_sysctls = [
	I0501 03:08:58.748237   51401 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0501 03:08:58.748241   51401 command_runner.go:130] > ]
	I0501 03:08:58.748249   51401 command_runner.go:130] > # List of devices on the host that a
	I0501 03:08:58.748263   51401 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0501 03:08:58.748277   51401 command_runner.go:130] > # allowed_devices = [
	I0501 03:08:58.748283   51401 command_runner.go:130] > # 	"/dev/fuse",
	I0501 03:08:58.748291   51401 command_runner.go:130] > # ]
	I0501 03:08:58.748300   51401 command_runner.go:130] > # List of additional devices. specified as
	I0501 03:08:58.748314   51401 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0501 03:08:58.748323   51401 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0501 03:08:58.748331   51401 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0501 03:08:58.748341   51401 command_runner.go:130] > # additional_devices = [
	I0501 03:08:58.748351   51401 command_runner.go:130] > # ]
	I0501 03:08:58.748360   51401 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0501 03:08:58.748374   51401 command_runner.go:130] > # cdi_spec_dirs = [
	I0501 03:08:58.748384   51401 command_runner.go:130] > # 	"/etc/cdi",
	I0501 03:08:58.748391   51401 command_runner.go:130] > # 	"/var/run/cdi",
	I0501 03:08:58.748398   51401 command_runner.go:130] > # ]
	I0501 03:08:58.748406   51401 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0501 03:08:58.748416   51401 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0501 03:08:58.748422   51401 command_runner.go:130] > # Defaults to false.
	I0501 03:08:58.748434   51401 command_runner.go:130] > # device_ownership_from_security_context = false
	I0501 03:08:58.748449   51401 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0501 03:08:58.748461   51401 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0501 03:08:58.748468   51401 command_runner.go:130] > # hooks_dir = [
	I0501 03:08:58.748477   51401 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0501 03:08:58.748483   51401 command_runner.go:130] > # ]
	I0501 03:08:58.748492   51401 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0501 03:08:58.748502   51401 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0501 03:08:58.748512   51401 command_runner.go:130] > # its default mounts from the following two files:
	I0501 03:08:58.748520   51401 command_runner.go:130] > #
	I0501 03:08:58.748531   51401 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0501 03:08:58.748544   51401 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0501 03:08:58.748554   51401 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0501 03:08:58.748560   51401 command_runner.go:130] > #
	I0501 03:08:58.748573   51401 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0501 03:08:58.748583   51401 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0501 03:08:58.748592   51401 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0501 03:08:58.748603   51401 command_runner.go:130] > #      only add mounts it finds in this file.
	I0501 03:08:58.748612   51401 command_runner.go:130] > #
	I0501 03:08:58.748621   51401 command_runner.go:130] > # default_mounts_file = ""
	I0501 03:08:58.748632   51401 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0501 03:08:58.748646   51401 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0501 03:08:58.748657   51401 command_runner.go:130] > pids_limit = 1024
	I0501 03:08:58.748664   51401 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0501 03:08:58.748676   51401 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0501 03:08:58.748689   51401 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0501 03:08:58.748706   51401 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0501 03:08:58.748715   51401 command_runner.go:130] > # log_size_max = -1
	I0501 03:08:58.748727   51401 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0501 03:08:58.748738   51401 command_runner.go:130] > # log_to_journald = false
	I0501 03:08:58.748749   51401 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0501 03:08:58.748755   51401 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0501 03:08:58.748766   51401 command_runner.go:130] > # Path to directory for container attach sockets.
	I0501 03:08:58.748775   51401 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0501 03:08:58.748787   51401 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0501 03:08:58.748797   51401 command_runner.go:130] > # bind_mount_prefix = ""
	I0501 03:08:58.748810   51401 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0501 03:08:58.748819   51401 command_runner.go:130] > # read_only = false
	I0501 03:08:58.748832   51401 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0501 03:08:58.748842   51401 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0501 03:08:58.748847   51401 command_runner.go:130] > # live configuration reload.
	I0501 03:08:58.748857   51401 command_runner.go:130] > # log_level = "info"
	I0501 03:08:58.748866   51401 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0501 03:08:58.748878   51401 command_runner.go:130] > # This option supports live configuration reload.
	I0501 03:08:58.748884   51401 command_runner.go:130] > # log_filter = ""
	I0501 03:08:58.748897   51401 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0501 03:08:58.748910   51401 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0501 03:08:58.748919   51401 command_runner.go:130] > # separated by comma.
	I0501 03:08:58.748927   51401 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0501 03:08:58.748935   51401 command_runner.go:130] > # uid_mappings = ""
	I0501 03:08:58.748944   51401 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0501 03:08:58.748958   51401 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0501 03:08:58.748968   51401 command_runner.go:130] > # separated by comma.
	I0501 03:08:58.748983   51401 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0501 03:08:58.748993   51401 command_runner.go:130] > # gid_mappings = ""
	I0501 03:08:58.749005   51401 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0501 03:08:58.749015   51401 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0501 03:08:58.749027   51401 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0501 03:08:58.749043   51401 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0501 03:08:58.749053   51401 command_runner.go:130] > # minimum_mappable_uid = -1
	I0501 03:08:58.749066   51401 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0501 03:08:58.749079   51401 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0501 03:08:58.749091   51401 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0501 03:08:58.749099   51401 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0501 03:08:58.749105   51401 command_runner.go:130] > # minimum_mappable_gid = -1
	I0501 03:08:58.749116   51401 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0501 03:08:58.749130   51401 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0501 03:08:58.749138   51401 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0501 03:08:58.749148   51401 command_runner.go:130] > # ctr_stop_timeout = 30
	I0501 03:08:58.749158   51401 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0501 03:08:58.749170   51401 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0501 03:08:58.749185   51401 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0501 03:08:58.749196   51401 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0501 03:08:58.749211   51401 command_runner.go:130] > drop_infra_ctr = false
	I0501 03:08:58.749222   51401 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0501 03:08:58.749234   51401 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0501 03:08:58.749246   51401 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0501 03:08:58.749256   51401 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0501 03:08:58.749267   51401 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0501 03:08:58.749279   51401 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0501 03:08:58.749288   51401 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0501 03:08:58.749300   51401 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0501 03:08:58.749310   51401 command_runner.go:130] > # shared_cpuset = ""
	I0501 03:08:58.749320   51401 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0501 03:08:58.749332   51401 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0501 03:08:58.749342   51401 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0501 03:08:58.749355   51401 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0501 03:08:58.749366   51401 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0501 03:08:58.749379   51401 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0501 03:08:58.749392   51401 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0501 03:08:58.749402   51401 command_runner.go:130] > # enable_criu_support = false
	I0501 03:08:58.749413   51401 command_runner.go:130] > # Enable/disable the generation of the container,
	I0501 03:08:58.749425   51401 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0501 03:08:58.749435   51401 command_runner.go:130] > # enable_pod_events = false
	I0501 03:08:58.749443   51401 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0501 03:08:58.749462   51401 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0501 03:08:58.749472   51401 command_runner.go:130] > # default_runtime = "runc"
	I0501 03:08:58.749481   51401 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0501 03:08:58.749496   51401 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0501 03:08:58.749513   51401 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0501 03:08:58.749530   51401 command_runner.go:130] > # creation as a file is not desired either.
	I0501 03:08:58.749544   51401 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0501 03:08:58.749556   51401 command_runner.go:130] > # the hostname is being managed dynamically.
	I0501 03:08:58.749566   51401 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0501 03:08:58.749572   51401 command_runner.go:130] > # ]
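Following the /etc/hostname example above, rejecting that path as a mount source is a one-line setting. A hedged sketch as a drop-in (the file name is arbitrary):

    sudo tee /etc/crio/crio.conf.d/05-absent-mounts.conf <<'EOF' >/dev/null
    [crio.runtime]
    absent_mount_sources_to_reject = [
      "/etc/hostname",
    ]
    EOF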
	I0501 03:08:58.749584   51401 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0501 03:08:58.749597   51401 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0501 03:08:58.749610   51401 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0501 03:08:58.749618   51401 command_runner.go:130] > # Each entry in the table should follow the format:
	I0501 03:08:58.749621   51401 command_runner.go:130] > #
	I0501 03:08:58.749628   51401 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0501 03:08:58.749639   51401 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0501 03:08:58.749698   51401 command_runner.go:130] > # runtime_type = "oci"
	I0501 03:08:58.749705   51401 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0501 03:08:58.749711   51401 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0501 03:08:58.749718   51401 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0501 03:08:58.749729   51401 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0501 03:08:58.749735   51401 command_runner.go:130] > # monitor_env = []
	I0501 03:08:58.749746   51401 command_runner.go:130] > # privileged_without_host_devices = false
	I0501 03:08:58.749754   51401 command_runner.go:130] > # allowed_annotations = []
	I0501 03:08:58.749766   51401 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0501 03:08:58.749775   51401 command_runner.go:130] > # Where:
	I0501 03:08:58.749784   51401 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0501 03:08:58.749793   51401 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0501 03:08:58.749802   51401 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0501 03:08:58.749816   51401 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0501 03:08:58.749823   51401 command_runner.go:130] > #   in $PATH.
	I0501 03:08:58.749836   51401 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0501 03:08:58.749845   51401 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0501 03:08:58.749858   51401 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0501 03:08:58.749865   51401 command_runner.go:130] > #   state.
	I0501 03:08:58.749875   51401 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0501 03:08:58.749883   51401 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0501 03:08:58.749893   51401 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0501 03:08:58.749905   51401 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0501 03:08:58.749919   51401 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0501 03:08:58.749935   51401 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0501 03:08:58.749945   51401 command_runner.go:130] > #   The currently recognized values are:
	I0501 03:08:58.749956   51401 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0501 03:08:58.749965   51401 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0501 03:08:58.749973   51401 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0501 03:08:58.749985   51401 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0501 03:08:58.750001   51401 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0501 03:08:58.750014   51401 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0501 03:08:58.750028   51401 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0501 03:08:58.750041   51401 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0501 03:08:58.750049   51401 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0501 03:08:58.750057   51401 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0501 03:08:58.750066   51401 command_runner.go:130] > #   deprecated option "conmon".
	I0501 03:08:58.750078   51401 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0501 03:08:58.750089   51401 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0501 03:08:58.750102   51401 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0501 03:08:58.750113   51401 command_runner.go:130] > #   should be moved to the container's cgroup
	I0501 03:08:58.750124   51401 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0501 03:08:58.750134   51401 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0501 03:08:58.750142   51401 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0501 03:08:58.750153   51401 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0501 03:08:58.750159   51401 command_runner.go:130] > #
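As a concrete instance of the table format just described, a drop-in that registers an additional handler could look like the following sketch. crun, its /usr/bin/crun path, and the file name are assumptions for illustration; this run only defines runc, shown further below:

    sudo tee /etc/crio/crio.conf.d/20-crun.conf <<'EOF' >/dev/null
    [crio.runtime.runtimes.crun]
    runtime_path = "/usr/bin/crun"
    runtime_type = "oci"
    runtime_root = "/run/crun"
    monitor_path = "/usr/libexec/crio/conmon"
    monitor_cgroup = "pod"
    EOF
    sudo systemctl restart crio

A Kubernetes RuntimeClass whose handler field is "crun" would then select this entry through the CRI runtime handler.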
	I0501 03:08:58.750166   51401 command_runner.go:130] > # Using the seccomp notifier feature:
	I0501 03:08:58.750174   51401 command_runner.go:130] > #
	I0501 03:08:58.750184   51401 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0501 03:08:58.750197   51401 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0501 03:08:58.750204   51401 command_runner.go:130] > #
	I0501 03:08:58.750215   51401 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0501 03:08:58.750224   51401 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0501 03:08:58.750228   51401 command_runner.go:130] > #
	I0501 03:08:58.750238   51401 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0501 03:08:58.750248   51401 command_runner.go:130] > # feature.
	I0501 03:08:58.750253   51401 command_runner.go:130] > #
	I0501 03:08:58.750266   51401 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0501 03:08:58.750282   51401 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0501 03:08:58.750295   51401 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0501 03:08:58.750310   51401 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0501 03:08:58.750323   51401 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0501 03:08:58.750331   51401 command_runner.go:130] > #
	I0501 03:08:58.750342   51401 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0501 03:08:58.750354   51401 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0501 03:08:58.750362   51401 command_runner.go:130] > #
	I0501 03:08:58.750372   51401 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0501 03:08:58.750384   51401 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0501 03:08:58.750388   51401 command_runner.go:130] > #
	I0501 03:08:58.750409   51401 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0501 03:08:58.750424   51401 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0501 03:08:58.750433   51401 command_runner.go:130] > # limitation.
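Tying the notifier description above to a pod: the handler serving the pod needs "io.kubernetes.cri-o.seccompNotifierAction" in its allowed_annotations, and the pod sets that annotation with restartPolicy Never, as noted. A minimal sketch; the pod name, image and reliance on the default handler are assumptions:

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: seccomp-notify-demo
      annotations:
        io.kubernetes.cri-o.seccompNotifierAction: "stop"
    spec:
      restartPolicy: Never          # required, otherwise the kubelet restarts the stopped container
      containers:
      - name: demo
        image: busybox
        command: ["sleep", "3600"]
        securityContext:
          seccompProfile:
            type: RuntimeDefault    # a seccomp profile must be in effect for CRI-O to modify it
    EOF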
	I0501 03:08:58.750441   51401 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0501 03:08:58.750451   51401 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0501 03:08:58.750460   51401 command_runner.go:130] > runtime_type = "oci"
	I0501 03:08:58.750468   51401 command_runner.go:130] > runtime_root = "/run/runc"
	I0501 03:08:58.750478   51401 command_runner.go:130] > runtime_config_path = ""
	I0501 03:08:58.750485   51401 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0501 03:08:58.750496   51401 command_runner.go:130] > monitor_cgroup = "pod"
	I0501 03:08:58.750503   51401 command_runner.go:130] > monitor_exec_cgroup = ""
	I0501 03:08:58.750512   51401 command_runner.go:130] > monitor_env = [
	I0501 03:08:58.750521   51401 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0501 03:08:58.750529   51401 command_runner.go:130] > ]
	I0501 03:08:58.750537   51401 command_runner.go:130] > privileged_without_host_devices = false
	I0501 03:08:58.750549   51401 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0501 03:08:58.750558   51401 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0501 03:08:58.750566   51401 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0501 03:08:58.750582   51401 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0501 03:08:58.750595   51401 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0501 03:08:58.750607   51401 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0501 03:08:58.750622   51401 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0501 03:08:58.750637   51401 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0501 03:08:58.750645   51401 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0501 03:08:58.750657   51401 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0501 03:08:58.750666   51401 command_runner.go:130] > # Example:
	I0501 03:08:58.750674   51401 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0501 03:08:58.750692   51401 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0501 03:08:58.750703   51401 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0501 03:08:58.750715   51401 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0501 03:08:58.750723   51401 command_runner.go:130] > # cpuset = 0
	I0501 03:08:58.750728   51401 command_runner.go:130] > # cpushares = "0-1"
	I0501 03:08:58.750735   51401 command_runner.go:130] > # Where:
	I0501 03:08:58.750743   51401 command_runner.go:130] > # The workload name is workload-type.
	I0501 03:08:58.750758   51401 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0501 03:08:58.750770   51401 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0501 03:08:58.750780   51401 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0501 03:08:58.750796   51401 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0501 03:08:58.750808   51401 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
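Put together, the workload example above amounts to a drop-in plus opt-in pod annotations. A sketch mirroring the comments; the container name "app", the empty default and the cpuset range are placeholders:

    sudo tee /etc/crio/crio.conf.d/30-workload.conf <<'EOF' >/dev/null
    [crio.runtime.workloads.workload-type]
    activation_annotation = "io.crio/workload"
    annotation_prefix = "io.crio.workload-type"
    [crio.runtime.workloads.workload-type.resources]
    cpuset = ""
    EOF
    # Pod annotations that opt a container named "app" into this workload:
    #   io.crio/workload: ""                               (activation; value ignored)
    #   io.crio.workload-type/app: '{"cpuset": "0-1"}'     (per-container override)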
	I0501 03:08:58.750816   51401 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0501 03:08:58.750824   51401 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0501 03:08:58.750835   51401 command_runner.go:130] > # Default value is set to true
	I0501 03:08:58.750845   51401 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0501 03:08:58.750863   51401 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0501 03:08:58.750874   51401 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0501 03:08:58.750880   51401 command_runner.go:130] > # Default value is set to 'false'
	I0501 03:08:58.750890   51401 command_runner.go:130] > # disable_hostport_mapping = false
	I0501 03:08:58.750898   51401 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0501 03:08:58.750904   51401 command_runner.go:130] > #
	I0501 03:08:58.750914   51401 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0501 03:08:58.750928   51401 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0501 03:08:58.750943   51401 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0501 03:08:58.750953   51401 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0501 03:08:58.750962   51401 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0501 03:08:58.750967   51401 command_runner.go:130] > [crio.image]
	I0501 03:08:58.750977   51401 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0501 03:08:58.750983   51401 command_runner.go:130] > # default_transport = "docker://"
	I0501 03:08:58.750989   51401 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0501 03:08:58.750999   51401 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0501 03:08:58.751005   51401 command_runner.go:130] > # global_auth_file = ""
	I0501 03:08:58.751013   51401 command_runner.go:130] > # The image used to instantiate infra containers.
	I0501 03:08:58.751022   51401 command_runner.go:130] > # This option supports live configuration reload.
	I0501 03:08:58.751030   51401 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0501 03:08:58.751047   51401 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0501 03:08:58.751057   51401 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0501 03:08:58.751065   51401 command_runner.go:130] > # This option supports live configuration reload.
	I0501 03:08:58.751070   51401 command_runner.go:130] > # pause_image_auth_file = ""
	I0501 03:08:58.751076   51401 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0501 03:08:58.751085   51401 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0501 03:08:58.751095   51401 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0501 03:08:58.751104   51401 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0501 03:08:58.751110   51401 command_runner.go:130] > # pause_command = "/pause"
	I0501 03:08:58.751120   51401 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0501 03:08:58.751130   51401 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0501 03:08:58.751138   51401 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0501 03:08:58.751148   51401 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0501 03:08:58.751155   51401 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0501 03:08:58.751160   51401 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0501 03:08:58.751165   51401 command_runner.go:130] > # pinned_images = [
	I0501 03:08:58.751173   51401 command_runner.go:130] > # ]
	I0501 03:08:58.751183   51401 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0501 03:08:58.751198   51401 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0501 03:08:58.751211   51401 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0501 03:08:58.751223   51401 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0501 03:08:58.751235   51401 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0501 03:08:58.751240   51401 command_runner.go:130] > # signature_policy = ""
	I0501 03:08:58.751245   51401 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0501 03:08:58.751258   51401 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0501 03:08:58.751276   51401 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0501 03:08:58.751289   51401 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0501 03:08:58.751301   51401 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0501 03:08:58.751312   51401 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0501 03:08:58.751324   51401 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0501 03:08:58.751332   51401 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0501 03:08:58.751337   51401 command_runner.go:130] > # changing them here.
	I0501 03:08:58.751347   51401 command_runner.go:130] > # insecure_registries = [
	I0501 03:08:58.751352   51401 command_runner.go:130] > # ]
	I0501 03:08:58.751365   51401 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0501 03:08:58.751377   51401 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0501 03:08:58.751392   51401 command_runner.go:130] > # image_volumes = "mkdir"
	I0501 03:08:58.751404   51401 command_runner.go:130] > # Temporary directory to use for storing big files
	I0501 03:08:58.751410   51401 command_runner.go:130] > # big_files_temporary_dir = ""
	I0501 03:08:58.751419   51401 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0501 03:08:58.751423   51401 command_runner.go:130] > # CNI plugins.
	I0501 03:08:58.751429   51401 command_runner.go:130] > [crio.network]
	I0501 03:08:58.751434   51401 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0501 03:08:58.751442   51401 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0501 03:08:58.751449   51401 command_runner.go:130] > # cni_default_network = ""
	I0501 03:08:58.751462   51401 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0501 03:08:58.751472   51401 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0501 03:08:58.751484   51401 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0501 03:08:58.751493   51401 command_runner.go:130] > # plugin_dirs = [
	I0501 03:08:58.751500   51401 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0501 03:08:58.751508   51401 command_runner.go:130] > # ]
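For reference, a bridge network dropped into network_dir would look roughly like CRI-O's stock bridge config. The network name, subnet and file name below are illustrative, and this particular cluster actually uses kindnet for pod networking:

    sudo tee /etc/cni/net.d/11-example-bridge.conflist <<'EOF' >/dev/null
    {
      "cniVersion": "0.4.0",
      "name": "example-bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "ranges": [[{ "subnet": "10.85.0.0/16" }]],
            "routes": [{ "dst": "0.0.0.0/0" }]
          }
        }
      ]
    }
    EOF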
	I0501 03:08:58.751518   51401 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0501 03:08:58.751524   51401 command_runner.go:130] > [crio.metrics]
	I0501 03:08:58.751528   51401 command_runner.go:130] > # Globally enable or disable metrics support.
	I0501 03:08:58.751534   51401 command_runner.go:130] > enable_metrics = true
	I0501 03:08:58.751538   51401 command_runner.go:130] > # Specify enabled metrics collectors.
	I0501 03:08:58.751545   51401 command_runner.go:130] > # Per default all metrics are enabled.
	I0501 03:08:58.751551   51401 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0501 03:08:58.751559   51401 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0501 03:08:58.751565   51401 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0501 03:08:58.751571   51401 command_runner.go:130] > # metrics_collectors = [
	I0501 03:08:58.751574   51401 command_runner.go:130] > # 	"operations",
	I0501 03:08:58.751579   51401 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0501 03:08:58.751585   51401 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0501 03:08:58.751589   51401 command_runner.go:130] > # 	"operations_errors",
	I0501 03:08:58.751595   51401 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0501 03:08:58.751605   51401 command_runner.go:130] > # 	"image_pulls_by_name",
	I0501 03:08:58.751612   51401 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0501 03:08:58.751624   51401 command_runner.go:130] > # 	"image_pulls_failures",
	I0501 03:08:58.751632   51401 command_runner.go:130] > # 	"image_pulls_successes",
	I0501 03:08:58.751639   51401 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0501 03:08:58.751648   51401 command_runner.go:130] > # 	"image_layer_reuse",
	I0501 03:08:58.751662   51401 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0501 03:08:58.751669   51401 command_runner.go:130] > # 	"containers_oom_total",
	I0501 03:08:58.751673   51401 command_runner.go:130] > # 	"containers_oom",
	I0501 03:08:58.751679   51401 command_runner.go:130] > # 	"processes_defunct",
	I0501 03:08:58.751683   51401 command_runner.go:130] > # 	"operations_total",
	I0501 03:08:58.751687   51401 command_runner.go:130] > # 	"operations_latency_seconds",
	I0501 03:08:58.751694   51401 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0501 03:08:58.751698   51401 command_runner.go:130] > # 	"operations_errors_total",
	I0501 03:08:58.751703   51401 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0501 03:08:58.751708   51401 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0501 03:08:58.751712   51401 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0501 03:08:58.751716   51401 command_runner.go:130] > # 	"image_pulls_success_total",
	I0501 03:08:58.751723   51401 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0501 03:08:58.751727   51401 command_runner.go:130] > # 	"containers_oom_count_total",
	I0501 03:08:58.751732   51401 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0501 03:08:58.751738   51401 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0501 03:08:58.751741   51401 command_runner.go:130] > # ]
	I0501 03:08:58.751746   51401 command_runner.go:130] > # The port on which the metrics server will listen.
	I0501 03:08:58.751752   51401 command_runner.go:130] > # metrics_port = 9090
	I0501 03:08:58.751757   51401 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0501 03:08:58.751763   51401 command_runner.go:130] > # metrics_socket = ""
	I0501 03:08:58.751767   51401 command_runner.go:130] > # The certificate for the secure metrics server.
	I0501 03:08:58.751776   51401 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0501 03:08:58.751782   51401 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0501 03:08:58.751787   51401 command_runner.go:130] > # certificate on any modification event.
	I0501 03:08:58.751793   51401 command_runner.go:130] > # metrics_cert = ""
	I0501 03:08:58.751797   51401 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0501 03:08:58.751802   51401 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0501 03:08:58.751806   51401 command_runner.go:130] > # metrics_key = ""
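With enable_metrics = true above and metrics_port left at its default, the endpoint can be spot-checked from inside the node. A sketch that assumes the default 9090 port and a localhost bind:

    curl -s http://127.0.0.1:9090/metrics | grep '^crio_' | head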
	I0501 03:08:58.751811   51401 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0501 03:08:58.751816   51401 command_runner.go:130] > [crio.tracing]
	I0501 03:08:58.751825   51401 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0501 03:08:58.751836   51401 command_runner.go:130] > # enable_tracing = false
	I0501 03:08:58.751844   51401 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0501 03:08:58.751850   51401 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0501 03:08:58.751856   51401 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0501 03:08:58.751867   51401 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0501 03:08:58.751874   51401 command_runner.go:130] > # CRI-O NRI configuration.
	I0501 03:08:58.751878   51401 command_runner.go:130] > [crio.nri]
	I0501 03:08:58.751881   51401 command_runner.go:130] > # Globally enable or disable NRI.
	I0501 03:08:58.751885   51401 command_runner.go:130] > # enable_nri = false
	I0501 03:08:58.751889   51401 command_runner.go:130] > # NRI socket to listen on.
	I0501 03:08:58.751894   51401 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0501 03:08:58.751900   51401 command_runner.go:130] > # NRI plugin directory to use.
	I0501 03:08:58.751904   51401 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0501 03:08:58.751909   51401 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0501 03:08:58.751914   51401 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0501 03:08:58.751920   51401 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0501 03:08:58.751924   51401 command_runner.go:130] > # nri_disable_connections = false
	I0501 03:08:58.751929   51401 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0501 03:08:58.751936   51401 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0501 03:08:58.751942   51401 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0501 03:08:58.751948   51401 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0501 03:08:58.751954   51401 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0501 03:08:58.751959   51401 command_runner.go:130] > [crio.stats]
	I0501 03:08:58.751965   51401 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0501 03:08:58.751975   51401 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0501 03:08:58.751979   51401 command_runner.go:130] > # stats_collection_period = 0
	I0501 03:08:58.752000   51401 command_runner.go:130] ! time="2024-05-01 03:08:58.704459703Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0501 03:08:58.752017   51401 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0501 03:08:58.752156   51401 cni.go:84] Creating CNI manager for ""
	I0501 03:08:58.752167   51401 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0501 03:08:58.752175   51401 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0501 03:08:58.752201   51401 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.139 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-282238 NodeName:multinode-282238 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.139"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.139 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0501 03:08:58.752372   51401 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.139
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-282238"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.139
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.139"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0501 03:08:58.752432   51401 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0501 03:08:58.763902   51401 command_runner.go:130] > kubeadm
	I0501 03:08:58.763924   51401 command_runner.go:130] > kubectl
	I0501 03:08:58.763931   51401 command_runner.go:130] > kubelet
	I0501 03:08:58.763960   51401 binaries.go:44] Found k8s binaries, skipping transfer
	I0501 03:08:58.764007   51401 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0501 03:08:58.774652   51401 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0501 03:08:58.794100   51401 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0501 03:08:58.814078   51401 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
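The kubeadm.yaml.new just written can be sanity-checked by hand before kubeadm consumes it. A sketch that assumes kubeadm v1.30's "config validate" subcommand is available on the guest; minikube itself does not run this step:

    sudo /var/lib/minikube/binaries/v1.30.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new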
	I0501 03:08:58.833561   51401 ssh_runner.go:195] Run: grep 192.168.39.139	control-plane.minikube.internal$ /etc/hosts
	I0501 03:08:58.838098   51401 command_runner.go:130] > 192.168.39.139	control-plane.minikube.internal
	I0501 03:08:58.838174   51401 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:08:58.981244   51401 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 03:08:58.997269   51401 certs.go:68] Setting up /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/multinode-282238 for IP: 192.168.39.139
	I0501 03:08:58.997288   51401 certs.go:194] generating shared ca certs ...
	I0501 03:08:58.997321   51401 certs.go:226] acquiring lock for ca certs: {Name:mk85a75d14c00780d4bf09cd9bdaf0f96f1b8fce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:08:58.997459   51401 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key
	I0501 03:08:58.997516   51401 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key
	I0501 03:08:58.997531   51401 certs.go:256] generating profile certs ...
	I0501 03:08:58.997612   51401 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/multinode-282238/client.key
	I0501 03:08:58.997715   51401 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/multinode-282238/apiserver.key.0a59ce72
	I0501 03:08:58.997776   51401 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/multinode-282238/proxy-client.key
	I0501 03:08:58.997791   51401 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0501 03:08:58.997812   51401 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0501 03:08:58.997831   51401 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0501 03:08:58.997861   51401 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0501 03:08:58.997879   51401 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/multinode-282238/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0501 03:08:58.997897   51401 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/multinode-282238/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0501 03:08:58.997916   51401 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/multinode-282238/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0501 03:08:58.997936   51401 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/multinode-282238/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0501 03:08:58.998007   51401 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem (1338 bytes)
	W0501 03:08:58.998050   51401 certs.go:480] ignoring /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724_empty.pem, impossibly tiny 0 bytes
	I0501 03:08:58.998064   51401 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem (1679 bytes)
	I0501 03:08:58.998103   51401 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem (1078 bytes)
	I0501 03:08:58.998138   51401 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem (1123 bytes)
	I0501 03:08:58.998170   51401 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem (1675 bytes)
	I0501 03:08:58.998222   51401 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem (1708 bytes)
	I0501 03:08:58.998271   51401 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem -> /usr/share/ca-certificates/207242.pem
	I0501 03:08:58.998291   51401 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:08:58.998309   51401 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem -> /usr/share/ca-certificates/20724.pem
	I0501 03:08:58.998929   51401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0501 03:08:59.028281   51401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0501 03:08:59.055386   51401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0501 03:08:59.081449   51401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0501 03:08:59.107554   51401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/multinode-282238/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0501 03:08:59.134226   51401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/multinode-282238/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0501 03:08:59.161017   51401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/multinode-282238/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0501 03:08:59.188800   51401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/multinode-282238/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0501 03:08:59.216208   51401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem --> /usr/share/ca-certificates/207242.pem (1708 bytes)
	I0501 03:08:59.242429   51401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0501 03:08:59.268406   51401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem --> /usr/share/ca-certificates/20724.pem (1338 bytes)
	I0501 03:08:59.294188   51401 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0501 03:08:59.337102   51401 ssh_runner.go:195] Run: openssl version
	I0501 03:08:59.345257   51401 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0501 03:08:59.345765   51401 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/207242.pem && ln -fs /usr/share/ca-certificates/207242.pem /etc/ssl/certs/207242.pem"
	I0501 03:08:59.357996   51401 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/207242.pem
	I0501 03:08:59.363159   51401 command_runner.go:130] > -rw-r--r-- 1 root root 1708 May  1 02:20 /usr/share/ca-certificates/207242.pem
	I0501 03:08:59.363185   51401 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  1 02:20 /usr/share/ca-certificates/207242.pem
	I0501 03:08:59.363228   51401 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/207242.pem
	I0501 03:08:59.369429   51401 command_runner.go:130] > 3ec20f2e
	I0501 03:08:59.369495   51401 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/207242.pem /etc/ssl/certs/3ec20f2e.0"
	I0501 03:08:59.379776   51401 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0501 03:08:59.391951   51401 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:08:59.397315   51401 command_runner.go:130] > -rw-r--r-- 1 root root 1111 May  1 02:08 /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:08:59.397354   51401 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  1 02:08 /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:08:59.397413   51401 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:08:59.403679   51401 command_runner.go:130] > b5213941
	I0501 03:08:59.404021   51401 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0501 03:08:59.414483   51401 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20724.pem && ln -fs /usr/share/ca-certificates/20724.pem /etc/ssl/certs/20724.pem"
	I0501 03:08:59.427042   51401 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20724.pem
	I0501 03:08:59.432006   51401 command_runner.go:130] > -rw-r--r-- 1 root root 1338 May  1 02:20 /usr/share/ca-certificates/20724.pem
	I0501 03:08:59.432230   51401 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  1 02:20 /usr/share/ca-certificates/20724.pem
	I0501 03:08:59.432294   51401 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20724.pem
	I0501 03:08:59.438518   51401 command_runner.go:130] > 51391683
	I0501 03:08:59.438626   51401 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20724.pem /etc/ssl/certs/51391683.0"
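The three blocks above repeat one pattern: place the PEM under /usr/share/ca-certificates, compute its OpenSSL subject hash, and symlink <hash>.0 in /etc/ssl/certs so trust lookups find it. Condensed into a sketch to be run on the node:

    for pem in /usr/share/ca-certificates/*.pem; do
      hash=$(openssl x509 -hash -noout -in "$pem")    # e.g. b5213941 for minikubeCA.pem
      sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"   # ".0" = first certificate with this hash
    done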
	I0501 03:08:59.448906   51401 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0501 03:08:59.453718   51401 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0501 03:08:59.453742   51401 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0501 03:08:59.453750   51401 command_runner.go:130] > Device: 253,1	Inode: 533782      Links: 1
	I0501 03:08:59.453766   51401 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0501 03:08:59.453776   51401 command_runner.go:130] > Access: 2024-05-01 03:02:13.652300468 +0000
	I0501 03:08:59.453793   51401 command_runner.go:130] > Modify: 2024-05-01 03:02:13.652300468 +0000
	I0501 03:08:59.453805   51401 command_runner.go:130] > Change: 2024-05-01 03:02:13.652300468 +0000
	I0501 03:08:59.453812   51401 command_runner.go:130] >  Birth: 2024-05-01 03:02:13.652300468 +0000
	I0501 03:08:59.454017   51401 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0501 03:08:59.460048   51401 command_runner.go:130] > Certificate will not expire
	I0501 03:08:59.460252   51401 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0501 03:08:59.466464   51401 command_runner.go:130] > Certificate will not expire
	I0501 03:08:59.466721   51401 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0501 03:08:59.473593   51401 command_runner.go:130] > Certificate will not expire
	I0501 03:08:59.473633   51401 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0501 03:08:59.480332   51401 command_runner.go:130] > Certificate will not expire
	I0501 03:08:59.480376   51401 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0501 03:08:59.487005   51401 command_runner.go:130] > Certificate will not expire
	I0501 03:08:59.487058   51401 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0501 03:08:59.493475   51401 command_runner.go:130] > Certificate will not expire
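Each -checkend 86400 probe above exits non-zero if the certificate expires within 24 hours; the same check over all control-plane certificates can be scripted as follows (a sketch run on the node):

    for crt in /var/lib/minikube/certs/*.crt /var/lib/minikube/certs/etcd/*.crt; do
      if sudo openssl x509 -noout -checkend 86400 -in "$crt" >/dev/null; then
        echo "ok:       $crt"
      else
        echo "expiring: $crt (within 24h)"
      fi
    done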
	I0501 03:08:59.493731   51401 kubeadm.go:391] StartCluster: {Name:multinode-282238 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
0 ClusterName:multinode-282238 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.139 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.29 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.220 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:f
alse DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 03:08:59.493816   51401 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0501 03:08:59.493861   51401 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0501 03:08:59.539427   51401 command_runner.go:130] > 8d816c0bbdea7d46440e648d984d842c5ae02201b1d5cd0bd5201c544dfda5e0
	I0501 03:08:59.539452   51401 command_runner.go:130] > bf5e18923ea3471ed4501abb095806293ca3da0eae78afe97a349f8747346b9b
	I0501 03:08:59.539458   51401 command_runner.go:130] > fcab67c5c49010d68dd607692c8a87af7ad0be31da8b639dd24377f122a4e4d2
	I0501 03:08:59.539465   51401 command_runner.go:130] > be40d7b3a3ded9d9e8420ed6594c65c831f2543e4a62eaa9eff0e1d4b5922c1e
	I0501 03:08:59.539470   51401 command_runner.go:130] > 0338a9652764e389d91bc7c406553a4271aaf3b47de0bef43e26752ddb86033f
	I0501 03:08:59.539475   51401 command_runner.go:130] > 15b3a41e9b9b60d6c65946591a4b9d001a896ae747addb52cb7f2d0945f41fb6
	I0501 03:08:59.539481   51401 command_runner.go:130] > 0bbe01883646dd171b19f4e453c14175e649929473ee82dae03eb7c7bce9b04c
	I0501 03:08:59.539490   51401 command_runner.go:130] > 648ac51c97cf05ff6096b7f920d602e441f2909db28005c9395bbed15cf2716e
	I0501 03:08:59.539512   51401 cri.go:89] found id: "8d816c0bbdea7d46440e648d984d842c5ae02201b1d5cd0bd5201c544dfda5e0"
	I0501 03:08:59.539524   51401 cri.go:89] found id: "bf5e18923ea3471ed4501abb095806293ca3da0eae78afe97a349f8747346b9b"
	I0501 03:08:59.539531   51401 cri.go:89] found id: "fcab67c5c49010d68dd607692c8a87af7ad0be31da8b639dd24377f122a4e4d2"
	I0501 03:08:59.539536   51401 cri.go:89] found id: "be40d7b3a3ded9d9e8420ed6594c65c831f2543e4a62eaa9eff0e1d4b5922c1e"
	I0501 03:08:59.539541   51401 cri.go:89] found id: "0338a9652764e389d91bc7c406553a4271aaf3b47de0bef43e26752ddb86033f"
	I0501 03:08:59.539555   51401 cri.go:89] found id: "15b3a41e9b9b60d6c65946591a4b9d001a896ae747addb52cb7f2d0945f41fb6"
	I0501 03:08:59.539563   51401 cri.go:89] found id: "0bbe01883646dd171b19f4e453c14175e649929473ee82dae03eb7c7bce9b04c"
	I0501 03:08:59.539568   51401 cri.go:89] found id: "648ac51c97cf05ff6096b7f920d602e441f2909db28005c9395bbed15cf2716e"
	I0501 03:08:59.539572   51401 cri.go:89] found id: ""
	I0501 03:08:59.539610   51401 ssh_runner.go:195] Run: sudo runc list -f json
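Each container ID listed above can be mapped back to its pod by hand. A hedged sketch using crictl plus jq; jq is an assumption and may not be present on the minikube guest:

    for id in $(sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system); do
      sudo crictl inspect "$id" \
        | jq -r '.status.metadata.name + "  " + .status.labels["io.kubernetes.pod.name"]'
    done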
	
	
	==> CRI-O <==
	May 01 03:12:54 multinode-282238 crio[2849]: time="2024-05-01 03:12:54.919568617Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5035dc34-664e-4d7e-8226-1f9722b00290 name=/runtime.v1.RuntimeService/Version
	May 01 03:12:54 multinode-282238 crio[2849]: time="2024-05-01 03:12:54.920748810Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0463cfaf-6fc7-4c54-a0aa-7c1f72626bc5 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 03:12:54 multinode-282238 crio[2849]: time="2024-05-01 03:12:54.921144483Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714533174921123590,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133243,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0463cfaf-6fc7-4c54-a0aa-7c1f72626bc5 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 03:12:54 multinode-282238 crio[2849]: time="2024-05-01 03:12:54.921659970Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e6435d84-4fe0-4317-82e8-e364c7a80da4 name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:12:54 multinode-282238 crio[2849]: time="2024-05-01 03:12:54.921745275Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e6435d84-4fe0-4317-82e8-e364c7a80da4 name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:12:54 multinode-282238 crio[2849]: time="2024-05-01 03:12:54.922073345Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ca7d4a2291b8008cfc87a2a7a5d4cf0c8a6e669f1ee014c86468b717378c4b2b,PodSandboxId:267b67cd8e9aec3f447b68d71fc5eb8e141345fb7b842519ad433030f85b0e9f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714532979318228186,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-dpfrf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 00cc3b07-24df-4bef-ba3f-b94a8c0cee87,},Annotations:map[string]string{io.kubernetes.container.hash: b1a3b5af,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:622e78c84bf099cdfac90652360cd9687ba7c350f0987f85a311200a32222190,PodSandboxId:08243fbb491296ecab007610cf4ccf95ac72d53f773dc944788f0a6a73eaac26,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714532945820330367,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hl7zh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd0cbe33-025e-4a86-af98-8571c8f3340c,},Annotations:map[string]string{io.kubernetes.container.hash: 8129d9dd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebdfee87078676a4391b0ef7c57976ea25bca9367b33f20c56bdcb4233d1cd89,PodSandboxId:85780ac0c333442c21ced239eb561039c3b04a203f4434fa715d4f2d2a6e3731,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714532945747685214,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-pq89m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cb009de-6a0c-47b9-b6a9-5da24ed79f03,},Annotations:map[string]string{io.kubernetes.container.hash: 8ae1ce26,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58136eaaeacc9dbd72f4c4277026813eb767e299885032dbbb476301df4752f8,PodSandboxId:9c6ad18fe2eada8be65551384789eeb57f735b475bf67391bcb4783f7275d144,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714532945560331855,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2rmjj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d33bb084-3ce9-4fa9-8703-b6bec622abac,},Annotations:map[string]
string{io.kubernetes.container.hash: 28f6cbf6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b3c0442530e9983851afe3d169ed4a274795a2286ebfca3103f85f523883d22,PodSandboxId:8547df9332914e6c38cb8cab5d43db58589eaa99f005db71df08ab4bc6b7648e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714532945557883042,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71ce398a-00b1-4aca-87ba-78b64361ed9d,},Annotations:map[string]string{io.ku
bernetes.container.hash: a91656d3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8778d60a896a8cf1338ee26951ff0c6bd7cc9899d8164db111249b76cd20b5c1,PodSandboxId:40cd32c06dc51aee52d568510435bc404498ba920cab07ecacccea061a3da55f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714532941777552486,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-282238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f907d837c32ea71bc11fb00ea245331,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f48f5b734c7eaf5babad6a7bd38e9f26e8d2c8f3b507d0eec92fc34dce752934,PodSandboxId:db1198f918a56cbc9fb24d6ca0f44c0e8c5a872ba5be28700a0748d75b1a8fdd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714532941719029370,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-282238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29ab78d11237a7f5525934b54837aa37,},Annotations:map[string]string{io.kubernetes.container.hash: bb56aa0,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f13ad7828f3b9e47354e8b7246e161db5528f5a208d0a771ee742358bb8a80ac,PodSandboxId:dced73734ef8e274e7401316d6e87d73307602cdf12eb3eeb95170669709509e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714532941790217260,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-282238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9a1a37448d80a6171236c69ab0170a9,},Annotations:map[string]string{io.kubernetes.container.hash: c7e3da59,io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15cc964ddd2cbb7aeaf774665a85e7d70e1a039125b8a3ccb7187eae1b9acb1d,PodSandboxId:0a1604e1df4b5063f217fcd0922064b1ede7a7a7717952e80e80edcc53bfd012,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714532941720747981,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-282238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79340a67faa633be7e3979355e36a28d,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:601633c5701193e7c25d13b66c9ba48678106c948d479514bd1a335978bb232d,PodSandboxId:d0b7f0f8a027c07631c29c6f64a50ff65b53fb0efe3befffdee3ed16d8d69a33,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714532636030480407,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-dpfrf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 00cc3b07-24df-4bef-ba3f-b94a8c0cee87,},Annotations:map[string]string{io.kubernetes.container.hash: b1a3b5af,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d816c0bbdea7d46440e648d984d842c5ae02201b1d5cd0bd5201c544dfda5e0,PodSandboxId:d8211874c627fa99ac5b154c3e365bbf270492c48671b0065f5a65145e408766,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714532588538185911,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-pq89m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cb009de-6a0c-47b9-b6a9-5da24ed79f03,},Annotations:map[string]string{io.kubernetes.container.hash: 8ae1ce26,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf5e18923ea3471ed4501abb095806293ca3da0eae78afe97a349f8747346b9b,PodSandboxId:ccd4646808c1ec640dfd982c5725de9482cbe9a08b729a209b509eb6fb39a0be,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714532588476758295,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 71ce398a-00b1-4aca-87ba-78b64361ed9d,},Annotations:map[string]string{io.kubernetes.container.hash: a91656d3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcab67c5c49010d68dd607692c8a87af7ad0be31da8b639dd24377f122a4e4d2,PodSandboxId:c9f27cd653d1ac17d946a88eaf2d554d4f915c565df269a4cf12750f437ed0e2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714532556726871327,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hl7zh,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: fd0cbe33-025e-4a86-af98-8571c8f3340c,},Annotations:map[string]string{io.kubernetes.container.hash: 8129d9dd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be40d7b3a3ded9d9e8420ed6594c65c831f2543e4a62eaa9eff0e1d4b5922c1e,PodSandboxId:9e8fc276935812799c155d1dce8ea68c5a989b9e99762fdcd2b4155a38e76649,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714532556632680326,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2rmjj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d33bb084-3ce9-4fa9-8703-
b6bec622abac,},Annotations:map[string]string{io.kubernetes.container.hash: 28f6cbf6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0338a9652764e389d91bc7c406553a4271aaf3b47de0bef43e26752ddb86033f,PodSandboxId:4967e3688b6353284da03ee8da5f159d0991064029ae317efc177e7530e3e659,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714532537276922056,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-282238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29ab78d11237a7f5525934b54837aa37,},Annotations:map[string]string{
io.kubernetes.container.hash: bb56aa0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15b3a41e9b9b60d6c65946591a4b9d001a896ae747addb52cb7f2d0945f41fb6,PodSandboxId:24e5dd5fe8240df051208753ab2af06a002da8c9d72fe7e3e6765b7ea0933a46,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714532537247256083,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-282238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9a1a37448d80a6171236c69ab0170a9,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: c7e3da59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bbe01883646dd171b19f4e453c14175e649929473ee82dae03eb7c7bce9b04c,PodSandboxId:f60182d3a6d766d6c12a4ee997df3d3b9d01d4940479ab0014410f5556848ec2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714532537226298789,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-282238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f907d837c32ea71bc11fb00ea245331,},Annotations:map[string]string{io.kubernetes.container.hash: d
e199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:648ac51c97cf05ff6096b7f920d602e441f2909db28005c9395bbed15cf2716e,PodSandboxId:252d7ef8f1bfd8e50ef4cce4f12d70526cd0a401d98a33056cd9fcd26d02136e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714532537219728283,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-282238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79340a67faa633be7e3979355e36a28d,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e6435d84-4fe0-4317-82e8-e364c7a80da4 name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:12:54 multinode-282238 crio[2849]: time="2024-05-01 03:12:54.968184781Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=59b03aa0-aa74-4b95-8742-c4e11c7db290 name=/runtime.v1.RuntimeService/Version
	May 01 03:12:54 multinode-282238 crio[2849]: time="2024-05-01 03:12:54.968278907Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=59b03aa0-aa74-4b95-8742-c4e11c7db290 name=/runtime.v1.RuntimeService/Version
	May 01 03:12:54 multinode-282238 crio[2849]: time="2024-05-01 03:12:54.969558303Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3d807b72-b6a8-4ed0-9fc9-50d4366e9ab9 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 03:12:54 multinode-282238 crio[2849]: time="2024-05-01 03:12:54.969938887Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714533174969917170,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133243,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3d807b72-b6a8-4ed0-9fc9-50d4366e9ab9 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 03:12:54 multinode-282238 crio[2849]: time="2024-05-01 03:12:54.970656664Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=eb3fb5d5-bed4-4a0f-91d9-9c316a2208f8 name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:12:54 multinode-282238 crio[2849]: time="2024-05-01 03:12:54.970737438Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=eb3fb5d5-bed4-4a0f-91d9-9c316a2208f8 name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:12:54 multinode-282238 crio[2849]: time="2024-05-01 03:12:54.971095878Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ca7d4a2291b8008cfc87a2a7a5d4cf0c8a6e669f1ee014c86468b717378c4b2b,PodSandboxId:267b67cd8e9aec3f447b68d71fc5eb8e141345fb7b842519ad433030f85b0e9f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714532979318228186,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-dpfrf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 00cc3b07-24df-4bef-ba3f-b94a8c0cee87,},Annotations:map[string]string{io.kubernetes.container.hash: b1a3b5af,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:622e78c84bf099cdfac90652360cd9687ba7c350f0987f85a311200a32222190,PodSandboxId:08243fbb491296ecab007610cf4ccf95ac72d53f773dc944788f0a6a73eaac26,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714532945820330367,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hl7zh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd0cbe33-025e-4a86-af98-8571c8f3340c,},Annotations:map[string]string{io.kubernetes.container.hash: 8129d9dd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebdfee87078676a4391b0ef7c57976ea25bca9367b33f20c56bdcb4233d1cd89,PodSandboxId:85780ac0c333442c21ced239eb561039c3b04a203f4434fa715d4f2d2a6e3731,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714532945747685214,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-pq89m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cb009de-6a0c-47b9-b6a9-5da24ed79f03,},Annotations:map[string]string{io.kubernetes.container.hash: 8ae1ce26,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58136eaaeacc9dbd72f4c4277026813eb767e299885032dbbb476301df4752f8,PodSandboxId:9c6ad18fe2eada8be65551384789eeb57f735b475bf67391bcb4783f7275d144,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714532945560331855,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2rmjj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d33bb084-3ce9-4fa9-8703-b6bec622abac,},Annotations:map[string]
string{io.kubernetes.container.hash: 28f6cbf6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b3c0442530e9983851afe3d169ed4a274795a2286ebfca3103f85f523883d22,PodSandboxId:8547df9332914e6c38cb8cab5d43db58589eaa99f005db71df08ab4bc6b7648e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714532945557883042,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71ce398a-00b1-4aca-87ba-78b64361ed9d,},Annotations:map[string]string{io.ku
bernetes.container.hash: a91656d3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8778d60a896a8cf1338ee26951ff0c6bd7cc9899d8164db111249b76cd20b5c1,PodSandboxId:40cd32c06dc51aee52d568510435bc404498ba920cab07ecacccea061a3da55f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714532941777552486,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-282238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f907d837c32ea71bc11fb00ea245331,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f48f5b734c7eaf5babad6a7bd38e9f26e8d2c8f3b507d0eec92fc34dce752934,PodSandboxId:db1198f918a56cbc9fb24d6ca0f44c0e8c5a872ba5be28700a0748d75b1a8fdd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714532941719029370,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-282238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29ab78d11237a7f5525934b54837aa37,},Annotations:map[string]string{io.kubernetes.container.hash: bb56aa0,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f13ad7828f3b9e47354e8b7246e161db5528f5a208d0a771ee742358bb8a80ac,PodSandboxId:dced73734ef8e274e7401316d6e87d73307602cdf12eb3eeb95170669709509e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714532941790217260,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-282238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9a1a37448d80a6171236c69ab0170a9,},Annotations:map[string]string{io.kubernetes.container.hash: c7e3da59,io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15cc964ddd2cbb7aeaf774665a85e7d70e1a039125b8a3ccb7187eae1b9acb1d,PodSandboxId:0a1604e1df4b5063f217fcd0922064b1ede7a7a7717952e80e80edcc53bfd012,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714532941720747981,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-282238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79340a67faa633be7e3979355e36a28d,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:601633c5701193e7c25d13b66c9ba48678106c948d479514bd1a335978bb232d,PodSandboxId:d0b7f0f8a027c07631c29c6f64a50ff65b53fb0efe3befffdee3ed16d8d69a33,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714532636030480407,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-dpfrf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 00cc3b07-24df-4bef-ba3f-b94a8c0cee87,},Annotations:map[string]string{io.kubernetes.container.hash: b1a3b5af,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d816c0bbdea7d46440e648d984d842c5ae02201b1d5cd0bd5201c544dfda5e0,PodSandboxId:d8211874c627fa99ac5b154c3e365bbf270492c48671b0065f5a65145e408766,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714532588538185911,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-pq89m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cb009de-6a0c-47b9-b6a9-5da24ed79f03,},Annotations:map[string]string{io.kubernetes.container.hash: 8ae1ce26,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf5e18923ea3471ed4501abb095806293ca3da0eae78afe97a349f8747346b9b,PodSandboxId:ccd4646808c1ec640dfd982c5725de9482cbe9a08b729a209b509eb6fb39a0be,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714532588476758295,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 71ce398a-00b1-4aca-87ba-78b64361ed9d,},Annotations:map[string]string{io.kubernetes.container.hash: a91656d3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcab67c5c49010d68dd607692c8a87af7ad0be31da8b639dd24377f122a4e4d2,PodSandboxId:c9f27cd653d1ac17d946a88eaf2d554d4f915c565df269a4cf12750f437ed0e2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714532556726871327,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hl7zh,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: fd0cbe33-025e-4a86-af98-8571c8f3340c,},Annotations:map[string]string{io.kubernetes.container.hash: 8129d9dd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be40d7b3a3ded9d9e8420ed6594c65c831f2543e4a62eaa9eff0e1d4b5922c1e,PodSandboxId:9e8fc276935812799c155d1dce8ea68c5a989b9e99762fdcd2b4155a38e76649,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714532556632680326,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2rmjj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d33bb084-3ce9-4fa9-8703-
b6bec622abac,},Annotations:map[string]string{io.kubernetes.container.hash: 28f6cbf6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0338a9652764e389d91bc7c406553a4271aaf3b47de0bef43e26752ddb86033f,PodSandboxId:4967e3688b6353284da03ee8da5f159d0991064029ae317efc177e7530e3e659,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714532537276922056,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-282238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29ab78d11237a7f5525934b54837aa37,},Annotations:map[string]string{
io.kubernetes.container.hash: bb56aa0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15b3a41e9b9b60d6c65946591a4b9d001a896ae747addb52cb7f2d0945f41fb6,PodSandboxId:24e5dd5fe8240df051208753ab2af06a002da8c9d72fe7e3e6765b7ea0933a46,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714532537247256083,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-282238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9a1a37448d80a6171236c69ab0170a9,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: c7e3da59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bbe01883646dd171b19f4e453c14175e649929473ee82dae03eb7c7bce9b04c,PodSandboxId:f60182d3a6d766d6c12a4ee997df3d3b9d01d4940479ab0014410f5556848ec2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714532537226298789,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-282238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f907d837c32ea71bc11fb00ea245331,},Annotations:map[string]string{io.kubernetes.container.hash: d
e199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:648ac51c97cf05ff6096b7f920d602e441f2909db28005c9395bbed15cf2716e,PodSandboxId:252d7ef8f1bfd8e50ef4cce4f12d70526cd0a401d98a33056cd9fcd26d02136e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714532537219728283,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-282238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79340a67faa633be7e3979355e36a28d,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=eb3fb5d5-bed4-4a0f-91d9-9c316a2208f8 name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:12:55 multinode-282238 crio[2849]: time="2024-05-01 03:12:55.013928186Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=51c5bdd8-ceff-4d26-ac54-47c0e4e73083 name=/runtime.v1.RuntimeService/Version
	May 01 03:12:55 multinode-282238 crio[2849]: time="2024-05-01 03:12:55.014139857Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=51c5bdd8-ceff-4d26-ac54-47c0e4e73083 name=/runtime.v1.RuntimeService/Version
	May 01 03:12:55 multinode-282238 crio[2849]: time="2024-05-01 03:12:55.016857900Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=05b2a90b-12e9-4630-ae8d-ab44bfc8c2e0 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 03:12:55 multinode-282238 crio[2849]: time="2024-05-01 03:12:55.022155970Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714533175022131570,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133243,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=05b2a90b-12e9-4630-ae8d-ab44bfc8c2e0 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 03:12:55 multinode-282238 crio[2849]: time="2024-05-01 03:12:55.022784042Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=809b6047-8c16-41db-91ae-80e5d6f6d8d6 name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:12:55 multinode-282238 crio[2849]: time="2024-05-01 03:12:55.022859648Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=809b6047-8c16-41db-91ae-80e5d6f6d8d6 name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:12:55 multinode-282238 crio[2849]: time="2024-05-01 03:12:55.023299192Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ca7d4a2291b8008cfc87a2a7a5d4cf0c8a6e669f1ee014c86468b717378c4b2b,PodSandboxId:267b67cd8e9aec3f447b68d71fc5eb8e141345fb7b842519ad433030f85b0e9f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714532979318228186,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-dpfrf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 00cc3b07-24df-4bef-ba3f-b94a8c0cee87,},Annotations:map[string]string{io.kubernetes.container.hash: b1a3b5af,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:622e78c84bf099cdfac90652360cd9687ba7c350f0987f85a311200a32222190,PodSandboxId:08243fbb491296ecab007610cf4ccf95ac72d53f773dc944788f0a6a73eaac26,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714532945820330367,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hl7zh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd0cbe33-025e-4a86-af98-8571c8f3340c,},Annotations:map[string]string{io.kubernetes.container.hash: 8129d9dd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebdfee87078676a4391b0ef7c57976ea25bca9367b33f20c56bdcb4233d1cd89,PodSandboxId:85780ac0c333442c21ced239eb561039c3b04a203f4434fa715d4f2d2a6e3731,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714532945747685214,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-pq89m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cb009de-6a0c-47b9-b6a9-5da24ed79f03,},Annotations:map[string]string{io.kubernetes.container.hash: 8ae1ce26,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58136eaaeacc9dbd72f4c4277026813eb767e299885032dbbb476301df4752f8,PodSandboxId:9c6ad18fe2eada8be65551384789eeb57f735b475bf67391bcb4783f7275d144,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714532945560331855,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2rmjj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d33bb084-3ce9-4fa9-8703-b6bec622abac,},Annotations:map[string]
string{io.kubernetes.container.hash: 28f6cbf6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b3c0442530e9983851afe3d169ed4a274795a2286ebfca3103f85f523883d22,PodSandboxId:8547df9332914e6c38cb8cab5d43db58589eaa99f005db71df08ab4bc6b7648e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714532945557883042,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71ce398a-00b1-4aca-87ba-78b64361ed9d,},Annotations:map[string]string{io.ku
bernetes.container.hash: a91656d3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8778d60a896a8cf1338ee26951ff0c6bd7cc9899d8164db111249b76cd20b5c1,PodSandboxId:40cd32c06dc51aee52d568510435bc404498ba920cab07ecacccea061a3da55f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714532941777552486,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-282238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f907d837c32ea71bc11fb00ea245331,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f48f5b734c7eaf5babad6a7bd38e9f26e8d2c8f3b507d0eec92fc34dce752934,PodSandboxId:db1198f918a56cbc9fb24d6ca0f44c0e8c5a872ba5be28700a0748d75b1a8fdd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714532941719029370,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-282238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29ab78d11237a7f5525934b54837aa37,},Annotations:map[string]string{io.kubernetes.container.hash: bb56aa0,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f13ad7828f3b9e47354e8b7246e161db5528f5a208d0a771ee742358bb8a80ac,PodSandboxId:dced73734ef8e274e7401316d6e87d73307602cdf12eb3eeb95170669709509e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714532941790217260,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-282238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9a1a37448d80a6171236c69ab0170a9,},Annotations:map[string]string{io.kubernetes.container.hash: c7e3da59,io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15cc964ddd2cbb7aeaf774665a85e7d70e1a039125b8a3ccb7187eae1b9acb1d,PodSandboxId:0a1604e1df4b5063f217fcd0922064b1ede7a7a7717952e80e80edcc53bfd012,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714532941720747981,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-282238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79340a67faa633be7e3979355e36a28d,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:601633c5701193e7c25d13b66c9ba48678106c948d479514bd1a335978bb232d,PodSandboxId:d0b7f0f8a027c07631c29c6f64a50ff65b53fb0efe3befffdee3ed16d8d69a33,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714532636030480407,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-dpfrf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 00cc3b07-24df-4bef-ba3f-b94a8c0cee87,},Annotations:map[string]string{io.kubernetes.container.hash: b1a3b5af,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d816c0bbdea7d46440e648d984d842c5ae02201b1d5cd0bd5201c544dfda5e0,PodSandboxId:d8211874c627fa99ac5b154c3e365bbf270492c48671b0065f5a65145e408766,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714532588538185911,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-pq89m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cb009de-6a0c-47b9-b6a9-5da24ed79f03,},Annotations:map[string]string{io.kubernetes.container.hash: 8ae1ce26,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf5e18923ea3471ed4501abb095806293ca3da0eae78afe97a349f8747346b9b,PodSandboxId:ccd4646808c1ec640dfd982c5725de9482cbe9a08b729a209b509eb6fb39a0be,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714532588476758295,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 71ce398a-00b1-4aca-87ba-78b64361ed9d,},Annotations:map[string]string{io.kubernetes.container.hash: a91656d3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcab67c5c49010d68dd607692c8a87af7ad0be31da8b639dd24377f122a4e4d2,PodSandboxId:c9f27cd653d1ac17d946a88eaf2d554d4f915c565df269a4cf12750f437ed0e2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714532556726871327,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hl7zh,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: fd0cbe33-025e-4a86-af98-8571c8f3340c,},Annotations:map[string]string{io.kubernetes.container.hash: 8129d9dd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be40d7b3a3ded9d9e8420ed6594c65c831f2543e4a62eaa9eff0e1d4b5922c1e,PodSandboxId:9e8fc276935812799c155d1dce8ea68c5a989b9e99762fdcd2b4155a38e76649,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714532556632680326,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2rmjj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d33bb084-3ce9-4fa9-8703-
b6bec622abac,},Annotations:map[string]string{io.kubernetes.container.hash: 28f6cbf6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0338a9652764e389d91bc7c406553a4271aaf3b47de0bef43e26752ddb86033f,PodSandboxId:4967e3688b6353284da03ee8da5f159d0991064029ae317efc177e7530e3e659,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714532537276922056,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-282238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29ab78d11237a7f5525934b54837aa37,},Annotations:map[string]string{
io.kubernetes.container.hash: bb56aa0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15b3a41e9b9b60d6c65946591a4b9d001a896ae747addb52cb7f2d0945f41fb6,PodSandboxId:24e5dd5fe8240df051208753ab2af06a002da8c9d72fe7e3e6765b7ea0933a46,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714532537247256083,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-282238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9a1a37448d80a6171236c69ab0170a9,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: c7e3da59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bbe01883646dd171b19f4e453c14175e649929473ee82dae03eb7c7bce9b04c,PodSandboxId:f60182d3a6d766d6c12a4ee997df3d3b9d01d4940479ab0014410f5556848ec2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714532537226298789,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-282238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f907d837c32ea71bc11fb00ea245331,},Annotations:map[string]string{io.kubernetes.container.hash: d
e199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:648ac51c97cf05ff6096b7f920d602e441f2909db28005c9395bbed15cf2716e,PodSandboxId:252d7ef8f1bfd8e50ef4cce4f12d70526cd0a401d98a33056cd9fcd26d02136e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714532537219728283,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-282238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79340a67faa633be7e3979355e36a28d,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=809b6047-8c16-41db-91ae-80e5d6f6d8d6 name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:12:55 multinode-282238 crio[2849]: time="2024-05-01 03:12:55.050753939Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=aa2fa16d-3b55-4f79-a1bf-8d778a753aa3 name=/runtime.v1.RuntimeService/ListPodSandbox
	May 01 03:12:55 multinode-282238 crio[2849]: time="2024-05-01 03:12:55.050981927Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:267b67cd8e9aec3f447b68d71fc5eb8e141345fb7b842519ad433030f85b0e9f,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-dpfrf,Uid:00cc3b07-24df-4bef-ba3f-b94a8c0cee87,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1714532979156372937,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-dpfrf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 00cc3b07-24df-4bef-ba3f-b94a8c0cee87,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-01T03:09:05.029138205Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:85780ac0c333442c21ced239eb561039c3b04a203f4434fa715d4f2d2a6e3731,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-pq89m,Uid:2cb009de-6a0c-47b9-b6a9-5da24ed79f03,Namespace:kube-system,Attempt:1,}
,State:SANDBOX_READY,CreatedAt:1714532945474043196,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-pq89m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cb009de-6a0c-47b9-b6a9-5da24ed79f03,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-01T03:09:05.029139212Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9c6ad18fe2eada8be65551384789eeb57f735b475bf67391bcb4783f7275d144,Metadata:&PodSandboxMetadata{Name:kube-proxy-2rmjj,Uid:d33bb084-3ce9-4fa9-8703-b6bec622abac,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1714532945377543537,Labels:map[string]string{controller-revision-hash: 79cf874c65,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-2rmjj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d33bb084-3ce9-4fa9-8703-b6bec622abac,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{
kubernetes.io/config.seen: 2024-05-01T03:09:05.029134059Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:08243fbb491296ecab007610cf4ccf95ac72d53f773dc944788f0a6a73eaac26,Metadata:&PodSandboxMetadata{Name:kindnet-hl7zh,Uid:fd0cbe33-025e-4a86-af98-8571c8f3340c,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1714532945373975168,Labels:map[string]string{app: kindnet,controller-revision-hash: 64fdfd5c6d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-hl7zh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd0cbe33-025e-4a86-af98-8571c8f3340c,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-01T03:09:05.029126976Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8547df9332914e6c38cb8cab5d43db58589eaa99f005db71df08ab4bc6b7648e,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:71ce398a-00b1-4aca-87ba-78b64361ed9d,Namespace:kube-system,Attempt:1,},State
:SANDBOX_READY,CreatedAt:1714532945366582799,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71ce398a-00b1-4aca-87ba-78b64361ed9d,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp
\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-05-01T03:09:05.029137067Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:40cd32c06dc51aee52d568510435bc404498ba920cab07ecacccea061a3da55f,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-282238,Uid:9f907d837c32ea71bc11fb00ea245331,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1714532941514006496,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-282238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f907d837c32ea71bc11fb00ea245331,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 9f907d837c32ea71bc11fb00ea245331,kubernetes.io/config.seen: 2024-05-01T03:09:01.029158127Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0a1604e1df4b5063f217fcd0922064b1ede7a7a7717952e80e80edcc53bfd012,Metadata:&PodSandboxMetadata{Name:kube-controller-mana
ger-multinode-282238,Uid:79340a67faa633be7e3979355e36a28d,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1714532941512287707,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-282238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79340a67faa633be7e3979355e36a28d,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 79340a67faa633be7e3979355e36a28d,kubernetes.io/config.seen: 2024-05-01T03:09:01.029157007Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:db1198f918a56cbc9fb24d6ca0f44c0e8c5a872ba5be28700a0748d75b1a8fdd,Metadata:&PodSandboxMetadata{Name:etcd-multinode-282238,Uid:29ab78d11237a7f5525934b54837aa37,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1714532941479821760,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-282238,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 29ab78d11237a7f5525934b54837aa37,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.139:2379,kubernetes.io/config.hash: 29ab78d11237a7f5525934b54837aa37,kubernetes.io/config.seen: 2024-05-01T03:09:01.029151867Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:dced73734ef8e274e7401316d6e87d73307602cdf12eb3eeb95170669709509e,Metadata:&PodSandboxMetadata{Name:kube-apiserver-multinode-282238,Uid:d9a1a37448d80a6171236c69ab0170a9,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1714532941472877870,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-282238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9a1a37448d80a6171236c69ab0170a9,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.139:8443,kuberne
tes.io/config.hash: d9a1a37448d80a6171236c69ab0170a9,kubernetes.io/config.seen: 2024-05-01T03:09:01.029155941Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=aa2fa16d-3b55-4f79-a1bf-8d778a753aa3 name=/runtime.v1.RuntimeService/ListPodSandbox
	May 01 03:12:55 multinode-282238 crio[2849]: time="2024-05-01 03:12:55.051869004Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=26ffca74-dda7-48c0-aca6-f0603c2c78fa name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:12:55 multinode-282238 crio[2849]: time="2024-05-01 03:12:55.051955153Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=26ffca74-dda7-48c0-aca6-f0603c2c78fa name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:12:55 multinode-282238 crio[2849]: time="2024-05-01 03:12:55.052146263Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ca7d4a2291b8008cfc87a2a7a5d4cf0c8a6e669f1ee014c86468b717378c4b2b,PodSandboxId:267b67cd8e9aec3f447b68d71fc5eb8e141345fb7b842519ad433030f85b0e9f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714532979318228186,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-dpfrf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 00cc3b07-24df-4bef-ba3f-b94a8c0cee87,},Annotations:map[string]string{io.kubernetes.container.hash: b1a3b5af,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:622e78c84bf099cdfac90652360cd9687ba7c350f0987f85a311200a32222190,PodSandboxId:08243fbb491296ecab007610cf4ccf95ac72d53f773dc944788f0a6a73eaac26,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714532945820330367,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hl7zh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd0cbe33-025e-4a86-af98-8571c8f3340c,},Annotations:map[string]string{io.kubernetes.container.hash: 8129d9dd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebdfee87078676a4391b0ef7c57976ea25bca9367b33f20c56bdcb4233d1cd89,PodSandboxId:85780ac0c333442c21ced239eb561039c3b04a203f4434fa715d4f2d2a6e3731,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714532945747685214,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-pq89m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cb009de-6a0c-47b9-b6a9-5da24ed79f03,},Annotations:map[string]string{io.kubernetes.container.hash: 8ae1ce26,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58136eaaeacc9dbd72f4c4277026813eb767e299885032dbbb476301df4752f8,PodSandboxId:9c6ad18fe2eada8be65551384789eeb57f735b475bf67391bcb4783f7275d144,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714532945560331855,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2rmjj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d33bb084-3ce9-4fa9-8703-b6bec622abac,},Annotations:map[string]
string{io.kubernetes.container.hash: 28f6cbf6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b3c0442530e9983851afe3d169ed4a274795a2286ebfca3103f85f523883d22,PodSandboxId:8547df9332914e6c38cb8cab5d43db58589eaa99f005db71df08ab4bc6b7648e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714532945557883042,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71ce398a-00b1-4aca-87ba-78b64361ed9d,},Annotations:map[string]string{io.ku
bernetes.container.hash: a91656d3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8778d60a896a8cf1338ee26951ff0c6bd7cc9899d8164db111249b76cd20b5c1,PodSandboxId:40cd32c06dc51aee52d568510435bc404498ba920cab07ecacccea061a3da55f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714532941777552486,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-282238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f907d837c32ea71bc11fb00ea245331,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f48f5b734c7eaf5babad6a7bd38e9f26e8d2c8f3b507d0eec92fc34dce752934,PodSandboxId:db1198f918a56cbc9fb24d6ca0f44c0e8c5a872ba5be28700a0748d75b1a8fdd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714532941719029370,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-282238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29ab78d11237a7f5525934b54837aa37,},Annotations:map[string]string{io.kubernetes.container.hash: bb56aa0,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f13ad7828f3b9e47354e8b7246e161db5528f5a208d0a771ee742358bb8a80ac,PodSandboxId:dced73734ef8e274e7401316d6e87d73307602cdf12eb3eeb95170669709509e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714532941790217260,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-282238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9a1a37448d80a6171236c69ab0170a9,},Annotations:map[string]string{io.kubernetes.container.hash: c7e3da59,io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15cc964ddd2cbb7aeaf774665a85e7d70e1a039125b8a3ccb7187eae1b9acb1d,PodSandboxId:0a1604e1df4b5063f217fcd0922064b1ede7a7a7717952e80e80edcc53bfd012,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714532941720747981,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-282238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79340a67faa633be7e3979355e36a28d,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=26ffca74-dda7-48c0-aca6-f0603c2c78fa name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ca7d4a2291b80       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   267b67cd8e9ae       busybox-fc5497c4f-dpfrf
	622e78c84bf09       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      3 minutes ago       Running             kindnet-cni               1                   08243fbb49129       kindnet-hl7zh
	ebdfee8707867       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      3 minutes ago       Running             coredns                   1                   85780ac0c3334       coredns-7db6d8ff4d-pq89m
	58136eaaeacc9       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      3 minutes ago       Running             kube-proxy                1                   9c6ad18fe2ead       kube-proxy-2rmjj
	3b3c0442530e9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       1                   8547df9332914       storage-provisioner
	f13ad7828f3b9       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      3 minutes ago       Running             kube-apiserver            1                   dced73734ef8e       kube-apiserver-multinode-282238
	8778d60a896a8       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      3 minutes ago       Running             kube-scheduler            1                   40cd32c06dc51       kube-scheduler-multinode-282238
	15cc964ddd2cb       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      3 minutes ago       Running             kube-controller-manager   1                   0a1604e1df4b5       kube-controller-manager-multinode-282238
	f48f5b734c7ea       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      3 minutes ago       Running             etcd                      1                   db1198f918a56       etcd-multinode-282238
	601633c570119       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   8 minutes ago       Exited              busybox                   0                   d0b7f0f8a027c       busybox-fc5497c4f-dpfrf
	8d816c0bbdea7       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      9 minutes ago       Exited              coredns                   0                   d8211874c627f       coredns-7db6d8ff4d-pq89m
	bf5e18923ea34       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      9 minutes ago       Exited              storage-provisioner       0                   ccd4646808c1e       storage-provisioner
	fcab67c5c4901       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      10 minutes ago      Exited              kindnet-cni               0                   c9f27cd653d1a       kindnet-hl7zh
	be40d7b3a3ded       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      10 minutes ago      Exited              kube-proxy                0                   9e8fc27693581       kube-proxy-2rmjj
	0338a9652764e       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      10 minutes ago      Exited              etcd                      0                   4967e3688b635       etcd-multinode-282238
	15b3a41e9b9b6       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      10 minutes ago      Exited              kube-apiserver            0                   24e5dd5fe8240       kube-apiserver-multinode-282238
	0bbe01883646d       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      10 minutes ago      Exited              kube-scheduler            0                   f60182d3a6d76       kube-scheduler-multinode-282238
	648ac51c97cf0       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      10 minutes ago      Exited              kube-controller-manager   0                   252d7ef8f1bfd       kube-controller-manager-multinode-282238
	
	
	==> coredns [8d816c0bbdea7d46440e648d984d842c5ae02201b1d5cd0bd5201c544dfda5e0] <==
	[INFO] 10.244.1.2:59618 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001510725s
	[INFO] 10.244.1.2:60456 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000156562s
	[INFO] 10.244.1.2:36252 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000077922s
	[INFO] 10.244.1.2:47181 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001366745s
	[INFO] 10.244.1.2:37037 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000071647s
	[INFO] 10.244.1.2:36317 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000181442s
	[INFO] 10.244.1.2:38996 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000086856s
	[INFO] 10.244.0.3:37679 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000148366s
	[INFO] 10.244.0.3:53590 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000090081s
	[INFO] 10.244.0.3:39061 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000045676s
	[INFO] 10.244.0.3:51107 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000027055s
	[INFO] 10.244.1.2:39063 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000121533s
	[INFO] 10.244.1.2:46771 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000200225s
	[INFO] 10.244.1.2:41167 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000090026s
	[INFO] 10.244.1.2:33744 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000134405s
	[INFO] 10.244.0.3:54357 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000120434s
	[INFO] 10.244.0.3:37819 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000158211s
	[INFO] 10.244.0.3:53355 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000195347s
	[INFO] 10.244.0.3:59846 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000102906s
	[INFO] 10.244.1.2:48867 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000166885s
	[INFO] 10.244.1.2:33516 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000095903s
	[INFO] 10.244.1.2:33876 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000097965s
	[INFO] 10.244.1.2:51976 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000082889s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [ebdfee87078676a4391b0ef7c57976ea25bca9367b33f20c56bdcb4233d1cd89] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:44128 - 20402 "HINFO IN 8562177580602459877.2340631428550283688. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.022513189s
	
	
	==> describe nodes <==
	Name:               multinode-282238
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-282238
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e
	                    minikube.k8s.io/name=multinode-282238
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_01T03_02_23_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 May 2024 03:02:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-282238
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 May 2024 03:12:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 May 2024 03:09:04 +0000   Wed, 01 May 2024 03:02:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 May 2024 03:09:04 +0000   Wed, 01 May 2024 03:02:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 May 2024 03:09:04 +0000   Wed, 01 May 2024 03:02:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 May 2024 03:09:04 +0000   Wed, 01 May 2024 03:03:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.139
	  Hostname:    multinode-282238
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ac8b8e4c2ce042738c18c8a843898f22
	  System UUID:                ac8b8e4c-2ce0-4273-8c18-c8a843898f22
	  Boot ID:                    8ab7d952-245f-482d-8568-788991e02aaa
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-dpfrf                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m2s
	  kube-system                 coredns-7db6d8ff4d-pq89m                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-multinode-282238                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-hl7zh                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-multinode-282238             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-multinode-282238    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-2rmjj                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-multinode-282238             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 10m                    kube-proxy       
	  Normal  Starting                 3m49s                  kube-proxy       
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)      kubelet          Node multinode-282238 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)      kubelet          Node multinode-282238 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)      kubelet          Node multinode-282238 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    10m                    kubelet          Node multinode-282238 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m                    kubelet          Node multinode-282238 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  10m                    kubelet          Node multinode-282238 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           10m                    node-controller  Node multinode-282238 event: Registered Node multinode-282238 in Controller
	  Normal  NodeReady                9m48s                  kubelet          Node multinode-282238 status is now: NodeReady
	  Normal  Starting                 3m54s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m54s (x8 over 3m54s)  kubelet          Node multinode-282238 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m54s (x8 over 3m54s)  kubelet          Node multinode-282238 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m54s (x7 over 3m54s)  kubelet          Node multinode-282238 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m54s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m38s                  node-controller  Node multinode-282238 event: Registered Node multinode-282238 in Controller
	
	
	Name:               multinode-282238-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-282238-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e
	                    minikube.k8s.io/name=multinode-282238
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_01T03_09_47_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 May 2024 03:09:47 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-282238-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 May 2024 03:10:28 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 01 May 2024 03:10:18 +0000   Wed, 01 May 2024 03:11:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 01 May 2024 03:10:18 +0000   Wed, 01 May 2024 03:11:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 01 May 2024 03:10:18 +0000   Wed, 01 May 2024 03:11:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 01 May 2024 03:10:18 +0000   Wed, 01 May 2024 03:11:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.29
	  Hostname:    multinode-282238-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f5a400bf04a54c3c826bef4e8e41d9b6
	  System UUID:                f5a400bf-04a5-4c3c-826b-ef4e8e41d9b6
	  Boot ID:                    6d2f4221-f3cd-4281-b31f-9ca638e646c8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-j8jhq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m13s
	  kube-system                 kindnet-rxg49              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m15s
	  kube-system                 kube-proxy-66kjs           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m3s                   kube-proxy       
	  Normal  Starting                 9m9s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  9m15s (x2 over 9m15s)  kubelet          Node multinode-282238-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m15s (x2 over 9m15s)  kubelet          Node multinode-282238-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m15s (x2 over 9m15s)  kubelet          Node multinode-282238-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m15s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m5s                   kubelet          Node multinode-282238-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m8s (x2 over 3m8s)    kubelet          Node multinode-282238-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m8s (x2 over 3m8s)    kubelet          Node multinode-282238-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m8s (x2 over 3m8s)    kubelet          Node multinode-282238-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m8s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m59s                  kubelet          Node multinode-282238-m02 status is now: NodeReady
	  Normal  NodeNotReady             103s                   node-controller  Node multinode-282238-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.072848] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.201477] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.140585] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.321892] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +4.611878] systemd-fstab-generator[764]: Ignoring "noauto" option for root device
	[  +0.067544] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.131398] systemd-fstab-generator[947]: Ignoring "noauto" option for root device
	[  +1.079333] kauditd_printk_skb: 57 callbacks suppressed
	[  +5.467486] systemd-fstab-generator[1280]: Ignoring "noauto" option for root device
	[  +0.094157] kauditd_printk_skb: 30 callbacks suppressed
	[ +13.724442] kauditd_printk_skb: 21 callbacks suppressed
	[  +0.058462] systemd-fstab-generator[1517]: Ignoring "noauto" option for root device
	[May 1 03:03] kauditd_printk_skb: 60 callbacks suppressed
	[ +45.076888] kauditd_printk_skb: 14 callbacks suppressed
	[May 1 03:08] systemd-fstab-generator[2769]: Ignoring "noauto" option for root device
	[  +0.147830] systemd-fstab-generator[2782]: Ignoring "noauto" option for root device
	[  +0.175865] systemd-fstab-generator[2796]: Ignoring "noauto" option for root device
	[  +0.151955] systemd-fstab-generator[2808]: Ignoring "noauto" option for root device
	[  +0.297260] systemd-fstab-generator[2836]: Ignoring "noauto" option for root device
	[  +0.795652] systemd-fstab-generator[2932]: Ignoring "noauto" option for root device
	[May 1 03:09] systemd-fstab-generator[3057]: Ignoring "noauto" option for root device
	[  +4.617919] kauditd_printk_skb: 184 callbacks suppressed
	[ +11.908207] kauditd_printk_skb: 32 callbacks suppressed
	[  +4.232000] systemd-fstab-generator[3894]: Ignoring "noauto" option for root device
	[ +17.670652] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [0338a9652764e389d91bc7c406553a4271aaf3b47de0bef43e26752ddb86033f] <==
	{"level":"info","ts":"2024-05-01T03:02:18.108243Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-01T03:02:18.108506Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"warn","ts":"2024-05-01T03:03:40.358763Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"169.078835ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15793436913611130661 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:3660-second id:5b2d8f321a554724>","response":"size:40"}
	{"level":"info","ts":"2024-05-01T03:03:40.359118Z","caller":"traceutil/trace.go:171","msg":"trace[356406267] transaction","detail":"{read_only:false; response_revision:486; number_of_response:1; }","duration":"174.873603ms","start":"2024-05-01T03:03:40.184228Z","end":"2024-05-01T03:03:40.359102Z","steps":["trace[356406267] 'process raft request'  (duration: 174.785494ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-01T03:03:40.359288Z","caller":"traceutil/trace.go:171","msg":"trace[315325814] linearizableReadLoop","detail":"{readStateIndex:511; appliedIndex:510; }","duration":"223.249847ms","start":"2024-05-01T03:03:40.136027Z","end":"2024-05-01T03:03:40.359277Z","steps":["trace[315325814] 'read index received'  (duration: 53.539797ms)","trace[315325814] 'applied index is now lower than readState.Index'  (duration: 169.709134ms)"],"step_count":2}
	{"level":"warn","ts":"2024-05-01T03:03:40.359566Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"171.407877ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-282238-m02\" ","response":"range_response_count:1 size:1925"}
	{"level":"info","ts":"2024-05-01T03:03:40.359644Z","caller":"traceutil/trace.go:171","msg":"trace[65675065] range","detail":"{range_begin:/registry/minions/multinode-282238-m02; range_end:; response_count:1; response_revision:486; }","duration":"171.49882ms","start":"2024-05-01T03:03:40.188134Z","end":"2024-05-01T03:03:40.359633Z","steps":["trace[65675065] 'agreement among raft nodes before linearized reading'  (duration: 171.383754ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-01T03:03:40.359565Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"223.522226ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-05-01T03:03:40.359812Z","caller":"traceutil/trace.go:171","msg":"trace[1585261514] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:486; }","duration":"223.797988ms","start":"2024-05-01T03:03:40.136004Z","end":"2024-05-01T03:03:40.359802Z","steps":["trace[1585261514] 'agreement among raft nodes before linearized reading'  (duration: 223.381212ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-01T03:04:27.133348Z","caller":"traceutil/trace.go:171","msg":"trace[85424170] transaction","detail":"{read_only:false; response_revision:615; number_of_response:1; }","duration":"248.119873ms","start":"2024-05-01T03:04:26.88518Z","end":"2024-05-01T03:04:27.1333Z","steps":["trace[85424170] 'process raft request'  (duration: 208.513234ms)","trace[85424170] 'compare'  (duration: 39.517932ms)"],"step_count":2}
	{"level":"info","ts":"2024-05-01T03:04:27.140519Z","caller":"traceutil/trace.go:171","msg":"trace[373897446] transaction","detail":"{read_only:false; response_revision:616; number_of_response:1; }","duration":"208.475179ms","start":"2024-05-01T03:04:26.932031Z","end":"2024-05-01T03:04:27.140506Z","steps":["trace[373897446] 'process raft request'  (duration: 207.678956ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-01T03:04:30.529679Z","caller":"traceutil/trace.go:171","msg":"trace[52786717] linearizableReadLoop","detail":"{readStateIndex:696; appliedIndex:695; }","duration":"209.792007ms","start":"2024-05-01T03:04:30.319868Z","end":"2024-05-01T03:04:30.52966Z","steps":["trace[52786717] 'read index received'  (duration: 147.884295ms)","trace[52786717] 'applied index is now lower than readState.Index'  (duration: 61.907017ms)"],"step_count":2}
	{"level":"warn","ts":"2024-05-01T03:04:30.53001Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"210.052329ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-282238-m03\" ","response":"range_response_count:1 size:3229"}
	{"level":"info","ts":"2024-05-01T03:04:30.53018Z","caller":"traceutil/trace.go:171","msg":"trace[2146336304] range","detail":"{range_begin:/registry/minions/multinode-282238-m03; range_end:; response_count:1; response_revision:649; }","duration":"210.323107ms","start":"2024-05-01T03:04:30.319835Z","end":"2024-05-01T03:04:30.530158Z","steps":["trace[2146336304] 'agreement among raft nodes before linearized reading'  (duration: 209.975113ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-01T03:04:30.530003Z","caller":"traceutil/trace.go:171","msg":"trace[282043103] transaction","detail":"{read_only:false; response_revision:649; number_of_response:1; }","duration":"255.241718ms","start":"2024-05-01T03:04:30.274743Z","end":"2024-05-01T03:04:30.529985Z","steps":["trace[282043103] 'process raft request'  (duration: 193.059812ms)","trace[282043103] 'compare'  (duration: 61.745625ms)"],"step_count":2}
	{"level":"info","ts":"2024-05-01T03:07:26.037635Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-05-01T03:07:26.037764Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-282238","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.139:2380"],"advertise-client-urls":["https://192.168.39.139:2379"]}
	{"level":"warn","ts":"2024-05-01T03:07:26.037921Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-01T03:07:26.038006Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-01T03:07:26.115154Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.139:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-01T03:07:26.115218Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.139:2379: use of closed network connection"}
	{"level":"info","ts":"2024-05-01T03:07:26.115313Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"3cbdd43a8949db2d","current-leader-member-id":"3cbdd43a8949db2d"}
	{"level":"info","ts":"2024-05-01T03:07:26.118018Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.139:2380"}
	{"level":"info","ts":"2024-05-01T03:07:26.118133Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.139:2380"}
	{"level":"info","ts":"2024-05-01T03:07:26.118142Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-282238","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.139:2380"],"advertise-client-urls":["https://192.168.39.139:2379"]}
	
	
	==> etcd [f48f5b734c7eaf5babad6a7bd38e9f26e8d2c8f3b507d0eec92fc34dce752934] <==
	{"level":"info","ts":"2024-05-01T03:09:02.231691Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3cbdd43a8949db2d switched to configuration voters=(4376887760750500653)"}
	{"level":"info","ts":"2024-05-01T03:09:02.232022Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"4af51893258ecb17","local-member-id":"3cbdd43a8949db2d","added-peer-id":"3cbdd43a8949db2d","added-peer-peer-urls":["https://192.168.39.139:2380"]}
	{"level":"info","ts":"2024-05-01T03:09:02.233558Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"4af51893258ecb17","local-member-id":"3cbdd43a8949db2d","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-01T03:09:02.23362Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-01T03:09:02.240663Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-05-01T03:09:02.240934Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"3cbdd43a8949db2d","initial-advertise-peer-urls":["https://192.168.39.139:2380"],"listen-peer-urls":["https://192.168.39.139:2380"],"advertise-client-urls":["https://192.168.39.139:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.139:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-05-01T03:09:02.242591Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-05-01T03:09:02.243618Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.139:2380"}
	{"level":"info","ts":"2024-05-01T03:09:02.251297Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.139:2380"}
	{"level":"info","ts":"2024-05-01T03:09:03.377504Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3cbdd43a8949db2d is starting a new election at term 2"}
	{"level":"info","ts":"2024-05-01T03:09:03.377577Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3cbdd43a8949db2d became pre-candidate at term 2"}
	{"level":"info","ts":"2024-05-01T03:09:03.37761Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3cbdd43a8949db2d received MsgPreVoteResp from 3cbdd43a8949db2d at term 2"}
	{"level":"info","ts":"2024-05-01T03:09:03.377623Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3cbdd43a8949db2d became candidate at term 3"}
	{"level":"info","ts":"2024-05-01T03:09:03.37764Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3cbdd43a8949db2d received MsgVoteResp from 3cbdd43a8949db2d at term 3"}
	{"level":"info","ts":"2024-05-01T03:09:03.377649Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3cbdd43a8949db2d became leader at term 3"}
	{"level":"info","ts":"2024-05-01T03:09:03.377659Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 3cbdd43a8949db2d elected leader 3cbdd43a8949db2d at term 3"}
	{"level":"info","ts":"2024-05-01T03:09:03.386632Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"3cbdd43a8949db2d","local-member-attributes":"{Name:multinode-282238 ClientURLs:[https://192.168.39.139:2379]}","request-path":"/0/members/3cbdd43a8949db2d/attributes","cluster-id":"4af51893258ecb17","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-01T03:09:03.386692Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-01T03:09:03.387131Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-01T03:09:03.39082Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-05-01T03:09:03.397899Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.139:2379"}
	{"level":"info","ts":"2024-05-01T03:09:03.397999Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-01T03:09:03.398034Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-01T03:10:22.538993Z","caller":"traceutil/trace.go:171","msg":"trace[1615329494] transaction","detail":"{read_only:false; response_revision:1147; number_of_response:1; }","duration":"159.125983ms","start":"2024-05-01T03:10:22.379826Z","end":"2024-05-01T03:10:22.538952Z","steps":["trace[1615329494] 'process raft request'  (duration: 158.998728ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-01T03:10:22.576394Z","caller":"traceutil/trace.go:171","msg":"trace[1303630415] transaction","detail":"{read_only:false; response_revision:1148; number_of_response:1; }","duration":"170.713237ms","start":"2024-05-01T03:10:22.405658Z","end":"2024-05-01T03:10:22.576371Z","steps":["trace[1303630415] 'process raft request'  (duration: 169.475774ms)"],"step_count":1}
	
	
	==> kernel <==
	 03:12:55 up 11 min,  0 users,  load average: 0.43, 0.24, 0.12
	Linux multinode-282238 5.10.207 #1 SMP Tue Apr 30 22:38:43 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [622e78c84bf099cdfac90652360cd9687ba7c350f0987f85a311200a32222190] <==
	I0501 03:11:46.930141       1 main.go:250] Node multinode-282238-m02 has CIDR [10.244.1.0/24] 
	I0501 03:11:56.940099       1 main.go:223] Handling node with IPs: map[192.168.39.139:{}]
	I0501 03:11:56.940179       1 main.go:227] handling current node
	I0501 03:11:56.940201       1 main.go:223] Handling node with IPs: map[192.168.39.29:{}]
	I0501 03:11:56.940220       1 main.go:250] Node multinode-282238-m02 has CIDR [10.244.1.0/24] 
	I0501 03:12:06.962815       1 main.go:223] Handling node with IPs: map[192.168.39.139:{}]
	I0501 03:12:06.962864       1 main.go:227] handling current node
	I0501 03:12:06.962875       1 main.go:223] Handling node with IPs: map[192.168.39.29:{}]
	I0501 03:12:06.962881       1 main.go:250] Node multinode-282238-m02 has CIDR [10.244.1.0/24] 
	I0501 03:12:16.971072       1 main.go:223] Handling node with IPs: map[192.168.39.139:{}]
	I0501 03:12:16.971335       1 main.go:227] handling current node
	I0501 03:12:16.971378       1 main.go:223] Handling node with IPs: map[192.168.39.29:{}]
	I0501 03:12:16.971396       1 main.go:250] Node multinode-282238-m02 has CIDR [10.244.1.0/24] 
	I0501 03:12:26.983122       1 main.go:223] Handling node with IPs: map[192.168.39.139:{}]
	I0501 03:12:26.983178       1 main.go:227] handling current node
	I0501 03:12:26.983188       1 main.go:223] Handling node with IPs: map[192.168.39.29:{}]
	I0501 03:12:26.983199       1 main.go:250] Node multinode-282238-m02 has CIDR [10.244.1.0/24] 
	I0501 03:12:36.998569       1 main.go:223] Handling node with IPs: map[192.168.39.139:{}]
	I0501 03:12:36.998677       1 main.go:227] handling current node
	I0501 03:12:36.998709       1 main.go:223] Handling node with IPs: map[192.168.39.29:{}]
	I0501 03:12:36.998736       1 main.go:250] Node multinode-282238-m02 has CIDR [10.244.1.0/24] 
	I0501 03:12:47.012537       1 main.go:223] Handling node with IPs: map[192.168.39.139:{}]
	I0501 03:12:47.012703       1 main.go:227] handling current node
	I0501 03:12:47.012737       1 main.go:223] Handling node with IPs: map[192.168.39.29:{}]
	I0501 03:12:47.012756       1 main.go:250] Node multinode-282238-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [fcab67c5c49010d68dd607692c8a87af7ad0be31da8b639dd24377f122a4e4d2] <==
	I0501 03:06:37.721882       1 main.go:250] Node multinode-282238-m03 has CIDR [10.244.3.0/24] 
	I0501 03:06:47.731165       1 main.go:223] Handling node with IPs: map[192.168.39.139:{}]
	I0501 03:06:47.731246       1 main.go:227] handling current node
	I0501 03:06:47.731268       1 main.go:223] Handling node with IPs: map[192.168.39.29:{}]
	I0501 03:06:47.731291       1 main.go:250] Node multinode-282238-m02 has CIDR [10.244.1.0/24] 
	I0501 03:06:47.731510       1 main.go:223] Handling node with IPs: map[192.168.39.220:{}]
	I0501 03:06:47.731552       1 main.go:250] Node multinode-282238-m03 has CIDR [10.244.3.0/24] 
	I0501 03:06:57.745376       1 main.go:223] Handling node with IPs: map[192.168.39.139:{}]
	I0501 03:06:57.745507       1 main.go:227] handling current node
	I0501 03:06:57.745519       1 main.go:223] Handling node with IPs: map[192.168.39.29:{}]
	I0501 03:06:57.745525       1 main.go:250] Node multinode-282238-m02 has CIDR [10.244.1.0/24] 
	I0501 03:06:57.745976       1 main.go:223] Handling node with IPs: map[192.168.39.220:{}]
	I0501 03:06:57.745988       1 main.go:250] Node multinode-282238-m03 has CIDR [10.244.3.0/24] 
	I0501 03:07:07.759493       1 main.go:223] Handling node with IPs: map[192.168.39.139:{}]
	I0501 03:07:07.759705       1 main.go:227] handling current node
	I0501 03:07:07.759755       1 main.go:223] Handling node with IPs: map[192.168.39.29:{}]
	I0501 03:07:07.759779       1 main.go:250] Node multinode-282238-m02 has CIDR [10.244.1.0/24] 
	I0501 03:07:07.759917       1 main.go:223] Handling node with IPs: map[192.168.39.220:{}]
	I0501 03:07:07.759945       1 main.go:250] Node multinode-282238-m03 has CIDR [10.244.3.0/24] 
	I0501 03:07:17.765694       1 main.go:223] Handling node with IPs: map[192.168.39.139:{}]
	I0501 03:07:17.765745       1 main.go:227] handling current node
	I0501 03:07:17.765756       1 main.go:223] Handling node with IPs: map[192.168.39.29:{}]
	I0501 03:07:17.765762       1 main.go:250] Node multinode-282238-m02 has CIDR [10.244.1.0/24] 
	I0501 03:07:17.765865       1 main.go:223] Handling node with IPs: map[192.168.39.220:{}]
	I0501 03:07:17.765962       1 main.go:250] Node multinode-282238-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [15b3a41e9b9b60d6c65946591a4b9d001a896ae747addb52cb7f2d0945f41fb6] <==
	W0501 03:07:26.064528       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0501 03:07:26.064811       1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0501 03:07:26.065128       1 logging.go:59] [core] [Channel #166 SubChannel #167] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0501 03:07:26.065214       1 logging.go:59] [core] [Channel #73 SubChannel #74] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0501 03:07:26.065270       1 logging.go:59] [core] [Channel #163 SubChannel #164] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0501 03:07:26.065320       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0501 03:07:26.065371       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0501 03:07:26.065526       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0501 03:07:26.065589       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0501 03:07:26.065642       1 logging.go:59] [core] [Channel #82 SubChannel #83] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0501 03:07:26.065913       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0501 03:07:26.067239       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0501 03:07:26.069591       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0501 03:07:26.069745       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0501 03:07:26.069845       1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0501 03:07:26.070053       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0501 03:07:26.070128       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0501 03:07:26.070186       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0501 03:07:26.070239       1 logging.go:59] [core] [Channel #79 SubChannel #80] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0501 03:07:26.070302       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0501 03:07:26.070392       1 logging.go:59] [core] [Channel #55 SubChannel #56] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0501 03:07:26.070662       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0501 03:07:26.070704       1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0501 03:07:26.071865       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0501 03:07:26.071961       1 logging.go:59] [core] [Channel #91 SubChannel #92] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [f13ad7828f3b9e47354e8b7246e161db5528f5a208d0a771ee742358bb8a80ac] <==
	I0501 03:09:04.814606       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0501 03:09:04.815268       1 aggregator.go:165] initial CRD sync complete...
	I0501 03:09:04.815305       1 autoregister_controller.go:141] Starting autoregister controller
	I0501 03:09:04.815312       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0501 03:09:04.875083       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0501 03:09:04.876153       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0501 03:09:04.877357       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0501 03:09:04.877554       1 shared_informer.go:320] Caches are synced for configmaps
	I0501 03:09:04.877685       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0501 03:09:04.879207       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0501 03:09:04.884184       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	E0501 03:09:04.886930       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0501 03:09:04.898309       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0501 03:09:04.898495       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0501 03:09:04.898534       1 policy_source.go:224] refreshing policies
	I0501 03:09:04.921807       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0501 03:09:04.924730       1 cache.go:39] Caches are synced for autoregister controller
	I0501 03:09:05.796633       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0501 03:09:07.268259       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0501 03:09:07.405566       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0501 03:09:07.421362       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0501 03:09:07.487115       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0501 03:09:07.493280       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0501 03:09:17.319984       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0501 03:09:17.322099       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [15cc964ddd2cbb7aeaf774665a85e7d70e1a039125b8a3ccb7187eae1b9acb1d] <==
	I0501 03:09:47.266865       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-282238-m02\" does not exist"
	I0501 03:09:47.277257       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-282238-m02" podCIDRs=["10.244.1.0/24"]
	I0501 03:09:49.153775       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.693µs"
	I0501 03:09:49.196949       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.145µs"
	I0501 03:09:49.208338       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="62.636µs"
	I0501 03:09:49.229079       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="73.351µs"
	I0501 03:09:49.238321       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="64.598µs"
	I0501 03:09:49.242992       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="101.173µs"
	I0501 03:09:56.412800       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-282238-m02"
	I0501 03:09:56.430876       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.956µs"
	I0501 03:09:56.448246       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.368µs"
	I0501 03:10:00.673596       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.611674ms"
	I0501 03:10:00.673704       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.505µs"
	I0501 03:10:17.101031       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-282238-m02"
	I0501 03:10:18.271127       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-282238-m03\" does not exist"
	I0501 03:10:18.271701       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-282238-m02"
	I0501 03:10:18.281031       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-282238-m03" podCIDRs=["10.244.2.0/24"]
	I0501 03:10:27.816170       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-282238-m02"
	I0501 03:10:33.646991       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-282238-m02"
	I0501 03:10:57.355083       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-lwglr"
	I0501 03:10:57.384718       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-lwglr"
	I0501 03:10:57.384796       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-z96xb"
	I0501 03:10:57.410063       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-z96xb"
	I0501 03:11:12.444579       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.966264ms"
	I0501 03:11:12.445201       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="98.18µs"
	
	
	==> kube-controller-manager [648ac51c97cf05ff6096b7f920d602e441f2909db28005c9395bbed15cf2716e] <==
	I0501 03:03:40.363271       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-282238-m02\" does not exist"
	I0501 03:03:40.381121       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-282238-m02" podCIDRs=["10.244.1.0/24"]
	I0501 03:03:45.162814       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-282238-m02"
	I0501 03:03:50.462851       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-282238-m02"
	I0501 03:03:53.049818       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="56.675509ms"
	I0501 03:03:53.089808       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.895739ms"
	I0501 03:03:53.118718       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="28.778738ms"
	I0501 03:03:53.118848       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="61.562µs"
	I0501 03:03:56.227545       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="5.329766ms"
	I0501 03:03:56.227755       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="82.954µs"
	I0501 03:03:56.378750       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="4.961427ms"
	I0501 03:03:56.378841       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.415µs"
	I0501 03:04:27.144571       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-282238-m02"
	I0501 03:04:27.143395       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-282238-m03\" does not exist"
	I0501 03:04:27.177748       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-282238-m03" podCIDRs=["10.244.2.0/24"]
	I0501 03:04:30.179200       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-282238-m03"
	I0501 03:04:37.435214       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-282238-m03"
	I0501 03:05:08.634653       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-282238-m02"
	I0501 03:05:10.085252       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-282238-m03\" does not exist"
	I0501 03:05:10.085369       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-282238-m02"
	I0501 03:05:10.095757       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-282238-m03" podCIDRs=["10.244.3.0/24"]
	I0501 03:05:19.331864       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-282238-m02"
	I0501 03:06:05.230059       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-282238-m03"
	I0501 03:06:05.289526       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.704875ms"
	I0501 03:06:05.289959       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="24.996µs"
	
	
	==> kube-proxy [58136eaaeacc9dbd72f4c4277026813eb767e299885032dbbb476301df4752f8] <==
	I0501 03:09:05.859123       1 server_linux.go:69] "Using iptables proxy"
	I0501 03:09:05.883148       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.139"]
	I0501 03:09:05.965512       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0501 03:09:05.965578       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0501 03:09:05.965595       1 server_linux.go:165] "Using iptables Proxier"
	I0501 03:09:05.983662       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0501 03:09:05.984099       1 server.go:872] "Version info" version="v1.30.0"
	I0501 03:09:05.984192       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 03:09:05.986388       1 config.go:192] "Starting service config controller"
	I0501 03:09:05.987667       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0501 03:09:05.987822       1 config.go:101] "Starting endpoint slice config controller"
	I0501 03:09:05.993528       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0501 03:09:05.993271       1 config.go:319] "Starting node config controller"
	I0501 03:09:05.993977       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0501 03:09:06.088722       1 shared_informer.go:320] Caches are synced for service config
	I0501 03:09:06.094528       1 shared_informer.go:320] Caches are synced for node config
	I0501 03:09:06.094812       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [be40d7b3a3ded9d9e8420ed6594c65c831f2543e4a62eaa9eff0e1d4b5922c1e] <==
	I0501 03:02:37.007610       1 server_linux.go:69] "Using iptables proxy"
	I0501 03:02:37.031243       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.139"]
	I0501 03:02:37.311584       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0501 03:02:37.311628       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0501 03:02:37.311740       1 server_linux.go:165] "Using iptables Proxier"
	I0501 03:02:37.351754       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0501 03:02:37.353329       1 server.go:872] "Version info" version="v1.30.0"
	I0501 03:02:37.353372       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 03:02:37.356201       1 config.go:192] "Starting service config controller"
	I0501 03:02:37.356308       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0501 03:02:37.356339       1 config.go:101] "Starting endpoint slice config controller"
	I0501 03:02:37.356343       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0501 03:02:37.357733       1 config.go:319] "Starting node config controller"
	I0501 03:02:37.357743       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0501 03:02:37.457382       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0501 03:02:37.457521       1 shared_informer.go:320] Caches are synced for service config
	I0501 03:02:37.460715       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [0bbe01883646dd171b19f4e453c14175e649929473ee82dae03eb7c7bce9b04c] <==
	E0501 03:02:19.911591       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0501 03:02:19.911314       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0501 03:02:19.911653       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0501 03:02:20.719629       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0501 03:02:20.719688       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0501 03:02:20.739056       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0501 03:02:20.739114       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0501 03:02:20.790877       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0501 03:02:20.790991       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0501 03:02:20.936066       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0501 03:02:20.936213       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0501 03:02:21.018286       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0501 03:02:21.018463       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0501 03:02:21.026991       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0501 03:02:21.027253       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0501 03:02:21.124675       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0501 03:02:21.124811       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0501 03:02:21.156207       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0501 03:02:21.156877       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0501 03:02:21.205544       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0501 03:02:21.205665       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0501 03:02:21.239614       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0501 03:02:21.239706       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0501 03:02:23.704341       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0501 03:07:26.047759       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [8778d60a896a8cf1338ee26951ff0c6bd7cc9899d8164db111249b76cd20b5c1] <==
	I0501 03:09:03.057377       1 serving.go:380] Generated self-signed cert in-memory
	W0501 03:09:04.835045       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0501 03:09:04.835223       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0501 03:09:04.835269       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0501 03:09:04.835293       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0501 03:09:04.850986       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0501 03:09:04.851131       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 03:09:04.854966       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0501 03:09:04.855026       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0501 03:09:04.855318       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0501 03:09:04.855450       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0501 03:09:04.956171       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 01 03:09:05 multinode-282238 kubelet[3064]: I0501 03:09:05.033950    3064 topology_manager.go:215] "Topology Admit Handler" podUID="71ce398a-00b1-4aca-87ba-78b64361ed9d" podNamespace="kube-system" podName="storage-provisioner"
	May 01 03:09:05 multinode-282238 kubelet[3064]: I0501 03:09:05.034865    3064 topology_manager.go:215] "Topology Admit Handler" podUID="00cc3b07-24df-4bef-ba3f-b94a8c0cee87" podNamespace="default" podName="busybox-fc5497c4f-dpfrf"
	May 01 03:09:05 multinode-282238 kubelet[3064]: I0501 03:09:05.043775    3064 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	May 01 03:09:05 multinode-282238 kubelet[3064]: I0501 03:09:05.130859    3064 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/71ce398a-00b1-4aca-87ba-78b64361ed9d-tmp\") pod \"storage-provisioner\" (UID: \"71ce398a-00b1-4aca-87ba-78b64361ed9d\") " pod="kube-system/storage-provisioner"
	May 01 03:09:05 multinode-282238 kubelet[3064]: I0501 03:09:05.131849    3064 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/fd0cbe33-025e-4a86-af98-8571c8f3340c-cni-cfg\") pod \"kindnet-hl7zh\" (UID: \"fd0cbe33-025e-4a86-af98-8571c8f3340c\") " pod="kube-system/kindnet-hl7zh"
	May 01 03:09:05 multinode-282238 kubelet[3064]: I0501 03:09:05.131916    3064 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fd0cbe33-025e-4a86-af98-8571c8f3340c-lib-modules\") pod \"kindnet-hl7zh\" (UID: \"fd0cbe33-025e-4a86-af98-8571c8f3340c\") " pod="kube-system/kindnet-hl7zh"
	May 01 03:09:05 multinode-282238 kubelet[3064]: I0501 03:09:05.131936    3064 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d33bb084-3ce9-4fa9-8703-b6bec622abac-xtables-lock\") pod \"kube-proxy-2rmjj\" (UID: \"d33bb084-3ce9-4fa9-8703-b6bec622abac\") " pod="kube-system/kube-proxy-2rmjj"
	May 01 03:09:05 multinode-282238 kubelet[3064]: I0501 03:09:05.131950    3064 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d33bb084-3ce9-4fa9-8703-b6bec622abac-lib-modules\") pod \"kube-proxy-2rmjj\" (UID: \"d33bb084-3ce9-4fa9-8703-b6bec622abac\") " pod="kube-system/kube-proxy-2rmjj"
	May 01 03:09:05 multinode-282238 kubelet[3064]: I0501 03:09:05.131974    3064 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fd0cbe33-025e-4a86-af98-8571c8f3340c-xtables-lock\") pod \"kindnet-hl7zh\" (UID: \"fd0cbe33-025e-4a86-af98-8571c8f3340c\") " pod="kube-system/kindnet-hl7zh"
	May 01 03:09:13 multinode-282238 kubelet[3064]: I0501 03:09:13.732317    3064 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	May 01 03:10:01 multinode-282238 kubelet[3064]: E0501 03:10:01.113212    3064 iptables.go:577] "Could not set up iptables canary" err=<
	May 01 03:10:01 multinode-282238 kubelet[3064]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 01 03:10:01 multinode-282238 kubelet[3064]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 01 03:10:01 multinode-282238 kubelet[3064]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 01 03:10:01 multinode-282238 kubelet[3064]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 01 03:11:01 multinode-282238 kubelet[3064]: E0501 03:11:01.112607    3064 iptables.go:577] "Could not set up iptables canary" err=<
	May 01 03:11:01 multinode-282238 kubelet[3064]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 01 03:11:01 multinode-282238 kubelet[3064]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 01 03:11:01 multinode-282238 kubelet[3064]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 01 03:11:01 multinode-282238 kubelet[3064]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 01 03:12:01 multinode-282238 kubelet[3064]: E0501 03:12:01.113652    3064 iptables.go:577] "Could not set up iptables canary" err=<
	May 01 03:12:01 multinode-282238 kubelet[3064]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 01 03:12:01 multinode-282238 kubelet[3064]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 01 03:12:01 multinode-282238 kubelet[3064]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 01 03:12:01 multinode-282238 kubelet[3064]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0501 03:12:54.597576   53270 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18779-13391/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
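The "bufio.Scanner: token too long" error in the stderr block above is Go's standard scanner hitting its default 64 KiB per-line cap (bufio.MaxScanTokenSize) while reading lastStart.txt, whose lines are evidently longer. A minimal sketch of the usual workaround, an enlarged scanner buffer (editor's illustration, not minikube's code; the bare filename stands in for the logged path):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	f, err := os.Open("lastStart.txt") // the file named in the error above
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// By default a Scanner refuses lines longer than bufio.MaxScanTokenSize
	// (64 KiB) and reports bufio.ErrTooLong ("token too long").
	// Raising the cap (here to 10 MiB) avoids that failure mode.
	sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
	for sc.Scan() {
		fmt.Println(sc.Text())
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "scan error:", err)
	}
}
```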
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-282238 -n multinode-282238
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-282238 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.48s)

                                                
                                    
TestPreload (266.08s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-872415 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-872415 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m58.921283552s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-872415 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-872415 image pull gcr.io/k8s-minikube/busybox: (3.012365234s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-872415
E0501 03:19:56.198505   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/functional-960026/client.crt: no such file or directory
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-872415: (7.596771999s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-872415 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-872415 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m13.34363165s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-872415 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
panic.go:626: *** TestPreload FAILED at 2024-05-01 03:21:10.399101057 +0000 UTC m=+4445.505017358
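The failing sequence above reduces to: start a v1.24.4 cluster with --preload=false, pull an image that is not in the preload tarball, stop, restart with preload enabled, and expect the pulled image to still be listed. A hedged reproduction sketch that drives the same minikube commands logged above via os/exec (editor's illustration, not the test's code; the profile name is made up, the flags are copied from the log):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// run invokes the minikube binary used throughout this report with the given args.
func run(args ...string) (string, error) {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	return string(out), err
}

func main() {
	const profile = "test-preload-repro" // hypothetical profile name

	steps := [][]string{
		{"start", "-p", profile, "--memory=2200", "--preload=false",
			"--driver=kvm2", "--container-runtime=crio", "--kubernetes-version=v1.24.4"},
		{"-p", profile, "image", "pull", "gcr.io/k8s-minikube/busybox"},
		{"stop", "-p", profile},
		{"start", "-p", profile, "--memory=2200", "--wait=true",
			"--driver=kvm2", "--container-runtime=crio"},
	}
	for _, s := range steps {
		if out, err := run(s...); err != nil {
			fmt.Fprintf(os.Stderr, "step %v failed: %v\n%s", s, err, out)
			os.Exit(1)
		}
	}

	// The assertion that fails in the report: the image pulled before the
	// preload-enabled restart should still appear in `image list`.
	out, err := run("-p", profile, "image", "list")
	if err != nil {
		fmt.Fprintln(os.Stderr, "image list failed:", err)
		os.Exit(1)
	}
	if !strings.Contains(out, "gcr.io/k8s-minikube/busybox") {
		fmt.Println("FAIL: gcr.io/k8s-minikube/busybox not in image list")
	}
}
```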
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-872415 -n test-preload-872415
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-872415 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-872415 logs -n 25: (1.155509123s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-282238 ssh -n                                                                 | multinode-282238     | jenkins | v1.33.0 | 01 May 24 03:04 UTC | 01 May 24 03:04 UTC |
	|         | multinode-282238-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-282238 ssh -n multinode-282238 sudo cat                                       | multinode-282238     | jenkins | v1.33.0 | 01 May 24 03:04 UTC | 01 May 24 03:04 UTC |
	|         | /home/docker/cp-test_multinode-282238-m03_multinode-282238.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-282238 cp multinode-282238-m03:/home/docker/cp-test.txt                       | multinode-282238     | jenkins | v1.33.0 | 01 May 24 03:04 UTC | 01 May 24 03:04 UTC |
	|         | multinode-282238-m02:/home/docker/cp-test_multinode-282238-m03_multinode-282238-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-282238 ssh -n                                                                 | multinode-282238     | jenkins | v1.33.0 | 01 May 24 03:04 UTC | 01 May 24 03:04 UTC |
	|         | multinode-282238-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-282238 ssh -n multinode-282238-m02 sudo cat                                   | multinode-282238     | jenkins | v1.33.0 | 01 May 24 03:04 UTC | 01 May 24 03:04 UTC |
	|         | /home/docker/cp-test_multinode-282238-m03_multinode-282238-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-282238 node stop m03                                                          | multinode-282238     | jenkins | v1.33.0 | 01 May 24 03:04 UTC | 01 May 24 03:04 UTC |
	| node    | multinode-282238 node start                                                             | multinode-282238     | jenkins | v1.33.0 | 01 May 24 03:04 UTC | 01 May 24 03:05 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-282238                                                                | multinode-282238     | jenkins | v1.33.0 | 01 May 24 03:05 UTC |                     |
	| stop    | -p multinode-282238                                                                     | multinode-282238     | jenkins | v1.33.0 | 01 May 24 03:05 UTC |                     |
	| start   | -p multinode-282238                                                                     | multinode-282238     | jenkins | v1.33.0 | 01 May 24 03:07 UTC | 01 May 24 03:10 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-282238                                                                | multinode-282238     | jenkins | v1.33.0 | 01 May 24 03:10 UTC |                     |
	| node    | multinode-282238 node delete                                                            | multinode-282238     | jenkins | v1.33.0 | 01 May 24 03:10 UTC | 01 May 24 03:10 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-282238 stop                                                                   | multinode-282238     | jenkins | v1.33.0 | 01 May 24 03:10 UTC |                     |
	| start   | -p multinode-282238                                                                     | multinode-282238     | jenkins | v1.33.0 | 01 May 24 03:12 UTC | 01 May 24 03:15 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-282238                                                                | multinode-282238     | jenkins | v1.33.0 | 01 May 24 03:15 UTC |                     |
	| start   | -p multinode-282238-m02                                                                 | multinode-282238-m02 | jenkins | v1.33.0 | 01 May 24 03:15 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-282238-m03                                                                 | multinode-282238-m03 | jenkins | v1.33.0 | 01 May 24 03:15 UTC | 01 May 24 03:16 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-282238                                                                 | multinode-282238     | jenkins | v1.33.0 | 01 May 24 03:16 UTC |                     |
	| delete  | -p multinode-282238-m03                                                                 | multinode-282238-m03 | jenkins | v1.33.0 | 01 May 24 03:16 UTC | 01 May 24 03:16 UTC |
	| delete  | -p multinode-282238                                                                     | multinode-282238     | jenkins | v1.33.0 | 01 May 24 03:16 UTC | 01 May 24 03:16 UTC |
	| start   | -p test-preload-872415                                                                  | test-preload-872415  | jenkins | v1.33.0 | 01 May 24 03:16 UTC | 01 May 24 03:19 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-872415 image pull                                                          | test-preload-872415  | jenkins | v1.33.0 | 01 May 24 03:19 UTC | 01 May 24 03:19 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-872415                                                                  | test-preload-872415  | jenkins | v1.33.0 | 01 May 24 03:19 UTC | 01 May 24 03:19 UTC |
	| start   | -p test-preload-872415                                                                  | test-preload-872415  | jenkins | v1.33.0 | 01 May 24 03:19 UTC | 01 May 24 03:21 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-872415 image list                                                          | test-preload-872415  | jenkins | v1.33.0 | 01 May 24 03:21 UTC | 01 May 24 03:21 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
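
The table above records the TestPreload sequence for profile test-preload-872415: a first start with --preload=false on Kubernetes v1.24.4, an image pull of gcr.io/k8s-minikube/busybox, a stop, a restart that is now allowed to use the preload, and a final image list. A minimal Go sketch that drives the same sequence through the minikube binary with os/exec is shown below; the relative binary path comes from this report, while the flag placement and error handling are illustrative assumptions, not the integration harness's actual helpers.

package main

import (
	"fmt"
	"os/exec"
)

// run shells out to the minikube binary; failures are returned to the caller.
func run(args ...string) error {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	fmt.Printf("$ minikube %v\n%s\n", args, out)
	return err
}

func main() {
	profile := "test-preload-872415" // profile name taken from the table above
	steps := [][]string{
		{"start", "-p", profile, "--memory=2200", "--alsologtostderr", "--wait=true",
			"--preload=false", "--driver=kvm2", "--container-runtime=crio",
			"--kubernetes-version=v1.24.4"},
		{"image", "pull", "gcr.io/k8s-minikube/busybox", "-p", profile},
		{"stop", "-p", profile},
		{"start", "-p", profile, "--memory=2200", "--alsologtostderr", "-v=1",
			"--wait=true", "--driver=kvm2", "--container-runtime=crio"},
		{"image", "list", "-p", profile},
	}
	for _, step := range steps {
		if err := run(step...); err != nil {
			fmt.Println("step failed:", err)
			return
		}
	}
}
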
	
	
	==> Last Start <==
	Log file created at: 2024/05/01 03:19:56
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0501 03:19:56.868243   56414 out.go:291] Setting OutFile to fd 1 ...
	I0501 03:19:56.868376   56414 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 03:19:56.868388   56414 out.go:304] Setting ErrFile to fd 2...
	I0501 03:19:56.868394   56414 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 03:19:56.868598   56414 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18779-13391/.minikube/bin
	I0501 03:19:56.869133   56414 out.go:298] Setting JSON to false
	I0501 03:19:56.870057   56414 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7340,"bootTime":1714526257,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0501 03:19:56.870124   56414 start.go:139] virtualization: kvm guest
	I0501 03:19:56.872358   56414 out.go:177] * [test-preload-872415] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0501 03:19:56.874245   56414 out.go:177]   - MINIKUBE_LOCATION=18779
	I0501 03:19:56.874244   56414 notify.go:220] Checking for updates...
	I0501 03:19:56.875563   56414 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0501 03:19:56.877001   56414 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18779-13391/kubeconfig
	I0501 03:19:56.878178   56414 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18779-13391/.minikube
	I0501 03:19:56.879263   56414 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0501 03:19:56.880322   56414 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0501 03:19:56.881687   56414 config.go:182] Loaded profile config "test-preload-872415": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0501 03:19:56.882104   56414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:19:56.882150   56414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:19:56.896766   56414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37971
	I0501 03:19:56.897122   56414 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:19:56.897651   56414 main.go:141] libmachine: Using API Version  1
	I0501 03:19:56.897673   56414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:19:56.898003   56414 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:19:56.898179   56414 main.go:141] libmachine: (test-preload-872415) Calling .DriverName
	I0501 03:19:56.899997   56414 out.go:177] * Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	I0501 03:19:56.901171   56414 driver.go:392] Setting default libvirt URI to qemu:///system
	I0501 03:19:56.901444   56414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:19:56.901482   56414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:19:56.915602   56414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38951
	I0501 03:19:56.915997   56414 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:19:56.916424   56414 main.go:141] libmachine: Using API Version  1
	I0501 03:19:56.916446   56414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:19:56.916750   56414 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:19:56.916869   56414 main.go:141] libmachine: (test-preload-872415) Calling .DriverName
	I0501 03:19:56.949611   56414 out.go:177] * Using the kvm2 driver based on existing profile
	I0501 03:19:56.950779   56414 start.go:297] selected driver: kvm2
	I0501 03:19:56.950789   56414 start.go:901] validating driver "kvm2" against &{Name:test-preload-872415 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-872415 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.71 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 03:19:56.950875   56414 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0501 03:19:56.951483   56414 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0501 03:19:56.951548   56414 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18779-13391/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0501 03:19:56.965078   56414 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0501 03:19:56.965364   56414 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0501 03:19:56.965427   56414 cni.go:84] Creating CNI manager for ""
	I0501 03:19:56.965440   56414 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0501 03:19:56.965486   56414 start.go:340] cluster config:
	{Name:test-preload-872415 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-872415 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.71 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 03:19:56.965567   56414 iso.go:125] acquiring lock: {Name:mkcd2496aadb29931e179193d707c731bfda98b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0501 03:19:56.967081   56414 out.go:177] * Starting "test-preload-872415" primary control-plane node in "test-preload-872415" cluster
	I0501 03:19:56.968290   56414 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0501 03:19:57.079422   56414 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0501 03:19:57.079448   56414 cache.go:56] Caching tarball of preloaded images
	I0501 03:19:57.079584   56414 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0501 03:19:57.081402   56414 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0501 03:19:57.082636   56414 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0501 03:19:57.191868   56414 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/18779-13391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0501 03:20:09.493516   56414 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0501 03:20:09.493617   56414 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18779-13391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0501 03:20:10.332539   56414 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
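
The preload tarball is downloaded with a ?checksum=md5:... query string, and the checksum is then saved and re-verified against the file on disk before the cached tarball is trusted. A self-contained sketch of that verification step, assuming a plain MD5 hex digest and a local file path rather than minikube's actual download package:

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

// verifyMD5 re-hashes a downloaded file and compares it to the expected
// hex digest taken from the "?checksum=md5:..." query parameter.
func verifyMD5(path, expected string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != expected {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, expected)
	}
	return nil
}

func main() {
	// File name and digest taken from the log above; adjust the path locally.
	err := verifyMD5(
		"preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4",
		"b2ee0ab83ed99f9e7ff71cb0cf27e8f9")
	fmt.Println(err)
}
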
	I0501 03:20:10.332688   56414 profile.go:143] Saving config to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/test-preload-872415/config.json ...
	I0501 03:20:10.332904   56414 start.go:360] acquireMachinesLock for test-preload-872415: {Name:mkc4548e5afe1d4b0a833f6c522103562b0cefff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0501 03:20:10.332968   56414 start.go:364] duration metric: took 43.8µs to acquireMachinesLock for "test-preload-872415"
	I0501 03:20:10.332983   56414 start.go:96] Skipping create...Using existing machine configuration
	I0501 03:20:10.332993   56414 fix.go:54] fixHost starting: 
	I0501 03:20:10.333323   56414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:20:10.333360   56414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:20:10.347866   56414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33123
	I0501 03:20:10.348303   56414 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:20:10.348740   56414 main.go:141] libmachine: Using API Version  1
	I0501 03:20:10.348759   56414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:20:10.349071   56414 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:20:10.349267   56414 main.go:141] libmachine: (test-preload-872415) Calling .DriverName
	I0501 03:20:10.349423   56414 main.go:141] libmachine: (test-preload-872415) Calling .GetState
	I0501 03:20:10.351035   56414 fix.go:112] recreateIfNeeded on test-preload-872415: state=Stopped err=<nil>
	I0501 03:20:10.351065   56414 main.go:141] libmachine: (test-preload-872415) Calling .DriverName
	W0501 03:20:10.351187   56414 fix.go:138] unexpected machine state, will restart: <nil>
	I0501 03:20:10.353159   56414 out.go:177] * Restarting existing kvm2 VM for "test-preload-872415" ...
	I0501 03:20:10.354479   56414 main.go:141] libmachine: (test-preload-872415) Calling .Start
	I0501 03:20:10.354635   56414 main.go:141] libmachine: (test-preload-872415) Ensuring networks are active...
	I0501 03:20:10.355407   56414 main.go:141] libmachine: (test-preload-872415) Ensuring network default is active
	I0501 03:20:10.355704   56414 main.go:141] libmachine: (test-preload-872415) Ensuring network mk-test-preload-872415 is active
	I0501 03:20:10.356033   56414 main.go:141] libmachine: (test-preload-872415) Getting domain xml...
	I0501 03:20:10.356762   56414 main.go:141] libmachine: (test-preload-872415) Creating domain...
	I0501 03:20:11.528279   56414 main.go:141] libmachine: (test-preload-872415) Waiting to get IP...
	I0501 03:20:11.529239   56414 main.go:141] libmachine: (test-preload-872415) DBG | domain test-preload-872415 has defined MAC address 52:54:00:5c:81:4c in network mk-test-preload-872415
	I0501 03:20:11.529701   56414 main.go:141] libmachine: (test-preload-872415) DBG | unable to find current IP address of domain test-preload-872415 in network mk-test-preload-872415
	I0501 03:20:11.529776   56414 main.go:141] libmachine: (test-preload-872415) DBG | I0501 03:20:11.529679   56498 retry.go:31] will retry after 202.795226ms: waiting for machine to come up
	I0501 03:20:11.734323   56414 main.go:141] libmachine: (test-preload-872415) DBG | domain test-preload-872415 has defined MAC address 52:54:00:5c:81:4c in network mk-test-preload-872415
	I0501 03:20:11.734750   56414 main.go:141] libmachine: (test-preload-872415) DBG | unable to find current IP address of domain test-preload-872415 in network mk-test-preload-872415
	I0501 03:20:11.734842   56414 main.go:141] libmachine: (test-preload-872415) DBG | I0501 03:20:11.734704   56498 retry.go:31] will retry after 360.144178ms: waiting for machine to come up
	I0501 03:20:12.096250   56414 main.go:141] libmachine: (test-preload-872415) DBG | domain test-preload-872415 has defined MAC address 52:54:00:5c:81:4c in network mk-test-preload-872415
	I0501 03:20:12.096652   56414 main.go:141] libmachine: (test-preload-872415) DBG | unable to find current IP address of domain test-preload-872415 in network mk-test-preload-872415
	I0501 03:20:12.096676   56414 main.go:141] libmachine: (test-preload-872415) DBG | I0501 03:20:12.096615   56498 retry.go:31] will retry after 317.514123ms: waiting for machine to come up
	I0501 03:20:12.416190   56414 main.go:141] libmachine: (test-preload-872415) DBG | domain test-preload-872415 has defined MAC address 52:54:00:5c:81:4c in network mk-test-preload-872415
	I0501 03:20:12.416680   56414 main.go:141] libmachine: (test-preload-872415) DBG | unable to find current IP address of domain test-preload-872415 in network mk-test-preload-872415
	I0501 03:20:12.416724   56414 main.go:141] libmachine: (test-preload-872415) DBG | I0501 03:20:12.416618   56498 retry.go:31] will retry after 507.165618ms: waiting for machine to come up
	I0501 03:20:12.925437   56414 main.go:141] libmachine: (test-preload-872415) DBG | domain test-preload-872415 has defined MAC address 52:54:00:5c:81:4c in network mk-test-preload-872415
	I0501 03:20:12.925874   56414 main.go:141] libmachine: (test-preload-872415) DBG | unable to find current IP address of domain test-preload-872415 in network mk-test-preload-872415
	I0501 03:20:12.925906   56414 main.go:141] libmachine: (test-preload-872415) DBG | I0501 03:20:12.925826   56498 retry.go:31] will retry after 668.999546ms: waiting for machine to come up
	I0501 03:20:13.596601   56414 main.go:141] libmachine: (test-preload-872415) DBG | domain test-preload-872415 has defined MAC address 52:54:00:5c:81:4c in network mk-test-preload-872415
	I0501 03:20:13.597027   56414 main.go:141] libmachine: (test-preload-872415) DBG | unable to find current IP address of domain test-preload-872415 in network mk-test-preload-872415
	I0501 03:20:13.597052   56414 main.go:141] libmachine: (test-preload-872415) DBG | I0501 03:20:13.596984   56498 retry.go:31] will retry after 711.783484ms: waiting for machine to come up
	I0501 03:20:14.309849   56414 main.go:141] libmachine: (test-preload-872415) DBG | domain test-preload-872415 has defined MAC address 52:54:00:5c:81:4c in network mk-test-preload-872415
	I0501 03:20:14.310273   56414 main.go:141] libmachine: (test-preload-872415) DBG | unable to find current IP address of domain test-preload-872415 in network mk-test-preload-872415
	I0501 03:20:14.310308   56414 main.go:141] libmachine: (test-preload-872415) DBG | I0501 03:20:14.310235   56498 retry.go:31] will retry after 948.621755ms: waiting for machine to come up
	I0501 03:20:15.260332   56414 main.go:141] libmachine: (test-preload-872415) DBG | domain test-preload-872415 has defined MAC address 52:54:00:5c:81:4c in network mk-test-preload-872415
	I0501 03:20:15.260830   56414 main.go:141] libmachine: (test-preload-872415) DBG | unable to find current IP address of domain test-preload-872415 in network mk-test-preload-872415
	I0501 03:20:15.260894   56414 main.go:141] libmachine: (test-preload-872415) DBG | I0501 03:20:15.260812   56498 retry.go:31] will retry after 1.438332895s: waiting for machine to come up
	I0501 03:20:16.700286   56414 main.go:141] libmachine: (test-preload-872415) DBG | domain test-preload-872415 has defined MAC address 52:54:00:5c:81:4c in network mk-test-preload-872415
	I0501 03:20:16.700805   56414 main.go:141] libmachine: (test-preload-872415) DBG | unable to find current IP address of domain test-preload-872415 in network mk-test-preload-872415
	I0501 03:20:16.700830   56414 main.go:141] libmachine: (test-preload-872415) DBG | I0501 03:20:16.700778   56498 retry.go:31] will retry after 1.247130891s: waiting for machine to come up
	I0501 03:20:17.950350   56414 main.go:141] libmachine: (test-preload-872415) DBG | domain test-preload-872415 has defined MAC address 52:54:00:5c:81:4c in network mk-test-preload-872415
	I0501 03:20:17.950778   56414 main.go:141] libmachine: (test-preload-872415) DBG | unable to find current IP address of domain test-preload-872415 in network mk-test-preload-872415
	I0501 03:20:17.950807   56414 main.go:141] libmachine: (test-preload-872415) DBG | I0501 03:20:17.950726   56498 retry.go:31] will retry after 1.937477874s: waiting for machine to come up
	I0501 03:20:19.890777   56414 main.go:141] libmachine: (test-preload-872415) DBG | domain test-preload-872415 has defined MAC address 52:54:00:5c:81:4c in network mk-test-preload-872415
	I0501 03:20:19.891168   56414 main.go:141] libmachine: (test-preload-872415) DBG | unable to find current IP address of domain test-preload-872415 in network mk-test-preload-872415
	I0501 03:20:19.891198   56414 main.go:141] libmachine: (test-preload-872415) DBG | I0501 03:20:19.891115   56498 retry.go:31] will retry after 2.264557262s: waiting for machine to come up
	I0501 03:20:22.158057   56414 main.go:141] libmachine: (test-preload-872415) DBG | domain test-preload-872415 has defined MAC address 52:54:00:5c:81:4c in network mk-test-preload-872415
	I0501 03:20:22.158448   56414 main.go:141] libmachine: (test-preload-872415) DBG | unable to find current IP address of domain test-preload-872415 in network mk-test-preload-872415
	I0501 03:20:22.158468   56414 main.go:141] libmachine: (test-preload-872415) DBG | I0501 03:20:22.158421   56498 retry.go:31] will retry after 2.389813276s: waiting for machine to come up
	I0501 03:20:24.550904   56414 main.go:141] libmachine: (test-preload-872415) DBG | domain test-preload-872415 has defined MAC address 52:54:00:5c:81:4c in network mk-test-preload-872415
	I0501 03:20:24.551261   56414 main.go:141] libmachine: (test-preload-872415) DBG | unable to find current IP address of domain test-preload-872415 in network mk-test-preload-872415
	I0501 03:20:24.551292   56414 main.go:141] libmachine: (test-preload-872415) DBG | I0501 03:20:24.551221   56498 retry.go:31] will retry after 3.131844309s: waiting for machine to come up
	I0501 03:20:27.685818   56414 main.go:141] libmachine: (test-preload-872415) DBG | domain test-preload-872415 has defined MAC address 52:54:00:5c:81:4c in network mk-test-preload-872415
	I0501 03:20:27.686128   56414 main.go:141] libmachine: (test-preload-872415) DBG | unable to find current IP address of domain test-preload-872415 in network mk-test-preload-872415
	I0501 03:20:27.686155   56414 main.go:141] libmachine: (test-preload-872415) DBG | I0501 03:20:27.686089   56498 retry.go:31] will retry after 4.667144005s: waiting for machine to come up
	I0501 03:20:32.355474   56414 main.go:141] libmachine: (test-preload-872415) DBG | domain test-preload-872415 has defined MAC address 52:54:00:5c:81:4c in network mk-test-preload-872415
	I0501 03:20:32.355851   56414 main.go:141] libmachine: (test-preload-872415) DBG | domain test-preload-872415 has current primary IP address 192.168.39.71 and MAC address 52:54:00:5c:81:4c in network mk-test-preload-872415
	I0501 03:20:32.355872   56414 main.go:141] libmachine: (test-preload-872415) Found IP for machine: 192.168.39.71
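
While the restarted VM boots, the driver repeatedly looks up the domain's DHCP lease by MAC address and backs off with a growing, jittered delay (roughly 200ms up to a few seconds in the lines above) until an IP appears. A sketch of that retry shape, with lookupIP standing in for the libvirt lease query and the backoff constants chosen only for illustration:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoLease = errors.New("no DHCP lease yet")

// lookupIP is a stand-in for querying the libvirt network's DHCP leases by MAC.
func lookupIP(mac string) (string, error) {
	return "", errNoLease // pretend the lease has not appeared yet
}

// waitForIP retries with an increasing, jittered delay, mirroring the
// "will retry after ..." lines in the log (an illustrative sketch only).
func waitForIP(mac string, deadline time.Duration) (string, error) {
	delay := 200 * time.Millisecond
	start := time.Now()
	for time.Since(start) < deadline {
		if ip, err := lookupIP(mac); err == nil {
			return ip, nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay / 2)))
		time.Sleep(delay + jitter)
		if delay < 5*time.Second {
			delay = delay * 3 / 2
		}
	}
	return "", fmt.Errorf("machine %s did not get an IP within %s", mac, deadline)
}

func main() {
	_, err := waitForIP("52:54:00:5c:81:4c", 3*time.Second)
	fmt.Println(err)
}
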
	I0501 03:20:32.355884   56414 main.go:141] libmachine: (test-preload-872415) Reserving static IP address...
	I0501 03:20:32.356305   56414 main.go:141] libmachine: (test-preload-872415) Reserved static IP address: 192.168.39.71
	I0501 03:20:32.356349   56414 main.go:141] libmachine: (test-preload-872415) DBG | found host DHCP lease matching {name: "test-preload-872415", mac: "52:54:00:5c:81:4c", ip: "192.168.39.71"} in network mk-test-preload-872415: {Iface:virbr1 ExpiryTime:2024-05-01 04:17:03 +0000 UTC Type:0 Mac:52:54:00:5c:81:4c Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:test-preload-872415 Clientid:01:52:54:00:5c:81:4c}
	I0501 03:20:32.356360   56414 main.go:141] libmachine: (test-preload-872415) Waiting for SSH to be available...
	I0501 03:20:32.356387   56414 main.go:141] libmachine: (test-preload-872415) DBG | skip adding static IP to network mk-test-preload-872415 - found existing host DHCP lease matching {name: "test-preload-872415", mac: "52:54:00:5c:81:4c", ip: "192.168.39.71"}
	I0501 03:20:32.356399   56414 main.go:141] libmachine: (test-preload-872415) DBG | Getting to WaitForSSH function...
	I0501 03:20:32.358798   56414 main.go:141] libmachine: (test-preload-872415) DBG | domain test-preload-872415 has defined MAC address 52:54:00:5c:81:4c in network mk-test-preload-872415
	I0501 03:20:32.359125   56414 main.go:141] libmachine: (test-preload-872415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:81:4c", ip: ""} in network mk-test-preload-872415: {Iface:virbr1 ExpiryTime:2024-05-01 04:17:03 +0000 UTC Type:0 Mac:52:54:00:5c:81:4c Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:test-preload-872415 Clientid:01:52:54:00:5c:81:4c}
	I0501 03:20:32.359155   56414 main.go:141] libmachine: (test-preload-872415) DBG | domain test-preload-872415 has defined IP address 192.168.39.71 and MAC address 52:54:00:5c:81:4c in network mk-test-preload-872415
	I0501 03:20:32.359277   56414 main.go:141] libmachine: (test-preload-872415) DBG | Using SSH client type: external
	I0501 03:20:32.359299   56414 main.go:141] libmachine: (test-preload-872415) DBG | Using SSH private key: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/test-preload-872415/id_rsa (-rw-------)
	I0501 03:20:32.359358   56414 main.go:141] libmachine: (test-preload-872415) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.71 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18779-13391/.minikube/machines/test-preload-872415/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0501 03:20:32.359368   56414 main.go:141] libmachine: (test-preload-872415) DBG | About to run SSH command:
	I0501 03:20:32.359378   56414 main.go:141] libmachine: (test-preload-872415) DBG | exit 0
	I0501 03:20:32.486703   56414 main.go:141] libmachine: (test-preload-872415) DBG | SSH cmd err, output: <nil>: 
	I0501 03:20:32.487014   56414 main.go:141] libmachine: (test-preload-872415) Calling .GetConfigRaw
	I0501 03:20:32.487588   56414 main.go:141] libmachine: (test-preload-872415) Calling .GetIP
	I0501 03:20:32.490060   56414 main.go:141] libmachine: (test-preload-872415) DBG | domain test-preload-872415 has defined MAC address 52:54:00:5c:81:4c in network mk-test-preload-872415
	I0501 03:20:32.490364   56414 main.go:141] libmachine: (test-preload-872415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:81:4c", ip: ""} in network mk-test-preload-872415: {Iface:virbr1 ExpiryTime:2024-05-01 04:17:03 +0000 UTC Type:0 Mac:52:54:00:5c:81:4c Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:test-preload-872415 Clientid:01:52:54:00:5c:81:4c}
	I0501 03:20:32.490390   56414 main.go:141] libmachine: (test-preload-872415) DBG | domain test-preload-872415 has defined IP address 192.168.39.71 and MAC address 52:54:00:5c:81:4c in network mk-test-preload-872415
	I0501 03:20:32.490620   56414 profile.go:143] Saving config to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/test-preload-872415/config.json ...
	I0501 03:20:32.490802   56414 machine.go:94] provisionDockerMachine start ...
	I0501 03:20:32.490819   56414 main.go:141] libmachine: (test-preload-872415) Calling .DriverName
	I0501 03:20:32.491000   56414 main.go:141] libmachine: (test-preload-872415) Calling .GetSSHHostname
	I0501 03:20:32.492981   56414 main.go:141] libmachine: (test-preload-872415) DBG | domain test-preload-872415 has defined MAC address 52:54:00:5c:81:4c in network mk-test-preload-872415
	I0501 03:20:32.493285   56414 main.go:141] libmachine: (test-preload-872415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:81:4c", ip: ""} in network mk-test-preload-872415: {Iface:virbr1 ExpiryTime:2024-05-01 04:17:03 +0000 UTC Type:0 Mac:52:54:00:5c:81:4c Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:test-preload-872415 Clientid:01:52:54:00:5c:81:4c}
	I0501 03:20:32.493317   56414 main.go:141] libmachine: (test-preload-872415) DBG | domain test-preload-872415 has defined IP address 192.168.39.71 and MAC address 52:54:00:5c:81:4c in network mk-test-preload-872415
	I0501 03:20:32.493439   56414 main.go:141] libmachine: (test-preload-872415) Calling .GetSSHPort
	I0501 03:20:32.493643   56414 main.go:141] libmachine: (test-preload-872415) Calling .GetSSHKeyPath
	I0501 03:20:32.493806   56414 main.go:141] libmachine: (test-preload-872415) Calling .GetSSHKeyPath
	I0501 03:20:32.493931   56414 main.go:141] libmachine: (test-preload-872415) Calling .GetSSHUsername
	I0501 03:20:32.494062   56414 main.go:141] libmachine: Using SSH client type: native
	I0501 03:20:32.494239   56414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0501 03:20:32.494250   56414 main.go:141] libmachine: About to run SSH command:
	hostname
	I0501 03:20:32.607003   56414 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0501 03:20:32.607035   56414 main.go:141] libmachine: (test-preload-872415) Calling .GetMachineName
	I0501 03:20:32.607305   56414 buildroot.go:166] provisioning hostname "test-preload-872415"
	I0501 03:20:32.607335   56414 main.go:141] libmachine: (test-preload-872415) Calling .GetMachineName
	I0501 03:20:32.607527   56414 main.go:141] libmachine: (test-preload-872415) Calling .GetSSHHostname
	I0501 03:20:32.610322   56414 main.go:141] libmachine: (test-preload-872415) DBG | domain test-preload-872415 has defined MAC address 52:54:00:5c:81:4c in network mk-test-preload-872415
	I0501 03:20:32.610704   56414 main.go:141] libmachine: (test-preload-872415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:81:4c", ip: ""} in network mk-test-preload-872415: {Iface:virbr1 ExpiryTime:2024-05-01 04:17:03 +0000 UTC Type:0 Mac:52:54:00:5c:81:4c Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:test-preload-872415 Clientid:01:52:54:00:5c:81:4c}
	I0501 03:20:32.610734   56414 main.go:141] libmachine: (test-preload-872415) DBG | domain test-preload-872415 has defined IP address 192.168.39.71 and MAC address 52:54:00:5c:81:4c in network mk-test-preload-872415
	I0501 03:20:32.610861   56414 main.go:141] libmachine: (test-preload-872415) Calling .GetSSHPort
	I0501 03:20:32.611070   56414 main.go:141] libmachine: (test-preload-872415) Calling .GetSSHKeyPath
	I0501 03:20:32.611236   56414 main.go:141] libmachine: (test-preload-872415) Calling .GetSSHKeyPath
	I0501 03:20:32.611375   56414 main.go:141] libmachine: (test-preload-872415) Calling .GetSSHUsername
	I0501 03:20:32.611530   56414 main.go:141] libmachine: Using SSH client type: native
	I0501 03:20:32.611731   56414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0501 03:20:32.611753   56414 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-872415 && echo "test-preload-872415" | sudo tee /etc/hostname
	I0501 03:20:32.742369   56414 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-872415
	
	I0501 03:20:32.742429   56414 main.go:141] libmachine: (test-preload-872415) Calling .GetSSHHostname
	I0501 03:20:32.746303   56414 main.go:141] libmachine: (test-preload-872415) DBG | domain test-preload-872415 has defined MAC address 52:54:00:5c:81:4c in network mk-test-preload-872415
	I0501 03:20:32.746656   56414 main.go:141] libmachine: (test-preload-872415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:81:4c", ip: ""} in network mk-test-preload-872415: {Iface:virbr1 ExpiryTime:2024-05-01 04:17:03 +0000 UTC Type:0 Mac:52:54:00:5c:81:4c Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:test-preload-872415 Clientid:01:52:54:00:5c:81:4c}
	I0501 03:20:32.746687   56414 main.go:141] libmachine: (test-preload-872415) DBG | domain test-preload-872415 has defined IP address 192.168.39.71 and MAC address 52:54:00:5c:81:4c in network mk-test-preload-872415
	I0501 03:20:32.746882   56414 main.go:141] libmachine: (test-preload-872415) Calling .GetSSHPort
	I0501 03:20:32.747058   56414 main.go:141] libmachine: (test-preload-872415) Calling .GetSSHKeyPath
	I0501 03:20:32.747241   56414 main.go:141] libmachine: (test-preload-872415) Calling .GetSSHKeyPath
	I0501 03:20:32.747375   56414 main.go:141] libmachine: (test-preload-872415) Calling .GetSSHUsername
	I0501 03:20:32.747535   56414 main.go:141] libmachine: Using SSH client type: native
	I0501 03:20:32.747682   56414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0501 03:20:32.747698   56414 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-872415' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-872415/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-872415' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0501 03:20:32.875632   56414 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0501 03:20:32.875663   56414 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18779-13391/.minikube CaCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18779-13391/.minikube}
	I0501 03:20:32.875686   56414 buildroot.go:174] setting up certificates
	I0501 03:20:32.875697   56414 provision.go:84] configureAuth start
	I0501 03:20:32.875709   56414 main.go:141] libmachine: (test-preload-872415) Calling .GetMachineName
	I0501 03:20:32.875993   56414 main.go:141] libmachine: (test-preload-872415) Calling .GetIP
	I0501 03:20:32.878331   56414 main.go:141] libmachine: (test-preload-872415) DBG | domain test-preload-872415 has defined MAC address 52:54:00:5c:81:4c in network mk-test-preload-872415
	I0501 03:20:32.878694   56414 main.go:141] libmachine: (test-preload-872415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:81:4c", ip: ""} in network mk-test-preload-872415: {Iface:virbr1 ExpiryTime:2024-05-01 04:17:03 +0000 UTC Type:0 Mac:52:54:00:5c:81:4c Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:test-preload-872415 Clientid:01:52:54:00:5c:81:4c}
	I0501 03:20:32.878736   56414 main.go:141] libmachine: (test-preload-872415) DBG | domain test-preload-872415 has defined IP address 192.168.39.71 and MAC address 52:54:00:5c:81:4c in network mk-test-preload-872415
	I0501 03:20:32.878880   56414 main.go:141] libmachine: (test-preload-872415) Calling .GetSSHHostname
	I0501 03:20:32.881285   56414 main.go:141] libmachine: (test-preload-872415) DBG | domain test-preload-872415 has defined MAC address 52:54:00:5c:81:4c in network mk-test-preload-872415
	I0501 03:20:32.881630   56414 main.go:141] libmachine: (test-preload-872415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:81:4c", ip: ""} in network mk-test-preload-872415: {Iface:virbr1 ExpiryTime:2024-05-01 04:17:03 +0000 UTC Type:0 Mac:52:54:00:5c:81:4c Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:test-preload-872415 Clientid:01:52:54:00:5c:81:4c}
	I0501 03:20:32.881655   56414 main.go:141] libmachine: (test-preload-872415) DBG | domain test-preload-872415 has defined IP address 192.168.39.71 and MAC address 52:54:00:5c:81:4c in network mk-test-preload-872415
	I0501 03:20:32.881794   56414 provision.go:143] copyHostCerts
	I0501 03:20:32.881853   56414 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem, removing ...
	I0501 03:20:32.881866   56414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem
	I0501 03:20:32.881948   56414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem (1078 bytes)
	I0501 03:20:32.882063   56414 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem, removing ...
	I0501 03:20:32.882080   56414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem
	I0501 03:20:32.882115   56414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem (1123 bytes)
	I0501 03:20:32.882191   56414 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem, removing ...
	I0501 03:20:32.882201   56414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem
	I0501 03:20:32.882233   56414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem (1675 bytes)
	I0501 03:20:32.882314   56414 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem org=jenkins.test-preload-872415 san=[127.0.0.1 192.168.39.71 localhost minikube test-preload-872415]
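
configureAuth then generates a server certificate whose SANs cover 127.0.0.1, the VM IP 192.168.39.71, localhost, minikube, and the machine name, signed by the local minikube CA. The sketch below builds a certificate with the same SAN set using only the Go standard library; it is self-signed for brevity (minikube's real code signs with the ca.pem/ca-key.pem pair), so treat it purely as an illustration of the SAN handling:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.test-preload-872415"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs copied from the log line above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.71")},
		DNSNames:    []string{"localhost", "minikube", "test-preload-872415"},
	}
	// Self-signed for brevity; the real code uses the minikube CA as parent.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
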
	I0501 03:20:33.080234   56414 provision.go:177] copyRemoteCerts
	I0501 03:20:33.080288   56414 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0501 03:20:33.080320   56414 main.go:141] libmachine: (test-preload-872415) Calling .GetSSHHostname
	I0501 03:20:33.082855   56414 main.go:141] libmachine: (test-preload-872415) DBG | domain test-preload-872415 has defined MAC address 52:54:00:5c:81:4c in network mk-test-preload-872415
	I0501 03:20:33.083205   56414 main.go:141] libmachine: (test-preload-872415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:81:4c", ip: ""} in network mk-test-preload-872415: {Iface:virbr1 ExpiryTime:2024-05-01 04:17:03 +0000 UTC Type:0 Mac:52:54:00:5c:81:4c Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:test-preload-872415 Clientid:01:52:54:00:5c:81:4c}
	I0501 03:20:33.083237   56414 main.go:141] libmachine: (test-preload-872415) DBG | domain test-preload-872415 has defined IP address 192.168.39.71 and MAC address 52:54:00:5c:81:4c in network mk-test-preload-872415
	I0501 03:20:33.083378   56414 main.go:141] libmachine: (test-preload-872415) Calling .GetSSHPort
	I0501 03:20:33.083547   56414 main.go:141] libmachine: (test-preload-872415) Calling .GetSSHKeyPath
	I0501 03:20:33.083718   56414 main.go:141] libmachine: (test-preload-872415) Calling .GetSSHUsername
	I0501 03:20:33.083828   56414 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/test-preload-872415/id_rsa Username:docker}
	I0501 03:20:33.173217   56414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0501 03:20:33.200099   56414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0501 03:20:33.226021   56414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0501 03:20:33.252048   56414 provision.go:87] duration metric: took 376.340897ms to configureAuth
	I0501 03:20:33.252070   56414 buildroot.go:189] setting minikube options for container-runtime
	I0501 03:20:33.252275   56414 config.go:182] Loaded profile config "test-preload-872415": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0501 03:20:33.252374   56414 main.go:141] libmachine: (test-preload-872415) Calling .GetSSHHostname
	I0501 03:20:33.254732   56414 main.go:141] libmachine: (test-preload-872415) DBG | domain test-preload-872415 has defined MAC address 52:54:00:5c:81:4c in network mk-test-preload-872415
	I0501 03:20:33.255147   56414 main.go:141] libmachine: (test-preload-872415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:81:4c", ip: ""} in network mk-test-preload-872415: {Iface:virbr1 ExpiryTime:2024-05-01 04:17:03 +0000 UTC Type:0 Mac:52:54:00:5c:81:4c Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:test-preload-872415 Clientid:01:52:54:00:5c:81:4c}
	I0501 03:20:33.255177   56414 main.go:141] libmachine: (test-preload-872415) DBG | domain test-preload-872415 has defined IP address 192.168.39.71 and MAC address 52:54:00:5c:81:4c in network mk-test-preload-872415
	I0501 03:20:33.255345   56414 main.go:141] libmachine: (test-preload-872415) Calling .GetSSHPort
	I0501 03:20:33.255532   56414 main.go:141] libmachine: (test-preload-872415) Calling .GetSSHKeyPath
	I0501 03:20:33.255679   56414 main.go:141] libmachine: (test-preload-872415) Calling .GetSSHKeyPath
	I0501 03:20:33.255864   56414 main.go:141] libmachine: (test-preload-872415) Calling .GetSSHUsername
	I0501 03:20:33.256054   56414 main.go:141] libmachine: Using SSH client type: native
	I0501 03:20:33.256206   56414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0501 03:20:33.256222   56414 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0501 03:20:33.541054   56414 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0501 03:20:33.541084   56414 machine.go:97] duration metric: took 1.050269187s to provisionDockerMachine
	I0501 03:20:33.541099   56414 start.go:293] postStartSetup for "test-preload-872415" (driver="kvm2")
	I0501 03:20:33.541113   56414 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0501 03:20:33.541130   56414 main.go:141] libmachine: (test-preload-872415) Calling .DriverName
	I0501 03:20:33.541452   56414 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0501 03:20:33.541484   56414 main.go:141] libmachine: (test-preload-872415) Calling .GetSSHHostname
	I0501 03:20:33.544167   56414 main.go:141] libmachine: (test-preload-872415) DBG | domain test-preload-872415 has defined MAC address 52:54:00:5c:81:4c in network mk-test-preload-872415
	I0501 03:20:33.544475   56414 main.go:141] libmachine: (test-preload-872415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:81:4c", ip: ""} in network mk-test-preload-872415: {Iface:virbr1 ExpiryTime:2024-05-01 04:17:03 +0000 UTC Type:0 Mac:52:54:00:5c:81:4c Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:test-preload-872415 Clientid:01:52:54:00:5c:81:4c}
	I0501 03:20:33.544496   56414 main.go:141] libmachine: (test-preload-872415) DBG | domain test-preload-872415 has defined IP address 192.168.39.71 and MAC address 52:54:00:5c:81:4c in network mk-test-preload-872415
	I0501 03:20:33.544658   56414 main.go:141] libmachine: (test-preload-872415) Calling .GetSSHPort
	I0501 03:20:33.544856   56414 main.go:141] libmachine: (test-preload-872415) Calling .GetSSHKeyPath
	I0501 03:20:33.544980   56414 main.go:141] libmachine: (test-preload-872415) Calling .GetSSHUsername
	I0501 03:20:33.545091   56414 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/test-preload-872415/id_rsa Username:docker}
	I0501 03:20:33.633887   56414 ssh_runner.go:195] Run: cat /etc/os-release
	I0501 03:20:33.638681   56414 info.go:137] Remote host: Buildroot 2023.02.9
	I0501 03:20:33.638707   56414 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13391/.minikube/addons for local assets ...
	I0501 03:20:33.638790   56414 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13391/.minikube/files for local assets ...
	I0501 03:20:33.638895   56414 filesync.go:149] local asset: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem -> 207242.pem in /etc/ssl/certs
	I0501 03:20:33.638999   56414 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0501 03:20:33.649331   56414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem --> /etc/ssl/certs/207242.pem (1708 bytes)
	I0501 03:20:33.676489   56414 start.go:296] duration metric: took 135.373943ms for postStartSetup
	I0501 03:20:33.676535   56414 fix.go:56] duration metric: took 23.343540315s for fixHost
	I0501 03:20:33.676559   56414 main.go:141] libmachine: (test-preload-872415) Calling .GetSSHHostname
	I0501 03:20:33.679328   56414 main.go:141] libmachine: (test-preload-872415) DBG | domain test-preload-872415 has defined MAC address 52:54:00:5c:81:4c in network mk-test-preload-872415
	I0501 03:20:33.679657   56414 main.go:141] libmachine: (test-preload-872415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:81:4c", ip: ""} in network mk-test-preload-872415: {Iface:virbr1 ExpiryTime:2024-05-01 04:17:03 +0000 UTC Type:0 Mac:52:54:00:5c:81:4c Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:test-preload-872415 Clientid:01:52:54:00:5c:81:4c}
	I0501 03:20:33.679681   56414 main.go:141] libmachine: (test-preload-872415) DBG | domain test-preload-872415 has defined IP address 192.168.39.71 and MAC address 52:54:00:5c:81:4c in network mk-test-preload-872415
	I0501 03:20:33.679878   56414 main.go:141] libmachine: (test-preload-872415) Calling .GetSSHPort
	I0501 03:20:33.680083   56414 main.go:141] libmachine: (test-preload-872415) Calling .GetSSHKeyPath
	I0501 03:20:33.680271   56414 main.go:141] libmachine: (test-preload-872415) Calling .GetSSHKeyPath
	I0501 03:20:33.680404   56414 main.go:141] libmachine: (test-preload-872415) Calling .GetSSHUsername
	I0501 03:20:33.680569   56414 main.go:141] libmachine: Using SSH client type: native
	I0501 03:20:33.680791   56414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0501 03:20:33.680805   56414 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0501 03:20:33.796066   56414 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714533633.744496533
	
	I0501 03:20:33.796101   56414 fix.go:216] guest clock: 1714533633.744496533
	I0501 03:20:33.796108   56414 fix.go:229] Guest: 2024-05-01 03:20:33.744496533 +0000 UTC Remote: 2024-05-01 03:20:33.676540048 +0000 UTC m=+36.853638141 (delta=67.956485ms)
	I0501 03:20:33.796126   56414 fix.go:200] guest clock delta is within tolerance: 67.956485ms
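
After provisioning, the fix step compares the guest clock against the host-side timestamp and skips a time resync when the delta (67.956485ms here) is within tolerance. A small sketch of that comparison; the 2s tolerance is an assumed value, not minikube's actual constant:

package main

import (
	"fmt"
	"time"
)

// clockDeltaWithinTolerance reports whether the guest clock is close enough
// to the host clock to skip a resync.
func clockDeltaWithinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	guest := time.Unix(1714533633, 744496533) // guest clock from the log
	host := time.Unix(1714533633, 676540048)  // host-side timestamp from the log
	delta, ok := clockDeltaWithinTolerance(guest, host, 2*time.Second)
	fmt.Printf("delta=%v within tolerance: %v\n", delta, ok)
}
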
	I0501 03:20:33.796131   56414 start.go:83] releasing machines lock for "test-preload-872415", held for 23.463153028s
	I0501 03:20:33.796148   56414 main.go:141] libmachine: (test-preload-872415) Calling .DriverName
	I0501 03:20:33.796412   56414 main.go:141] libmachine: (test-preload-872415) Calling .GetIP
	I0501 03:20:33.799108   56414 main.go:141] libmachine: (test-preload-872415) DBG | domain test-preload-872415 has defined MAC address 52:54:00:5c:81:4c in network mk-test-preload-872415
	I0501 03:20:33.799488   56414 main.go:141] libmachine: (test-preload-872415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:81:4c", ip: ""} in network mk-test-preload-872415: {Iface:virbr1 ExpiryTime:2024-05-01 04:17:03 +0000 UTC Type:0 Mac:52:54:00:5c:81:4c Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:test-preload-872415 Clientid:01:52:54:00:5c:81:4c}
	I0501 03:20:33.799514   56414 main.go:141] libmachine: (test-preload-872415) DBG | domain test-preload-872415 has defined IP address 192.168.39.71 and MAC address 52:54:00:5c:81:4c in network mk-test-preload-872415
	I0501 03:20:33.799612   56414 main.go:141] libmachine: (test-preload-872415) Calling .DriverName
	I0501 03:20:33.800080   56414 main.go:141] libmachine: (test-preload-872415) Calling .DriverName
	I0501 03:20:33.800259   56414 main.go:141] libmachine: (test-preload-872415) Calling .DriverName
	I0501 03:20:33.800331   56414 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0501 03:20:33.800369   56414 main.go:141] libmachine: (test-preload-872415) Calling .GetSSHHostname
	I0501 03:20:33.800481   56414 ssh_runner.go:195] Run: cat /version.json
	I0501 03:20:33.800497   56414 main.go:141] libmachine: (test-preload-872415) Calling .GetSSHHostname
	I0501 03:20:33.802900   56414 main.go:141] libmachine: (test-preload-872415) DBG | domain test-preload-872415 has defined MAC address 52:54:00:5c:81:4c in network mk-test-preload-872415
	I0501 03:20:33.803067   56414 main.go:141] libmachine: (test-preload-872415) DBG | domain test-preload-872415 has defined MAC address 52:54:00:5c:81:4c in network mk-test-preload-872415
	I0501 03:20:33.803255   56414 main.go:141] libmachine: (test-preload-872415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:81:4c", ip: ""} in network mk-test-preload-872415: {Iface:virbr1 ExpiryTime:2024-05-01 04:17:03 +0000 UTC Type:0 Mac:52:54:00:5c:81:4c Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:test-preload-872415 Clientid:01:52:54:00:5c:81:4c}
	I0501 03:20:33.803277   56414 main.go:141] libmachine: (test-preload-872415) DBG | domain test-preload-872415 has defined IP address 192.168.39.71 and MAC address 52:54:00:5c:81:4c in network mk-test-preload-872415
	I0501 03:20:33.803382   56414 main.go:141] libmachine: (test-preload-872415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:81:4c", ip: ""} in network mk-test-preload-872415: {Iface:virbr1 ExpiryTime:2024-05-01 04:17:03 +0000 UTC Type:0 Mac:52:54:00:5c:81:4c Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:test-preload-872415 Clientid:01:52:54:00:5c:81:4c}
	I0501 03:20:33.803413   56414 main.go:141] libmachine: (test-preload-872415) DBG | domain test-preload-872415 has defined IP address 192.168.39.71 and MAC address 52:54:00:5c:81:4c in network mk-test-preload-872415
	I0501 03:20:33.803465   56414 main.go:141] libmachine: (test-preload-872415) Calling .GetSSHPort
	I0501 03:20:33.803636   56414 main.go:141] libmachine: (test-preload-872415) Calling .GetSSHKeyPath
	I0501 03:20:33.803658   56414 main.go:141] libmachine: (test-preload-872415) Calling .GetSSHPort
	I0501 03:20:33.803815   56414 main.go:141] libmachine: (test-preload-872415) Calling .GetSSHUsername
	I0501 03:20:33.803820   56414 main.go:141] libmachine: (test-preload-872415) Calling .GetSSHKeyPath
	I0501 03:20:33.803979   56414 main.go:141] libmachine: (test-preload-872415) Calling .GetSSHUsername
	I0501 03:20:33.803972   56414 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/test-preload-872415/id_rsa Username:docker}
	I0501 03:20:33.804111   56414 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/test-preload-872415/id_rsa Username:docker}
	I0501 03:20:33.883242   56414 ssh_runner.go:195] Run: systemctl --version
	I0501 03:20:33.909840   56414 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0501 03:20:34.053876   56414 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0501 03:20:34.061566   56414 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0501 03:20:34.061624   56414 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0501 03:20:34.078367   56414 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0501 03:20:34.078387   56414 start.go:494] detecting cgroup driver to use...
	I0501 03:20:34.078447   56414 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0501 03:20:34.094433   56414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 03:20:34.108317   56414 docker.go:217] disabling cri-docker service (if available) ...
	I0501 03:20:34.108357   56414 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0501 03:20:34.122487   56414 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0501 03:20:34.136651   56414 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0501 03:20:34.261446   56414 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0501 03:20:34.399134   56414 docker.go:233] disabling docker service ...
	I0501 03:20:34.399204   56414 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0501 03:20:34.414924   56414 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0501 03:20:34.429960   56414 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0501 03:20:34.574561   56414 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0501 03:20:34.703724   56414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0501 03:20:34.728246   56414 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 03:20:34.749609   56414 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0501 03:20:34.749672   56414 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:20:34.761740   56414 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0501 03:20:34.761794   56414 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:20:34.773794   56414 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:20:34.785889   56414 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:20:34.798006   56414 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0501 03:20:34.810297   56414 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:20:34.822268   56414 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:20:34.841349   56414 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
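	For reference, a minimal Go sketch (not minikube's actual code) of what the sed one-liners above accomplish on /etc/crio/crio.conf.d/02-crio.conf: rewrite a key's line to the desired value. The file path and values come from the log; the setKey helper is invented for illustration, and conmon_cgroup/default_sysctls handling is only noted in a comment.

package main

// Minimal sketch, assuming the drop-in config already contains the keys to
// rewrite (matching the `sed -i 's|^.*key = .*$|...|'` pattern in the log).

import (
	"fmt"
	"os"
	"regexp"
)

// setKey replaces every line that assigns `key` with `key = "<value>"`.
func setKey(conf []byte, key, value string) []byte {
	re := regexp.MustCompile(`(?m)^.*` + key + ` = .*$`)
	return re.ReplaceAll(conf, []byte(fmt.Sprintf("%s = %q", key, value)))
}

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	conf, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	conf = setKey(conf, "pause_image", "registry.k8s.io/pause:3.7")
	conf = setKey(conf, "cgroup_manager", "cgroupfs")
	// conmon_cgroup and default_sysctls are handled the same way in the log:
	// delete any existing line, then insert the desired value after cgroup_manager.
	if err := os.WriteFile(path, conf, 0o644); err != nil {
		panic(err)
	}
}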
	I0501 03:20:34.853845   56414 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0501 03:20:34.865397   56414 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0501 03:20:34.865443   56414 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0501 03:20:34.882298   56414 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
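	The two commands above form a fallback: the bridge netfilter sysctl cannot be read until the br_netfilter module is loaded, so the module is probed and IPv4 forwarding is switched on. A small Go sketch of that fallback, assuming it runs as root on the Linux guest (paths and module name are from the log):

package main

import (
	"os"
	"os/exec"
)

func main() {
	// If the sysctl key is missing, the br_netfilter module is not loaded yet.
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			panic(string(out))
		}
	}
	// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
		panic(err)
	}
}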
	I0501 03:20:34.894408   56414 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:20:35.035692   56414 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0501 03:20:35.189687   56414 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0501 03:20:35.189755   56414 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0501 03:20:35.195293   56414 start.go:562] Will wait 60s for crictl version
	I0501 03:20:35.195349   56414 ssh_runner.go:195] Run: which crictl
	I0501 03:20:35.199606   56414 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0501 03:20:35.241870   56414 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0501 03:20:35.241954   56414 ssh_runner.go:195] Run: crio --version
	I0501 03:20:35.275442   56414 ssh_runner.go:195] Run: crio --version
	I0501 03:20:35.308842   56414 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I0501 03:20:35.310198   56414 main.go:141] libmachine: (test-preload-872415) Calling .GetIP
	I0501 03:20:35.312939   56414 main.go:141] libmachine: (test-preload-872415) DBG | domain test-preload-872415 has defined MAC address 52:54:00:5c:81:4c in network mk-test-preload-872415
	I0501 03:20:35.313292   56414 main.go:141] libmachine: (test-preload-872415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:81:4c", ip: ""} in network mk-test-preload-872415: {Iface:virbr1 ExpiryTime:2024-05-01 04:17:03 +0000 UTC Type:0 Mac:52:54:00:5c:81:4c Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:test-preload-872415 Clientid:01:52:54:00:5c:81:4c}
	I0501 03:20:35.313319   56414 main.go:141] libmachine: (test-preload-872415) DBG | domain test-preload-872415 has defined IP address 192.168.39.71 and MAC address 52:54:00:5c:81:4c in network mk-test-preload-872415
	I0501 03:20:35.313557   56414 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0501 03:20:35.318548   56414 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0501 03:20:35.333425   56414 kubeadm.go:877] updating cluster {Name:test-preload-872415 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:
v1.24.4 ClusterName:test-preload-872415 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.71 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker
MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0501 03:20:35.333516   56414 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0501 03:20:35.333568   56414 ssh_runner.go:195] Run: sudo crictl images --output json
	I0501 03:20:35.373369   56414 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0501 03:20:35.373450   56414 ssh_runner.go:195] Run: which lz4
	I0501 03:20:35.378248   56414 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0501 03:20:35.383263   56414 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0501 03:20:35.383295   56414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0501 03:20:37.205541   56414 crio.go:462] duration metric: took 1.827326675s to copy over tarball
	I0501 03:20:37.205630   56414 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0501 03:20:39.907600   56414 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.701934539s)
	I0501 03:20:39.907630   56414 crio.go:469] duration metric: took 2.702062079s to extract the tarball
	I0501 03:20:39.907641   56414 ssh_runner.go:146] rm: /preloaded.tar.lz4
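	A rough Go sketch of the preload step just logged: check that the tarball is on the node, extract it into /var with lz4 and xattrs preserved, then delete it. The scp from the host's preload cache is abstracted away (minikube copies it over its ssh_runner); the tar flags and paths are taken from the log.

package main

import (
	"os"
	"os/exec"
)

// run executes a command and panics with its combined output on failure.
func run(args ...string) {
	if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
		panic(string(out))
	}
}

func main() {
	const tarball = "/preloaded.tar.lz4"
	if _, err := os.Stat(tarball); err != nil {
		panic("preload tarball missing; transfer it from the cache first")
	}
	// Same flags as the log: keep security.capability xattrs, decompress with lz4.
	run("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	run("sudo", "rm", "-f", tarball)
}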
	I0501 03:20:39.951236   56414 ssh_runner.go:195] Run: sudo crictl images --output json
	I0501 03:20:39.998076   56414 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0501 03:20:39.998104   56414 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0501 03:20:39.998159   56414 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 03:20:39.998189   56414 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0501 03:20:39.998211   56414 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0501 03:20:39.998244   56414 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0501 03:20:39.998282   56414 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0501 03:20:39.998442   56414 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0501 03:20:39.998470   56414 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0501 03:20:39.998854   56414 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0501 03:20:39.999507   56414 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0501 03:20:39.999520   56414 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0501 03:20:39.999520   56414 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0501 03:20:39.999527   56414 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0501 03:20:39.999512   56414 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0501 03:20:39.999549   56414 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 03:20:39.999567   56414 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0501 03:20:39.999698   56414 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0501 03:20:40.159896   56414 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0501 03:20:40.209983   56414 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0501 03:20:40.210031   56414 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0501 03:20:40.210073   56414 ssh_runner.go:195] Run: which crictl
	I0501 03:20:40.214706   56414 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0501 03:20:40.249131   56414 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0501 03:20:40.249245   56414 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0501 03:20:40.254631   56414 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0501 03:20:40.254648   56414 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0501 03:20:40.254676   56414 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0501 03:20:40.309377   56414 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0501 03:20:40.355846   56414 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0501 03:20:40.355884   56414 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0501 03:20:40.355846   56414 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0501 03:20:40.356251   56414 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0501 03:20:40.358660   56414 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0501 03:20:40.948827   56414 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 03:20:42.813517   56414 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4: (2.504105674s)
	I0501 03:20:42.813541   56414 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4: (2.558844106s)
	I0501 03:20:42.813562   56414 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0501 03:20:42.813570   56414 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0501 03:20:42.813597   56414 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0501 03:20:42.813597   56414 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0: (2.457322966s)
	I0501 03:20:42.813625   56414 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4: (2.457698789s)
	I0501 03:20:42.813640   56414 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0501 03:20:42.813660   56414 ssh_runner.go:195] Run: which crictl
	I0501 03:20:42.813707   56414 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4: (2.457804813s)
	I0501 03:20:42.813734   56414 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0501 03:20:42.813662   56414 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0501 03:20:42.813752   56414 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0501 03:20:42.813767   56414 ssh_runner.go:195] Run: which crictl
	I0501 03:20:42.813792   56414 ssh_runner.go:195] Run: which crictl
	I0501 03:20:42.813671   56414 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0501 03:20:42.813825   56414 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0501 03:20:42.813850   56414 ssh_runner.go:195] Run: which crictl
	I0501 03:20:42.813879   56414 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7: (2.458004245s)
	I0501 03:20:42.813911   56414 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0501 03:20:42.813914   56414 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6: (2.455235231s)
	I0501 03:20:42.813931   56414 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0501 03:20:42.813937   56414 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0501 03:20:42.813958   56414 ssh_runner.go:195] Run: which crictl
	I0501 03:20:42.813976   56414 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.865119291s)
	I0501 03:20:42.813959   56414 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0501 03:20:42.814020   56414 ssh_runner.go:195] Run: which crictl
	I0501 03:20:42.827010   56414 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0501 03:20:42.832044   56414 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0501 03:20:42.833690   56414 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0501 03:20:42.833731   56414 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0501 03:20:42.833766   56414 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0501 03:20:42.833691   56414 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0501 03:20:42.973324   56414 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0501 03:20:42.973380   56414 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0501 03:20:42.973429   56414 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0501 03:20:42.973452   56414 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0501 03:20:42.983110   56414 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0501 03:20:42.983209   56414 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0501 03:20:42.983226   56414 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0501 03:20:42.983267   56414 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0501 03:20:42.983313   56414 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0501 03:20:42.983335   56414 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0501 03:20:42.983346   56414 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0501 03:20:42.983427   56414 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0501 03:20:42.986560   56414 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0501 03:20:42.986576   56414 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0501 03:20:42.986612   56414 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0501 03:20:42.986929   56414 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0501 03:20:42.988145   56414 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0501 03:20:42.996675   56414 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0501 03:20:42.996741   56414 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0501 03:20:42.996923   56414 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0501 03:20:43.742832   56414 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0501 03:20:43.742876   56414 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0501 03:20:43.742939   56414 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0501 03:20:45.804472   56414 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.061506786s)
	I0501 03:20:45.804496   56414 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0501 03:20:45.804520   56414 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0501 03:20:45.804563   56414 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0501 03:20:46.555170   56414 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0501 03:20:46.555208   56414 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0501 03:20:46.555260   56414 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0501 03:20:46.702378   56414 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0501 03:20:46.702442   56414 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0501 03:20:46.702491   56414 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0501 03:20:47.150955   56414 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0501 03:20:47.151004   56414 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0501 03:20:47.151047   56414 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0501 03:20:47.599285   56414 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0501 03:20:47.599335   56414 cache_images.go:123] Successfully loaded all cached images
	I0501 03:20:47.599342   56414 cache_images.go:92] duration metric: took 7.601223979s to LoadCachedImages
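	A simplified Go sketch of the LoadCachedImages flow recorded above: for each required image, ask podman whether it is already present; if not, drop any stale tag with crictl and load the tarball that was copied into /var/lib/minikube/images. The image refs and the "name_tag" tarball naming are taken from the log; the helper functions are illustrative, not minikube's API.

package main

import (
	"fmt"
	"os/exec"
	"path"
	"path/filepath"
	"strings"
)

// haveImage reports whether the runtime already has the image:
// `podman image inspect` exits non-zero when it is absent.
func haveImage(ref string) bool {
	return exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", ref).Run() == nil
}

// loadFromCache removes a stale tag (ignoring errors) and loads the cached tarball.
func loadFromCache(ref, cacheDir string) error {
	_ = exec.Command("sudo", "crictl", "rmi", ref).Run()
	tarball := filepath.Join(cacheDir, strings.ReplaceAll(path.Base(ref), ":", "_"))
	return exec.Command("sudo", "podman", "load", "-i", tarball).Run()
}

func main() {
	images := []string{
		"registry.k8s.io/kube-apiserver:v1.24.4",
		"registry.k8s.io/etcd:3.5.3-0",
		"registry.k8s.io/coredns/coredns:v1.8.6",
		"registry.k8s.io/pause:3.7",
	}
	for _, ref := range images {
		if haveImage(ref) {
			continue
		}
		if err := loadFromCache(ref, "/var/lib/minikube/images"); err != nil {
			fmt.Println("load failed for", ref, ":", err)
		}
	}
}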
	I0501 03:20:47.599357   56414 kubeadm.go:928] updating node { 192.168.39.71 8443 v1.24.4 crio true true} ...
	I0501 03:20:47.599445   56414 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-872415 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.71
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-872415 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0501 03:20:47.599504   56414 ssh_runner.go:195] Run: crio config
	I0501 03:20:47.647895   56414 cni.go:84] Creating CNI manager for ""
	I0501 03:20:47.647924   56414 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0501 03:20:47.647936   56414 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0501 03:20:47.647954   56414 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.71 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-872415 NodeName:test-preload-872415 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.71"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.71 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0501 03:20:47.648080   56414 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.71
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-872415"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.71
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.71"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0501 03:20:47.648149   56414 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0501 03:20:47.660043   56414 binaries.go:44] Found k8s binaries, skipping transfer
	I0501 03:20:47.660115   56414 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0501 03:20:47.672094   56414 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0501 03:20:47.691528   56414 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0501 03:20:47.710322   56414 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
	I0501 03:20:47.729886   56414 ssh_runner.go:195] Run: grep 192.168.39.71	control-plane.minikube.internal$ /etc/hosts
	I0501 03:20:47.734588   56414 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.71	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
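	The bash one-liner above rewrites /etc/hosts idempotently: strip any existing line for the name, then append the fresh IP-to-name mapping. A minimal Go sketch of the same rewrite, assuming root and using the IP and hostname from the log:

package main

import (
	"os"
	"strings"
)

func main() {
	const ip, name = "192.168.39.71", "control-plane.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // same filter as `grep -v $'\t<name>$'`
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		panic(err)
	}
}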
	I0501 03:20:47.749389   56414 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:20:47.872129   56414 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 03:20:47.891659   56414 certs.go:68] Setting up /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/test-preload-872415 for IP: 192.168.39.71
	I0501 03:20:47.891681   56414 certs.go:194] generating shared ca certs ...
	I0501 03:20:47.891695   56414 certs.go:226] acquiring lock for ca certs: {Name:mk85a75d14c00780d4bf09cd9bdaf0f96f1b8fce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:20:47.891861   56414 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key
	I0501 03:20:47.891915   56414 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key
	I0501 03:20:47.891928   56414 certs.go:256] generating profile certs ...
	I0501 03:20:47.892028   56414 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/test-preload-872415/client.key
	I0501 03:20:47.892120   56414 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/test-preload-872415/apiserver.key.b2dbd54b
	I0501 03:20:47.892170   56414 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/test-preload-872415/proxy-client.key
	I0501 03:20:47.892342   56414 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem (1338 bytes)
	W0501 03:20:47.892382   56414 certs.go:480] ignoring /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724_empty.pem, impossibly tiny 0 bytes
	I0501 03:20:47.892403   56414 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem (1679 bytes)
	I0501 03:20:47.892442   56414 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem (1078 bytes)
	I0501 03:20:47.892475   56414 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem (1123 bytes)
	I0501 03:20:47.892510   56414 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem (1675 bytes)
	I0501 03:20:47.892565   56414 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem (1708 bytes)
	I0501 03:20:47.893216   56414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0501 03:20:47.967869   56414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0501 03:20:48.004789   56414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0501 03:20:48.039924   56414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0501 03:20:48.079606   56414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/test-preload-872415/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0501 03:20:48.111926   56414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/test-preload-872415/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0501 03:20:48.138152   56414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/test-preload-872415/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0501 03:20:48.163825   56414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/test-preload-872415/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0501 03:20:48.189488   56414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem --> /usr/share/ca-certificates/207242.pem (1708 bytes)
	I0501 03:20:48.214232   56414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0501 03:20:48.238766   56414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem --> /usr/share/ca-certificates/20724.pem (1338 bytes)
	I0501 03:20:48.263889   56414 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0501 03:20:48.282117   56414 ssh_runner.go:195] Run: openssl version
	I0501 03:20:48.288075   56414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/207242.pem && ln -fs /usr/share/ca-certificates/207242.pem /etc/ssl/certs/207242.pem"
	I0501 03:20:48.300373   56414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/207242.pem
	I0501 03:20:48.305219   56414 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  1 02:20 /usr/share/ca-certificates/207242.pem
	I0501 03:20:48.305269   56414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/207242.pem
	I0501 03:20:48.311322   56414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/207242.pem /etc/ssl/certs/3ec20f2e.0"
	I0501 03:20:48.323765   56414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0501 03:20:48.336146   56414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:20:48.341151   56414 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  1 02:08 /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:20:48.341193   56414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:20:48.347652   56414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0501 03:20:48.360592   56414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20724.pem && ln -fs /usr/share/ca-certificates/20724.pem /etc/ssl/certs/20724.pem"
	I0501 03:20:48.373262   56414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20724.pem
	I0501 03:20:48.378527   56414 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  1 02:20 /usr/share/ca-certificates/20724.pem
	I0501 03:20:48.378593   56414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20724.pem
	I0501 03:20:48.385249   56414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20724.pem /etc/ssl/certs/51391683.0"
	I0501 03:20:48.399140   56414 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0501 03:20:48.404045   56414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0501 03:20:48.410411   56414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0501 03:20:48.416678   56414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0501 03:20:48.423188   56414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0501 03:20:48.429469   56414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0501 03:20:48.435796   56414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
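	The `openssl x509 -checkend 86400` probes above ask whether each control-plane certificate expires within the next 24 hours. A Go sketch of the same check using crypto/x509 (a subset of the cert paths from the log; readable only as root):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	certs := []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	}
	for _, c := range certs {
		soon, err := expiresWithin(c, 24*time.Hour)
		fmt.Println(c, "expiring within 24h:", soon, "err:", err)
	}
}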
	I0501 03:20:48.442081   56414 kubeadm.go:391] StartCluster: {Name:test-preload-872415 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
24.4 ClusterName:test-preload-872415 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.71 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 03:20:48.442160   56414 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0501 03:20:48.442216   56414 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0501 03:20:48.480822   56414 cri.go:89] found id: ""
	I0501 03:20:48.480891   56414 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0501 03:20:48.492843   56414 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0501 03:20:48.492872   56414 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0501 03:20:48.492879   56414 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0501 03:20:48.492933   56414 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0501 03:20:48.504631   56414 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0501 03:20:48.505035   56414 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-872415" does not appear in /home/jenkins/minikube-integration/18779-13391/kubeconfig
	I0501 03:20:48.505137   56414 kubeconfig.go:62] /home/jenkins/minikube-integration/18779-13391/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-872415" cluster setting kubeconfig missing "test-preload-872415" context setting]
	I0501 03:20:48.505392   56414 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/kubeconfig: {Name:mk5d1131557f885800877622151c0915337cb23a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:20:48.505961   56414 kapi.go:59] client config for test-preload-872415: &rest.Config{Host:"https://192.168.39.71:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18779-13391/.minikube/profiles/test-preload-872415/client.crt", KeyFile:"/home/jenkins/minikube-integration/18779-13391/.minikube/profiles/test-preload-872415/client.key", CAFile:"/home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(n
il), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02700), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0501 03:20:48.506592   56414 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0501 03:20:48.517887   56414 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.71
	I0501 03:20:48.517932   56414 kubeadm.go:1154] stopping kube-system containers ...
	I0501 03:20:48.517945   56414 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0501 03:20:48.517998   56414 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0501 03:20:48.563304   56414 cri.go:89] found id: ""
	I0501 03:20:48.563379   56414 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0501 03:20:48.580318   56414 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0501 03:20:48.591226   56414 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0501 03:20:48.591247   56414 kubeadm.go:156] found existing configuration files:
	
	I0501 03:20:48.591300   56414 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0501 03:20:48.601573   56414 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0501 03:20:48.601624   56414 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0501 03:20:48.612370   56414 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0501 03:20:48.622639   56414 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0501 03:20:48.622689   56414 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0501 03:20:48.633165   56414 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0501 03:20:48.643643   56414 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0501 03:20:48.643693   56414 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0501 03:20:48.654552   56414 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0501 03:20:48.664596   56414 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0501 03:20:48.664662   56414 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
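	The grep/rm sequence above implements a stale-config cleanup: any kubeconfig under /etc/kubernetes that does not point at the expected control-plane endpoint is removed so `kubeadm init phase kubeconfig` can regenerate it. A condensed Go sketch of that check, with the endpoint and file names taken from the log (removal needs root):

package main

import (
	"bytes"
	"os"
	"path/filepath"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
	for _, f := range files {
		path := filepath.Join("/etc/kubernetes", f)
		data, err := os.ReadFile(path)
		if err == nil && bytes.Contains(data, []byte(endpoint)) {
			continue // endpoint already correct, keep the file
		}
		_ = os.Remove(path) // missing or stale: delete, like the `sudo rm -f` in the log
	}
}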
	I0501 03:20:48.675595   56414 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0501 03:20:48.686724   56414 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:20:48.793148   56414 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:20:49.447336   56414 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:20:49.724183   56414 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:20:49.789803   56414 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:20:49.868637   56414 api_server.go:52] waiting for apiserver process to appear ...
	I0501 03:20:49.868717   56414 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:20:50.369192   56414 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:20:50.869698   56414 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:20:50.899242   56414 api_server.go:72] duration metric: took 1.030604234s to wait for apiserver process to appear ...
	I0501 03:20:50.899276   56414 api_server.go:88] waiting for apiserver healthz status ...
	I0501 03:20:50.899298   56414 api_server.go:253] Checking apiserver healthz at https://192.168.39.71:8443/healthz ...
	I0501 03:20:50.899832   56414 api_server.go:269] stopped: https://192.168.39.71:8443/healthz: Get "https://192.168.39.71:8443/healthz": dial tcp 192.168.39.71:8443: connect: connection refused
	I0501 03:20:51.399379   56414 api_server.go:253] Checking apiserver healthz at https://192.168.39.71:8443/healthz ...
	I0501 03:20:54.941595   56414 api_server.go:279] https://192.168.39.71:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0501 03:20:54.941624   56414 api_server.go:103] status: https://192.168.39.71:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0501 03:20:54.941641   56414 api_server.go:253] Checking apiserver healthz at https://192.168.39.71:8443/healthz ...
	I0501 03:20:54.981039   56414 api_server.go:279] https://192.168.39.71:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0501 03:20:54.981062   56414 api_server.go:103] status: https://192.168.39.71:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0501 03:20:55.400125   56414 api_server.go:253] Checking apiserver healthz at https://192.168.39.71:8443/healthz ...
	I0501 03:20:55.406345   56414 api_server.go:279] https://192.168.39.71:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:20:55.406375   56414 api_server.go:103] status: https://192.168.39.71:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:20:55.900093   56414 api_server.go:253] Checking apiserver healthz at https://192.168.39.71:8443/healthz ...
	I0501 03:20:55.906074   56414 api_server.go:279] https://192.168.39.71:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:20:55.906110   56414 api_server.go:103] status: https://192.168.39.71:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:20:56.399652   56414 api_server.go:253] Checking apiserver healthz at https://192.168.39.71:8443/healthz ...
	I0501 03:20:56.405211   56414 api_server.go:279] https://192.168.39.71:8443/healthz returned 200:
	ok
	I0501 03:20:56.411769   56414 api_server.go:141] control plane version: v1.24.4
	I0501 03:20:56.411799   56414 api_server.go:131] duration metric: took 5.512511576s to wait for apiserver health ...
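	The healthz wait above tolerates 403 and 500 responses while the apiserver finishes its post-start hooks and only succeeds on 200. A small Go sketch of such a poll, assuming the node IP from the log; TLS verification is skipped because the probe runs anonymously against the cluster's self-signed serving certificate:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.39.71:8443/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			fmt.Println("not ready yet:", resp.StatusCode) // 403/500 while hooks finish
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for /healthz")
}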
	I0501 03:20:56.411808   56414 cni.go:84] Creating CNI manager for ""
	I0501 03:20:56.411814   56414 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0501 03:20:56.413559   56414 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0501 03:20:56.414958   56414 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0501 03:20:56.426264   56414 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0501 03:20:56.448948   56414 system_pods.go:43] waiting for kube-system pods to appear ...
	I0501 03:20:56.457771   56414 system_pods.go:59] 8 kube-system pods found
	I0501 03:20:56.457807   56414 system_pods.go:61] "coredns-6d4b75cb6d-m6r88" [9ba11eba-fbc9-4c10-8073-3c09ac1e1d6e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0501 03:20:56.457813   56414 system_pods.go:61] "coredns-6d4b75cb6d-r9b4r" [7a466571-899d-4f6d-960d-10d662aef475] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0501 03:20:56.457820   56414 system_pods.go:61] "etcd-test-preload-872415" [2bcbe5e4-43bf-47a6-94f4-26159d4b0647] Running
	I0501 03:20:56.457830   56414 system_pods.go:61] "kube-apiserver-test-preload-872415" [e7d568a3-08b4-4058-af0b-8d05bf2431d7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0501 03:20:56.457835   56414 system_pods.go:61] "kube-controller-manager-test-preload-872415" [d35c632c-07f2-49eb-96cb-c3c8cdc0f61b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0501 03:20:56.457844   56414 system_pods.go:61] "kube-proxy-4fq5v" [ed9f2c83-5c71-4ee3-8182-1be6f2b45d4f] Running
	I0501 03:20:56.457856   56414 system_pods.go:61] "kube-scheduler-test-preload-872415" [69d2eec0-626e-4057-9307-31f9d4757318] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0501 03:20:56.457865   56414 system_pods.go:61] "storage-provisioner" [561fb514-2639-47f8-8a19-0a757c3993d8] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0501 03:20:56.457878   56414 system_pods.go:74] duration metric: took 8.90278ms to wait for pod list to return data ...
	I0501 03:20:56.457892   56414 node_conditions.go:102] verifying NodePressure condition ...
	I0501 03:20:56.461053   56414 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 03:20:56.461079   56414 node_conditions.go:123] node cpu capacity is 2
	I0501 03:20:56.461089   56414 node_conditions.go:105] duration metric: took 3.192591ms to run NodePressure ...
	I0501 03:20:56.461104   56414 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:20:56.647958   56414 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0501 03:20:56.653455   56414 kubeadm.go:733] kubelet initialised
	I0501 03:20:56.653475   56414 kubeadm.go:734] duration metric: took 5.495652ms waiting for restarted kubelet to initialise ...
	I0501 03:20:56.653481   56414 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 03:20:56.662805   56414 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-m6r88" in "kube-system" namespace to be "Ready" ...
	I0501 03:20:56.668424   56414 pod_ready.go:97] node "test-preload-872415" hosting pod "coredns-6d4b75cb6d-m6r88" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-872415" has status "Ready":"False"
	I0501 03:20:56.668447   56414 pod_ready.go:81] duration metric: took 5.61202ms for pod "coredns-6d4b75cb6d-m6r88" in "kube-system" namespace to be "Ready" ...
	E0501 03:20:56.668455   56414 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-872415" hosting pod "coredns-6d4b75cb6d-m6r88" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-872415" has status "Ready":"False"
	I0501 03:20:56.668462   56414 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-r9b4r" in "kube-system" namespace to be "Ready" ...
	I0501 03:20:56.673606   56414 pod_ready.go:97] node "test-preload-872415" hosting pod "coredns-6d4b75cb6d-r9b4r" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-872415" has status "Ready":"False"
	I0501 03:20:56.673634   56414 pod_ready.go:81] duration metric: took 5.161981ms for pod "coredns-6d4b75cb6d-r9b4r" in "kube-system" namespace to be "Ready" ...
	E0501 03:20:56.673646   56414 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-872415" hosting pod "coredns-6d4b75cb6d-r9b4r" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-872415" has status "Ready":"False"
	I0501 03:20:56.673653   56414 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-872415" in "kube-system" namespace to be "Ready" ...
	I0501 03:20:56.680038   56414 pod_ready.go:97] node "test-preload-872415" hosting pod "etcd-test-preload-872415" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-872415" has status "Ready":"False"
	I0501 03:20:56.680066   56414 pod_ready.go:81] duration metric: took 6.401972ms for pod "etcd-test-preload-872415" in "kube-system" namespace to be "Ready" ...
	E0501 03:20:56.680078   56414 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-872415" hosting pod "etcd-test-preload-872415" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-872415" has status "Ready":"False"
	I0501 03:20:56.680086   56414 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-test-preload-872415" in "kube-system" namespace to be "Ready" ...
	I0501 03:20:56.852934   56414 pod_ready.go:97] node "test-preload-872415" hosting pod "kube-apiserver-test-preload-872415" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-872415" has status "Ready":"False"
	I0501 03:20:56.852965   56414 pod_ready.go:81] duration metric: took 172.866653ms for pod "kube-apiserver-test-preload-872415" in "kube-system" namespace to be "Ready" ...
	E0501 03:20:56.852977   56414 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-872415" hosting pod "kube-apiserver-test-preload-872415" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-872415" has status "Ready":"False"
	I0501 03:20:56.852987   56414 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-test-preload-872415" in "kube-system" namespace to be "Ready" ...
	I0501 03:20:57.254743   56414 pod_ready.go:97] node "test-preload-872415" hosting pod "kube-controller-manager-test-preload-872415" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-872415" has status "Ready":"False"
	I0501 03:20:57.254783   56414 pod_ready.go:81] duration metric: took 401.781852ms for pod "kube-controller-manager-test-preload-872415" in "kube-system" namespace to be "Ready" ...
	E0501 03:20:57.254797   56414 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-872415" hosting pod "kube-controller-manager-test-preload-872415" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-872415" has status "Ready":"False"
	I0501 03:20:57.254807   56414 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-4fq5v" in "kube-system" namespace to be "Ready" ...
	I0501 03:20:57.652530   56414 pod_ready.go:97] node "test-preload-872415" hosting pod "kube-proxy-4fq5v" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-872415" has status "Ready":"False"
	I0501 03:20:57.652561   56414 pod_ready.go:81] duration metric: took 397.735172ms for pod "kube-proxy-4fq5v" in "kube-system" namespace to be "Ready" ...
	E0501 03:20:57.652570   56414 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-872415" hosting pod "kube-proxy-4fq5v" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-872415" has status "Ready":"False"
	I0501 03:20:57.652576   56414 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-test-preload-872415" in "kube-system" namespace to be "Ready" ...
	I0501 03:20:58.053344   56414 pod_ready.go:97] node "test-preload-872415" hosting pod "kube-scheduler-test-preload-872415" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-872415" has status "Ready":"False"
	I0501 03:20:58.053377   56414 pod_ready.go:81] duration metric: took 400.792932ms for pod "kube-scheduler-test-preload-872415" in "kube-system" namespace to be "Ready" ...
	E0501 03:20:58.053390   56414 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-872415" hosting pod "kube-scheduler-test-preload-872415" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-872415" has status "Ready":"False"
	I0501 03:20:58.053401   56414 pod_ready.go:38] duration metric: took 1.399911188s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 03:20:58.053418   56414 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0501 03:20:58.066416   56414 ops.go:34] apiserver oom_adj: -16
	I0501 03:20:58.066443   56414 kubeadm.go:591] duration metric: took 9.573557976s to restartPrimaryControlPlane
	I0501 03:20:58.066454   56414 kubeadm.go:393] duration metric: took 9.624377358s to StartCluster
	I0501 03:20:58.066472   56414 settings.go:142] acquiring lock: {Name:mkcfe08bcadd45d99c00ed1eaf4e9c226a489fe2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:20:58.066552   56414 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18779-13391/kubeconfig
	I0501 03:20:58.067169   56414 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/kubeconfig: {Name:mk5d1131557f885800877622151c0915337cb23a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:20:58.067385   56414 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.71 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0501 03:20:58.069171   56414 out.go:177] * Verifying Kubernetes components...
	I0501 03:20:58.067452   56414 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0501 03:20:58.067567   56414 config.go:182] Loaded profile config "test-preload-872415": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0501 03:20:58.070353   56414 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:20:58.070377   56414 addons.go:69] Setting default-storageclass=true in profile "test-preload-872415"
	I0501 03:20:58.070356   56414 addons.go:69] Setting storage-provisioner=true in profile "test-preload-872415"
	I0501 03:20:58.070417   56414 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-872415"
	I0501 03:20:58.070453   56414 addons.go:234] Setting addon storage-provisioner=true in "test-preload-872415"
	W0501 03:20:58.070468   56414 addons.go:243] addon storage-provisioner should already be in state true
	I0501 03:20:58.070505   56414 host.go:66] Checking if "test-preload-872415" exists ...
	I0501 03:20:58.070747   56414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:20:58.070784   56414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:20:58.070835   56414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:20:58.070877   56414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:20:58.085200   56414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40557
	I0501 03:20:58.085390   56414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39445
	I0501 03:20:58.085575   56414 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:20:58.085793   56414 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:20:58.086029   56414 main.go:141] libmachine: Using API Version  1
	I0501 03:20:58.086047   56414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:20:58.086259   56414 main.go:141] libmachine: Using API Version  1
	I0501 03:20:58.086289   56414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:20:58.086385   56414 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:20:58.086583   56414 main.go:141] libmachine: (test-preload-872415) Calling .GetState
	I0501 03:20:58.086721   56414 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:20:58.087279   56414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:20:58.087320   56414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:20:58.088960   56414 kapi.go:59] client config for test-preload-872415: &rest.Config{Host:"https://192.168.39.71:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18779-13391/.minikube/profiles/test-preload-872415/client.crt", KeyFile:"/home/jenkins/minikube-integration/18779-13391/.minikube/profiles/test-preload-872415/client.key", CAFile:"/home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02700), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0501 03:20:58.089273   56414 addons.go:234] Setting addon default-storageclass=true in "test-preload-872415"
	W0501 03:20:58.089289   56414 addons.go:243] addon default-storageclass should already be in state true
	I0501 03:20:58.089317   56414 host.go:66] Checking if "test-preload-872415" exists ...
	I0501 03:20:58.089680   56414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:20:58.089723   56414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:20:58.101605   56414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33351
	I0501 03:20:58.102045   56414 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:20:58.102516   56414 main.go:141] libmachine: Using API Version  1
	I0501 03:20:58.102536   56414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:20:58.102851   56414 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:20:58.103004   56414 main.go:141] libmachine: (test-preload-872415) Calling .GetState
	I0501 03:20:58.103389   56414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46011
	I0501 03:20:58.103745   56414 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:20:58.104191   56414 main.go:141] libmachine: Using API Version  1
	I0501 03:20:58.104209   56414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:20:58.104604   56414 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:20:58.104664   56414 main.go:141] libmachine: (test-preload-872415) Calling .DriverName
	I0501 03:20:58.105148   56414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:20:58.105185   56414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:20:58.107193   56414 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 03:20:58.108591   56414 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0501 03:20:58.108611   56414 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0501 03:20:58.108629   56414 main.go:141] libmachine: (test-preload-872415) Calling .GetSSHHostname
	I0501 03:20:58.111543   56414 main.go:141] libmachine: (test-preload-872415) DBG | domain test-preload-872415 has defined MAC address 52:54:00:5c:81:4c in network mk-test-preload-872415
	I0501 03:20:58.111957   56414 main.go:141] libmachine: (test-preload-872415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:81:4c", ip: ""} in network mk-test-preload-872415: {Iface:virbr1 ExpiryTime:2024-05-01 04:17:03 +0000 UTC Type:0 Mac:52:54:00:5c:81:4c Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:test-preload-872415 Clientid:01:52:54:00:5c:81:4c}
	I0501 03:20:58.112008   56414 main.go:141] libmachine: (test-preload-872415) DBG | domain test-preload-872415 has defined IP address 192.168.39.71 and MAC address 52:54:00:5c:81:4c in network mk-test-preload-872415
	I0501 03:20:58.112117   56414 main.go:141] libmachine: (test-preload-872415) Calling .GetSSHPort
	I0501 03:20:58.112271   56414 main.go:141] libmachine: (test-preload-872415) Calling .GetSSHKeyPath
	I0501 03:20:58.112442   56414 main.go:141] libmachine: (test-preload-872415) Calling .GetSSHUsername
	I0501 03:20:58.112546   56414 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/test-preload-872415/id_rsa Username:docker}
	I0501 03:20:58.119081   56414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34013
	I0501 03:20:58.119419   56414 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:20:58.119824   56414 main.go:141] libmachine: Using API Version  1
	I0501 03:20:58.119845   56414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:20:58.120178   56414 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:20:58.120354   56414 main.go:141] libmachine: (test-preload-872415) Calling .GetState
	I0501 03:20:58.121887   56414 main.go:141] libmachine: (test-preload-872415) Calling .DriverName
	I0501 03:20:58.122161   56414 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0501 03:20:58.122179   56414 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0501 03:20:58.122195   56414 main.go:141] libmachine: (test-preload-872415) Calling .GetSSHHostname
	I0501 03:20:58.124779   56414 main.go:141] libmachine: (test-preload-872415) DBG | domain test-preload-872415 has defined MAC address 52:54:00:5c:81:4c in network mk-test-preload-872415
	I0501 03:20:58.125133   56414 main.go:141] libmachine: (test-preload-872415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:81:4c", ip: ""} in network mk-test-preload-872415: {Iface:virbr1 ExpiryTime:2024-05-01 04:17:03 +0000 UTC Type:0 Mac:52:54:00:5c:81:4c Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:test-preload-872415 Clientid:01:52:54:00:5c:81:4c}
	I0501 03:20:58.125155   56414 main.go:141] libmachine: (test-preload-872415) DBG | domain test-preload-872415 has defined IP address 192.168.39.71 and MAC address 52:54:00:5c:81:4c in network mk-test-preload-872415
	I0501 03:20:58.125277   56414 main.go:141] libmachine: (test-preload-872415) Calling .GetSSHPort
	I0501 03:20:58.125439   56414 main.go:141] libmachine: (test-preload-872415) Calling .GetSSHKeyPath
	I0501 03:20:58.125572   56414 main.go:141] libmachine: (test-preload-872415) Calling .GetSSHUsername
	I0501 03:20:58.125716   56414 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/test-preload-872415/id_rsa Username:docker}
	I0501 03:20:58.259370   56414 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 03:20:58.277043   56414 node_ready.go:35] waiting up to 6m0s for node "test-preload-872415" to be "Ready" ...
	I0501 03:20:58.362945   56414 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0501 03:20:58.383132   56414 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0501 03:20:59.357714   56414 main.go:141] libmachine: Making call to close driver server
	I0501 03:20:59.357744   56414 main.go:141] libmachine: (test-preload-872415) Calling .Close
	I0501 03:20:59.357936   56414 main.go:141] libmachine: Making call to close driver server
	I0501 03:20:59.357960   56414 main.go:141] libmachine: (test-preload-872415) Calling .Close
	I0501 03:20:59.358066   56414 main.go:141] libmachine: (test-preload-872415) DBG | Closing plugin on server side
	I0501 03:20:59.358122   56414 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:20:59.358135   56414 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:20:59.358151   56414 main.go:141] libmachine: Making call to close driver server
	I0501 03:20:59.358160   56414 main.go:141] libmachine: (test-preload-872415) Calling .Close
	I0501 03:20:59.358217   56414 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:20:59.358225   56414 main.go:141] libmachine: (test-preload-872415) DBG | Closing plugin on server side
	I0501 03:20:59.358232   56414 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:20:59.358243   56414 main.go:141] libmachine: Making call to close driver server
	I0501 03:20:59.358264   56414 main.go:141] libmachine: (test-preload-872415) Calling .Close
	I0501 03:20:59.358427   56414 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:20:59.358471   56414 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:20:59.358549   56414 main.go:141] libmachine: (test-preload-872415) DBG | Closing plugin on server side
	I0501 03:20:59.358552   56414 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:20:59.358567   56414 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:20:59.365543   56414 main.go:141] libmachine: Making call to close driver server
	I0501 03:20:59.365560   56414 main.go:141] libmachine: (test-preload-872415) Calling .Close
	I0501 03:20:59.365813   56414 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:20:59.365833   56414 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:20:59.365841   56414 main.go:141] libmachine: (test-preload-872415) DBG | Closing plugin on server side
	I0501 03:20:59.367932   56414 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0501 03:20:59.369274   56414 addons.go:505] duration metric: took 1.301821142s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0501 03:21:00.280499   56414 node_ready.go:53] node "test-preload-872415" has status "Ready":"False"
	I0501 03:21:02.281322   56414 node_ready.go:53] node "test-preload-872415" has status "Ready":"False"
	I0501 03:21:04.783377   56414 node_ready.go:53] node "test-preload-872415" has status "Ready":"False"
	I0501 03:21:05.281026   56414 node_ready.go:49] node "test-preload-872415" has status "Ready":"True"
	I0501 03:21:05.281047   56414 node_ready.go:38] duration metric: took 7.00396994s for node "test-preload-872415" to be "Ready" ...
	I0501 03:21:05.281056   56414 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 03:21:05.285975   56414 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-m6r88" in "kube-system" namespace to be "Ready" ...
	I0501 03:21:05.291647   56414 pod_ready.go:92] pod "coredns-6d4b75cb6d-m6r88" in "kube-system" namespace has status "Ready":"True"
	I0501 03:21:05.291672   56414 pod_ready.go:81] duration metric: took 5.670927ms for pod "coredns-6d4b75cb6d-m6r88" in "kube-system" namespace to be "Ready" ...
	I0501 03:21:05.291710   56414 pod_ready.go:78] waiting up to 6m0s for pod "etcd-test-preload-872415" in "kube-system" namespace to be "Ready" ...
	I0501 03:21:07.301493   56414 pod_ready.go:102] pod "etcd-test-preload-872415" in "kube-system" namespace has status "Ready":"False"
	I0501 03:21:08.799258   56414 pod_ready.go:92] pod "etcd-test-preload-872415" in "kube-system" namespace has status "Ready":"True"
	I0501 03:21:08.799291   56414 pod_ready.go:81] duration metric: took 3.507570285s for pod "etcd-test-preload-872415" in "kube-system" namespace to be "Ready" ...
	I0501 03:21:08.799303   56414 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-test-preload-872415" in "kube-system" namespace to be "Ready" ...
	I0501 03:21:08.804940   56414 pod_ready.go:92] pod "kube-apiserver-test-preload-872415" in "kube-system" namespace has status "Ready":"True"
	I0501 03:21:08.804959   56414 pod_ready.go:81] duration metric: took 5.648825ms for pod "kube-apiserver-test-preload-872415" in "kube-system" namespace to be "Ready" ...
	I0501 03:21:08.804967   56414 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-test-preload-872415" in "kube-system" namespace to be "Ready" ...
	I0501 03:21:08.809098   56414 pod_ready.go:92] pod "kube-controller-manager-test-preload-872415" in "kube-system" namespace has status "Ready":"True"
	I0501 03:21:08.809115   56414 pod_ready.go:81] duration metric: took 4.143022ms for pod "kube-controller-manager-test-preload-872415" in "kube-system" namespace to be "Ready" ...
	I0501 03:21:08.809124   56414 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4fq5v" in "kube-system" namespace to be "Ready" ...
	I0501 03:21:08.813583   56414 pod_ready.go:92] pod "kube-proxy-4fq5v" in "kube-system" namespace has status "Ready":"True"
	I0501 03:21:08.813606   56414 pod_ready.go:81] duration metric: took 4.474965ms for pod "kube-proxy-4fq5v" in "kube-system" namespace to be "Ready" ...
	I0501 03:21:08.813621   56414 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-test-preload-872415" in "kube-system" namespace to be "Ready" ...
	I0501 03:21:09.321897   56414 pod_ready.go:92] pod "kube-scheduler-test-preload-872415" in "kube-system" namespace has status "Ready":"True"
	I0501 03:21:09.321919   56414 pod_ready.go:81] duration metric: took 508.291043ms for pod "kube-scheduler-test-preload-872415" in "kube-system" namespace to be "Ready" ...
	I0501 03:21:09.321928   56414 pod_ready.go:38] duration metric: took 4.040858555s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 03:21:09.321942   56414 api_server.go:52] waiting for apiserver process to appear ...
	I0501 03:21:09.321992   56414 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:21:09.338339   56414 api_server.go:72] duration metric: took 11.270929711s to wait for apiserver process to appear ...
	I0501 03:21:09.338357   56414 api_server.go:88] waiting for apiserver healthz status ...
	I0501 03:21:09.338371   56414 api_server.go:253] Checking apiserver healthz at https://192.168.39.71:8443/healthz ...
	I0501 03:21:09.343752   56414 api_server.go:279] https://192.168.39.71:8443/healthz returned 200:
	ok
	I0501 03:21:09.345250   56414 api_server.go:141] control plane version: v1.24.4
	I0501 03:21:09.345275   56414 api_server.go:131] duration metric: took 6.912426ms to wait for apiserver health ...
	I0501 03:21:09.345288   56414 system_pods.go:43] waiting for kube-system pods to appear ...
	I0501 03:21:09.486295   56414 system_pods.go:59] 7 kube-system pods found
	I0501 03:21:09.486323   56414 system_pods.go:61] "coredns-6d4b75cb6d-m6r88" [9ba11eba-fbc9-4c10-8073-3c09ac1e1d6e] Running
	I0501 03:21:09.486327   56414 system_pods.go:61] "etcd-test-preload-872415" [2bcbe5e4-43bf-47a6-94f4-26159d4b0647] Running
	I0501 03:21:09.486335   56414 system_pods.go:61] "kube-apiserver-test-preload-872415" [e7d568a3-08b4-4058-af0b-8d05bf2431d7] Running
	I0501 03:21:09.486339   56414 system_pods.go:61] "kube-controller-manager-test-preload-872415" [d35c632c-07f2-49eb-96cb-c3c8cdc0f61b] Running
	I0501 03:21:09.486342   56414 system_pods.go:61] "kube-proxy-4fq5v" [ed9f2c83-5c71-4ee3-8182-1be6f2b45d4f] Running
	I0501 03:21:09.486345   56414 system_pods.go:61] "kube-scheduler-test-preload-872415" [69d2eec0-626e-4057-9307-31f9d4757318] Running
	I0501 03:21:09.486348   56414 system_pods.go:61] "storage-provisioner" [561fb514-2639-47f8-8a19-0a757c3993d8] Running
	I0501 03:21:09.486353   56414 system_pods.go:74] duration metric: took 141.059542ms to wait for pod list to return data ...
	I0501 03:21:09.486359   56414 default_sa.go:34] waiting for default service account to be created ...
	I0501 03:21:09.680740   56414 default_sa.go:45] found service account: "default"
	I0501 03:21:09.680766   56414 default_sa.go:55] duration metric: took 194.401313ms for default service account to be created ...
	I0501 03:21:09.680775   56414 system_pods.go:116] waiting for k8s-apps to be running ...
	I0501 03:21:09.884168   56414 system_pods.go:86] 7 kube-system pods found
	I0501 03:21:09.884199   56414 system_pods.go:89] "coredns-6d4b75cb6d-m6r88" [9ba11eba-fbc9-4c10-8073-3c09ac1e1d6e] Running
	I0501 03:21:09.884205   56414 system_pods.go:89] "etcd-test-preload-872415" [2bcbe5e4-43bf-47a6-94f4-26159d4b0647] Running
	I0501 03:21:09.884209   56414 system_pods.go:89] "kube-apiserver-test-preload-872415" [e7d568a3-08b4-4058-af0b-8d05bf2431d7] Running
	I0501 03:21:09.884213   56414 system_pods.go:89] "kube-controller-manager-test-preload-872415" [d35c632c-07f2-49eb-96cb-c3c8cdc0f61b] Running
	I0501 03:21:09.884222   56414 system_pods.go:89] "kube-proxy-4fq5v" [ed9f2c83-5c71-4ee3-8182-1be6f2b45d4f] Running
	I0501 03:21:09.884227   56414 system_pods.go:89] "kube-scheduler-test-preload-872415" [69d2eec0-626e-4057-9307-31f9d4757318] Running
	I0501 03:21:09.884230   56414 system_pods.go:89] "storage-provisioner" [561fb514-2639-47f8-8a19-0a757c3993d8] Running
	I0501 03:21:09.884236   56414 system_pods.go:126] duration metric: took 203.457034ms to wait for k8s-apps to be running ...
	I0501 03:21:09.884242   56414 system_svc.go:44] waiting for kubelet service to be running ....
	I0501 03:21:09.884281   56414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 03:21:09.921231   56414 system_svc.go:56] duration metric: took 36.980725ms WaitForService to wait for kubelet
	I0501 03:21:09.921265   56414 kubeadm.go:576] duration metric: took 11.853856869s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0501 03:21:09.921286   56414 node_conditions.go:102] verifying NodePressure condition ...
	I0501 03:21:10.082632   56414 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 03:21:10.082663   56414 node_conditions.go:123] node cpu capacity is 2
	I0501 03:21:10.082680   56414 node_conditions.go:105] duration metric: took 161.388313ms to run NodePressure ...
	I0501 03:21:10.082693   56414 start.go:240] waiting for startup goroutines ...
	I0501 03:21:10.082701   56414 start.go:245] waiting for cluster config update ...
	I0501 03:21:10.082713   56414 start.go:254] writing updated cluster config ...
	I0501 03:21:10.083001   56414 ssh_runner.go:195] Run: rm -f paused
	I0501 03:21:10.129507   56414 start.go:600] kubectl: 1.30.0, cluster: 1.24.4 (minor skew: 6)
	I0501 03:21:10.131745   56414 out.go:177] 
	W0501 03:21:10.133402   56414 out.go:239] ! /usr/local/bin/kubectl is version 1.30.0, which may have incompatibilities with Kubernetes 1.24.4.
	I0501 03:21:10.134941   56414 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0501 03:21:10.136189   56414 out.go:177] * Done! kubectl is now configured to use "test-preload-872415" cluster and "default" namespace by default
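
The health and readiness checks in the log above can be reproduced by hand against the same profile. A minimal sketch, assuming the node IP from the log (192.168.39.71) is reachable from the host and that the kubeconfig context carries the profile name (minikube's default); the curl/kubectl/minikube flags below are standard usage, not commands taken from the test harness:

	# poll the apiserver endpoint that api_server.go checks (-k: the serving cert is signed by the cluster CA, not a public one)
	curl -k https://192.168.39.71:8443/healthz
	# wait for the same system-critical pods the pod_ready.go loop watches
	kubectl --context test-preload-872415 wait --for=condition=ready --namespace=kube-system pod --selector=k8s-app=kube-dns --timeout=240s
	# confirm the client/server skew behind the "minor skew: 6" warning above
	kubectl version --client
	minikube -p test-preload-872415 kubectl -- version --client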
	
	
	==> CRI-O <==
	May 01 03:21:11 test-preload-872415 crio[702]: time="2024-05-01 03:21:11.094729242Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714533671094700630,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7e7b22b7-c2bb-4500-ba25-94298f58bbd5 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 03:21:11 test-preload-872415 crio[702]: time="2024-05-01 03:21:11.095393835Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=14ec1049-7c91-4045-8088-95988f841ad0 name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:21:11 test-preload-872415 crio[702]: time="2024-05-01 03:21:11.095452843Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=14ec1049-7c91-4045-8088-95988f841ad0 name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:21:11 test-preload-872415 crio[702]: time="2024-05-01 03:21:11.095649836Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:41e412a27af92e8b402533d376e372d52140628f539877cfd421556c8f92b7a8,PodSandboxId:19e5e50571deac3f0ee648f1be33910ecc7814dd05fc553a17386dc0ce5f3d50,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1714533664108598183,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-m6r88,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ba11eba-fbc9-4c10-8073-3c09ac1e1d6e,},Annotations:map[string]string{io.kubernetes.container.hash: f105bf21,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7e47fd7ecc8c6e2bfe7262ae0ffbe104d5fcd1f4d59eaed33a45d256265f0cc,PodSandboxId:e3f68e8c7e94ecc0996151ba661ffd33ba174431842d81aaa4fd65a9ef009eb6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1714533656880382203,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4fq5v,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: ed9f2c83-5c71-4ee3-8182-1be6f2b45d4f,},Annotations:map[string]string{io.kubernetes.container.hash: 70d22b36,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b50a1ed385b3672c4fda569058e20b2505f0a6531357ca4808b3468345c191ed,PodSandboxId:6b606cb9be70d526c355fbbf936d65403135f7065b494cace959c7e5405619b0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714533656914517312,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56
1fb514-2639-47f8-8a19-0a757c3993d8,},Annotations:map[string]string{io.kubernetes.container.hash: 20ca1d0e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37b67fe3a8add0baf9308d5afcc75a442512d6f21d9e8d222f407e4e52cf4024,PodSandboxId:0031f929ff580fb20093cae48a3aa42355e4ce19f51260e65fc48ac188cab666,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1714533650655226341,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-872415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13b1d877fcf9ea36dd1e244b11cc371f,},Anno
tations:map[string]string{io.kubernetes.container.hash: c6ebc495,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dfd95327a016d330238f36df8a8dfea62dd0ecc0fa983938d96bc63c2dda33bb,PodSandboxId:53babc70d277f0e2e9538fac6e7ae4a0f1e3aa56b7c815200539eacdffd3e15c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1714533650683274523,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-872415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49bdb108bc70c81d0222b111205e3c1c,},Annotations:map
[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:089f16191e2df997b6460baf7f8254b11506fcfc0b8521270579f77faa697fe3,PodSandboxId:9851c8f2555f2649accf0fbb17ac2225cc31152061ed03afee2aca85dfd1d9ea,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1714533650620551805,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-872415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 128dfe06279ccfed3a2b8447a2b5484d,}
,Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:835a05d68df5fde2bc7db70f0d9317d94b03525c3fa9db043e9c655c4bfa8a89,PodSandboxId:c678e32413eb4ccec3760b0770e095067699c173645345df2df30d85891efa9b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1714533650517248408,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-872415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80b63f6abd72ff12507ec4ba9c3a7906,},Annotation
s:map[string]string{io.kubernetes.container.hash: e78960ef,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=14ec1049-7c91-4045-8088-95988f841ad0 name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:21:11 test-preload-872415 crio[702]: time="2024-05-01 03:21:11.138438181Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1498d143-4ef4-4758-8bff-94861af3ab16 name=/runtime.v1.RuntimeService/Version
	May 01 03:21:11 test-preload-872415 crio[702]: time="2024-05-01 03:21:11.138507784Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1498d143-4ef4-4758-8bff-94861af3ab16 name=/runtime.v1.RuntimeService/Version
	May 01 03:21:11 test-preload-872415 crio[702]: time="2024-05-01 03:21:11.140776461Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fb546304-5772-4ffd-a23e-7401bbef3dc8 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 03:21:11 test-preload-872415 crio[702]: time="2024-05-01 03:21:11.141348517Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714533671141322438,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fb546304-5772-4ffd-a23e-7401bbef3dc8 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 03:21:11 test-preload-872415 crio[702]: time="2024-05-01 03:21:11.143774709Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d11f98fe-a4dc-4052-afc2-f3c0cda726ee name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:21:11 test-preload-872415 crio[702]: time="2024-05-01 03:21:11.143937769Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d11f98fe-a4dc-4052-afc2-f3c0cda726ee name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:21:11 test-preload-872415 crio[702]: time="2024-05-01 03:21:11.144131765Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:41e412a27af92e8b402533d376e372d52140628f539877cfd421556c8f92b7a8,PodSandboxId:19e5e50571deac3f0ee648f1be33910ecc7814dd05fc553a17386dc0ce5f3d50,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1714533664108598183,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-m6r88,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ba11eba-fbc9-4c10-8073-3c09ac1e1d6e,},Annotations:map[string]string{io.kubernetes.container.hash: f105bf21,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7e47fd7ecc8c6e2bfe7262ae0ffbe104d5fcd1f4d59eaed33a45d256265f0cc,PodSandboxId:e3f68e8c7e94ecc0996151ba661ffd33ba174431842d81aaa4fd65a9ef009eb6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1714533656880382203,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4fq5v,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: ed9f2c83-5c71-4ee3-8182-1be6f2b45d4f,},Annotations:map[string]string{io.kubernetes.container.hash: 70d22b36,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b50a1ed385b3672c4fda569058e20b2505f0a6531357ca4808b3468345c191ed,PodSandboxId:6b606cb9be70d526c355fbbf936d65403135f7065b494cace959c7e5405619b0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714533656914517312,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56
1fb514-2639-47f8-8a19-0a757c3993d8,},Annotations:map[string]string{io.kubernetes.container.hash: 20ca1d0e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37b67fe3a8add0baf9308d5afcc75a442512d6f21d9e8d222f407e4e52cf4024,PodSandboxId:0031f929ff580fb20093cae48a3aa42355e4ce19f51260e65fc48ac188cab666,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1714533650655226341,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-872415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13b1d877fcf9ea36dd1e244b11cc371f,},Anno
tations:map[string]string{io.kubernetes.container.hash: c6ebc495,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dfd95327a016d330238f36df8a8dfea62dd0ecc0fa983938d96bc63c2dda33bb,PodSandboxId:53babc70d277f0e2e9538fac6e7ae4a0f1e3aa56b7c815200539eacdffd3e15c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1714533650683274523,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-872415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49bdb108bc70c81d0222b111205e3c1c,},Annotations:map
[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:089f16191e2df997b6460baf7f8254b11506fcfc0b8521270579f77faa697fe3,PodSandboxId:9851c8f2555f2649accf0fbb17ac2225cc31152061ed03afee2aca85dfd1d9ea,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1714533650620551805,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-872415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 128dfe06279ccfed3a2b8447a2b5484d,}
,Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:835a05d68df5fde2bc7db70f0d9317d94b03525c3fa9db043e9c655c4bfa8a89,PodSandboxId:c678e32413eb4ccec3760b0770e095067699c173645345df2df30d85891efa9b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1714533650517248408,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-872415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80b63f6abd72ff12507ec4ba9c3a7906,},Annotation
s:map[string]string{io.kubernetes.container.hash: e78960ef,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d11f98fe-a4dc-4052-afc2-f3c0cda726ee name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:21:11 test-preload-872415 crio[702]: time="2024-05-01 03:21:11.187614614Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d47672ba-5edf-4f11-9f5e-0cd1cdf56e19 name=/runtime.v1.RuntimeService/Version
	May 01 03:21:11 test-preload-872415 crio[702]: time="2024-05-01 03:21:11.187686145Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d47672ba-5edf-4f11-9f5e-0cd1cdf56e19 name=/runtime.v1.RuntimeService/Version
	May 01 03:21:11 test-preload-872415 crio[702]: time="2024-05-01 03:21:11.189031049Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ef1f9d82-1c13-42dc-8f01-7e6446bcf545 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 03:21:11 test-preload-872415 crio[702]: time="2024-05-01 03:21:11.189448716Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714533671189426511,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ef1f9d82-1c13-42dc-8f01-7e6446bcf545 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 03:21:11 test-preload-872415 crio[702]: time="2024-05-01 03:21:11.190005137Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4f6fbcdd-8eb9-477e-8a7c-d921a5d0eb32 name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:21:11 test-preload-872415 crio[702]: time="2024-05-01 03:21:11.190061359Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4f6fbcdd-8eb9-477e-8a7c-d921a5d0eb32 name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:21:11 test-preload-872415 crio[702]: time="2024-05-01 03:21:11.190278327Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:41e412a27af92e8b402533d376e372d52140628f539877cfd421556c8f92b7a8,PodSandboxId:19e5e50571deac3f0ee648f1be33910ecc7814dd05fc553a17386dc0ce5f3d50,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1714533664108598183,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-m6r88,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ba11eba-fbc9-4c10-8073-3c09ac1e1d6e,},Annotations:map[string]string{io.kubernetes.container.hash: f105bf21,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7e47fd7ecc8c6e2bfe7262ae0ffbe104d5fcd1f4d59eaed33a45d256265f0cc,PodSandboxId:e3f68e8c7e94ecc0996151ba661ffd33ba174431842d81aaa4fd65a9ef009eb6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1714533656880382203,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4fq5v,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: ed9f2c83-5c71-4ee3-8182-1be6f2b45d4f,},Annotations:map[string]string{io.kubernetes.container.hash: 70d22b36,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b50a1ed385b3672c4fda569058e20b2505f0a6531357ca4808b3468345c191ed,PodSandboxId:6b606cb9be70d526c355fbbf936d65403135f7065b494cace959c7e5405619b0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714533656914517312,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56
1fb514-2639-47f8-8a19-0a757c3993d8,},Annotations:map[string]string{io.kubernetes.container.hash: 20ca1d0e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37b67fe3a8add0baf9308d5afcc75a442512d6f21d9e8d222f407e4e52cf4024,PodSandboxId:0031f929ff580fb20093cae48a3aa42355e4ce19f51260e65fc48ac188cab666,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1714533650655226341,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-872415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13b1d877fcf9ea36dd1e244b11cc371f,},Anno
tations:map[string]string{io.kubernetes.container.hash: c6ebc495,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dfd95327a016d330238f36df8a8dfea62dd0ecc0fa983938d96bc63c2dda33bb,PodSandboxId:53babc70d277f0e2e9538fac6e7ae4a0f1e3aa56b7c815200539eacdffd3e15c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1714533650683274523,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-872415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49bdb108bc70c81d0222b111205e3c1c,},Annotations:map
[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:089f16191e2df997b6460baf7f8254b11506fcfc0b8521270579f77faa697fe3,PodSandboxId:9851c8f2555f2649accf0fbb17ac2225cc31152061ed03afee2aca85dfd1d9ea,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1714533650620551805,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-872415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 128dfe06279ccfed3a2b8447a2b5484d,}
,Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:835a05d68df5fde2bc7db70f0d9317d94b03525c3fa9db043e9c655c4bfa8a89,PodSandboxId:c678e32413eb4ccec3760b0770e095067699c173645345df2df30d85891efa9b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1714533650517248408,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-872415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80b63f6abd72ff12507ec4ba9c3a7906,},Annotation
s:map[string]string{io.kubernetes.container.hash: e78960ef,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4f6fbcdd-8eb9-477e-8a7c-d921a5d0eb32 name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:21:11 test-preload-872415 crio[702]: time="2024-05-01 03:21:11.230150847Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9076d186-8270-4076-9898-061c411c4b76 name=/runtime.v1.RuntimeService/Version
	May 01 03:21:11 test-preload-872415 crio[702]: time="2024-05-01 03:21:11.230223627Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9076d186-8270-4076-9898-061c411c4b76 name=/runtime.v1.RuntimeService/Version
	May 01 03:21:11 test-preload-872415 crio[702]: time="2024-05-01 03:21:11.231788748Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=42001be6-bfb7-4647-a107-ed544f11ae60 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 03:21:11 test-preload-872415 crio[702]: time="2024-05-01 03:21:11.232296766Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714533671232275763,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=42001be6-bfb7-4647-a107-ed544f11ae60 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 03:21:11 test-preload-872415 crio[702]: time="2024-05-01 03:21:11.232994289Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8e9e5694-449a-41b5-b587-316a5b77a87f name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:21:11 test-preload-872415 crio[702]: time="2024-05-01 03:21:11.233077087Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8e9e5694-449a-41b5-b587-316a5b77a87f name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:21:11 test-preload-872415 crio[702]: time="2024-05-01 03:21:11.233232292Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:41e412a27af92e8b402533d376e372d52140628f539877cfd421556c8f92b7a8,PodSandboxId:19e5e50571deac3f0ee648f1be33910ecc7814dd05fc553a17386dc0ce5f3d50,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1714533664108598183,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-m6r88,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ba11eba-fbc9-4c10-8073-3c09ac1e1d6e,},Annotations:map[string]string{io.kubernetes.container.hash: f105bf21,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7e47fd7ecc8c6e2bfe7262ae0ffbe104d5fcd1f4d59eaed33a45d256265f0cc,PodSandboxId:e3f68e8c7e94ecc0996151ba661ffd33ba174431842d81aaa4fd65a9ef009eb6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1714533656880382203,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4fq5v,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: ed9f2c83-5c71-4ee3-8182-1be6f2b45d4f,},Annotations:map[string]string{io.kubernetes.container.hash: 70d22b36,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b50a1ed385b3672c4fda569058e20b2505f0a6531357ca4808b3468345c191ed,PodSandboxId:6b606cb9be70d526c355fbbf936d65403135f7065b494cace959c7e5405619b0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714533656914517312,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56
1fb514-2639-47f8-8a19-0a757c3993d8,},Annotations:map[string]string{io.kubernetes.container.hash: 20ca1d0e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37b67fe3a8add0baf9308d5afcc75a442512d6f21d9e8d222f407e4e52cf4024,PodSandboxId:0031f929ff580fb20093cae48a3aa42355e4ce19f51260e65fc48ac188cab666,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1714533650655226341,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-872415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13b1d877fcf9ea36dd1e244b11cc371f,},Anno
tations:map[string]string{io.kubernetes.container.hash: c6ebc495,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dfd95327a016d330238f36df8a8dfea62dd0ecc0fa983938d96bc63c2dda33bb,PodSandboxId:53babc70d277f0e2e9538fac6e7ae4a0f1e3aa56b7c815200539eacdffd3e15c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1714533650683274523,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-872415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49bdb108bc70c81d0222b111205e3c1c,},Annotations:map
[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:089f16191e2df997b6460baf7f8254b11506fcfc0b8521270579f77faa697fe3,PodSandboxId:9851c8f2555f2649accf0fbb17ac2225cc31152061ed03afee2aca85dfd1d9ea,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1714533650620551805,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-872415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 128dfe06279ccfed3a2b8447a2b5484d,}
,Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:835a05d68df5fde2bc7db70f0d9317d94b03525c3fa9db043e9c655c4bfa8a89,PodSandboxId:c678e32413eb4ccec3760b0770e095067699c173645345df2df30d85891efa9b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1714533650517248408,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-872415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80b63f6abd72ff12507ec4ba9c3a7906,},Annotation
s:map[string]string{io.kubernetes.container.hash: e78960ef,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8e9e5694-449a-41b5-b587-316a5b77a87f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	41e412a27af92       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   7 seconds ago       Running             coredns                   1                   19e5e50571dea       coredns-6d4b75cb6d-m6r88
	b50a1ed385b36       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 seconds ago      Running             storage-provisioner       1                   6b606cb9be70d       storage-provisioner
	d7e47fd7ecc8c       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   14 seconds ago      Running             kube-proxy                1                   e3f68e8c7e94e       kube-proxy-4fq5v
	dfd95327a016d       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   20 seconds ago      Running             kube-scheduler            1                   53babc70d277f       kube-scheduler-test-preload-872415
	37b67fe3a8add       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   20 seconds ago      Running             etcd                      1                   0031f929ff580       etcd-test-preload-872415
	089f16191e2df       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   20 seconds ago      Running             kube-controller-manager   1                   9851c8f2555f2       kube-controller-manager-test-preload-872415
	835a05d68df5f       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   20 seconds ago      Running             kube-apiserver            1                   c678e32413eb4       kube-apiserver-test-preload-872415
	
	
	==> coredns [41e412a27af92e8b402533d376e372d52140628f539877cfd421556c8f92b7a8] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:60854 - 18069 "HINFO IN 440430274095873179.88662585271155021. udp 54 false 512" NXDOMAIN qr,rd,ra 54 0.021430681s
	
	
	==> describe nodes <==
	Name:               test-preload-872415
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-872415
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e
	                    minikube.k8s.io/name=test-preload-872415
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_01T03_19_29_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 May 2024 03:19:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-872415
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 May 2024 03:21:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 May 2024 03:21:05 +0000   Wed, 01 May 2024 03:19:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 May 2024 03:21:05 +0000   Wed, 01 May 2024 03:19:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 May 2024 03:21:05 +0000   Wed, 01 May 2024 03:19:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 May 2024 03:21:05 +0000   Wed, 01 May 2024 03:21:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.71
	  Hostname:    test-preload-872415
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c6d9093bf377435698d22ecbdf21ac90
	  System UUID:                c6d9093b-f377-4356-98d2-2ecbdf21ac90
	  Boot ID:                    1b9fcd04-e671-449f-b038-8f4bfe4354aa
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-m6r88                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     88s
	  kube-system                 etcd-test-preload-872415                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         101s
	  kube-system                 kube-apiserver-test-preload-872415             250m (12%)    0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 kube-controller-manager-test-preload-872415    200m (10%)    0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 kube-proxy-4fq5v                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 kube-scheduler-test-preload-872415             100m (5%)     0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14s                kube-proxy       
	  Normal  Starting                 85s                kube-proxy       
	  Normal  Starting                 102s               kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  102s               kubelet          Node test-preload-872415 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    102s               kubelet          Node test-preload-872415 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     102s               kubelet          Node test-preload-872415 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  101s               kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                91s                kubelet          Node test-preload-872415 status is now: NodeReady
	  Normal  RegisteredNode           89s                node-controller  Node test-preload-872415 event: Registered Node test-preload-872415 in Controller
	  Normal  Starting                 22s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  22s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21s (x8 over 22s)  kubelet          Node test-preload-872415 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21s (x8 over 22s)  kubelet          Node test-preload-872415 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21s (x7 over 22s)  kubelet          Node test-preload-872415 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4s                 node-controller  Node test-preload-872415 event: Registered Node test-preload-872415 in Controller
	
	
	==> dmesg <==
	[May 1 03:20] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.055585] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.045057] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.646912] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.536063] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.697958] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000014] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.259167] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +0.061282] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056465] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +0.181474] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	[  +0.149434] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[  +0.322779] systemd-fstab-generator[686]: Ignoring "noauto" option for root device
	[ +12.833442] systemd-fstab-generator[963]: Ignoring "noauto" option for root device
	[  +0.065812] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.771250] systemd-fstab-generator[1094]: Ignoring "noauto" option for root device
	[  +3.718588] kauditd_printk_skb: 105 callbacks suppressed
	[  +4.805924] systemd-fstab-generator[1725]: Ignoring "noauto" option for root device
	[May 1 03:21] kauditd_printk_skb: 53 callbacks suppressed
	
	
	==> etcd [37b67fe3a8add0baf9308d5afcc75a442512d6f21d9e8d222f407e4e52cf4024] <==
	{"level":"info","ts":"2024-05-01T03:20:51.055Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"226d7ac4e2309206","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-05-01T03:20:51.057Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-05-01T03:20:51.057Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"226d7ac4e2309206 switched to configuration voters=(2480773955778023942)"}
	{"level":"info","ts":"2024-05-01T03:20:51.061Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"98fbf1e9ed6d9a6e","local-member-id":"226d7ac4e2309206","added-peer-id":"226d7ac4e2309206","added-peer-peer-urls":["https://192.168.39.71:2380"]}
	{"level":"info","ts":"2024-05-01T03:20:51.062Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"98fbf1e9ed6d9a6e","local-member-id":"226d7ac4e2309206","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-01T03:20:51.067Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-01T03:20:51.071Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-05-01T03:20:51.072Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"226d7ac4e2309206","initial-advertise-peer-urls":["https://192.168.39.71:2380"],"listen-peer-urls":["https://192.168.39.71:2380"],"advertise-client-urls":["https://192.168.39.71:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.71:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-05-01T03:20:51.072Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.71:2380"}
	{"level":"info","ts":"2024-05-01T03:20:51.072Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.71:2380"}
	{"level":"info","ts":"2024-05-01T03:20:51.072Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-05-01T03:20:52.406Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"226d7ac4e2309206 is starting a new election at term 2"}
	{"level":"info","ts":"2024-05-01T03:20:52.406Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"226d7ac4e2309206 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-05-01T03:20:52.406Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"226d7ac4e2309206 received MsgPreVoteResp from 226d7ac4e2309206 at term 2"}
	{"level":"info","ts":"2024-05-01T03:20:52.406Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"226d7ac4e2309206 became candidate at term 3"}
	{"level":"info","ts":"2024-05-01T03:20:52.406Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"226d7ac4e2309206 received MsgVoteResp from 226d7ac4e2309206 at term 3"}
	{"level":"info","ts":"2024-05-01T03:20:52.406Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"226d7ac4e2309206 became leader at term 3"}
	{"level":"info","ts":"2024-05-01T03:20:52.406Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 226d7ac4e2309206 elected leader 226d7ac4e2309206 at term 3"}
	{"level":"info","ts":"2024-05-01T03:20:52.407Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"226d7ac4e2309206","local-member-attributes":"{Name:test-preload-872415 ClientURLs:[https://192.168.39.71:2379]}","request-path":"/0/members/226d7ac4e2309206/attributes","cluster-id":"98fbf1e9ed6d9a6e","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-01T03:20:52.407Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-01T03:20:52.408Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-01T03:20:52.409Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.71:2379"}
	{"level":"info","ts":"2024-05-01T03:20:52.410Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-05-01T03:20:52.410Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-01T03:20:52.410Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 03:21:11 up 0 min,  0 users,  load average: 0.70, 0.19, 0.06
	Linux test-preload-872415 5.10.207 #1 SMP Tue Apr 30 22:38:43 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [835a05d68df5fde2bc7db70f0d9317d94b03525c3fa9db043e9c655c4bfa8a89] <==
	I0501 03:20:54.862199       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0501 03:20:54.862230       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0501 03:20:54.862259       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0501 03:20:54.866395       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0501 03:20:54.866505       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0501 03:20:54.867316       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0501 03:20:54.867346       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	I0501 03:20:54.993441       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0501 03:20:55.031451       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0501 03:20:55.060301       1 cache.go:39] Caches are synced for autoregister controller
	I0501 03:20:55.060589       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0501 03:20:55.060727       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0501 03:20:55.060879       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0501 03:20:55.066672       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0501 03:20:55.068135       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0501 03:20:55.495175       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0501 03:20:55.857178       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0501 03:20:56.533486       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0501 03:20:56.548027       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0501 03:20:56.580140       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0501 03:20:56.595250       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0501 03:20:56.606549       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0501 03:20:57.340179       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0501 03:21:07.799206       1 controller.go:611] quota admission added evaluator for: endpoints
	I0501 03:21:07.840534       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [089f16191e2df997b6460baf7f8254b11506fcfc0b8521270579f77faa697fe3] <==
	I0501 03:21:07.808000       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I0501 03:21:07.808085       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
	I0501 03:21:07.808166       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0501 03:21:07.827023       1 shared_informer.go:262] Caches are synced for crt configmap
	I0501 03:21:07.830889       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0501 03:21:07.837550       1 shared_informer.go:262] Caches are synced for cronjob
	I0501 03:21:07.850158       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I0501 03:21:07.851596       1 shared_informer.go:262] Caches are synced for service account
	I0501 03:21:07.856168       1 shared_informer.go:262] Caches are synced for daemon sets
	I0501 03:21:07.862539       1 shared_informer.go:262] Caches are synced for deployment
	I0501 03:21:07.862627       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I0501 03:21:07.876405       1 shared_informer.go:262] Caches are synced for HPA
	I0501 03:21:07.946050       1 shared_informer.go:262] Caches are synced for expand
	I0501 03:21:07.981738       1 shared_informer.go:262] Caches are synced for persistent volume
	I0501 03:21:07.983879       1 shared_informer.go:262] Caches are synced for disruption
	I0501 03:21:07.983916       1 disruption.go:371] Sending events to api server.
	I0501 03:21:08.003484       1 shared_informer.go:262] Caches are synced for ephemeral
	I0501 03:21:08.003600       1 shared_informer.go:262] Caches are synced for PVC protection
	I0501 03:21:08.023264       1 shared_informer.go:262] Caches are synced for attach detach
	I0501 03:21:08.029348       1 shared_informer.go:262] Caches are synced for resource quota
	I0501 03:21:08.052389       1 shared_informer.go:262] Caches are synced for resource quota
	I0501 03:21:08.066515       1 shared_informer.go:262] Caches are synced for stateful set
	I0501 03:21:08.482713       1 shared_informer.go:262] Caches are synced for garbage collector
	I0501 03:21:08.485050       1 shared_informer.go:262] Caches are synced for garbage collector
	I0501 03:21:08.485090       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	
	==> kube-proxy [d7e47fd7ecc8c6e2bfe7262ae0ffbe104d5fcd1f4d59eaed33a45d256265f0cc] <==
	I0501 03:20:57.289635       1 node.go:163] Successfully retrieved node IP: 192.168.39.71
	I0501 03:20:57.289718       1 server_others.go:138] "Detected node IP" address="192.168.39.71"
	I0501 03:20:57.289769       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0501 03:20:57.333009       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0501 03:20:57.333047       1 server_others.go:206] "Using iptables Proxier"
	I0501 03:20:57.334192       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0501 03:20:57.334992       1 server.go:661] "Version info" version="v1.24.4"
	I0501 03:20:57.335053       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 03:20:57.336134       1 config.go:317] "Starting service config controller"
	I0501 03:20:57.336361       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0501 03:20:57.336422       1 config.go:226] "Starting endpoint slice config controller"
	I0501 03:20:57.336429       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0501 03:20:57.337227       1 config.go:444] "Starting node config controller"
	I0501 03:20:57.337438       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0501 03:20:57.343985       1 shared_informer.go:262] Caches are synced for node config
	I0501 03:20:57.437452       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0501 03:20:57.437573       1 shared_informer.go:262] Caches are synced for service config
	
	
	==> kube-scheduler [dfd95327a016d330238f36df8a8dfea62dd0ecc0fa983938d96bc63c2dda33bb] <==
	I0501 03:20:51.579627       1 serving.go:348] Generated self-signed cert in-memory
	W0501 03:20:54.929190       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0501 03:20:54.930049       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0501 03:20:54.930192       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0501 03:20:54.930226       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0501 03:20:54.997015       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0501 03:20:54.997063       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 03:20:55.003919       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0501 03:20:55.004118       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0501 03:20:55.004160       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0501 03:20:55.004183       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0501 03:20:55.104579       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 01 03:20:55 test-preload-872415 kubelet[1101]: I0501 03:20:55.891746    1101 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkbnk\" (UniqueName: \"kubernetes.io/projected/561fb514-2639-47f8-8a19-0a757c3993d8-kube-api-access-dkbnk\") pod \"storage-provisioner\" (UID: \"561fb514-2639-47f8-8a19-0a757c3993d8\") " pod="kube-system/storage-provisioner"
	May 01 03:20:55 test-preload-872415 kubelet[1101]: I0501 03:20:55.891951    1101 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ed9f2c83-5c71-4ee3-8182-1be6f2b45d4f-kube-proxy\") pod \"kube-proxy-4fq5v\" (UID: \"ed9f2c83-5c71-4ee3-8182-1be6f2b45d4f\") " pod="kube-system/kube-proxy-4fq5v"
	May 01 03:20:55 test-preload-872415 kubelet[1101]: I0501 03:20:55.892026    1101 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ed9f2c83-5c71-4ee3-8182-1be6f2b45d4f-lib-modules\") pod \"kube-proxy-4fq5v\" (UID: \"ed9f2c83-5c71-4ee3-8182-1be6f2b45d4f\") " pod="kube-system/kube-proxy-4fq5v"
	May 01 03:20:55 test-preload-872415 kubelet[1101]: I0501 03:20:55.892047    1101 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rd78\" (UniqueName: \"kubernetes.io/projected/ed9f2c83-5c71-4ee3-8182-1be6f2b45d4f-kube-api-access-2rd78\") pod \"kube-proxy-4fq5v\" (UID: \"ed9f2c83-5c71-4ee3-8182-1be6f2b45d4f\") " pod="kube-system/kube-proxy-4fq5v"
	May 01 03:20:55 test-preload-872415 kubelet[1101]: I0501 03:20:55.892065    1101 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9ba11eba-fbc9-4c10-8073-3c09ac1e1d6e-config-volume\") pod \"coredns-6d4b75cb6d-m6r88\" (UID: \"9ba11eba-fbc9-4c10-8073-3c09ac1e1d6e\") " pod="kube-system/coredns-6d4b75cb6d-m6r88"
	May 01 03:20:55 test-preload-872415 kubelet[1101]: I0501 03:20:55.892086    1101 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ed9f2c83-5c71-4ee3-8182-1be6f2b45d4f-xtables-lock\") pod \"kube-proxy-4fq5v\" (UID: \"ed9f2c83-5c71-4ee3-8182-1be6f2b45d4f\") " pod="kube-system/kube-proxy-4fq5v"
	May 01 03:20:55 test-preload-872415 kubelet[1101]: I0501 03:20:55.892100    1101 reconciler.go:159] "Reconciler: start to sync state"
	May 01 03:20:56 test-preload-872415 kubelet[1101]: I0501 03:20:56.227472    1101 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a466571-899d-4f6d-960d-10d662aef475-config-volume\") pod \"7a466571-899d-4f6d-960d-10d662aef475\" (UID: \"7a466571-899d-4f6d-960d-10d662aef475\") "
	May 01 03:20:56 test-preload-872415 kubelet[1101]: I0501 03:20:56.227643    1101 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kt9rg\" (UniqueName: \"kubernetes.io/projected/7a466571-899d-4f6d-960d-10d662aef475-kube-api-access-kt9rg\") pod \"7a466571-899d-4f6d-960d-10d662aef475\" (UID: \"7a466571-899d-4f6d-960d-10d662aef475\") "
	May 01 03:20:56 test-preload-872415 kubelet[1101]: E0501 03:20:56.228227    1101 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	May 01 03:20:56 test-preload-872415 kubelet[1101]: E0501 03:20:56.228430    1101 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/9ba11eba-fbc9-4c10-8073-3c09ac1e1d6e-config-volume podName:9ba11eba-fbc9-4c10-8073-3c09ac1e1d6e nodeName:}" failed. No retries permitted until 2024-05-01 03:20:56.728407409 +0000 UTC m=+7.054132634 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/9ba11eba-fbc9-4c10-8073-3c09ac1e1d6e-config-volume") pod "coredns-6d4b75cb6d-m6r88" (UID: "9ba11eba-fbc9-4c10-8073-3c09ac1e1d6e") : object "kube-system"/"coredns" not registered
	May 01 03:20:56 test-preload-872415 kubelet[1101]: W0501 03:20:56.229466    1101 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/7a466571-899d-4f6d-960d-10d662aef475/volumes/kubernetes.io~projected/kube-api-access-kt9rg: clearQuota called, but quotas disabled
	May 01 03:20:56 test-preload-872415 kubelet[1101]: W0501 03:20:56.230060    1101 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/7a466571-899d-4f6d-960d-10d662aef475/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	May 01 03:20:56 test-preload-872415 kubelet[1101]: I0501 03:20:56.230117    1101 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a466571-899d-4f6d-960d-10d662aef475-kube-api-access-kt9rg" (OuterVolumeSpecName: "kube-api-access-kt9rg") pod "7a466571-899d-4f6d-960d-10d662aef475" (UID: "7a466571-899d-4f6d-960d-10d662aef475"). InnerVolumeSpecName "kube-api-access-kt9rg". PluginName "kubernetes.io/projected", VolumeGidValue ""
	May 01 03:20:56 test-preload-872415 kubelet[1101]: I0501 03:20:56.230587    1101 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a466571-899d-4f6d-960d-10d662aef475-config-volume" (OuterVolumeSpecName: "config-volume") pod "7a466571-899d-4f6d-960d-10d662aef475" (UID: "7a466571-899d-4f6d-960d-10d662aef475"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	May 01 03:20:56 test-preload-872415 kubelet[1101]: I0501 03:20:56.328133    1101 reconciler.go:384] "Volume detached for volume \"kube-api-access-kt9rg\" (UniqueName: \"kubernetes.io/projected/7a466571-899d-4f6d-960d-10d662aef475-kube-api-access-kt9rg\") on node \"test-preload-872415\" DevicePath \"\""
	May 01 03:20:56 test-preload-872415 kubelet[1101]: I0501 03:20:56.328165    1101 reconciler.go:384] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a466571-899d-4f6d-960d-10d662aef475-config-volume\") on node \"test-preload-872415\" DevicePath \"\""
	May 01 03:20:56 test-preload-872415 kubelet[1101]: E0501 03:20:56.732311    1101 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	May 01 03:20:56 test-preload-872415 kubelet[1101]: E0501 03:20:56.732371    1101 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/9ba11eba-fbc9-4c10-8073-3c09ac1e1d6e-config-volume podName:9ba11eba-fbc9-4c10-8073-3c09ac1e1d6e nodeName:}" failed. No retries permitted until 2024-05-01 03:20:57.732348589 +0000 UTC m=+8.058073795 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/9ba11eba-fbc9-4c10-8073-3c09ac1e1d6e-config-volume") pod "coredns-6d4b75cb6d-m6r88" (UID: "9ba11eba-fbc9-4c10-8073-3c09ac1e1d6e") : object "kube-system"/"coredns" not registered
	May 01 03:20:57 test-preload-872415 kubelet[1101]: E0501 03:20:57.745197    1101 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	May 01 03:20:57 test-preload-872415 kubelet[1101]: E0501 03:20:57.745720    1101 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/9ba11eba-fbc9-4c10-8073-3c09ac1e1d6e-config-volume podName:9ba11eba-fbc9-4c10-8073-3c09ac1e1d6e nodeName:}" failed. No retries permitted until 2024-05-01 03:20:59.745697231 +0000 UTC m=+10.071422455 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/9ba11eba-fbc9-4c10-8073-3c09ac1e1d6e-config-volume") pod "coredns-6d4b75cb6d-m6r88" (UID: "9ba11eba-fbc9-4c10-8073-3c09ac1e1d6e") : object "kube-system"/"coredns" not registered
	May 01 03:20:57 test-preload-872415 kubelet[1101]: E0501 03:20:57.932203    1101 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-m6r88" podUID=9ba11eba-fbc9-4c10-8073-3c09ac1e1d6e
	May 01 03:20:59 test-preload-872415 kubelet[1101]: E0501 03:20:59.760513    1101 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	May 01 03:20:59 test-preload-872415 kubelet[1101]: E0501 03:20:59.761027    1101 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/9ba11eba-fbc9-4c10-8073-3c09ac1e1d6e-config-volume podName:9ba11eba-fbc9-4c10-8073-3c09ac1e1d6e nodeName:}" failed. No retries permitted until 2024-05-01 03:21:03.761004641 +0000 UTC m=+14.086729859 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/9ba11eba-fbc9-4c10-8073-3c09ac1e1d6e-config-volume") pod "coredns-6d4b75cb6d-m6r88" (UID: "9ba11eba-fbc9-4c10-8073-3c09ac1e1d6e") : object "kube-system"/"coredns" not registered
	May 01 03:20:59 test-preload-872415 kubelet[1101]: I0501 03:20:59.938182    1101 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=7a466571-899d-4f6d-960d-10d662aef475 path="/var/lib/kubelet/pods/7a466571-899d-4f6d-960d-10d662aef475/volumes"
	
	
	==> storage-provisioner [b50a1ed385b3672c4fda569058e20b2505f0a6531357ca4808b3468345c191ed] <==
	I0501 03:20:57.092061       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-872415 -n test-preload-872415
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-872415 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-872415" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-872415
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-872415: (1.145108652s)
--- FAIL: TestPreload (266.08s)

                                                
                                    
x
+
TestKubernetesUpgrade (474.18s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-046243 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-046243 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (5m16.583727894s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-046243] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18779
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18779-13391/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18779-13391/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-046243" primary control-plane node in "kubernetes-upgrade-046243" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0501 03:24:15.298345   58823 out.go:291] Setting OutFile to fd 1 ...
	I0501 03:24:15.298534   58823 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 03:24:15.298546   58823 out.go:304] Setting ErrFile to fd 2...
	I0501 03:24:15.298550   58823 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 03:24:15.298746   58823 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18779-13391/.minikube/bin
	I0501 03:24:15.299306   58823 out.go:298] Setting JSON to false
	I0501 03:24:15.300219   58823 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7598,"bootTime":1714526257,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0501 03:24:15.300281   58823 start.go:139] virtualization: kvm guest
	I0501 03:24:15.302601   58823 out.go:177] * [kubernetes-upgrade-046243] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0501 03:24:15.304131   58823 out.go:177]   - MINIKUBE_LOCATION=18779
	I0501 03:24:15.304225   58823 notify.go:220] Checking for updates...
	I0501 03:24:15.305618   58823 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0501 03:24:15.307202   58823 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18779-13391/kubeconfig
	I0501 03:24:15.308626   58823 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18779-13391/.minikube
	I0501 03:24:15.310025   58823 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0501 03:24:15.311394   58823 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0501 03:24:15.313078   58823 config.go:182] Loaded profile config "NoKubernetes-588224": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 03:24:15.313168   58823 config.go:182] Loaded profile config "cert-expiration-640426": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 03:24:15.313242   58823 config.go:182] Loaded profile config "force-systemd-env-604747": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 03:24:15.313329   58823 driver.go:392] Setting default libvirt URI to qemu:///system
	I0501 03:24:15.348499   58823 out.go:177] * Using the kvm2 driver based on user configuration
	I0501 03:24:15.349788   58823 start.go:297] selected driver: kvm2
	I0501 03:24:15.349799   58823 start.go:901] validating driver "kvm2" against <nil>
	I0501 03:24:15.349812   58823 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0501 03:24:15.350572   58823 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0501 03:24:15.350640   58823 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18779-13391/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0501 03:24:15.364999   58823 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0501 03:24:15.365055   58823 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0501 03:24:15.365292   58823 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0501 03:24:15.365347   58823 cni.go:84] Creating CNI manager for ""
	I0501 03:24:15.365364   58823 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0501 03:24:15.365373   58823 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0501 03:24:15.365422   58823 start.go:340] cluster config:
	{Name:kubernetes-upgrade-046243 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-046243 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 03:24:15.365531   58823 iso.go:125] acquiring lock: {Name:mkcd2496aadb29931e179193d707c731bfda98b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0501 03:24:15.368101   58823 out.go:177] * Starting "kubernetes-upgrade-046243" primary control-plane node in "kubernetes-upgrade-046243" cluster
	I0501 03:24:15.369358   58823 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0501 03:24:15.369383   58823 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0501 03:24:15.369391   58823 cache.go:56] Caching tarball of preloaded images
	I0501 03:24:15.369496   58823 preload.go:173] Found /home/jenkins/minikube-integration/18779-13391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0501 03:24:15.369509   58823 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0501 03:24:15.369595   58823 profile.go:143] Saving config to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/kubernetes-upgrade-046243/config.json ...
	I0501 03:24:15.369613   58823 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/kubernetes-upgrade-046243/config.json: {Name:mk7473eb6e50113be8701a82cf73464973aa224c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:24:15.369748   58823 start.go:360] acquireMachinesLock for kubernetes-upgrade-046243: {Name:mkc4548e5afe1d4b0a833f6c522103562b0cefff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0501 03:24:58.455880   58823 start.go:364] duration metric: took 43.086089241s to acquireMachinesLock for "kubernetes-upgrade-046243"
	I0501 03:24:58.455950   58823 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-046243 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-046243 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0501 03:24:58.456087   58823 start.go:125] createHost starting for "" (driver="kvm2")
	I0501 03:24:58.458190   58823 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0501 03:24:58.459040   58823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:24:58.459109   58823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:24:58.477809   58823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46263
	I0501 03:24:58.478222   58823 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:24:58.478749   58823 main.go:141] libmachine: Using API Version  1
	I0501 03:24:58.478774   58823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:24:58.479118   58823 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:24:58.479338   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetMachineName
	I0501 03:24:58.479509   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .DriverName
	I0501 03:24:58.479674   58823 start.go:159] libmachine.API.Create for "kubernetes-upgrade-046243" (driver="kvm2")
	I0501 03:24:58.479705   58823 client.go:168] LocalClient.Create starting
	I0501 03:24:58.479747   58823 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem
	I0501 03:24:58.479792   58823 main.go:141] libmachine: Decoding PEM data...
	I0501 03:24:58.479814   58823 main.go:141] libmachine: Parsing certificate...
	I0501 03:24:58.479888   58823 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem
	I0501 03:24:58.479920   58823 main.go:141] libmachine: Decoding PEM data...
	I0501 03:24:58.479940   58823 main.go:141] libmachine: Parsing certificate...
	I0501 03:24:58.479983   58823 main.go:141] libmachine: Running pre-create checks...
	I0501 03:24:58.480002   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .PreCreateCheck
	I0501 03:24:58.480344   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetConfigRaw
	I0501 03:24:58.480720   58823 main.go:141] libmachine: Creating machine...
	I0501 03:24:58.480739   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .Create
	I0501 03:24:58.480874   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) Creating KVM machine...
	I0501 03:24:58.482090   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | found existing default KVM network
	I0501 03:24:58.483104   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | I0501 03:24:58.482955   59346 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:ce:a3:07} reservation:<nil>}
	I0501 03:24:58.483717   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | I0501 03:24:58.483630   59346 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:ef:a8:e2} reservation:<nil>}
	I0501 03:24:58.485859   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | I0501 03:24:58.485726   59346 network.go:209] skipping subnet 192.168.61.0/24 that is reserved: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0501 03:24:58.486942   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | I0501 03:24:58.486859   59346 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012d1c0}
	I0501 03:24:58.486968   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | created network xml: 
	I0501 03:24:58.486976   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | <network>
	I0501 03:24:58.486982   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG |   <name>mk-kubernetes-upgrade-046243</name>
	I0501 03:24:58.486989   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG |   <dns enable='no'/>
	I0501 03:24:58.486993   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG |   
	I0501 03:24:58.487000   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0501 03:24:58.487009   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG |     <dhcp>
	I0501 03:24:58.487016   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0501 03:24:58.487020   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG |     </dhcp>
	I0501 03:24:58.487028   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG |   </ip>
	I0501 03:24:58.487032   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG |   
	I0501 03:24:58.487037   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | </network>
	I0501 03:24:58.487042   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | 
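The generated network definition above is ordinary libvirt XML: a dedicated bridge network, DNS disabled, gateway at 192.168.72.1 and a DHCP range from .2 to .253. As a sketch only, not the kvm2 driver's actual code, and with invented template field names, a comparable definition can be rendered with Go's text/template:

    // netxml.go - sketch: render a libvirt network definition like the one
    // logged above. The template fields are illustrative, not the driver's.
    package main

    import (
    	"os"
    	"text/template"
    )

    const networkTmpl = `<network>
      <name>{{.Name}}</name>
      <dns enable='no'/>
      <ip address='{{.Gateway}}' netmask='{{.Netmask}}'>
        <dhcp>
          <range start='{{.DHCPStart}}' end='{{.DHCPEnd}}'/>
        </dhcp>
      </ip>
    </network>
    `

    type netParams struct {
    	Name, Gateway, Netmask, DHCPStart, DHCPEnd string
    }

    func main() {
    	p := netParams{
    		Name:      "mk-kubernetes-upgrade-046243",
    		Gateway:   "192.168.72.1",
    		Netmask:   "255.255.255.0",
    		DHCPStart: "192.168.72.2",
    		DHCPEnd:   "192.168.72.253",
    	}
    	// template.Must panics on a bad template, which is acceptable for a fixed literal.
    	if err := template.Must(template.New("net").Parse(networkTmpl)).Execute(os.Stdout, p); err != nil {
    		panic(err)
    	}
    }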
	I0501 03:24:58.492136   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | trying to create private KVM network mk-kubernetes-upgrade-046243 192.168.72.0/24...
	I0501 03:24:58.566734   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | private KVM network mk-kubernetes-upgrade-046243 192.168.72.0/24 created
	I0501 03:24:58.566775   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) Setting up store path in /home/jenkins/minikube-integration/18779-13391/.minikube/machines/kubernetes-upgrade-046243 ...
	I0501 03:24:58.566792   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | I0501 03:24:58.566710   59346 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18779-13391/.minikube
	I0501 03:24:58.566806   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) Building disk image from file:///home/jenkins/minikube-integration/18779-13391/.minikube/cache/iso/amd64/minikube-v1.33.0-1714498396-18779-amd64.iso
	I0501 03:24:58.566901   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) Downloading /home/jenkins/minikube-integration/18779-13391/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18779-13391/.minikube/cache/iso/amd64/minikube-v1.33.0-1714498396-18779-amd64.iso...
	I0501 03:24:58.808207   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | I0501 03:24:58.808031   59346 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/kubernetes-upgrade-046243/id_rsa...
	I0501 03:24:59.103031   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | I0501 03:24:59.102889   59346 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/kubernetes-upgrade-046243/kubernetes-upgrade-046243.rawdisk...
	I0501 03:24:59.103077   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | Writing magic tar header
	I0501 03:24:59.103098   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | Writing SSH key tar header
	I0501 03:24:59.103114   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | I0501 03:24:59.103008   59346 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18779-13391/.minikube/machines/kubernetes-upgrade-046243 ...
	I0501 03:24:59.103139   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/kubernetes-upgrade-046243
	I0501 03:24:59.103179   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) Setting executable bit set on /home/jenkins/minikube-integration/18779-13391/.minikube/machines/kubernetes-upgrade-046243 (perms=drwx------)
	I0501 03:24:59.103245   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) Setting executable bit set on /home/jenkins/minikube-integration/18779-13391/.minikube/machines (perms=drwxr-xr-x)
	I0501 03:24:59.103270   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) Setting executable bit set on /home/jenkins/minikube-integration/18779-13391/.minikube (perms=drwxr-xr-x)
	I0501 03:24:59.103279   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18779-13391/.minikube/machines
	I0501 03:24:59.103297   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18779-13391/.minikube
	I0501 03:24:59.103312   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18779-13391
	I0501 03:24:59.103327   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) Setting executable bit set on /home/jenkins/minikube-integration/18779-13391 (perms=drwxrwxr-x)
	I0501 03:24:59.103338   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0501 03:24:59.103348   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0501 03:24:59.103356   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | Checking permissions on dir: /home/jenkins
	I0501 03:24:59.103365   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | Checking permissions on dir: /home
	I0501 03:24:59.103377   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | Skipping /home - not owner
	I0501 03:24:59.103391   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0501 03:24:59.103411   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) Creating domain...
	I0501 03:24:59.104553   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) define libvirt domain using xml: 
	I0501 03:24:59.104575   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) <domain type='kvm'>
	I0501 03:24:59.104582   58823 main.go:141] libmachine: (kubernetes-upgrade-046243)   <name>kubernetes-upgrade-046243</name>
	I0501 03:24:59.104600   58823 main.go:141] libmachine: (kubernetes-upgrade-046243)   <memory unit='MiB'>2200</memory>
	I0501 03:24:59.104608   58823 main.go:141] libmachine: (kubernetes-upgrade-046243)   <vcpu>2</vcpu>
	I0501 03:24:59.104621   58823 main.go:141] libmachine: (kubernetes-upgrade-046243)   <features>
	I0501 03:24:59.104633   58823 main.go:141] libmachine: (kubernetes-upgrade-046243)     <acpi/>
	I0501 03:24:59.104644   58823 main.go:141] libmachine: (kubernetes-upgrade-046243)     <apic/>
	I0501 03:24:59.104651   58823 main.go:141] libmachine: (kubernetes-upgrade-046243)     <pae/>
	I0501 03:24:59.104661   58823 main.go:141] libmachine: (kubernetes-upgrade-046243)     
	I0501 03:24:59.104695   58823 main.go:141] libmachine: (kubernetes-upgrade-046243)   </features>
	I0501 03:24:59.104724   58823 main.go:141] libmachine: (kubernetes-upgrade-046243)   <cpu mode='host-passthrough'>
	I0501 03:24:59.104739   58823 main.go:141] libmachine: (kubernetes-upgrade-046243)   
	I0501 03:24:59.104762   58823 main.go:141] libmachine: (kubernetes-upgrade-046243)   </cpu>
	I0501 03:24:59.104776   58823 main.go:141] libmachine: (kubernetes-upgrade-046243)   <os>
	I0501 03:24:59.104788   58823 main.go:141] libmachine: (kubernetes-upgrade-046243)     <type>hvm</type>
	I0501 03:24:59.104801   58823 main.go:141] libmachine: (kubernetes-upgrade-046243)     <boot dev='cdrom'/>
	I0501 03:24:59.104817   58823 main.go:141] libmachine: (kubernetes-upgrade-046243)     <boot dev='hd'/>
	I0501 03:24:59.104842   58823 main.go:141] libmachine: (kubernetes-upgrade-046243)     <bootmenu enable='no'/>
	I0501 03:24:59.104854   58823 main.go:141] libmachine: (kubernetes-upgrade-046243)   </os>
	I0501 03:24:59.104864   58823 main.go:141] libmachine: (kubernetes-upgrade-046243)   <devices>
	I0501 03:24:59.104878   58823 main.go:141] libmachine: (kubernetes-upgrade-046243)     <disk type='file' device='cdrom'>
	I0501 03:24:59.104902   58823 main.go:141] libmachine: (kubernetes-upgrade-046243)       <source file='/home/jenkins/minikube-integration/18779-13391/.minikube/machines/kubernetes-upgrade-046243/boot2docker.iso'/>
	I0501 03:24:59.104919   58823 main.go:141] libmachine: (kubernetes-upgrade-046243)       <target dev='hdc' bus='scsi'/>
	I0501 03:24:59.104936   58823 main.go:141] libmachine: (kubernetes-upgrade-046243)       <readonly/>
	I0501 03:24:59.104947   58823 main.go:141] libmachine: (kubernetes-upgrade-046243)     </disk>
	I0501 03:24:59.104961   58823 main.go:141] libmachine: (kubernetes-upgrade-046243)     <disk type='file' device='disk'>
	I0501 03:24:59.104975   58823 main.go:141] libmachine: (kubernetes-upgrade-046243)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0501 03:24:59.104997   58823 main.go:141] libmachine: (kubernetes-upgrade-046243)       <source file='/home/jenkins/minikube-integration/18779-13391/.minikube/machines/kubernetes-upgrade-046243/kubernetes-upgrade-046243.rawdisk'/>
	I0501 03:24:59.105013   58823 main.go:141] libmachine: (kubernetes-upgrade-046243)       <target dev='hda' bus='virtio'/>
	I0501 03:24:59.105026   58823 main.go:141] libmachine: (kubernetes-upgrade-046243)     </disk>
	I0501 03:24:59.105038   58823 main.go:141] libmachine: (kubernetes-upgrade-046243)     <interface type='network'>
	I0501 03:24:59.105053   58823 main.go:141] libmachine: (kubernetes-upgrade-046243)       <source network='mk-kubernetes-upgrade-046243'/>
	I0501 03:24:59.105065   58823 main.go:141] libmachine: (kubernetes-upgrade-046243)       <model type='virtio'/>
	I0501 03:24:59.105078   58823 main.go:141] libmachine: (kubernetes-upgrade-046243)     </interface>
	I0501 03:24:59.105095   58823 main.go:141] libmachine: (kubernetes-upgrade-046243)     <interface type='network'>
	I0501 03:24:59.105109   58823 main.go:141] libmachine: (kubernetes-upgrade-046243)       <source network='default'/>
	I0501 03:24:59.105120   58823 main.go:141] libmachine: (kubernetes-upgrade-046243)       <model type='virtio'/>
	I0501 03:24:59.105139   58823 main.go:141] libmachine: (kubernetes-upgrade-046243)     </interface>
	I0501 03:24:59.105151   58823 main.go:141] libmachine: (kubernetes-upgrade-046243)     <serial type='pty'>
	I0501 03:24:59.105172   58823 main.go:141] libmachine: (kubernetes-upgrade-046243)       <target port='0'/>
	I0501 03:24:59.105190   58823 main.go:141] libmachine: (kubernetes-upgrade-046243)     </serial>
	I0501 03:24:59.105204   58823 main.go:141] libmachine: (kubernetes-upgrade-046243)     <console type='pty'>
	I0501 03:24:59.105213   58823 main.go:141] libmachine: (kubernetes-upgrade-046243)       <target type='serial' port='0'/>
	I0501 03:24:59.105226   58823 main.go:141] libmachine: (kubernetes-upgrade-046243)     </console>
	I0501 03:24:59.105237   58823 main.go:141] libmachine: (kubernetes-upgrade-046243)     <rng model='virtio'>
	I0501 03:24:59.105248   58823 main.go:141] libmachine: (kubernetes-upgrade-046243)       <backend model='random'>/dev/random</backend>
	I0501 03:24:59.105262   58823 main.go:141] libmachine: (kubernetes-upgrade-046243)     </rng>
	I0501 03:24:59.105270   58823 main.go:141] libmachine: (kubernetes-upgrade-046243)     
	I0501 03:24:59.105282   58823 main.go:141] libmachine: (kubernetes-upgrade-046243)     
	I0501 03:24:59.105300   58823 main.go:141] libmachine: (kubernetes-upgrade-046243)   </devices>
	I0501 03:24:59.105318   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) </domain>
	I0501 03:24:59.105335   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) 
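minikube defines the domain above through the libvirt API rather than shelling out, but a quick way to confirm what was actually defined is to dump it back with virsh. A small sketch, assuming virsh is installed on the host and the user can reach qemu:///system:

    // dumpdomain.go - sketch: dump the defined domain and network back out of
    // libvirt with virsh, to compare against the XML logged above.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func run(name string, args ...string) {
    	cmd := exec.Command(name, args...)
    	cmd.Stdout = os.Stdout
    	cmd.Stderr = os.Stderr
    	if err := cmd.Run(); err != nil {
    		fmt.Fprintf(os.Stderr, "%s %v: %v\n", name, args, err)
    	}
    }

    func main() {
    	run("virsh", "-c", "qemu:///system", "dumpxml", "kubernetes-upgrade-046243")
    	run("virsh", "-c", "qemu:///system", "net-dumpxml", "mk-kubernetes-upgrade-046243")
    }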
	I0501 03:24:59.109620   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | domain kubernetes-upgrade-046243 has defined MAC address 52:54:00:b5:d3:0f in network default
	I0501 03:24:59.110285   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) Ensuring networks are active...
	I0501 03:24:59.110308   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | domain kubernetes-upgrade-046243 has defined MAC address 52:54:00:ac:ba:ac in network mk-kubernetes-upgrade-046243
	I0501 03:24:59.111034   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) Ensuring network default is active
	I0501 03:24:59.111383   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) Ensuring network mk-kubernetes-upgrade-046243 is active
	I0501 03:24:59.112007   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) Getting domain xml...
	I0501 03:24:59.112736   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) Creating domain...
	I0501 03:25:00.565608   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) Waiting to get IP...
	I0501 03:25:00.566816   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | domain kubernetes-upgrade-046243 has defined MAC address 52:54:00:ac:ba:ac in network mk-kubernetes-upgrade-046243
	I0501 03:25:00.567368   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | unable to find current IP address of domain kubernetes-upgrade-046243 in network mk-kubernetes-upgrade-046243
	I0501 03:25:00.567439   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | I0501 03:25:00.567351   59346 retry.go:31] will retry after 208.916875ms: waiting for machine to come up
	I0501 03:25:00.777911   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | domain kubernetes-upgrade-046243 has defined MAC address 52:54:00:ac:ba:ac in network mk-kubernetes-upgrade-046243
	I0501 03:25:00.778472   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | unable to find current IP address of domain kubernetes-upgrade-046243 in network mk-kubernetes-upgrade-046243
	I0501 03:25:00.778502   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | I0501 03:25:00.778432   59346 retry.go:31] will retry after 302.149044ms: waiting for machine to come up
	I0501 03:25:01.082063   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | domain kubernetes-upgrade-046243 has defined MAC address 52:54:00:ac:ba:ac in network mk-kubernetes-upgrade-046243
	I0501 03:25:01.082578   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | unable to find current IP address of domain kubernetes-upgrade-046243 in network mk-kubernetes-upgrade-046243
	I0501 03:25:01.082609   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | I0501 03:25:01.082539   59346 retry.go:31] will retry after 397.736169ms: waiting for machine to come up
	I0501 03:25:01.482921   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | domain kubernetes-upgrade-046243 has defined MAC address 52:54:00:ac:ba:ac in network mk-kubernetes-upgrade-046243
	I0501 03:25:01.483424   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | unable to find current IP address of domain kubernetes-upgrade-046243 in network mk-kubernetes-upgrade-046243
	I0501 03:25:01.483573   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | I0501 03:25:01.483378   59346 retry.go:31] will retry after 405.210839ms: waiting for machine to come up
	I0501 03:25:01.890954   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | domain kubernetes-upgrade-046243 has defined MAC address 52:54:00:ac:ba:ac in network mk-kubernetes-upgrade-046243
	I0501 03:25:01.891451   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | unable to find current IP address of domain kubernetes-upgrade-046243 in network mk-kubernetes-upgrade-046243
	I0501 03:25:01.891474   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | I0501 03:25:01.891377   59346 retry.go:31] will retry after 700.604337ms: waiting for machine to come up
	I0501 03:25:02.593371   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | domain kubernetes-upgrade-046243 has defined MAC address 52:54:00:ac:ba:ac in network mk-kubernetes-upgrade-046243
	I0501 03:25:02.593832   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | unable to find current IP address of domain kubernetes-upgrade-046243 in network mk-kubernetes-upgrade-046243
	I0501 03:25:02.593867   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | I0501 03:25:02.593795   59346 retry.go:31] will retry after 635.696971ms: waiting for machine to come up
	I0501 03:25:03.231650   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | domain kubernetes-upgrade-046243 has defined MAC address 52:54:00:ac:ba:ac in network mk-kubernetes-upgrade-046243
	I0501 03:25:03.232157   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | unable to find current IP address of domain kubernetes-upgrade-046243 in network mk-kubernetes-upgrade-046243
	I0501 03:25:03.232191   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | I0501 03:25:03.232096   59346 retry.go:31] will retry after 994.966149ms: waiting for machine to come up
	I0501 03:25:04.228508   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | domain kubernetes-upgrade-046243 has defined MAC address 52:54:00:ac:ba:ac in network mk-kubernetes-upgrade-046243
	I0501 03:25:04.228936   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | unable to find current IP address of domain kubernetes-upgrade-046243 in network mk-kubernetes-upgrade-046243
	I0501 03:25:04.228967   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | I0501 03:25:04.228873   59346 retry.go:31] will retry after 1.346375314s: waiting for machine to come up
	I0501 03:25:05.576657   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | domain kubernetes-upgrade-046243 has defined MAC address 52:54:00:ac:ba:ac in network mk-kubernetes-upgrade-046243
	I0501 03:25:05.577081   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | unable to find current IP address of domain kubernetes-upgrade-046243 in network mk-kubernetes-upgrade-046243
	I0501 03:25:05.577109   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | I0501 03:25:05.577053   59346 retry.go:31] will retry after 1.293488579s: waiting for machine to come up
	I0501 03:25:06.872709   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | domain kubernetes-upgrade-046243 has defined MAC address 52:54:00:ac:ba:ac in network mk-kubernetes-upgrade-046243
	I0501 03:25:06.873353   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | unable to find current IP address of domain kubernetes-upgrade-046243 in network mk-kubernetes-upgrade-046243
	I0501 03:25:06.873384   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | I0501 03:25:06.873308   59346 retry.go:31] will retry after 1.663061624s: waiting for machine to come up
	I0501 03:25:08.538183   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | domain kubernetes-upgrade-046243 has defined MAC address 52:54:00:ac:ba:ac in network mk-kubernetes-upgrade-046243
	I0501 03:25:08.538755   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | unable to find current IP address of domain kubernetes-upgrade-046243 in network mk-kubernetes-upgrade-046243
	I0501 03:25:08.538810   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | I0501 03:25:08.538713   59346 retry.go:31] will retry after 1.811288218s: waiting for machine to come up
	I0501 03:25:10.351700   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | domain kubernetes-upgrade-046243 has defined MAC address 52:54:00:ac:ba:ac in network mk-kubernetes-upgrade-046243
	I0501 03:25:10.352306   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | unable to find current IP address of domain kubernetes-upgrade-046243 in network mk-kubernetes-upgrade-046243
	I0501 03:25:10.352336   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | I0501 03:25:10.352218   59346 retry.go:31] will retry after 3.272480579s: waiting for machine to come up
	I0501 03:25:13.625936   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | domain kubernetes-upgrade-046243 has defined MAC address 52:54:00:ac:ba:ac in network mk-kubernetes-upgrade-046243
	I0501 03:25:13.626425   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | unable to find current IP address of domain kubernetes-upgrade-046243 in network mk-kubernetes-upgrade-046243
	I0501 03:25:13.626458   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | I0501 03:25:13.626375   59346 retry.go:31] will retry after 3.732376713s: waiting for machine to come up
	I0501 03:25:17.363203   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | domain kubernetes-upgrade-046243 has defined MAC address 52:54:00:ac:ba:ac in network mk-kubernetes-upgrade-046243
	I0501 03:25:17.363619   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | unable to find current IP address of domain kubernetes-upgrade-046243 in network mk-kubernetes-upgrade-046243
	I0501 03:25:17.363637   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | I0501 03:25:17.363584   59346 retry.go:31] will retry after 4.289764249s: waiting for machine to come up
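The repeated "will retry after ..." lines above come from polling for a DHCP lease until the new domain reports an address. A rough stand-in for that poll-with-backoff pattern (the doubling-plus-jitter policy and the lookupIP placeholder are assumptions, not retry.go's exact behaviour):

    // waitip.go - sketch of the poll-with-growing-backoff pattern seen in the
    // "will retry after ..." lines above. lookupIP is a placeholder for the
    // real DHCP-lease check; the backoff/jitter policy is an assumption.
    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    var errNoLease = errors.New("no DHCP lease yet")

    // lookupIP stands in for querying libvirt's DHCP leases for the domain.
    func lookupIP(attempt int) (string, error) {
    	if attempt < 5 {
    		return "", errNoLease
    	}
    	return "192.168.72.134", nil
    }

    func main() {
    	backoff := 200 * time.Millisecond
    	for attempt := 1; ; attempt++ {
    		ip, err := lookupIP(attempt)
    		if err == nil {
    			fmt.Println("found IP for machine:", ip)
    			return
    		}
    		// Grow the wait and add jitter so repeated polls do not synchronize.
    		sleep := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
    		fmt.Printf("will retry after %v: %v\n", sleep, err)
    		time.Sleep(sleep)
    		if backoff < 4*time.Second {
    			backoff *= 2
    		}
    	}
    }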
	I0501 03:25:21.655830   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | domain kubernetes-upgrade-046243 has defined MAC address 52:54:00:ac:ba:ac in network mk-kubernetes-upgrade-046243
	I0501 03:25:21.656353   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) Found IP for machine: 192.168.72.134
	I0501 03:25:21.656370   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) Reserving static IP address...
	I0501 03:25:21.656383   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | domain kubernetes-upgrade-046243 has current primary IP address 192.168.72.134 and MAC address 52:54:00:ac:ba:ac in network mk-kubernetes-upgrade-046243
	I0501 03:25:21.656744   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-046243", mac: "52:54:00:ac:ba:ac", ip: "192.168.72.134"} in network mk-kubernetes-upgrade-046243
	I0501 03:25:21.734009   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | Getting to WaitForSSH function...
	I0501 03:25:21.734039   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) Reserved static IP address: 192.168.72.134
	I0501 03:25:21.734054   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) Waiting for SSH to be available...
	I0501 03:25:21.736890   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | domain kubernetes-upgrade-046243 has defined MAC address 52:54:00:ac:ba:ac in network mk-kubernetes-upgrade-046243
	I0501 03:25:21.737358   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:ba:ac", ip: ""} in network mk-kubernetes-upgrade-046243: {Iface:virbr3 ExpiryTime:2024-05-01 04:25:15 +0000 UTC Type:0 Mac:52:54:00:ac:ba:ac Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ac:ba:ac}
	I0501 03:25:21.737393   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | domain kubernetes-upgrade-046243 has defined IP address 192.168.72.134 and MAC address 52:54:00:ac:ba:ac in network mk-kubernetes-upgrade-046243
	I0501 03:25:21.737590   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | Using SSH client type: external
	I0501 03:25:21.737687   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | Using SSH private key: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/kubernetes-upgrade-046243/id_rsa (-rw-------)
	I0501 03:25:21.737719   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.134 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18779-13391/.minikube/machines/kubernetes-upgrade-046243/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0501 03:25:21.737738   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | About to run SSH command:
	I0501 03:25:21.737753   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | exit 0
	I0501 03:25:21.866742   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | SSH cmd err, output: <nil>: 
	I0501 03:25:21.867033   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) KVM machine creation complete!
	I0501 03:25:21.867330   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetConfigRaw
	I0501 03:25:21.867974   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .DriverName
	I0501 03:25:21.868151   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .DriverName
	I0501 03:25:21.868345   58823 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0501 03:25:21.868366   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetState
	I0501 03:25:21.869756   58823 main.go:141] libmachine: Detecting operating system of created instance...
	I0501 03:25:21.869774   58823 main.go:141] libmachine: Waiting for SSH to be available...
	I0501 03:25:21.869782   58823 main.go:141] libmachine: Getting to WaitForSSH function...
	I0501 03:25:21.869791   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetSSHHostname
	I0501 03:25:21.872356   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | domain kubernetes-upgrade-046243 has defined MAC address 52:54:00:ac:ba:ac in network mk-kubernetes-upgrade-046243
	I0501 03:25:21.872739   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:ba:ac", ip: ""} in network mk-kubernetes-upgrade-046243: {Iface:virbr3 ExpiryTime:2024-05-01 04:25:15 +0000 UTC Type:0 Mac:52:54:00:ac:ba:ac Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:kubernetes-upgrade-046243 Clientid:01:52:54:00:ac:ba:ac}
	I0501 03:25:21.872771   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | domain kubernetes-upgrade-046243 has defined IP address 192.168.72.134 and MAC address 52:54:00:ac:ba:ac in network mk-kubernetes-upgrade-046243
	I0501 03:25:21.872968   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetSSHPort
	I0501 03:25:21.873151   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetSSHKeyPath
	I0501 03:25:21.873335   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetSSHKeyPath
	I0501 03:25:21.873460   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetSSHUsername
	I0501 03:25:21.873655   58823 main.go:141] libmachine: Using SSH client type: native
	I0501 03:25:21.873853   58823 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.134 22 <nil> <nil>}
	I0501 03:25:21.873868   58823 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0501 03:25:21.982261   58823 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0501 03:25:21.982290   58823 main.go:141] libmachine: Detecting the provisioner...
	I0501 03:25:21.982301   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetSSHHostname
	I0501 03:25:21.985177   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | domain kubernetes-upgrade-046243 has defined MAC address 52:54:00:ac:ba:ac in network mk-kubernetes-upgrade-046243
	I0501 03:25:21.985596   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:ba:ac", ip: ""} in network mk-kubernetes-upgrade-046243: {Iface:virbr3 ExpiryTime:2024-05-01 04:25:15 +0000 UTC Type:0 Mac:52:54:00:ac:ba:ac Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:kubernetes-upgrade-046243 Clientid:01:52:54:00:ac:ba:ac}
	I0501 03:25:21.985629   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | domain kubernetes-upgrade-046243 has defined IP address 192.168.72.134 and MAC address 52:54:00:ac:ba:ac in network mk-kubernetes-upgrade-046243
	I0501 03:25:21.985778   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetSSHPort
	I0501 03:25:21.985986   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetSSHKeyPath
	I0501 03:25:21.986169   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetSSHKeyPath
	I0501 03:25:21.986320   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetSSHUsername
	I0501 03:25:21.986498   58823 main.go:141] libmachine: Using SSH client type: native
	I0501 03:25:21.986710   58823 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.134 22 <nil> <nil>}
	I0501 03:25:21.986726   58823 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0501 03:25:22.100038   58823 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0501 03:25:22.100138   58823 main.go:141] libmachine: found compatible host: buildroot
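Provisioner detection above amounts to fetching /etc/os-release over SSH and keying off its ID field. A sketch of that parsing step, run against the local file purely for illustration:

    // osrelease.go - sketch: parse /etc/os-release key=value pairs the way a
    // provisioner detector would, here against the local file for illustration.
    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	f, err := os.Open("/etc/os-release")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	defer f.Close()

    	fields := map[string]string{}
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		line := strings.TrimSpace(sc.Text())
    		if line == "" || strings.HasPrefix(line, "#") {
    			continue
    		}
    		k, v, ok := strings.Cut(line, "=")
    		if !ok {
    			continue
    		}
    		fields[k] = strings.Trim(v, `"`)
    	}
    	if fields["ID"] == "buildroot" {
    		fmt.Println("found compatible host: buildroot")
    	} else {
    		fmt.Printf("host is %s (%s)\n", fields["ID"], fields["PRETTY_NAME"])
    	}
    }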
	I0501 03:25:22.100149   58823 main.go:141] libmachine: Provisioning with buildroot...
	I0501 03:25:22.100158   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetMachineName
	I0501 03:25:22.100418   58823 buildroot.go:166] provisioning hostname "kubernetes-upgrade-046243"
	I0501 03:25:22.100445   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetMachineName
	I0501 03:25:22.100655   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetSSHHostname
	I0501 03:25:22.103163   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | domain kubernetes-upgrade-046243 has defined MAC address 52:54:00:ac:ba:ac in network mk-kubernetes-upgrade-046243
	I0501 03:25:22.103495   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:ba:ac", ip: ""} in network mk-kubernetes-upgrade-046243: {Iface:virbr3 ExpiryTime:2024-05-01 04:25:15 +0000 UTC Type:0 Mac:52:54:00:ac:ba:ac Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:kubernetes-upgrade-046243 Clientid:01:52:54:00:ac:ba:ac}
	I0501 03:25:22.103538   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | domain kubernetes-upgrade-046243 has defined IP address 192.168.72.134 and MAC address 52:54:00:ac:ba:ac in network mk-kubernetes-upgrade-046243
	I0501 03:25:22.103737   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetSSHPort
	I0501 03:25:22.103906   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetSSHKeyPath
	I0501 03:25:22.104095   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetSSHKeyPath
	I0501 03:25:22.104228   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetSSHUsername
	I0501 03:25:22.104383   58823 main.go:141] libmachine: Using SSH client type: native
	I0501 03:25:22.104569   58823 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.134 22 <nil> <nil>}
	I0501 03:25:22.104583   58823 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-046243 && echo "kubernetes-upgrade-046243" | sudo tee /etc/hostname
	I0501 03:25:22.231126   58823 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-046243
	
	I0501 03:25:22.231162   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetSSHHostname
	I0501 03:25:22.233982   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | domain kubernetes-upgrade-046243 has defined MAC address 52:54:00:ac:ba:ac in network mk-kubernetes-upgrade-046243
	I0501 03:25:22.234481   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:ba:ac", ip: ""} in network mk-kubernetes-upgrade-046243: {Iface:virbr3 ExpiryTime:2024-05-01 04:25:15 +0000 UTC Type:0 Mac:52:54:00:ac:ba:ac Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:kubernetes-upgrade-046243 Clientid:01:52:54:00:ac:ba:ac}
	I0501 03:25:22.234517   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | domain kubernetes-upgrade-046243 has defined IP address 192.168.72.134 and MAC address 52:54:00:ac:ba:ac in network mk-kubernetes-upgrade-046243
	I0501 03:25:22.234685   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetSSHPort
	I0501 03:25:22.234916   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetSSHKeyPath
	I0501 03:25:22.235093   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetSSHKeyPath
	I0501 03:25:22.235237   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetSSHUsername
	I0501 03:25:22.235376   58823 main.go:141] libmachine: Using SSH client type: native
	I0501 03:25:22.235573   58823 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.134 22 <nil> <nil>}
	I0501 03:25:22.235600   58823 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-046243' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-046243/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-046243' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0501 03:25:22.352829   58823 main.go:141] libmachine: SSH cmd err, output: <nil>: 
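The hostname step above runs a short shell script over SSH: set the hostname, then make sure /etc/hosts carries a matching 127.0.1.1 entry. As an illustration of the transport side only, here is a sketch that runs a single command on the new VM with golang.org/x/crypto/ssh; the key path and address are taken from the log, while the rest (including the x/crypto dependency) is assumed and is not minikube's ssh_runner:

    // sshcmd.go - sketch: run one command on the new VM over SSH with
    // golang.org/x/crypto/ssh. Key path and address come from the log above.
    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	keyPath := os.ExpandEnv("$HOME/.minikube/machines/kubernetes-upgrade-046243/id_rsa")
    	key, err := os.ReadFile(keyPath)
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		panic(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VM, matches StrictHostKeyChecking=no above
    	}
    	client, err := ssh.Dial("tcp", "192.168.72.134:22", cfg)
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()

    	session, err := client.NewSession()
    	if err != nil {
    		panic(err)
    	}
    	defer session.Close()

    	out, err := session.CombinedOutput("hostname")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("remote hostname: %s", out)
    }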
	I0501 03:25:22.352865   58823 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18779-13391/.minikube CaCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18779-13391/.minikube}
	I0501 03:25:22.352895   58823 buildroot.go:174] setting up certificates
	I0501 03:25:22.352911   58823 provision.go:84] configureAuth start
	I0501 03:25:22.352922   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetMachineName
	I0501 03:25:22.353254   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetIP
	I0501 03:25:22.356234   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | domain kubernetes-upgrade-046243 has defined MAC address 52:54:00:ac:ba:ac in network mk-kubernetes-upgrade-046243
	I0501 03:25:22.356603   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:ba:ac", ip: ""} in network mk-kubernetes-upgrade-046243: {Iface:virbr3 ExpiryTime:2024-05-01 04:25:15 +0000 UTC Type:0 Mac:52:54:00:ac:ba:ac Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:kubernetes-upgrade-046243 Clientid:01:52:54:00:ac:ba:ac}
	I0501 03:25:22.356630   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | domain kubernetes-upgrade-046243 has defined IP address 192.168.72.134 and MAC address 52:54:00:ac:ba:ac in network mk-kubernetes-upgrade-046243
	I0501 03:25:22.356748   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetSSHHostname
	I0501 03:25:22.359170   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | domain kubernetes-upgrade-046243 has defined MAC address 52:54:00:ac:ba:ac in network mk-kubernetes-upgrade-046243
	I0501 03:25:22.359518   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:ba:ac", ip: ""} in network mk-kubernetes-upgrade-046243: {Iface:virbr3 ExpiryTime:2024-05-01 04:25:15 +0000 UTC Type:0 Mac:52:54:00:ac:ba:ac Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:kubernetes-upgrade-046243 Clientid:01:52:54:00:ac:ba:ac}
	I0501 03:25:22.359554   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | domain kubernetes-upgrade-046243 has defined IP address 192.168.72.134 and MAC address 52:54:00:ac:ba:ac in network mk-kubernetes-upgrade-046243
	I0501 03:25:22.359698   58823 provision.go:143] copyHostCerts
	I0501 03:25:22.359798   58823 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem, removing ...
	I0501 03:25:22.359813   58823 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem
	I0501 03:25:22.359887   58823 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem (1675 bytes)
	I0501 03:25:22.360041   58823 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem, removing ...
	I0501 03:25:22.360064   58823 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem
	I0501 03:25:22.360183   58823 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem (1078 bytes)
	I0501 03:25:22.360294   58823 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem, removing ...
	I0501 03:25:22.360308   58823 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem
	I0501 03:25:22.360345   58823 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem (1123 bytes)
	I0501 03:25:22.360436   58823 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-046243 san=[127.0.0.1 192.168.72.134 kubernetes-upgrade-046243 localhost minikube]
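configureAuth above generates a server certificate whose SANs cover loopback, the VM IP and the hostnames listed in san=[...]. A compact sketch with crypto/x509; it self-signs for brevity, whereas the real flow signs with the shared ca.pem/ca-key.pem pair:

    // servercert.go - sketch: issue a self-signed server cert with the SANs
    // seen in the log (the real flow signs with the minikube CA instead).
    package main

    import (
    	"crypto/ecdsa"
    	"crypto/elliptic"
    	"crypto/rand"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-046243"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // mirrors CertExpiration in the config dump
    		KeyUsage:     x509.KeyUsageDigitalSignature,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"kubernetes-upgrade-046243", "localhost", "minikube"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.134")},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
    		panic(err)
    	}
    }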
	I0501 03:25:22.642256   58823 provision.go:177] copyRemoteCerts
	I0501 03:25:22.642315   58823 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0501 03:25:22.642341   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetSSHHostname
	I0501 03:25:22.644821   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | domain kubernetes-upgrade-046243 has defined MAC address 52:54:00:ac:ba:ac in network mk-kubernetes-upgrade-046243
	I0501 03:25:22.645191   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:ba:ac", ip: ""} in network mk-kubernetes-upgrade-046243: {Iface:virbr3 ExpiryTime:2024-05-01 04:25:15 +0000 UTC Type:0 Mac:52:54:00:ac:ba:ac Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:kubernetes-upgrade-046243 Clientid:01:52:54:00:ac:ba:ac}
	I0501 03:25:22.645220   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | domain kubernetes-upgrade-046243 has defined IP address 192.168.72.134 and MAC address 52:54:00:ac:ba:ac in network mk-kubernetes-upgrade-046243
	I0501 03:25:22.645388   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetSSHPort
	I0501 03:25:22.645587   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetSSHKeyPath
	I0501 03:25:22.645741   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetSSHUsername
	I0501 03:25:22.645845   58823 sshutil.go:53] new ssh client: &{IP:192.168.72.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/kubernetes-upgrade-046243/id_rsa Username:docker}
	I0501 03:25:22.733624   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0501 03:25:22.768548   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0501 03:25:22.795097   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0501 03:25:22.823849   58823 provision.go:87] duration metric: took 470.923726ms to configureAuth
	I0501 03:25:22.823880   58823 buildroot.go:189] setting minikube options for container-runtime
	I0501 03:25:22.824061   58823 config.go:182] Loaded profile config "kubernetes-upgrade-046243": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0501 03:25:22.824143   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetSSHHostname
	I0501 03:25:22.826926   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | domain kubernetes-upgrade-046243 has defined MAC address 52:54:00:ac:ba:ac in network mk-kubernetes-upgrade-046243
	I0501 03:25:22.827245   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:ba:ac", ip: ""} in network mk-kubernetes-upgrade-046243: {Iface:virbr3 ExpiryTime:2024-05-01 04:25:15 +0000 UTC Type:0 Mac:52:54:00:ac:ba:ac Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:kubernetes-upgrade-046243 Clientid:01:52:54:00:ac:ba:ac}
	I0501 03:25:22.827269   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | domain kubernetes-upgrade-046243 has defined IP address 192.168.72.134 and MAC address 52:54:00:ac:ba:ac in network mk-kubernetes-upgrade-046243
	I0501 03:25:22.827471   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetSSHPort
	I0501 03:25:22.827689   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetSSHKeyPath
	I0501 03:25:22.827864   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetSSHKeyPath
	I0501 03:25:22.828075   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetSSHUsername
	I0501 03:25:22.828260   58823 main.go:141] libmachine: Using SSH client type: native
	I0501 03:25:22.828464   58823 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.134 22 <nil> <nil>}
	I0501 03:25:22.828489   58823 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0501 03:25:23.117443   58823 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0501 03:25:23.117470   58823 main.go:141] libmachine: Checking connection to Docker...
	I0501 03:25:23.117478   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetURL
	I0501 03:25:23.118877   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | Using libvirt version 6000000
	I0501 03:25:23.121030   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | domain kubernetes-upgrade-046243 has defined MAC address 52:54:00:ac:ba:ac in network mk-kubernetes-upgrade-046243
	I0501 03:25:23.121366   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:ba:ac", ip: ""} in network mk-kubernetes-upgrade-046243: {Iface:virbr3 ExpiryTime:2024-05-01 04:25:15 +0000 UTC Type:0 Mac:52:54:00:ac:ba:ac Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:kubernetes-upgrade-046243 Clientid:01:52:54:00:ac:ba:ac}
	I0501 03:25:23.121403   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | domain kubernetes-upgrade-046243 has defined IP address 192.168.72.134 and MAC address 52:54:00:ac:ba:ac in network mk-kubernetes-upgrade-046243
	I0501 03:25:23.121548   58823 main.go:141] libmachine: Docker is up and running!
	I0501 03:25:23.121564   58823 main.go:141] libmachine: Reticulating splines...
	I0501 03:25:23.121571   58823 client.go:171] duration metric: took 24.641854941s to LocalClient.Create
	I0501 03:25:23.121592   58823 start.go:167] duration metric: took 24.641920145s to libmachine.API.Create "kubernetes-upgrade-046243"
	I0501 03:25:23.121605   58823 start.go:293] postStartSetup for "kubernetes-upgrade-046243" (driver="kvm2")
	I0501 03:25:23.121617   58823 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0501 03:25:23.121633   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .DriverName
	I0501 03:25:23.121844   58823 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0501 03:25:23.121869   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetSSHHostname
	I0501 03:25:23.124043   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | domain kubernetes-upgrade-046243 has defined MAC address 52:54:00:ac:ba:ac in network mk-kubernetes-upgrade-046243
	I0501 03:25:23.124361   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:ba:ac", ip: ""} in network mk-kubernetes-upgrade-046243: {Iface:virbr3 ExpiryTime:2024-05-01 04:25:15 +0000 UTC Type:0 Mac:52:54:00:ac:ba:ac Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:kubernetes-upgrade-046243 Clientid:01:52:54:00:ac:ba:ac}
	I0501 03:25:23.124401   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | domain kubernetes-upgrade-046243 has defined IP address 192.168.72.134 and MAC address 52:54:00:ac:ba:ac in network mk-kubernetes-upgrade-046243
	I0501 03:25:23.124549   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetSSHPort
	I0501 03:25:23.124739   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetSSHKeyPath
	I0501 03:25:23.124933   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetSSHUsername
	I0501 03:25:23.125108   58823 sshutil.go:53] new ssh client: &{IP:192.168.72.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/kubernetes-upgrade-046243/id_rsa Username:docker}
	I0501 03:25:23.211094   58823 ssh_runner.go:195] Run: cat /etc/os-release
	I0501 03:25:23.215808   58823 info.go:137] Remote host: Buildroot 2023.02.9
	I0501 03:25:23.215831   58823 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13391/.minikube/addons for local assets ...
	I0501 03:25:23.215897   58823 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13391/.minikube/files for local assets ...
	I0501 03:25:23.216002   58823 filesync.go:149] local asset: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem -> 207242.pem in /etc/ssl/certs
	I0501 03:25:23.216113   58823 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0501 03:25:23.227372   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem --> /etc/ssl/certs/207242.pem (1708 bytes)
	I0501 03:25:23.256707   58823 start.go:296] duration metric: took 135.08605ms for postStartSetup
	I0501 03:25:23.256760   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetConfigRaw
	I0501 03:25:23.257352   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetIP
	I0501 03:25:23.259817   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | domain kubernetes-upgrade-046243 has defined MAC address 52:54:00:ac:ba:ac in network mk-kubernetes-upgrade-046243
	I0501 03:25:23.260155   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:ba:ac", ip: ""} in network mk-kubernetes-upgrade-046243: {Iface:virbr3 ExpiryTime:2024-05-01 04:25:15 +0000 UTC Type:0 Mac:52:54:00:ac:ba:ac Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:kubernetes-upgrade-046243 Clientid:01:52:54:00:ac:ba:ac}
	I0501 03:25:23.260189   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | domain kubernetes-upgrade-046243 has defined IP address 192.168.72.134 and MAC address 52:54:00:ac:ba:ac in network mk-kubernetes-upgrade-046243
	I0501 03:25:23.260495   58823 profile.go:143] Saving config to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/kubernetes-upgrade-046243/config.json ...
	I0501 03:25:23.260724   58823 start.go:128] duration metric: took 24.804624281s to createHost
	I0501 03:25:23.260748   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetSSHHostname
	I0501 03:25:23.263156   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | domain kubernetes-upgrade-046243 has defined MAC address 52:54:00:ac:ba:ac in network mk-kubernetes-upgrade-046243
	I0501 03:25:23.263528   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:ba:ac", ip: ""} in network mk-kubernetes-upgrade-046243: {Iface:virbr3 ExpiryTime:2024-05-01 04:25:15 +0000 UTC Type:0 Mac:52:54:00:ac:ba:ac Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:kubernetes-upgrade-046243 Clientid:01:52:54:00:ac:ba:ac}
	I0501 03:25:23.263551   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | domain kubernetes-upgrade-046243 has defined IP address 192.168.72.134 and MAC address 52:54:00:ac:ba:ac in network mk-kubernetes-upgrade-046243
	I0501 03:25:23.263704   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetSSHPort
	I0501 03:25:23.263872   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetSSHKeyPath
	I0501 03:25:23.264047   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetSSHKeyPath
	I0501 03:25:23.264208   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetSSHUsername
	I0501 03:25:23.264429   58823 main.go:141] libmachine: Using SSH client type: native
	I0501 03:25:23.264668   58823 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.134 22 <nil> <nil>}
	I0501 03:25:23.264685   58823 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0501 03:25:23.379766   58823 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714533923.359461880
	
	I0501 03:25:23.379791   58823 fix.go:216] guest clock: 1714533923.359461880
	I0501 03:25:23.379801   58823 fix.go:229] Guest: 2024-05-01 03:25:23.35946188 +0000 UTC Remote: 2024-05-01 03:25:23.260737595 +0000 UTC m=+68.015755695 (delta=98.724285ms)
	I0501 03:25:23.379826   58823 fix.go:200] guest clock delta is within tolerance: 98.724285ms
	I0501 03:25:23.379832   58823 start.go:83] releasing machines lock for "kubernetes-upgrade-046243", held for 24.923913919s
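
The guest-clock check above runs `date +%s.%N` over SSH and compares it with the host clock; the delta (about 99ms here) stays within tolerance, so no resync is needed. A rough manual version of the same comparison, illustrative only and assuming the profile name from the log:

    echo "host:  $(date +%s.%N)"
    minikube -p kubernetes-upgrade-046243 ssh -- date +%s.%N   # guest clock
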
	I0501 03:25:23.379858   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .DriverName
	I0501 03:25:23.380132   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetIP
	I0501 03:25:23.383081   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | domain kubernetes-upgrade-046243 has defined MAC address 52:54:00:ac:ba:ac in network mk-kubernetes-upgrade-046243
	I0501 03:25:23.383511   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:ba:ac", ip: ""} in network mk-kubernetes-upgrade-046243: {Iface:virbr3 ExpiryTime:2024-05-01 04:25:15 +0000 UTC Type:0 Mac:52:54:00:ac:ba:ac Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:kubernetes-upgrade-046243 Clientid:01:52:54:00:ac:ba:ac}
	I0501 03:25:23.383547   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | domain kubernetes-upgrade-046243 has defined IP address 192.168.72.134 and MAC address 52:54:00:ac:ba:ac in network mk-kubernetes-upgrade-046243
	I0501 03:25:23.383697   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .DriverName
	I0501 03:25:23.384350   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .DriverName
	I0501 03:25:23.384536   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .DriverName
	I0501 03:25:23.384614   58823 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0501 03:25:23.384653   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetSSHHostname
	I0501 03:25:23.384758   58823 ssh_runner.go:195] Run: cat /version.json
	I0501 03:25:23.384779   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetSSHHostname
	I0501 03:25:23.387540   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | domain kubernetes-upgrade-046243 has defined MAC address 52:54:00:ac:ba:ac in network mk-kubernetes-upgrade-046243
	I0501 03:25:23.387862   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:ba:ac", ip: ""} in network mk-kubernetes-upgrade-046243: {Iface:virbr3 ExpiryTime:2024-05-01 04:25:15 +0000 UTC Type:0 Mac:52:54:00:ac:ba:ac Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:kubernetes-upgrade-046243 Clientid:01:52:54:00:ac:ba:ac}
	I0501 03:25:23.387887   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | domain kubernetes-upgrade-046243 has defined MAC address 52:54:00:ac:ba:ac in network mk-kubernetes-upgrade-046243
	I0501 03:25:23.387920   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | domain kubernetes-upgrade-046243 has defined IP address 192.168.72.134 and MAC address 52:54:00:ac:ba:ac in network mk-kubernetes-upgrade-046243
	I0501 03:25:23.388077   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetSSHPort
	I0501 03:25:23.388247   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetSSHKeyPath
	I0501 03:25:23.388414   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:ba:ac", ip: ""} in network mk-kubernetes-upgrade-046243: {Iface:virbr3 ExpiryTime:2024-05-01 04:25:15 +0000 UTC Type:0 Mac:52:54:00:ac:ba:ac Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:kubernetes-upgrade-046243 Clientid:01:52:54:00:ac:ba:ac}
	I0501 03:25:23.388429   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetSSHUsername
	I0501 03:25:23.388452   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | domain kubernetes-upgrade-046243 has defined IP address 192.168.72.134 and MAC address 52:54:00:ac:ba:ac in network mk-kubernetes-upgrade-046243
	I0501 03:25:23.388592   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetSSHPort
	I0501 03:25:23.388580   58823 sshutil.go:53] new ssh client: &{IP:192.168.72.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/kubernetes-upgrade-046243/id_rsa Username:docker}
	I0501 03:25:23.388717   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetSSHKeyPath
	I0501 03:25:23.388922   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetSSHUsername
	I0501 03:25:23.389082   58823 sshutil.go:53] new ssh client: &{IP:192.168.72.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/kubernetes-upgrade-046243/id_rsa Username:docker}
	I0501 03:25:23.495738   58823 ssh_runner.go:195] Run: systemctl --version
	I0501 03:25:23.502940   58823 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0501 03:25:23.680289   58823 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0501 03:25:23.687310   58823 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0501 03:25:23.687379   58823 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0501 03:25:23.709591   58823 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
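
The find/mv above does not delete the stock CNI configs; it parks them with a .mk_disabled suffix so that the bridge CNI minikube selects later (cni.go recommends "bridge" for kvm2 + crio further down) is the only active configuration. A quick way to see the effect on the node, shown for reference only:

    ls /etc/cni/net.d/
    # e.g. 87-podman-bridge.conflist.mk_disabled   (renamed, not removed)
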
	I0501 03:25:23.709622   58823 start.go:494] detecting cgroup driver to use...
	I0501 03:25:23.709696   58823 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0501 03:25:23.729774   58823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 03:25:23.745013   58823 docker.go:217] disabling cri-docker service (if available) ...
	I0501 03:25:23.745080   58823 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0501 03:25:23.759993   58823 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0501 03:25:23.776713   58823 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0501 03:25:23.917976   58823 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0501 03:25:24.092135   58823 docker.go:233] disabling docker service ...
	I0501 03:25:24.092214   58823 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0501 03:25:24.112777   58823 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0501 03:25:24.128475   58823 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0501 03:25:24.276653   58823 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0501 03:25:24.418436   58823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0501 03:25:24.452652   58823 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 03:25:24.478334   58823 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0501 03:25:24.478387   58823 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:25:24.490511   58823 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0501 03:25:24.490575   58823 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:25:24.501950   58823 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:25:24.515496   58823 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
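
The three sed edits above pin the pause image, switch the cgroup manager to cgroupfs, and set conmon_cgroup accordingly. A sketch of the resulting values in the drop-in (the rest of the file's contents are not shown in the log):

    grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.2"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"
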
	I0501 03:25:24.528655   58823 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0501 03:25:24.540186   58823 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0501 03:25:24.551261   58823 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0501 03:25:24.551337   58823 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0501 03:25:24.567904   58823 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
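
The sysctl probe above fails with status 255 simply because br_netfilter is not loaded yet; loading the module creates /proc/sys/net/bridge/*, and IPv4 forwarding is enabled directly. Hand-run equivalents for verification, standard commands shown for illustration:

    sudo modprobe br_netfilter
    sysctl net.bridge.bridge-nf-call-iptables    # present (and typically 1) once the module is loaded
    cat /proc/sys/net/ipv4/ip_forward            # 1 after the echo above
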
	I0501 03:25:24.579602   58823 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:25:24.718152   58823 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0501 03:25:25.336148   58823 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0501 03:25:25.336210   58823 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0501 03:25:25.342203   58823 start.go:562] Will wait 60s for crictl version
	I0501 03:25:25.342265   58823 ssh_runner.go:195] Run: which crictl
	I0501 03:25:25.346956   58823 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0501 03:25:25.398297   58823 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0501 03:25:25.398386   58823 ssh_runner.go:195] Run: crio --version
	I0501 03:25:25.441750   58823 ssh_runner.go:195] Run: crio --version
	I0501 03:25:25.475755   58823 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0501 03:25:25.477135   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetIP
	I0501 03:25:25.480368   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | domain kubernetes-upgrade-046243 has defined MAC address 52:54:00:ac:ba:ac in network mk-kubernetes-upgrade-046243
	I0501 03:25:25.480795   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:ba:ac", ip: ""} in network mk-kubernetes-upgrade-046243: {Iface:virbr3 ExpiryTime:2024-05-01 04:25:15 +0000 UTC Type:0 Mac:52:54:00:ac:ba:ac Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:kubernetes-upgrade-046243 Clientid:01:52:54:00:ac:ba:ac}
	I0501 03:25:25.480821   58823 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | domain kubernetes-upgrade-046243 has defined IP address 192.168.72.134 and MAC address 52:54:00:ac:ba:ac in network mk-kubernetes-upgrade-046243
	I0501 03:25:25.481149   58823 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0501 03:25:25.486275   58823 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0501 03:25:25.505560   58823 kubeadm.go:877] updating cluster {Name:kubernetes-upgrade-046243 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.20.0 ClusterName:kubernetes-upgrade-046243 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.134 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimi
zations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0501 03:25:25.505664   58823 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0501 03:25:25.505721   58823 ssh_runner.go:195] Run: sudo crictl images --output json
	I0501 03:25:25.552187   58823 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0501 03:25:25.552260   58823 ssh_runner.go:195] Run: which lz4
	I0501 03:25:25.557216   58823 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0501 03:25:25.562133   58823 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0501 03:25:25.562164   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0501 03:25:27.677521   58823 crio.go:462] duration metric: took 2.120352187s to copy over tarball
	I0501 03:25:27.677614   58823 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0501 03:25:30.779132   58823 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.101486011s)
	I0501 03:25:30.779193   58823 crio.go:469] duration metric: took 3.101614635s to extract the tarball
	I0501 03:25:30.779204   58823 ssh_runner.go:146] rm: /preloaded.tar.lz4
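
The preload path above copies a ~473 MB lz4 tarball of container images into the VM and unpacks it under /var so the container storage already holds them. A manual equivalent of the extraction, using the flags from the log; note that the very next crictl check in this run still reports the v1.20.0 images as missing, so minikube falls back to per-image loading/pulling:

    # -I lz4: decompress with lz4; -C /var: unpack into the container storage root
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo crictl images --output json   # the follow-up check minikube runs next
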
	I0501 03:25:30.835256   58823 ssh_runner.go:195] Run: sudo crictl images --output json
	I0501 03:25:30.892216   58823 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0501 03:25:30.892241   58823 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0501 03:25:30.892328   58823 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 03:25:30.892366   58823 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0501 03:25:30.892429   58823 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0501 03:25:30.892447   58823 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0501 03:25:30.892546   58823 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0501 03:25:30.892546   58823 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0501 03:25:30.892338   58823 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0501 03:25:30.892329   58823 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0501 03:25:30.893987   58823 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0501 03:25:30.893995   58823 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 03:25:30.894177   58823 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0501 03:25:30.894252   58823 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0501 03:25:30.894371   58823 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0501 03:25:30.894542   58823 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0501 03:25:30.894597   58823 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0501 03:25:30.894738   58823 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0501 03:25:31.051703   58823 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0501 03:25:31.064470   58823 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0501 03:25:31.080853   58823 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0501 03:25:31.091981   58823 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0501 03:25:31.140710   58823 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0501 03:25:31.150822   58823 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0501 03:25:31.150862   58823 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0501 03:25:31.150882   58823 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0501 03:25:31.150967   58823 ssh_runner.go:195] Run: which crictl
	I0501 03:25:31.150896   58823 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0501 03:25:31.151069   58823 ssh_runner.go:195] Run: which crictl
	I0501 03:25:31.185889   58823 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0501 03:25:31.185936   58823 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0501 03:25:31.185996   58823 ssh_runner.go:195] Run: which crictl
	I0501 03:25:31.215571   58823 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0501 03:25:31.215628   58823 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0501 03:25:31.215683   58823 ssh_runner.go:195] Run: which crictl
	I0501 03:25:31.239015   58823 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0501 03:25:31.239047   58823 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0501 03:25:31.239058   58823 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0501 03:25:31.239098   58823 ssh_runner.go:195] Run: which crictl
	I0501 03:25:31.239113   58823 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0501 03:25:31.239176   58823 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0501 03:25:31.239179   58823 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0501 03:25:31.297581   58823 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0501 03:25:31.323310   58823 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0501 03:25:31.372267   58823 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0501 03:25:31.372291   58823 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0501 03:25:31.372378   58823 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0501 03:25:31.372436   58823 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0501 03:25:31.372503   58823 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0501 03:25:31.425328   58823 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0501 03:25:31.425378   58823 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0501 03:25:31.425425   58823 ssh_runner.go:195] Run: which crictl
	I0501 03:25:31.431072   58823 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0501 03:25:31.431118   58823 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0501 03:25:31.431165   58823 ssh_runner.go:195] Run: which crictl
	I0501 03:25:31.452512   58823 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0501 03:25:31.452593   58823 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0501 03:25:31.452682   58823 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0501 03:25:31.515695   58823 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0501 03:25:31.515736   58823 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0501 03:25:31.837060   58823 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 03:25:31.990021   58823 cache_images.go:92] duration metric: took 1.097761871s to LoadCachedImages
	W0501 03:25:31.990150   58823 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
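
LoadCachedImages fails here because the per-image cache files it expects on the host were never created for v1.20.0; this is only a warning, and kubeadm's preflight later pulls the required images instead. A sketch of what the lookup expects, using the cache layout from the log (the listed filenames are hypothetical examples):

    # One file per image is expected under the host cache, e.g.:
    ls ~/.minikube/cache/images/amd64/registry.k8s.io/ 2>/dev/null
    # kube-apiserver_v1.20.0  kube-controller-manager_v1.20.0  kube-scheduler_v1.20.0  ...
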
	I0501 03:25:31.990175   58823 kubeadm.go:928] updating node { 192.168.72.134 8443 v1.20.0 crio true true} ...
	I0501 03:25:31.990301   58823 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-046243 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.134
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-046243 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0501 03:25:31.990412   58823 ssh_runner.go:195] Run: crio config
	I0501 03:25:32.062327   58823 cni.go:84] Creating CNI manager for ""
	I0501 03:25:32.062365   58823 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0501 03:25:32.062387   58823 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0501 03:25:32.062424   58823 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.134 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-046243 NodeName:kubernetes-upgrade-046243 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.134"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.134 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0501 03:25:32.062657   58823 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.134
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-046243"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.134
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.134"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0501 03:25:32.062754   58823 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0501 03:25:32.077132   58823 binaries.go:44] Found k8s binaries, skipping transfer
	I0501 03:25:32.077209   58823 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0501 03:25:32.090536   58823 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I0501 03:25:32.111408   58823 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0501 03:25:32.130409   58823 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I0501 03:25:32.152606   58823 ssh_runner.go:195] Run: grep 192.168.72.134	control-plane.minikube.internal$ /etc/hosts
	I0501 03:25:32.157370   58823 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.134	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0501 03:25:32.173522   58823 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:25:32.334251   58823 ssh_runner.go:195] Run: sudo systemctl start kubelet
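
The scp calls above stage the kubelet drop-in, the kubelet unit, and the kubeadm config, then reload systemd and start the kubelet. Ways to inspect what was written on the node, using standard systemd tooling; at this point the kubelet typically restarts in a loop until kubeadm init writes /var/lib/kubelet/config.yaml:

    systemctl cat kubelet                          # kubelet.service plus the 10-kubeadm.conf drop-in
    sudo cat /var/tmp/minikube/kubeadm.yaml.new    # staged config, copied to kubeadm.yaml before init
    systemctl is-active kubelet
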
	I0501 03:25:32.355557   58823 certs.go:68] Setting up /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/kubernetes-upgrade-046243 for IP: 192.168.72.134
	I0501 03:25:32.355584   58823 certs.go:194] generating shared ca certs ...
	I0501 03:25:32.355630   58823 certs.go:226] acquiring lock for ca certs: {Name:mk85a75d14c00780d4bf09cd9bdaf0f96f1b8fce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:25:32.355779   58823 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key
	I0501 03:25:32.355837   58823 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key
	I0501 03:25:32.355852   58823 certs.go:256] generating profile certs ...
	I0501 03:25:32.355925   58823 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/kubernetes-upgrade-046243/client.key
	I0501 03:25:32.355943   58823 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/kubernetes-upgrade-046243/client.crt with IP's: []
	I0501 03:25:32.529117   58823 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/kubernetes-upgrade-046243/client.crt ...
	I0501 03:25:32.529159   58823 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/kubernetes-upgrade-046243/client.crt: {Name:mk2c354b2ccfd51ea56e4599a6652f294af5046e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:25:32.529387   58823 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/kubernetes-upgrade-046243/client.key ...
	I0501 03:25:32.529410   58823 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/kubernetes-upgrade-046243/client.key: {Name:mk8dcf8fc7d5013a2706d20bb64f379c72c47f6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:25:32.529532   58823 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/kubernetes-upgrade-046243/apiserver.key.cae910e0
	I0501 03:25:32.529556   58823 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/kubernetes-upgrade-046243/apiserver.crt.cae910e0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.134]
	I0501 03:25:32.595945   58823 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/kubernetes-upgrade-046243/apiserver.crt.cae910e0 ...
	I0501 03:25:32.595976   58823 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/kubernetes-upgrade-046243/apiserver.crt.cae910e0: {Name:mkc16ff9603709f3b9c99cc537dad2322015b647 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:25:32.673866   58823 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/kubernetes-upgrade-046243/apiserver.key.cae910e0 ...
	I0501 03:25:32.673921   58823 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/kubernetes-upgrade-046243/apiserver.key.cae910e0: {Name:mke91b9aca4791f5fd4d024e44f5dc5db2baa236 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:25:32.674059   58823 certs.go:381] copying /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/kubernetes-upgrade-046243/apiserver.crt.cae910e0 -> /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/kubernetes-upgrade-046243/apiserver.crt
	I0501 03:25:32.674190   58823 certs.go:385] copying /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/kubernetes-upgrade-046243/apiserver.key.cae910e0 -> /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/kubernetes-upgrade-046243/apiserver.key
	I0501 03:25:32.674281   58823 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/kubernetes-upgrade-046243/proxy-client.key
	I0501 03:25:32.674318   58823 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/kubernetes-upgrade-046243/proxy-client.crt with IP's: []
	I0501 03:25:32.911078   58823 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/kubernetes-upgrade-046243/proxy-client.crt ...
	I0501 03:25:32.911124   58823 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/kubernetes-upgrade-046243/proxy-client.crt: {Name:mke2b70d4a2ec21d0b16ff6e922a24f533c02121 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:25:32.911359   58823 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/kubernetes-upgrade-046243/proxy-client.key ...
	I0501 03:25:32.911385   58823 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/kubernetes-upgrade-046243/proxy-client.key: {Name:mk835ce6c9c780111b0d5daec136ff7500fe7e1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:25:32.911653   58823 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem (1338 bytes)
	W0501 03:25:32.911713   58823 certs.go:480] ignoring /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724_empty.pem, impossibly tiny 0 bytes
	I0501 03:25:32.911729   58823 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem (1679 bytes)
	I0501 03:25:32.911758   58823 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem (1078 bytes)
	I0501 03:25:32.911791   58823 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem (1123 bytes)
	I0501 03:25:32.911820   58823 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem (1675 bytes)
	I0501 03:25:32.911887   58823 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem (1708 bytes)
	I0501 03:25:32.912722   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0501 03:25:32.945432   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0501 03:25:32.976433   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0501 03:25:33.008282   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0501 03:25:33.040090   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/kubernetes-upgrade-046243/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0501 03:25:33.071553   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/kubernetes-upgrade-046243/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0501 03:25:33.104877   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/kubernetes-upgrade-046243/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0501 03:25:33.134205   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/kubernetes-upgrade-046243/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0501 03:25:33.162990   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem --> /usr/share/ca-certificates/207242.pem (1708 bytes)
	I0501 03:25:33.199151   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0501 03:25:33.240617   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem --> /usr/share/ca-certificates/20724.pem (1338 bytes)
	I0501 03:25:33.286749   58823 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0501 03:25:33.326888   58823 ssh_runner.go:195] Run: openssl version
	I0501 03:25:33.336243   58823 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/207242.pem && ln -fs /usr/share/ca-certificates/207242.pem /etc/ssl/certs/207242.pem"
	I0501 03:25:33.351825   58823 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/207242.pem
	I0501 03:25:33.357165   58823 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  1 02:20 /usr/share/ca-certificates/207242.pem
	I0501 03:25:33.357254   58823 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/207242.pem
	I0501 03:25:33.363804   58823 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/207242.pem /etc/ssl/certs/3ec20f2e.0"
	I0501 03:25:33.375982   58823 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0501 03:25:33.390020   58823 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:25:33.396976   58823 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  1 02:08 /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:25:33.397043   58823 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:25:33.404809   58823 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0501 03:25:33.417880   58823 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20724.pem && ln -fs /usr/share/ca-certificates/20724.pem /etc/ssl/certs/20724.pem"
	I0501 03:25:33.430807   58823 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20724.pem
	I0501 03:25:33.437169   58823 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  1 02:20 /usr/share/ca-certificates/20724.pem
	I0501 03:25:33.437268   58823 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20724.pem
	I0501 03:25:33.444498   58823 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20724.pem /etc/ssl/certs/51391683.0"
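
The 8-hex-digit link names created above (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject-hash names, which is exactly what the preceding `openssl x509 -hash -noout` calls compute. For reference:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    ls -l /etc/ssl/certs/b5213941.0                                           # -> /etc/ssl/certs/minikubeCA.pem
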
	I0501 03:25:33.457207   58823 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0501 03:25:33.462356   58823 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0501 03:25:33.462436   58823 kubeadm.go:391] StartCluster: {Name:kubernetes-upgrade-046243 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.20.0 ClusterName:kubernetes-upgrade-046243 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.134 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizat
ions:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 03:25:33.462507   58823 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0501 03:25:33.462544   58823 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0501 03:25:33.507920   58823 cri.go:89] found id: ""
	I0501 03:25:33.507999   58823 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0501 03:25:33.524282   58823 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0501 03:25:33.538853   58823 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0501 03:25:33.553494   58823 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0501 03:25:33.553516   58823 kubeadm.go:156] found existing configuration files:
	
	I0501 03:25:33.553566   58823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0501 03:25:33.564879   58823 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0501 03:25:33.564951   58823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0501 03:25:33.576528   58823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0501 03:25:33.587532   58823 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0501 03:25:33.587608   58823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0501 03:25:33.599398   58823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0501 03:25:33.612250   58823 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0501 03:25:33.612353   58823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0501 03:25:33.624676   58823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0501 03:25:33.636512   58823 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0501 03:25:33.636585   58823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0501 03:25:33.652503   58823 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0501 03:25:33.820023   58823 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0501 03:25:33.820314   58823 kubeadm.go:309] [preflight] Running pre-flight checks
	I0501 03:25:34.049472   58823 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0501 03:25:34.049615   58823 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0501 03:25:34.049736   58823 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0501 03:25:34.316585   58823 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0501 03:25:34.319420   58823 out.go:204]   - Generating certificates and keys ...
	I0501 03:25:34.319541   58823 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0501 03:25:34.319642   58823 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0501 03:25:34.383067   58823 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0501 03:25:34.480379   58823 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0501 03:25:34.620056   58823 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0501 03:25:35.043624   58823 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0501 03:25:35.167645   58823 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0501 03:25:35.167994   58823 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-046243 localhost] and IPs [192.168.72.134 127.0.0.1 ::1]
	I0501 03:25:35.460165   58823 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0501 03:25:35.465865   58823 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-046243 localhost] and IPs [192.168.72.134 127.0.0.1 ::1]
	I0501 03:25:35.842185   58823 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0501 03:25:35.996668   58823 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0501 03:25:36.222483   58823 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0501 03:25:36.222866   58823 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0501 03:25:36.422539   58823 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0501 03:25:36.586917   58823 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0501 03:25:36.730946   58823 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0501 03:25:36.989458   58823 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0501 03:25:37.012176   58823 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0501 03:25:37.013488   58823 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0501 03:25:37.013559   58823 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0501 03:25:37.202007   58823 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0501 03:25:37.203818   58823 out.go:204]   - Booting up control plane ...
	I0501 03:25:37.204030   58823 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0501 03:25:37.214582   58823 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0501 03:25:37.217926   58823 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0501 03:25:37.218714   58823 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0501 03:25:37.232285   58823 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0501 03:26:17.230839   58823 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0501 03:26:17.232007   58823 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0501 03:26:17.232268   58823 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0501 03:26:22.233129   58823 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0501 03:26:22.233388   58823 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0501 03:26:32.234310   58823 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0501 03:26:32.234542   58823 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0501 03:26:52.236150   58823 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0501 03:26:52.236408   58823 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0501 03:27:32.235924   58823 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0501 03:27:32.236198   58823 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0501 03:27:32.236213   58823 kubeadm.go:309] 
	I0501 03:27:32.236296   58823 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0501 03:27:32.236372   58823 kubeadm.go:309] 		timed out waiting for the condition
	I0501 03:27:32.236388   58823 kubeadm.go:309] 
	I0501 03:27:32.236417   58823 kubeadm.go:309] 	This error is likely caused by:
	I0501 03:27:32.236452   58823 kubeadm.go:309] 		- The kubelet is not running
	I0501 03:27:32.236577   58823 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0501 03:27:32.236591   58823 kubeadm.go:309] 
	I0501 03:27:32.236745   58823 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0501 03:27:32.236807   58823 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0501 03:27:32.236857   58823 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0501 03:27:32.236875   58823 kubeadm.go:309] 
	I0501 03:27:32.237038   58823 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0501 03:27:32.237175   58823 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0501 03:27:32.237193   58823 kubeadm.go:309] 
	I0501 03:27:32.237351   58823 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0501 03:27:32.237481   58823 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0501 03:27:32.237605   58823 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0501 03:27:32.237713   58823 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0501 03:27:32.237737   58823 kubeadm.go:309] 
	I0501 03:27:32.237897   58823 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0501 03:27:32.238022   58823 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0501 03:27:32.238142   58823 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0501 03:27:32.238300   58823 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-046243 localhost] and IPs [192.168.72.134 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-046243 localhost] and IPs [192.168.72.134 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-046243 localhost] and IPs [192.168.72.134 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-046243 localhost] and IPs [192.168.72.134 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0501 03:27:32.238354   58823 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0501 03:27:34.691331   58823 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.452942184s)
	I0501 03:27:34.691408   58823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 03:27:34.707895   58823 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0501 03:27:34.719477   58823 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0501 03:27:34.719502   58823 kubeadm.go:156] found existing configuration files:
	
	I0501 03:27:34.719558   58823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0501 03:27:34.730599   58823 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0501 03:27:34.730673   58823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0501 03:27:34.743370   58823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0501 03:27:34.755247   58823 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0501 03:27:34.755327   58823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0501 03:27:34.767994   58823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0501 03:27:34.779419   58823 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0501 03:27:34.779488   58823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0501 03:27:34.791126   58823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0501 03:27:34.802128   58823 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0501 03:27:34.802198   58823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0501 03:27:34.813533   58823 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0501 03:27:35.049972   58823 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0501 03:29:31.114985   58823 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0501 03:29:31.115110   58823 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0501 03:29:31.116810   58823 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0501 03:29:31.116877   58823 kubeadm.go:309] [preflight] Running pre-flight checks
	I0501 03:29:31.116975   58823 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0501 03:29:31.117094   58823 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0501 03:29:31.117249   58823 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0501 03:29:31.117344   58823 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0501 03:29:31.119401   58823 out.go:204]   - Generating certificates and keys ...
	I0501 03:29:31.119505   58823 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0501 03:29:31.119587   58823 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0501 03:29:31.119686   58823 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0501 03:29:31.119748   58823 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0501 03:29:31.119812   58823 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0501 03:29:31.119866   58823 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0501 03:29:31.119919   58823 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0501 03:29:31.119984   58823 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0501 03:29:31.120072   58823 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0501 03:29:31.120141   58823 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0501 03:29:31.120175   58823 kubeadm.go:309] [certs] Using the existing "sa" key
	I0501 03:29:31.120238   58823 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0501 03:29:31.120303   58823 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0501 03:29:31.120352   58823 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0501 03:29:31.120408   58823 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0501 03:29:31.120459   58823 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0501 03:29:31.120571   58823 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0501 03:29:31.120658   58823 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0501 03:29:31.120708   58823 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0501 03:29:31.120788   58823 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0501 03:29:31.123449   58823 out.go:204]   - Booting up control plane ...
	I0501 03:29:31.123527   58823 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0501 03:29:31.123609   58823 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0501 03:29:31.123684   58823 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0501 03:29:31.123753   58823 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0501 03:29:31.123920   58823 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0501 03:29:31.124000   58823 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0501 03:29:31.124067   58823 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0501 03:29:31.124285   58823 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0501 03:29:31.124383   58823 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0501 03:29:31.124630   58823 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0501 03:29:31.124744   58823 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0501 03:29:31.124949   58823 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0501 03:29:31.125089   58823 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0501 03:29:31.125317   58823 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0501 03:29:31.125406   58823 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0501 03:29:31.125647   58823 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0501 03:29:31.125656   58823 kubeadm.go:309] 
	I0501 03:29:31.125689   58823 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0501 03:29:31.125743   58823 kubeadm.go:309] 		timed out waiting for the condition
	I0501 03:29:31.125756   58823 kubeadm.go:309] 
	I0501 03:29:31.125799   58823 kubeadm.go:309] 	This error is likely caused by:
	I0501 03:29:31.125846   58823 kubeadm.go:309] 		- The kubelet is not running
	I0501 03:29:31.125987   58823 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0501 03:29:31.126005   58823 kubeadm.go:309] 
	I0501 03:29:31.126148   58823 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0501 03:29:31.126193   58823 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0501 03:29:31.126236   58823 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0501 03:29:31.126250   58823 kubeadm.go:309] 
	I0501 03:29:31.126391   58823 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0501 03:29:31.126530   58823 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0501 03:29:31.126541   58823 kubeadm.go:309] 
	I0501 03:29:31.126682   58823 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0501 03:29:31.126801   58823 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0501 03:29:31.126902   58823 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0501 03:29:31.127016   58823 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0501 03:29:31.127058   58823 kubeadm.go:309] 
	I0501 03:29:31.127096   58823 kubeadm.go:393] duration metric: took 3m57.664665046s to StartCluster
	I0501 03:29:31.127142   58823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:29:31.127193   58823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:29:31.183330   58823 cri.go:89] found id: ""
	I0501 03:29:31.183349   58823 logs.go:276] 0 containers: []
	W0501 03:29:31.183362   58823 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:29:31.183368   58823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:29:31.183413   58823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:29:31.239598   58823 cri.go:89] found id: ""
	I0501 03:29:31.239626   58823 logs.go:276] 0 containers: []
	W0501 03:29:31.239645   58823 logs.go:278] No container was found matching "etcd"
	I0501 03:29:31.239653   58823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:29:31.239719   58823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:29:31.282312   58823 cri.go:89] found id: ""
	I0501 03:29:31.282336   58823 logs.go:276] 0 containers: []
	W0501 03:29:31.282343   58823 logs.go:278] No container was found matching "coredns"
	I0501 03:29:31.282348   58823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:29:31.282410   58823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:29:31.325361   58823 cri.go:89] found id: ""
	I0501 03:29:31.325384   58823 logs.go:276] 0 containers: []
	W0501 03:29:31.325399   58823 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:29:31.325409   58823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:29:31.325470   58823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:29:31.364098   58823 cri.go:89] found id: ""
	I0501 03:29:31.364124   58823 logs.go:276] 0 containers: []
	W0501 03:29:31.364135   58823 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:29:31.364142   58823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:29:31.364206   58823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:29:31.407614   58823 cri.go:89] found id: ""
	I0501 03:29:31.407644   58823 logs.go:276] 0 containers: []
	W0501 03:29:31.407655   58823 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:29:31.407663   58823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:29:31.407718   58823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:29:31.448060   58823 cri.go:89] found id: ""
	I0501 03:29:31.448092   58823 logs.go:276] 0 containers: []
	W0501 03:29:31.448104   58823 logs.go:278] No container was found matching "kindnet"
	I0501 03:29:31.448116   58823 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:29:31.448130   58823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:29:31.554863   58823 logs.go:123] Gathering logs for container status ...
	I0501 03:29:31.554901   58823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:29:31.600885   58823 logs.go:123] Gathering logs for kubelet ...
	I0501 03:29:31.600911   58823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:29:31.663391   58823 logs.go:123] Gathering logs for dmesg ...
	I0501 03:29:31.663428   58823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:29:31.680339   58823 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:29:31.680367   58823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:29:31.812062   58823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0501 03:29:31.812099   58823 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0501 03:29:31.812128   58823 out.go:239] * 
	* 
	W0501 03:29:31.812219   58823 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0501 03:29:31.812247   58823 out.go:239] * 
	* 
	W0501 03:29:31.813057   58823 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0501 03:29:31.816708   58823 out.go:177] 
	W0501 03:29:31.818026   58823 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0501 03:29:31.818096   58823 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0501 03:29:31.818125   58823 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0501 03:29:31.819648   58823 out.go:177] 

                                                
                                                
** /stderr **
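The failure above is the kubelet never answering its health probe, so kubeadm gives up in the wait-control-plane phase. A minimal diagnostic sketch built only from the commands the output itself suggests: the node-side checks run inside the guest (e.g. via minikube ssh -p kubernetes-upgrade-046243), the retry runs on the host, CONTAINERID is a placeholder for whatever ID the crictl listing returns, and the cgroup-driver flag is minikube's own suggestion rather than a confirmed fix.

    # On the node: is the kubelet running, and what is it logging?
    systemctl status kubelet
    journalctl -xeu kubelet
    curl -sSL http://localhost:10248/healthz    # the health probe kubeadm kept retrying

    # Did a control-plane container crash under cri-o?
    crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID

    # Host-side retry suggested above: pin the kubelet to the systemd cgroup driver.
    out/minikube-linux-amd64 start -p kubernetes-upgrade-046243 --memory=2200 \
      --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio \
      --extra-config=kubelet.cgroup-driver=systemd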
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-046243 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-046243
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-046243: (3.326177513s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-046243 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-046243 status --format={{.Host}}: exit status 7 (76.801082ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
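Exit status 7 from the status probe is tolerated here: --format applies a Go template to minikube's status output, so only the host state is printed, and a stopped profile reports "Stopped" with a non-zero exit. A minimal equivalent check, using the same profile as this run:

    # Print only the host field; a stopped profile exits non-zero, which is expected at this point.
    out/minikube-linux-amd64 -p kubernetes-upgrade-046243 status --format='{{.Host}}' \
      || echo "profile is not running (exit status $?)"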
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-046243 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0501 03:29:56.199104   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/functional-960026/client.crt: no such file or directory
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-046243 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m0.259593415s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-046243 version --output=json
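Condensed, the upgrade path that succeeds in this run is: stop the profile, start it again with the newer --kubernetes-version, then check the server version through the upgraded context. The same sequence, minus the extra logging flags the test passes:

    out/minikube-linux-amd64 stop -p kubernetes-upgrade-046243
    out/minikube-linux-amd64 start -p kubernetes-upgrade-046243 --memory=2200 \
      --kubernetes-version=v1.30.0 --driver=kvm2 --container-runtime=crio
    kubectl --context kubernetes-upgrade-046243 version --output=json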
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-046243 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-046243 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (91.511427ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-046243] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18779
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18779-13391/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18779-13391/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.30.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-046243
	    minikube start -p kubernetes-upgrade-046243 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0462432 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.30.0, by running:
	    
	    minikube start -p kubernetes-upgrade-046243 --kubernetes-version=v1.30.0
	    

                                                
                                                
** /stderr **
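The refusal above is the behaviour being asserted: minikube will not downgrade a running v1.30.0 cluster in place, and the attempt exits with status 106 (K8S_DOWNGRADE_UNSUPPORTED), pointing at delete-and-recreate as the only way back to v1.20.0. A sketch of what this step amounts to, reusing the test's invocation:

    # The in-place downgrade must be rejected; anything else is a test failure.
    if out/minikube-linux-amd64 start -p kubernetes-upgrade-046243 --memory=2200 \
         --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio; then
        echo "unexpected: downgrade was accepted" >&2
        exit 1
    fi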
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-046243 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0501 03:31:24.419590   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/client.crt: no such file or directory
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-046243 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m29.430875899s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-05-01 03:32:05.120938 +0000 UTC m=+5100.226854309
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-046243 -n kubernetes-upgrade-046243
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-046243 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-046243 logs -n 25: (2.180477451s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-731347 sudo cat             | cilium-731347             | jenkins | v1.33.0 | 01 May 24 03:28 UTC |                     |
	|         | /etc/containerd/config.toml           |                           |         |         |                     |                     |
	| ssh     | -p cilium-731347 sudo                 | cilium-731347             | jenkins | v1.33.0 | 01 May 24 03:28 UTC |                     |
	|         | containerd config dump                |                           |         |         |                     |                     |
	| ssh     | -p cilium-731347 sudo                 | cilium-731347             | jenkins | v1.33.0 | 01 May 24 03:28 UTC |                     |
	|         | systemctl status crio --all           |                           |         |         |                     |                     |
	|         | --full --no-pager                     |                           |         |         |                     |                     |
	| ssh     | -p cilium-731347 sudo                 | cilium-731347             | jenkins | v1.33.0 | 01 May 24 03:28 UTC |                     |
	|         | systemctl cat crio --no-pager         |                           |         |         |                     |                     |
	| ssh     | -p cilium-731347 sudo find            | cilium-731347             | jenkins | v1.33.0 | 01 May 24 03:28 UTC |                     |
	|         | /etc/crio -type f -exec sh -c         |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                  |                           |         |         |                     |                     |
	| ssh     | -p cilium-731347 sudo crio            | cilium-731347             | jenkins | v1.33.0 | 01 May 24 03:28 UTC |                     |
	|         | config                                |                           |         |         |                     |                     |
	| delete  | -p cilium-731347                      | cilium-731347             | jenkins | v1.33.0 | 01 May 24 03:28 UTC | 01 May 24 03:28 UTC |
	| start   | -p force-systemd-flag-616131          | force-systemd-flag-616131 | jenkins | v1.33.0 | 01 May 24 03:28 UTC | 01 May 24 03:29 UTC |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p running-upgrade-179111             | running-upgrade-179111    | jenkins | v1.33.0 | 01 May 24 03:29 UTC | 01 May 24 03:30 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-616131 ssh cat     | force-systemd-flag-616131 | jenkins | v1.33.0 | 01 May 24 03:29 UTC | 01 May 24 03:29 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-616131          | force-systemd-flag-616131 | jenkins | v1.33.0 | 01 May 24 03:29 UTC | 01 May 24 03:29 UTC |
	| start   | -p pause-542495                       | pause-542495              | jenkins | v1.33.0 | 01 May 24 03:29 UTC | 01 May 24 03:30 UTC |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p cert-options-582976                | cert-options-582976       | jenkins | v1.33.0 | 01 May 24 03:29 UTC | 01 May 24 03:30 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-046243          | kubernetes-upgrade-046243 | jenkins | v1.33.0 | 01 May 24 03:29 UTC | 01 May 24 03:29 UTC |
	| start   | -p kubernetes-upgrade-046243          | kubernetes-upgrade-046243 | jenkins | v1.33.0 | 01 May 24 03:29 UTC | 01 May 24 03:30 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-179111             | running-upgrade-179111    | jenkins | v1.33.0 | 01 May 24 03:30 UTC | 01 May 24 03:30 UTC |
	| start   | -p old-k8s-version-503971             | old-k8s-version-503971    | jenkins | v1.33.0 | 01 May 24 03:30 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --kvm-network=default                 |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system         |                           |         |         |                     |                     |
	|         | --disable-driver-mounts               |                           |         |         |                     |                     |
	|         | --keep-context=false                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	| ssh     | cert-options-582976 ssh               | cert-options-582976       | jenkins | v1.33.0 | 01 May 24 03:30 UTC | 01 May 24 03:30 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-582976 -- sudo        | cert-options-582976       | jenkins | v1.33.0 | 01 May 24 03:30 UTC | 01 May 24 03:30 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-582976                | cert-options-582976       | jenkins | v1.33.0 | 01 May 24 03:30 UTC | 01 May 24 03:30 UTC |
	| delete  | -p pause-542495                       | pause-542495              | jenkins | v1.33.0 | 01 May 24 03:30 UTC | 01 May 24 03:30 UTC |
	| start   | -p no-preload-892672                  | no-preload-892672         | jenkins | v1.33.0 | 01 May 24 03:30 UTC | 01 May 24 03:32 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --preload=false --driver=kvm2         |                           |         |         |                     |                     |
	|         |  --container-runtime=crio             |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0          |                           |         |         |                     |                     |
	| start   | -p embed-certs-277128                 | embed-certs-277128        | jenkins | v1.33.0 | 01 May 24 03:30 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2           |                           |         |         |                     |                     |
	|         |  --container-runtime=crio             |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0          |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-046243          | kubernetes-upgrade-046243 | jenkins | v1.33.0 | 01 May 24 03:30 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-046243          | kubernetes-upgrade-046243 | jenkins | v1.33.0 | 01 May 24 03:30 UTC | 01 May 24 03:32 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/01 03:30:35
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0501 03:30:35.738993   66261 out.go:291] Setting OutFile to fd 1 ...
	I0501 03:30:35.739112   66261 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 03:30:35.739122   66261 out.go:304] Setting ErrFile to fd 2...
	I0501 03:30:35.739127   66261 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 03:30:35.739363   66261 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18779-13391/.minikube/bin
	I0501 03:30:35.739887   66261 out.go:298] Setting JSON to false
	I0501 03:30:35.741389   66261 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7979,"bootTime":1714526257,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0501 03:30:35.741554   66261 start.go:139] virtualization: kvm guest
	I0501 03:30:35.744227   66261 out.go:177] * [kubernetes-upgrade-046243] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0501 03:30:35.745701   66261 out.go:177]   - MINIKUBE_LOCATION=18779
	I0501 03:30:35.745701   66261 notify.go:220] Checking for updates...
	I0501 03:30:35.747151   66261 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0501 03:30:35.748389   66261 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18779-13391/kubeconfig
	I0501 03:30:35.749476   66261 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18779-13391/.minikube
	I0501 03:30:35.750697   66261 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0501 03:30:35.752097   66261 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0501 03:30:35.753816   66261 config.go:182] Loaded profile config "kubernetes-upgrade-046243": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 03:30:35.754465   66261 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:30:35.754512   66261 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:30:35.769154   66261 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46541
	I0501 03:30:35.769553   66261 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:30:35.770137   66261 main.go:141] libmachine: Using API Version  1
	I0501 03:30:35.770164   66261 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:30:35.770482   66261 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:30:35.770676   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .DriverName
	I0501 03:30:35.770912   66261 driver.go:392] Setting default libvirt URI to qemu:///system
	I0501 03:30:35.771241   66261 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:30:35.771297   66261 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:30:35.785855   66261 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37839
	I0501 03:30:35.786268   66261 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:30:35.786745   66261 main.go:141] libmachine: Using API Version  1
	I0501 03:30:35.786771   66261 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:30:35.787087   66261 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:30:35.787339   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .DriverName
	I0501 03:30:35.818640   66261 out.go:177] * Using the kvm2 driver based on existing profile
	I0501 03:30:35.819821   66261 start.go:297] selected driver: kvm2
	I0501 03:30:35.819834   66261 start.go:901] validating driver "kvm2" against &{Name:kubernetes-upgrade-046243 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.30.0 ClusterName:kubernetes-upgrade-046243 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.134 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 03:30:35.819926   66261 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0501 03:30:35.820592   66261 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0501 03:30:35.820657   66261 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18779-13391/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0501 03:30:35.835075   66261 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0501 03:30:35.835495   66261 cni.go:84] Creating CNI manager for ""
	I0501 03:30:35.835515   66261 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0501 03:30:35.835562   66261 start.go:340] cluster config:
	{Name:kubernetes-upgrade-046243 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:kubernetes-upgrade-046243 Namespace:d
efault APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.134 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 03:30:35.835688   66261 iso.go:125] acquiring lock: {Name:mkcd2496aadb29931e179193d707c731bfda98b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0501 03:30:35.838070   66261 out.go:177] * Starting "kubernetes-upgrade-046243" primary control-plane node in "kubernetes-upgrade-046243" cluster
	I0501 03:30:34.365139   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:30:34.365626   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:30:34.365672   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:30:34.365581   65721 retry.go:31] will retry after 3.807407638s: waiting for machine to come up
	I0501 03:30:35.839192   66261 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0501 03:30:35.839246   66261 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0501 03:30:35.839266   66261 cache.go:56] Caching tarball of preloaded images
	I0501 03:30:35.839334   66261 preload.go:173] Found /home/jenkins/minikube-integration/18779-13391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0501 03:30:35.839346   66261 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0501 03:30:35.839437   66261 profile.go:143] Saving config to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/kubernetes-upgrade-046243/config.json ...
	I0501 03:30:35.839618   66261 start.go:360] acquireMachinesLock for kubernetes-upgrade-046243: {Name:mkc4548e5afe1d4b0a833f6c522103562b0cefff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0501 03:30:38.176696   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:30:38.177157   65502 main.go:141] libmachine: (old-k8s-version-503971) Found IP for machine: 192.168.61.104
	I0501 03:30:38.177181   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has current primary IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:30:38.177187   65502 main.go:141] libmachine: (old-k8s-version-503971) Reserving static IP address...
	I0501 03:30:38.177481   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-503971", mac: "52:54:00:7d:68:83", ip: "192.168.61.104"} in network mk-old-k8s-version-503971
	I0501 03:30:38.251556   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | Getting to WaitForSSH function...
	I0501 03:30:38.251590   65502 main.go:141] libmachine: (old-k8s-version-503971) Reserved static IP address: 192.168.61.104
	I0501 03:30:38.251612   65502 main.go:141] libmachine: (old-k8s-version-503971) Waiting for SSH to be available...
	I0501 03:30:38.254035   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:30:38.254324   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971
	I0501 03:30:38.254351   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find defined IP address of network mk-old-k8s-version-503971 interface with MAC address 52:54:00:7d:68:83
	I0501 03:30:38.254485   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | Using SSH client type: external
	I0501 03:30:38.254546   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | Using SSH private key: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/old-k8s-version-503971/id_rsa (-rw-------)
	I0501 03:30:38.254588   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18779-13391/.minikube/machines/old-k8s-version-503971/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0501 03:30:38.254604   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | About to run SSH command:
	I0501 03:30:38.254622   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | exit 0
	I0501 03:30:38.258117   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | SSH cmd err, output: exit status 255: 
	I0501 03:30:38.258139   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0501 03:30:38.258146   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | command : exit 0
	I0501 03:30:38.258152   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | err     : exit status 255
	I0501 03:30:38.258159   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | output  : 
	I0501 03:30:41.258384   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | Getting to WaitForSSH function...
	I0501 03:30:41.260925   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:30:41.261296   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:30:41.261330   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:30:41.261420   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | Using SSH client type: external
	I0501 03:30:41.261444   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | Using SSH private key: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/old-k8s-version-503971/id_rsa (-rw-------)
	I0501 03:30:41.261490   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.104 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18779-13391/.minikube/machines/old-k8s-version-503971/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0501 03:30:41.261514   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | About to run SSH command:
	I0501 03:30:41.261528   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | exit 0
	I0501 03:30:41.386904   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | SSH cmd err, output: <nil>: 
	I0501 03:30:41.387199   65502 main.go:141] libmachine: (old-k8s-version-503971) KVM machine creation complete!
	I0501 03:30:41.387503   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetConfigRaw
	I0501 03:30:41.388103   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .DriverName
	I0501 03:30:41.388344   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .DriverName
	I0501 03:30:41.388512   65502 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0501 03:30:41.388529   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetState
	I0501 03:30:41.389801   65502 main.go:141] libmachine: Detecting operating system of created instance...
	I0501 03:30:41.389833   65502 main.go:141] libmachine: Waiting for SSH to be available...
	I0501 03:30:41.389844   65502 main.go:141] libmachine: Getting to WaitForSSH function...
	I0501 03:30:41.389859   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHHostname
	I0501 03:30:41.392182   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:30:41.392545   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:30:41.392587   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:30:41.392746   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHPort
	I0501 03:30:41.393036   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:30:41.393232   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:30:41.393406   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHUsername
	I0501 03:30:41.393565   65502 main.go:141] libmachine: Using SSH client type: native
	I0501 03:30:41.393767   65502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.104 22 <nil> <nil>}
	I0501 03:30:41.393781   65502 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0501 03:30:41.494111   65502 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0501 03:30:41.494136   65502 main.go:141] libmachine: Detecting the provisioner...
	I0501 03:30:41.494146   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHHostname
	I0501 03:30:41.496975   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:30:41.497297   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:30:41.497330   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:30:41.497571   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHPort
	I0501 03:30:41.497760   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:30:41.497960   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:30:41.498060   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHUsername
	I0501 03:30:41.498230   65502 main.go:141] libmachine: Using SSH client type: native
	I0501 03:30:41.498471   65502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.104 22 <nil> <nil>}
	I0501 03:30:41.498484   65502 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0501 03:30:41.599933   65502 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0501 03:30:41.600009   65502 main.go:141] libmachine: found compatible host: buildroot
	I0501 03:30:41.600018   65502 main.go:141] libmachine: Provisioning with buildroot...
	I0501 03:30:41.600026   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetMachineName
	I0501 03:30:41.600282   65502 buildroot.go:166] provisioning hostname "old-k8s-version-503971"
	I0501 03:30:41.600305   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetMachineName
	I0501 03:30:41.600459   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHHostname
	I0501 03:30:41.603164   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:30:41.603594   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:30:41.603624   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:30:41.603796   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHPort
	I0501 03:30:41.603962   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:30:41.604125   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:30:41.604272   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHUsername
	I0501 03:30:41.604419   65502 main.go:141] libmachine: Using SSH client type: native
	I0501 03:30:41.604639   65502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.104 22 <nil> <nil>}
	I0501 03:30:41.604658   65502 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-503971 && echo "old-k8s-version-503971" | sudo tee /etc/hostname
	I0501 03:30:41.725848   65502 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-503971
	
	I0501 03:30:41.725882   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHHostname
	I0501 03:30:41.728561   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:30:41.728959   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:30:41.729003   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:30:41.729180   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHPort
	I0501 03:30:41.729382   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:30:41.729513   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:30:41.729604   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHUsername
	I0501 03:30:41.729736   65502 main.go:141] libmachine: Using SSH client type: native
	I0501 03:30:41.729907   65502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.104 22 <nil> <nil>}
	I0501 03:30:41.729924   65502 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-503971' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-503971/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-503971' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0501 03:30:41.841082   65502 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0501 03:30:41.841114   65502 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18779-13391/.minikube CaCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18779-13391/.minikube}
	I0501 03:30:41.841137   65502 buildroot.go:174] setting up certificates
	I0501 03:30:41.841150   65502 provision.go:84] configureAuth start
	I0501 03:30:41.841163   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetMachineName
	I0501 03:30:41.841471   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetIP
	I0501 03:30:41.844393   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:30:41.844723   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:30:41.844757   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:30:41.844942   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHHostname
	I0501 03:30:41.847201   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:30:41.847511   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:30:41.847533   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:30:41.847698   65502 provision.go:143] copyHostCerts
	I0501 03:30:41.847749   65502 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem, removing ...
	I0501 03:30:41.847760   65502 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem
	I0501 03:30:41.847815   65502 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem (1078 bytes)
	I0501 03:30:41.847906   65502 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem, removing ...
	I0501 03:30:41.847918   65502 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem
	I0501 03:30:41.847946   65502 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem (1123 bytes)
	I0501 03:30:41.848007   65502 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem, removing ...
	I0501 03:30:41.848014   65502 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem
	I0501 03:30:41.848040   65502 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem (1675 bytes)
	I0501 03:30:41.848101   65502 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-503971 san=[127.0.0.1 192.168.61.104 localhost minikube old-k8s-version-503971]
	I0501 03:30:42.129743   65502 provision.go:177] copyRemoteCerts
	I0501 03:30:42.129807   65502 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0501 03:30:42.129834   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHHostname
	I0501 03:30:42.132552   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:30:42.132883   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:30:42.132912   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:30:42.133134   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHPort
	I0501 03:30:42.133384   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:30:42.133585   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHUsername
	I0501 03:30:42.133723   65502 sshutil.go:53] new ssh client: &{IP:192.168.61.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/old-k8s-version-503971/id_rsa Username:docker}
	I0501 03:30:42.219090   65502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0501 03:30:42.247347   65502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0501 03:30:42.275755   65502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0501 03:30:42.305243   65502 provision.go:87] duration metric: took 464.078319ms to configureAuth
	I0501 03:30:42.305275   65502 buildroot.go:189] setting minikube options for container-runtime
	I0501 03:30:42.305461   65502 config.go:182] Loaded profile config "old-k8s-version-503971": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0501 03:30:42.305560   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHHostname
	I0501 03:30:42.308502   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:30:42.308899   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:30:42.308926   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:30:42.309137   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHPort
	I0501 03:30:42.309338   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:30:42.309522   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:30:42.309669   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHUsername
	I0501 03:30:42.309839   65502 main.go:141] libmachine: Using SSH client type: native
	I0501 03:30:42.309998   65502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.104 22 <nil> <nil>}
	I0501 03:30:42.310016   65502 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0501 03:30:42.872180   66006 start.go:364] duration metric: took 25.029075665s to acquireMachinesLock for "no-preload-892672"
	I0501 03:30:42.872249   66006 start.go:93] Provisioning new machine with config: &{Name:no-preload-892672 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:no-preload-892672 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0501 03:30:42.872438   66006 start.go:125] createHost starting for "" (driver="kvm2")
	I0501 03:30:42.617007   65502 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0501 03:30:42.617043   65502 main.go:141] libmachine: Checking connection to Docker...
	I0501 03:30:42.617070   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetURL
	I0501 03:30:42.618412   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | Using libvirt version 6000000
	I0501 03:30:42.620667   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:30:42.621024   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:30:42.621047   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:30:42.621237   65502 main.go:141] libmachine: Docker is up and running!
	I0501 03:30:42.621253   65502 main.go:141] libmachine: Reticulating splines...
	I0501 03:30:42.621265   65502 client.go:171] duration metric: took 27.78581521s to LocalClient.Create
	I0501 03:30:42.621301   65502 start.go:167] duration metric: took 27.785892327s to libmachine.API.Create "old-k8s-version-503971"
	I0501 03:30:42.621316   65502 start.go:293] postStartSetup for "old-k8s-version-503971" (driver="kvm2")
	I0501 03:30:42.621335   65502 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0501 03:30:42.621360   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .DriverName
	I0501 03:30:42.621643   65502 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0501 03:30:42.621672   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHHostname
	I0501 03:30:42.624286   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:30:42.624696   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:30:42.624726   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:30:42.624958   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHPort
	I0501 03:30:42.625164   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:30:42.625376   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHUsername
	I0501 03:30:42.625547   65502 sshutil.go:53] new ssh client: &{IP:192.168.61.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/old-k8s-version-503971/id_rsa Username:docker}
	I0501 03:30:42.713761   65502 ssh_runner.go:195] Run: cat /etc/os-release
	I0501 03:30:42.719682   65502 info.go:137] Remote host: Buildroot 2023.02.9
	I0501 03:30:42.719708   65502 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13391/.minikube/addons for local assets ...
	I0501 03:30:42.719769   65502 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13391/.minikube/files for local assets ...
	I0501 03:30:42.719857   65502 filesync.go:149] local asset: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem -> 207242.pem in /etc/ssl/certs
	I0501 03:30:42.719975   65502 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0501 03:30:42.732076   65502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem --> /etc/ssl/certs/207242.pem (1708 bytes)
	I0501 03:30:42.763447   65502 start.go:296] duration metric: took 142.112552ms for postStartSetup
	I0501 03:30:42.763511   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetConfigRaw
	I0501 03:30:42.764263   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetIP
	I0501 03:30:42.767182   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:30:42.767626   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:30:42.767657   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:30:42.767988   65502 profile.go:143] Saving config to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/config.json ...
	I0501 03:30:42.768204   65502 start.go:128] duration metric: took 27.959102304s to createHost
	I0501 03:30:42.768232   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHHostname
	I0501 03:30:42.770545   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:30:42.770891   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:30:42.770916   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:30:42.771041   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHPort
	I0501 03:30:42.771236   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:30:42.771386   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:30:42.771545   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHUsername
	I0501 03:30:42.771697   65502 main.go:141] libmachine: Using SSH client type: native
	I0501 03:30:42.771926   65502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.104 22 <nil> <nil>}
	I0501 03:30:42.771941   65502 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0501 03:30:42.871965   65502 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714534242.856037398
	
	I0501 03:30:42.871993   65502 fix.go:216] guest clock: 1714534242.856037398
	I0501 03:30:42.872005   65502 fix.go:229] Guest: 2024-05-01 03:30:42.856037398 +0000 UTC Remote: 2024-05-01 03:30:42.768218477 +0000 UTC m=+30.278133484 (delta=87.818921ms)
	I0501 03:30:42.872062   65502 fix.go:200] guest clock delta is within tolerance: 87.818921ms
	I0501 03:30:42.872074   65502 start.go:83] releasing machines lock for "old-k8s-version-503971", held for 28.063153275s
	I0501 03:30:42.872110   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .DriverName
	I0501 03:30:42.872435   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetIP
	I0501 03:30:42.875419   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:30:42.875757   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:30:42.875782   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:30:42.875975   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .DriverName
	I0501 03:30:42.876502   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .DriverName
	I0501 03:30:42.876671   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .DriverName
	I0501 03:30:42.876773   65502 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0501 03:30:42.876819   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHHostname
	I0501 03:30:42.876933   65502 ssh_runner.go:195] Run: cat /version.json
	I0501 03:30:42.876956   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHHostname
	I0501 03:30:42.879479   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:30:42.879734   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:30:42.879882   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:30:42.879905   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:30:42.880002   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHPort
	I0501 03:30:42.880120   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:30:42.880146   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:30:42.880156   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:30:42.880333   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHUsername
	I0501 03:30:42.880346   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHPort
	I0501 03:30:42.880504   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:30:42.880513   65502 sshutil.go:53] new ssh client: &{IP:192.168.61.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/old-k8s-version-503971/id_rsa Username:docker}
	I0501 03:30:42.880638   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHUsername
	I0501 03:30:42.880771   65502 sshutil.go:53] new ssh client: &{IP:192.168.61.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/old-k8s-version-503971/id_rsa Username:docker}
	I0501 03:30:42.961011   65502 ssh_runner.go:195] Run: systemctl --version
	I0501 03:30:42.988631   65502 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0501 03:30:43.163111   65502 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0501 03:30:43.170959   65502 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0501 03:30:43.171037   65502 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0501 03:30:43.198233   65502 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0501 03:30:43.198261   65502 start.go:494] detecting cgroup driver to use...
	I0501 03:30:43.198333   65502 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0501 03:30:43.217035   65502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 03:30:43.232313   65502 docker.go:217] disabling cri-docker service (if available) ...
	I0501 03:30:43.232400   65502 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0501 03:30:43.251010   65502 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0501 03:30:43.267584   65502 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0501 03:30:43.408987   65502 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0501 03:30:43.581459   65502 docker.go:233] disabling docker service ...
	I0501 03:30:43.581529   65502 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0501 03:30:43.599496   65502 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0501 03:30:43.614428   65502 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0501 03:30:43.765839   65502 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0501 03:30:43.903703   65502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0501 03:30:43.922440   65502 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 03:30:43.948364   65502 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0501 03:30:43.948428   65502 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:30:43.960971   65502 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0501 03:30:43.961039   65502 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:30:43.972949   65502 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:30:43.985013   65502 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:30:43.996978   65502 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0501 03:30:44.009121   65502 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0501 03:30:44.019692   65502 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0501 03:30:44.019749   65502 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0501 03:30:44.035122   65502 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0501 03:30:44.049560   65502 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:30:44.178215   65502 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0501 03:30:44.369046   65502 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0501 03:30:44.369133   65502 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0501 03:30:44.375374   65502 start.go:562] Will wait 60s for crictl version
	I0501 03:30:44.375449   65502 ssh_runner.go:195] Run: which crictl
	I0501 03:30:44.380830   65502 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0501 03:30:44.422063   65502 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0501 03:30:44.422157   65502 ssh_runner.go:195] Run: crio --version
	I0501 03:30:44.460265   65502 ssh_runner.go:195] Run: crio --version
	I0501 03:30:44.501809   65502 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
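	The CRI-O preparation for old-k8s-version-503971 logged above boils down to a handful of host commands. As a hedged recap only (paths, images, and values are taken from the log lines above; this is not minikube's literal code path), the equivalent manual steps look like:
	  # point crictl at the CRI-O socket, as written to /etc/crictl.yaml above
	  printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	  # pin the pause image and the cgroupfs cgroup manager in the CRI-O drop-in config
	  sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf
	  sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	  sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	  # restart the runtime and confirm it answers over the CRI socket
	  sudo systemctl daemon-reload && sudo systemctl restart crio
	  sudo crictl version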
	I0501 03:30:42.874586   66006 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0501 03:30:42.874756   66006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:30:42.874833   66006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:30:42.891392   66006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39349
	I0501 03:30:42.891830   66006 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:30:42.892380   66006 main.go:141] libmachine: Using API Version  1
	I0501 03:30:42.892403   66006 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:30:42.892681   66006 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:30:42.892857   66006 main.go:141] libmachine: (no-preload-892672) Calling .GetMachineName
	I0501 03:30:42.892991   66006 main.go:141] libmachine: (no-preload-892672) Calling .DriverName
	I0501 03:30:42.893173   66006 start.go:159] libmachine.API.Create for "no-preload-892672" (driver="kvm2")
	I0501 03:30:42.893197   66006 client.go:168] LocalClient.Create starting
	I0501 03:30:42.893229   66006 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem
	I0501 03:30:42.893273   66006 main.go:141] libmachine: Decoding PEM data...
	I0501 03:30:42.893290   66006 main.go:141] libmachine: Parsing certificate...
	I0501 03:30:42.893357   66006 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem
	I0501 03:30:42.893383   66006 main.go:141] libmachine: Decoding PEM data...
	I0501 03:30:42.893400   66006 main.go:141] libmachine: Parsing certificate...
	I0501 03:30:42.893431   66006 main.go:141] libmachine: Running pre-create checks...
	I0501 03:30:42.893443   66006 main.go:141] libmachine: (no-preload-892672) Calling .PreCreateCheck
	I0501 03:30:42.893736   66006 main.go:141] libmachine: (no-preload-892672) Calling .GetConfigRaw
	I0501 03:30:42.894076   66006 main.go:141] libmachine: Creating machine...
	I0501 03:30:42.894091   66006 main.go:141] libmachine: (no-preload-892672) Calling .Create
	I0501 03:30:42.894214   66006 main.go:141] libmachine: (no-preload-892672) Creating KVM machine...
	I0501 03:30:42.895325   66006 main.go:141] libmachine: (no-preload-892672) DBG | found existing default KVM network
	I0501 03:30:42.896715   66006 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:30:42.896544   66339 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f600}
	I0501 03:30:42.896737   66006 main.go:141] libmachine: (no-preload-892672) DBG | created network xml: 
	I0501 03:30:42.896751   66006 main.go:141] libmachine: (no-preload-892672) DBG | <network>
	I0501 03:30:42.896766   66006 main.go:141] libmachine: (no-preload-892672) DBG |   <name>mk-no-preload-892672</name>
	I0501 03:30:42.896781   66006 main.go:141] libmachine: (no-preload-892672) DBG |   <dns enable='no'/>
	I0501 03:30:42.896789   66006 main.go:141] libmachine: (no-preload-892672) DBG |   
	I0501 03:30:42.896799   66006 main.go:141] libmachine: (no-preload-892672) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0501 03:30:42.896806   66006 main.go:141] libmachine: (no-preload-892672) DBG |     <dhcp>
	I0501 03:30:42.896813   66006 main.go:141] libmachine: (no-preload-892672) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0501 03:30:42.896817   66006 main.go:141] libmachine: (no-preload-892672) DBG |     </dhcp>
	I0501 03:30:42.896822   66006 main.go:141] libmachine: (no-preload-892672) DBG |   </ip>
	I0501 03:30:42.896826   66006 main.go:141] libmachine: (no-preload-892672) DBG |   
	I0501 03:30:42.896832   66006 main.go:141] libmachine: (no-preload-892672) DBG | </network>
	I0501 03:30:42.896835   66006 main.go:141] libmachine: (no-preload-892672) DBG | 
	I0501 03:30:42.902504   66006 main.go:141] libmachine: (no-preload-892672) DBG | trying to create private KVM network mk-no-preload-892672 192.168.39.0/24...
	I0501 03:30:42.982170   66006 main.go:141] libmachine: (no-preload-892672) DBG | private KVM network mk-no-preload-892672 192.168.39.0/24 created
	I0501 03:30:42.982205   66006 main.go:141] libmachine: (no-preload-892672) Setting up store path in /home/jenkins/minikube-integration/18779-13391/.minikube/machines/no-preload-892672 ...
	I0501 03:30:42.982219   66006 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:30:42.982160   66339 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18779-13391/.minikube
	I0501 03:30:42.982238   66006 main.go:141] libmachine: (no-preload-892672) Building disk image from file:///home/jenkins/minikube-integration/18779-13391/.minikube/cache/iso/amd64/minikube-v1.33.0-1714498396-18779-amd64.iso
	I0501 03:30:42.982303   66006 main.go:141] libmachine: (no-preload-892672) Downloading /home/jenkins/minikube-integration/18779-13391/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18779-13391/.minikube/cache/iso/amd64/minikube-v1.33.0-1714498396-18779-amd64.iso...
	I0501 03:30:43.240720   66006 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:30:43.240616   66339 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/no-preload-892672/id_rsa...
	I0501 03:30:43.556620   66006 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:30:43.556467   66339 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/no-preload-892672/no-preload-892672.rawdisk...
	I0501 03:30:43.556666   66006 main.go:141] libmachine: (no-preload-892672) DBG | Writing magic tar header
	I0501 03:30:43.556687   66006 main.go:141] libmachine: (no-preload-892672) DBG | Writing SSH key tar header
	I0501 03:30:43.556702   66006 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:30:43.556619   66339 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18779-13391/.minikube/machines/no-preload-892672 ...
	I0501 03:30:43.556793   66006 main.go:141] libmachine: (no-preload-892672) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/no-preload-892672
	I0501 03:30:43.556822   66006 main.go:141] libmachine: (no-preload-892672) Setting executable bit set on /home/jenkins/minikube-integration/18779-13391/.minikube/machines/no-preload-892672 (perms=drwx------)
	I0501 03:30:43.556835   66006 main.go:141] libmachine: (no-preload-892672) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18779-13391/.minikube/machines
	I0501 03:30:43.556860   66006 main.go:141] libmachine: (no-preload-892672) Setting executable bit set on /home/jenkins/minikube-integration/18779-13391/.minikube/machines (perms=drwxr-xr-x)
	I0501 03:30:43.556879   66006 main.go:141] libmachine: (no-preload-892672) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18779-13391/.minikube
	I0501 03:30:43.556890   66006 main.go:141] libmachine: (no-preload-892672) Setting executable bit set on /home/jenkins/minikube-integration/18779-13391/.minikube (perms=drwxr-xr-x)
	I0501 03:30:43.556902   66006 main.go:141] libmachine: (no-preload-892672) Setting executable bit set on /home/jenkins/minikube-integration/18779-13391 (perms=drwxrwxr-x)
	I0501 03:30:43.556909   66006 main.go:141] libmachine: (no-preload-892672) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0501 03:30:43.556918   66006 main.go:141] libmachine: (no-preload-892672) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0501 03:30:43.556929   66006 main.go:141] libmachine: (no-preload-892672) Creating domain...
	I0501 03:30:43.556939   66006 main.go:141] libmachine: (no-preload-892672) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18779-13391
	I0501 03:30:43.556953   66006 main.go:141] libmachine: (no-preload-892672) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0501 03:30:43.556964   66006 main.go:141] libmachine: (no-preload-892672) DBG | Checking permissions on dir: /home/jenkins
	I0501 03:30:43.556978   66006 main.go:141] libmachine: (no-preload-892672) DBG | Checking permissions on dir: /home
	I0501 03:30:43.556988   66006 main.go:141] libmachine: (no-preload-892672) DBG | Skipping /home - not owner
	I0501 03:30:43.558657   66006 main.go:141] libmachine: (no-preload-892672) define libvirt domain using xml: 
	I0501 03:30:43.558677   66006 main.go:141] libmachine: (no-preload-892672) <domain type='kvm'>
	I0501 03:30:43.558687   66006 main.go:141] libmachine: (no-preload-892672)   <name>no-preload-892672</name>
	I0501 03:30:43.558695   66006 main.go:141] libmachine: (no-preload-892672)   <memory unit='MiB'>2200</memory>
	I0501 03:30:43.558709   66006 main.go:141] libmachine: (no-preload-892672)   <vcpu>2</vcpu>
	I0501 03:30:43.558717   66006 main.go:141] libmachine: (no-preload-892672)   <features>
	I0501 03:30:43.558725   66006 main.go:141] libmachine: (no-preload-892672)     <acpi/>
	I0501 03:30:43.558735   66006 main.go:141] libmachine: (no-preload-892672)     <apic/>
	I0501 03:30:43.558743   66006 main.go:141] libmachine: (no-preload-892672)     <pae/>
	I0501 03:30:43.558751   66006 main.go:141] libmachine: (no-preload-892672)     
	I0501 03:30:43.558792   66006 main.go:141] libmachine: (no-preload-892672)   </features>
	I0501 03:30:43.558818   66006 main.go:141] libmachine: (no-preload-892672)   <cpu mode='host-passthrough'>
	I0501 03:30:43.558828   66006 main.go:141] libmachine: (no-preload-892672)   
	I0501 03:30:43.558835   66006 main.go:141] libmachine: (no-preload-892672)   </cpu>
	I0501 03:30:43.558843   66006 main.go:141] libmachine: (no-preload-892672)   <os>
	I0501 03:30:43.558852   66006 main.go:141] libmachine: (no-preload-892672)     <type>hvm</type>
	I0501 03:30:43.558861   66006 main.go:141] libmachine: (no-preload-892672)     <boot dev='cdrom'/>
	I0501 03:30:43.558868   66006 main.go:141] libmachine: (no-preload-892672)     <boot dev='hd'/>
	I0501 03:30:43.558880   66006 main.go:141] libmachine: (no-preload-892672)     <bootmenu enable='no'/>
	I0501 03:30:43.558888   66006 main.go:141] libmachine: (no-preload-892672)   </os>
	I0501 03:30:43.558917   66006 main.go:141] libmachine: (no-preload-892672)   <devices>
	I0501 03:30:43.558934   66006 main.go:141] libmachine: (no-preload-892672)     <disk type='file' device='cdrom'>
	I0501 03:30:43.558949   66006 main.go:141] libmachine: (no-preload-892672)       <source file='/home/jenkins/minikube-integration/18779-13391/.minikube/machines/no-preload-892672/boot2docker.iso'/>
	I0501 03:30:43.558957   66006 main.go:141] libmachine: (no-preload-892672)       <target dev='hdc' bus='scsi'/>
	I0501 03:30:43.558965   66006 main.go:141] libmachine: (no-preload-892672)       <readonly/>
	I0501 03:30:43.558972   66006 main.go:141] libmachine: (no-preload-892672)     </disk>
	I0501 03:30:43.558981   66006 main.go:141] libmachine: (no-preload-892672)     <disk type='file' device='disk'>
	I0501 03:30:43.558993   66006 main.go:141] libmachine: (no-preload-892672)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0501 03:30:43.559006   66006 main.go:141] libmachine: (no-preload-892672)       <source file='/home/jenkins/minikube-integration/18779-13391/.minikube/machines/no-preload-892672/no-preload-892672.rawdisk'/>
	I0501 03:30:43.559020   66006 main.go:141] libmachine: (no-preload-892672)       <target dev='hda' bus='virtio'/>
	I0501 03:30:43.559030   66006 main.go:141] libmachine: (no-preload-892672)     </disk>
	I0501 03:30:43.559040   66006 main.go:141] libmachine: (no-preload-892672)     <interface type='network'>
	I0501 03:30:43.559050   66006 main.go:141] libmachine: (no-preload-892672)       <source network='mk-no-preload-892672'/>
	I0501 03:30:43.559060   66006 main.go:141] libmachine: (no-preload-892672)       <model type='virtio'/>
	I0501 03:30:43.559068   66006 main.go:141] libmachine: (no-preload-892672)     </interface>
	I0501 03:30:43.559075   66006 main.go:141] libmachine: (no-preload-892672)     <interface type='network'>
	I0501 03:30:43.559090   66006 main.go:141] libmachine: (no-preload-892672)       <source network='default'/>
	I0501 03:30:43.559104   66006 main.go:141] libmachine: (no-preload-892672)       <model type='virtio'/>
	I0501 03:30:43.559113   66006 main.go:141] libmachine: (no-preload-892672)     </interface>
	I0501 03:30:43.559122   66006 main.go:141] libmachine: (no-preload-892672)     <serial type='pty'>
	I0501 03:30:43.559131   66006 main.go:141] libmachine: (no-preload-892672)       <target port='0'/>
	I0501 03:30:43.559141   66006 main.go:141] libmachine: (no-preload-892672)     </serial>
	I0501 03:30:43.559150   66006 main.go:141] libmachine: (no-preload-892672)     <console type='pty'>
	I0501 03:30:43.559159   66006 main.go:141] libmachine: (no-preload-892672)       <target type='serial' port='0'/>
	I0501 03:30:43.559164   66006 main.go:141] libmachine: (no-preload-892672)     </console>
	I0501 03:30:43.559169   66006 main.go:141] libmachine: (no-preload-892672)     <rng model='virtio'>
	I0501 03:30:43.559175   66006 main.go:141] libmachine: (no-preload-892672)       <backend model='random'>/dev/random</backend>
	I0501 03:30:43.559179   66006 main.go:141] libmachine: (no-preload-892672)     </rng>
	I0501 03:30:43.559184   66006 main.go:141] libmachine: (no-preload-892672)     
	I0501 03:30:43.559188   66006 main.go:141] libmachine: (no-preload-892672)     
	I0501 03:30:43.559194   66006 main.go:141] libmachine: (no-preload-892672)   </devices>
	I0501 03:30:43.559198   66006 main.go:141] libmachine: (no-preload-892672) </domain>
	I0501 03:30:43.559205   66006 main.go:141] libmachine: (no-preload-892672) 
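	The domain XML printed above is what libmachine defines through the libvirt API. For manual debugging of a stuck VM, roughly equivalent virsh commands would be (illustrative sketch only; the XML file name is assumed, not taken from this log):
	  # hypothetical manual equivalent of what libmachine does via the libvirt API
	  virsh --connect qemu:///system define no-preload-892672.xml   # file name assumed for illustration
	  virsh --connect qemu:///system start no-preload-892672
	  virsh --connect qemu:///system domifaddr no-preload-892672    # shows the DHCP lease the log waits for below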
	I0501 03:30:43.563340   66006 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:86:ea:d0 in network default
	I0501 03:30:43.563993   66006 main.go:141] libmachine: (no-preload-892672) Ensuring networks are active...
	I0501 03:30:43.564016   66006 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:30:43.564667   66006 main.go:141] libmachine: (no-preload-892672) Ensuring network default is active
	I0501 03:30:43.565034   66006 main.go:141] libmachine: (no-preload-892672) Ensuring network mk-no-preload-892672 is active
	I0501 03:30:43.565557   66006 main.go:141] libmachine: (no-preload-892672) Getting domain xml...
	I0501 03:30:43.566375   66006 main.go:141] libmachine: (no-preload-892672) Creating domain...
	I0501 03:30:45.028249   66006 main.go:141] libmachine: (no-preload-892672) Waiting to get IP...
	I0501 03:30:45.029145   66006 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:30:45.029717   66006 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:30:45.029771   66006 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:30:45.029683   66339 retry.go:31] will retry after 233.076436ms: waiting for machine to come up
	I0501 03:30:45.264173   66006 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:30:45.264962   66006 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:30:45.264984   66006 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:30:45.264922   66339 retry.go:31] will retry after 236.126383ms: waiting for machine to come up
	I0501 03:30:45.502743   66006 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:30:45.503394   66006 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:30:45.503427   66006 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:30:45.503370   66339 retry.go:31] will retry after 339.088937ms: waiting for machine to come up
	I0501 03:30:45.843964   66006 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:30:45.844537   66006 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:30:45.844564   66006 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:30:45.844470   66339 retry.go:31] will retry after 549.547935ms: waiting for machine to come up
	I0501 03:30:46.395964   66006 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:30:46.396600   66006 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:30:46.396628   66006 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:30:46.396538   66339 retry.go:31] will retry after 522.187207ms: waiting for machine to come up
	I0501 03:30:46.920134   66006 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:30:46.920719   66006 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:30:46.920746   66006 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:30:46.920605   66339 retry.go:31] will retry after 756.222457ms: waiting for machine to come up
	I0501 03:30:44.502920   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetIP
	I0501 03:30:44.506073   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:30:44.506516   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:30:44.506545   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:30:44.506772   65502 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0501 03:30:44.511654   65502 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0501 03:30:44.527001   65502 kubeadm.go:877] updating cluster {Name:old-k8s-version-503971 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-503971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.104 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0501 03:30:44.527145   65502 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0501 03:30:44.527210   65502 ssh_runner.go:195] Run: sudo crictl images --output json
	I0501 03:30:44.576772   65502 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0501 03:30:44.576863   65502 ssh_runner.go:195] Run: which lz4
	I0501 03:30:44.582006   65502 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0501 03:30:44.587493   65502 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0501 03:30:44.587530   65502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0501 03:30:46.840075   65502 crio.go:462] duration metric: took 2.258108991s to copy over tarball
	I0501 03:30:46.840154   65502 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0501 03:30:47.678794   66006 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:30:47.679342   66006 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:30:47.679390   66006 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:30:47.679297   66339 retry.go:31] will retry after 1.042102681s: waiting for machine to come up
	I0501 03:30:48.722729   66006 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:30:48.723284   66006 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:30:48.723313   66006 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:30:48.723264   66339 retry.go:31] will retry after 1.28927265s: waiting for machine to come up
	I0501 03:30:50.014803   66006 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:30:50.015229   66006 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:30:50.015250   66006 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:30:50.015207   66339 retry.go:31] will retry after 1.534229236s: waiting for machine to come up
	I0501 03:30:51.551744   66006 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:30:51.552180   66006 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:30:51.552218   66006 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:30:51.552152   66339 retry.go:31] will retry after 1.443995128s: waiting for machine to come up
	I0501 03:30:49.842324   65502 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.002139829s)
	I0501 03:30:49.842356   65502 crio.go:469] duration metric: took 3.002253578s to extract the tarball
	I0501 03:30:49.842366   65502 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0501 03:30:49.890494   65502 ssh_runner.go:195] Run: sudo crictl images --output json
	I0501 03:30:49.952833   65502 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0501 03:30:49.952865   65502 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0501 03:30:49.952939   65502 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 03:30:49.952973   65502 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0501 03:30:49.952998   65502 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0501 03:30:49.953002   65502 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0501 03:30:49.952978   65502 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0501 03:30:49.952971   65502 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0501 03:30:49.953044   65502 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0501 03:30:49.953095   65502 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0501 03:30:49.954439   65502 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0501 03:30:49.954542   65502 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0501 03:30:49.954562   65502 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0501 03:30:49.954604   65502 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0501 03:30:49.954623   65502 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0501 03:30:49.954709   65502 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0501 03:30:49.954722   65502 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0501 03:30:49.954782   65502 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 03:30:50.072986   65502 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0501 03:30:50.118614   65502 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0501 03:30:50.125554   65502 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0501 03:30:50.125751   65502 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0501 03:30:50.125797   65502 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0501 03:30:50.125838   65502 ssh_runner.go:195] Run: which crictl
	I0501 03:30:50.176894   65502 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0501 03:30:50.176949   65502 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0501 03:30:50.177007   65502 ssh_runner.go:195] Run: which crictl
	I0501 03:30:50.182111   65502 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0501 03:30:50.193301   65502 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0501 03:30:50.193341   65502 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0501 03:30:50.193349   65502 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0501 03:30:50.193377   65502 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0501 03:30:50.193415   65502 ssh_runner.go:195] Run: which crictl
	I0501 03:30:50.244120   65502 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0501 03:30:50.244175   65502 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0501 03:30:50.244229   65502 ssh_runner.go:195] Run: which crictl
	I0501 03:30:50.272477   65502 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0501 03:30:50.272531   65502 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0501 03:30:50.272597   65502 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0501 03:30:50.272597   65502 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0501 03:30:50.321368   65502 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0501 03:30:50.328355   65502 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0501 03:30:50.328444   65502 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0501 03:30:50.331393   65502 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0501 03:30:50.377703   65502 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0501 03:30:50.377748   65502 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0501 03:30:50.377799   65502 ssh_runner.go:195] Run: which crictl
	I0501 03:30:50.384874   65502 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0501 03:30:50.384928   65502 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0501 03:30:50.384999   65502 ssh_runner.go:195] Run: which crictl
	I0501 03:30:50.385027   65502 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0501 03:30:50.400233   65502 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0501 03:30:50.431071   65502 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0501 03:30:50.431140   65502 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0501 03:30:50.468327   65502 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0501 03:30:50.468374   65502 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0501 03:30:50.468423   65502 ssh_runner.go:195] Run: which crictl
	I0501 03:30:50.483635   65502 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0501 03:30:50.483884   65502 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0501 03:30:50.523093   65502 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0501 03:30:50.883241   65502 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 03:30:51.038440   65502 cache_images.go:92] duration metric: took 1.085549933s to LoadCachedImages
	W0501 03:30:51.038528   65502 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I0501 03:30:51.038551   65502 kubeadm.go:928] updating node { 192.168.61.104 8443 v1.20.0 crio true true} ...
	I0501 03:30:51.038735   65502 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-503971 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.104
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-503971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0501 03:30:51.038844   65502 ssh_runner.go:195] Run: crio config
	I0501 03:30:51.094689   65502 cni.go:84] Creating CNI manager for ""
	I0501 03:30:51.094721   65502 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0501 03:30:51.094740   65502 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0501 03:30:51.094766   65502 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.104 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-503971 NodeName:old-k8s-version-503971 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.104"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.104 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0501 03:30:51.094961   65502 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.104
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-503971"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.104
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.104"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
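	(A minimal sketch, not part of the captured run: the kubeadm config printed above is written to /var/tmp/minikube/kubeadm.yaml a few lines further down, and a generated file like this can be sanity-checked offline with a dry run, assuming the binaries path shown in this log.)
	  # render what kubeadm would apply, without actually starting the control plane
	  sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
	    kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run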
	
	I0501 03:30:51.095038   65502 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0501 03:30:51.107613   65502 binaries.go:44] Found k8s binaries, skipping transfer
	I0501 03:30:51.107689   65502 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0501 03:30:51.119057   65502 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0501 03:30:51.139902   65502 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0501 03:30:51.160697   65502 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0501 03:30:51.181308   65502 ssh_runner.go:195] Run: grep 192.168.61.104	control-plane.minikube.internal$ /etc/hosts
	I0501 03:30:51.186095   65502 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.104	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0501 03:30:51.200407   65502 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:30:51.341718   65502 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 03:30:51.361824   65502 certs.go:68] Setting up /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971 for IP: 192.168.61.104
	I0501 03:30:51.361849   65502 certs.go:194] generating shared ca certs ...
	I0501 03:30:51.361886   65502 certs.go:226] acquiring lock for ca certs: {Name:mk85a75d14c00780d4bf09cd9bdaf0f96f1b8fce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:30:51.362071   65502 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key
	I0501 03:30:51.362139   65502 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key
	I0501 03:30:51.362154   65502 certs.go:256] generating profile certs ...
	I0501 03:30:51.362224   65502 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/client.key
	I0501 03:30:51.362241   65502 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/client.crt with IP's: []
	I0501 03:30:51.545067   65502 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/client.crt ...
	I0501 03:30:51.545100   65502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/client.crt: {Name:mkb291995c78a70d2aa99b3de57a89e0b204a34e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:30:51.545321   65502 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/client.key ...
	I0501 03:30:51.545341   65502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/client.key: {Name:mkbd7ea061c299f0c055a413768768a5fe4e6594 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:30:51.545470   65502 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/apiserver.key.760b883a
	I0501 03:30:51.545493   65502 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/apiserver.crt.760b883a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.104]
	I0501 03:30:51.858137   65502 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/apiserver.crt.760b883a ...
	I0501 03:30:51.858174   65502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/apiserver.crt.760b883a: {Name:mk43b28d265a30fadff81730d277d5e9a53ed81b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:30:51.858338   65502 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/apiserver.key.760b883a ...
	I0501 03:30:51.858354   65502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/apiserver.key.760b883a: {Name:mk6abbd75de4d0204a5ddb349b7dd731c6dad335 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:30:51.858453   65502 certs.go:381] copying /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/apiserver.crt.760b883a -> /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/apiserver.crt
	I0501 03:30:51.858556   65502 certs.go:385] copying /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/apiserver.key.760b883a -> /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/apiserver.key
	I0501 03:30:51.858613   65502 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/proxy-client.key
	I0501 03:30:51.858629   65502 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/proxy-client.crt with IP's: []
	I0501 03:30:51.926667   65502 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/proxy-client.crt ...
	I0501 03:30:51.926698   65502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/proxy-client.crt: {Name:mk06c320401a2419a3c417ef2b2bfd213f5e04ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:30:51.951290   65502 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/proxy-client.key ...
	I0501 03:30:51.951325   65502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/proxy-client.key: {Name:mk8232ae44275fffebff8fcc51b89dbe91275d3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:30:51.951553   65502 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem (1338 bytes)
	W0501 03:30:51.951624   65502 certs.go:480] ignoring /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724_empty.pem, impossibly tiny 0 bytes
	I0501 03:30:51.951636   65502 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem (1679 bytes)
	I0501 03:30:51.951668   65502 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem (1078 bytes)
	I0501 03:30:51.951700   65502 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem (1123 bytes)
	I0501 03:30:51.951735   65502 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem (1675 bytes)
	I0501 03:30:51.951809   65502 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem (1708 bytes)
	I0501 03:30:51.952467   65502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0501 03:30:51.986170   65502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0501 03:30:52.013909   65502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0501 03:30:52.046333   65502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0501 03:30:52.076235   65502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0501 03:30:52.109294   65502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0501 03:30:52.140100   65502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0501 03:30:52.172091   65502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0501 03:30:52.206615   65502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0501 03:30:52.247633   65502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem --> /usr/share/ca-certificates/20724.pem (1338 bytes)
	I0501 03:30:52.293157   65502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem --> /usr/share/ca-certificates/207242.pem (1708 bytes)
	I0501 03:30:52.325589   65502 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0501 03:30:52.344575   65502 ssh_runner.go:195] Run: openssl version
	I0501 03:30:52.351318   65502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/207242.pem && ln -fs /usr/share/ca-certificates/207242.pem /etc/ssl/certs/207242.pem"
	I0501 03:30:52.363721   65502 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/207242.pem
	I0501 03:30:52.369508   65502 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  1 02:20 /usr/share/ca-certificates/207242.pem
	I0501 03:30:52.369580   65502 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/207242.pem
	I0501 03:30:52.376486   65502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/207242.pem /etc/ssl/certs/3ec20f2e.0"
	I0501 03:30:52.389675   65502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0501 03:30:52.402595   65502 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:30:52.408132   65502 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  1 02:08 /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:30:52.408192   65502 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:30:52.415072   65502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0501 03:30:52.427777   65502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20724.pem && ln -fs /usr/share/ca-certificates/20724.pem /etc/ssl/certs/20724.pem"
	I0501 03:30:52.440074   65502 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20724.pem
	I0501 03:30:52.445322   65502 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  1 02:20 /usr/share/ca-certificates/20724.pem
	I0501 03:30:52.445383   65502 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20724.pem
	I0501 03:30:52.451972   65502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20724.pem /etc/ssl/certs/51391683.0"
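	(The openssl-hash-then-symlink sequence above is the standard OpenSSL hashed-directory layout for trusted certificates; a minimal sketch of the same pattern, reusing the minikubeCA.pem path from this log for illustration only:)
	  # compute the subject-name hash OpenSSL uses to look the certificate up
	  hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  # expose the certificate under /etc/ssl/certs/<hash>.0 so TLS clients trust it
	  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"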
	I0501 03:30:52.464182   65502 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0501 03:30:52.468944   65502 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0501 03:30:52.469006   65502 kubeadm.go:391] StartCluster: {Name:old-k8s-version-503971 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-503971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.104 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 03:30:52.469144   65502 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0501 03:30:52.469185   65502 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0501 03:30:52.507958   65502 cri.go:89] found id: ""
	I0501 03:30:52.508028   65502 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0501 03:30:52.519327   65502 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0501 03:30:52.530245   65502 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0501 03:30:52.541053   65502 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0501 03:30:52.541073   65502 kubeadm.go:156] found existing configuration files:
	
	I0501 03:30:52.541122   65502 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0501 03:30:52.551167   65502 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0501 03:30:52.551227   65502 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0501 03:30:52.561184   65502 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0501 03:30:52.571255   65502 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0501 03:30:52.571308   65502 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0501 03:30:52.581638   65502 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0501 03:30:52.591465   65502 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0501 03:30:52.591560   65502 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0501 03:30:52.601628   65502 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0501 03:30:52.612311   65502 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0501 03:30:52.612381   65502 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0501 03:30:52.624432   65502 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0501 03:30:52.751406   65502 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0501 03:30:52.751534   65502 kubeadm.go:309] [preflight] Running pre-flight checks
	I0501 03:30:52.936474   65502 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0501 03:30:52.936623   65502 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0501 03:30:52.936772   65502 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0501 03:30:53.175705   65502 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0501 03:30:53.177702   65502 out.go:204]   - Generating certificates and keys ...
	I0501 03:30:53.177814   65502 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0501 03:30:53.177917   65502 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0501 03:30:53.284246   65502 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0501 03:30:53.744671   65502 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0501 03:30:53.912918   65502 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0501 03:30:54.082836   65502 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0501 03:30:54.217925   65502 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0501 03:30:54.218391   65502 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-503971] and IPs [192.168.61.104 127.0.0.1 ::1]
	I0501 03:30:54.462226   65502 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0501 03:30:54.462608   65502 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-503971] and IPs [192.168.61.104 127.0.0.1 ::1]
	I0501 03:30:54.776205   65502 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0501 03:30:54.908978   65502 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0501 03:30:55.044122   65502 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0501 03:30:55.044449   65502 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0501 03:30:55.210328   65502 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0501 03:30:55.452313   65502 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0501 03:30:55.640378   65502 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0501 03:30:55.759466   65502 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0501 03:30:55.785297   65502 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0501 03:30:55.787319   65502 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0501 03:30:55.787397   65502 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0501 03:30:55.936196   65502 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0501 03:30:52.998085   66006 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:30:52.998644   66006 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:30:52.998674   66006 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:30:52.998598   66339 retry.go:31] will retry after 1.840265648s: waiting for machine to come up
	I0501 03:30:54.840264   66006 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:30:54.840870   66006 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:30:54.840892   66006 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:30:54.840829   66339 retry.go:31] will retry after 2.270069078s: waiting for machine to come up
	I0501 03:30:57.112251   66006 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:30:57.112698   66006 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:30:57.112715   66006 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:30:57.112675   66339 retry.go:31] will retry after 3.090508879s: waiting for machine to come up
	I0501 03:30:55.939274   65502 out.go:204]   - Booting up control plane ...
	I0501 03:30:55.939411   65502 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0501 03:30:55.944024   65502 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0501 03:30:55.945218   65502 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0501 03:30:55.946149   65502 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0501 03:30:55.951042   65502 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0501 03:31:00.205840   66006 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:31:00.206290   66006 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:31:00.206311   66006 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:31:00.206267   66339 retry.go:31] will retry after 3.596496154s: waiting for machine to come up
	I0501 03:31:03.804601   66006 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:31:03.805044   66006 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:31:03.805069   66006 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:31:03.805002   66339 retry.go:31] will retry after 4.903029271s: waiting for machine to come up
	I0501 03:31:10.272028   66044 start.go:364] duration metric: took 52.059278641s to acquireMachinesLock for "embed-certs-277128"
	I0501 03:31:10.272108   66044 start.go:93] Provisioning new machine with config: &{Name:embed-certs-277128 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:embed-certs-277128 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0501 03:31:10.272231   66044 start.go:125] createHost starting for "" (driver="kvm2")
	I0501 03:31:08.709588   66006 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:31:08.710020   66006 main.go:141] libmachine: (no-preload-892672) Found IP for machine: 192.168.39.144
	I0501 03:31:08.710040   66006 main.go:141] libmachine: (no-preload-892672) Reserving static IP address...
	I0501 03:31:08.710051   66006 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has current primary IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:31:08.710427   66006 main.go:141] libmachine: (no-preload-892672) DBG | unable to find host DHCP lease matching {name: "no-preload-892672", mac: "52:54:00:c7:6d:9a", ip: "192.168.39.144"} in network mk-no-preload-892672
	I0501 03:31:08.788212   66006 main.go:141] libmachine: (no-preload-892672) DBG | Getting to WaitForSSH function...
	I0501 03:31:08.788245   66006 main.go:141] libmachine: (no-preload-892672) Reserved static IP address: 192.168.39.144
	I0501 03:31:08.788260   66006 main.go:141] libmachine: (no-preload-892672) Waiting for SSH to be available...
	I0501 03:31:08.790758   66006 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:31:08.791108   66006 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:31:00 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:minikube Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:31:08.791140   66006 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:31:08.791208   66006 main.go:141] libmachine: (no-preload-892672) DBG | Using SSH client type: external
	I0501 03:31:08.791241   66006 main.go:141] libmachine: (no-preload-892672) DBG | Using SSH private key: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/no-preload-892672/id_rsa (-rw-------)
	I0501 03:31:08.791289   66006 main.go:141] libmachine: (no-preload-892672) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.144 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18779-13391/.minikube/machines/no-preload-892672/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0501 03:31:08.791312   66006 main.go:141] libmachine: (no-preload-892672) DBG | About to run SSH command:
	I0501 03:31:08.791328   66006 main.go:141] libmachine: (no-preload-892672) DBG | exit 0
	I0501 03:31:08.922495   66006 main.go:141] libmachine: (no-preload-892672) DBG | SSH cmd err, output: <nil>: 
	I0501 03:31:08.922803   66006 main.go:141] libmachine: (no-preload-892672) KVM machine creation complete!
	I0501 03:31:08.923134   66006 main.go:141] libmachine: (no-preload-892672) Calling .GetConfigRaw
	I0501 03:31:08.923677   66006 main.go:141] libmachine: (no-preload-892672) Calling .DriverName
	I0501 03:31:08.923870   66006 main.go:141] libmachine: (no-preload-892672) Calling .DriverName
	I0501 03:31:08.924021   66006 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0501 03:31:08.924033   66006 main.go:141] libmachine: (no-preload-892672) Calling .GetState
	I0501 03:31:08.925181   66006 main.go:141] libmachine: Detecting operating system of created instance...
	I0501 03:31:08.925193   66006 main.go:141] libmachine: Waiting for SSH to be available...
	I0501 03:31:08.925200   66006 main.go:141] libmachine: Getting to WaitForSSH function...
	I0501 03:31:08.925209   66006 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:31:08.927570   66006 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:31:08.927939   66006 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:31:00 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:31:08.927963   66006 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:31:08.928077   66006 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHPort
	I0501 03:31:08.928245   66006 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:31:08.928399   66006 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:31:08.928569   66006 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHUsername
	I0501 03:31:08.928746   66006 main.go:141] libmachine: Using SSH client type: native
	I0501 03:31:08.928975   66006 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I0501 03:31:08.928986   66006 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0501 03:31:09.042412   66006 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0501 03:31:09.042440   66006 main.go:141] libmachine: Detecting the provisioner...
	I0501 03:31:09.042452   66006 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:31:09.045257   66006 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:31:09.045616   66006 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:31:00 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:31:09.045662   66006 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:31:09.045826   66006 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHPort
	I0501 03:31:09.046032   66006 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:31:09.046180   66006 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:31:09.046301   66006 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHUsername
	I0501 03:31:09.046461   66006 main.go:141] libmachine: Using SSH client type: native
	I0501 03:31:09.046632   66006 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I0501 03:31:09.046643   66006 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0501 03:31:09.163795   66006 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0501 03:31:09.163892   66006 main.go:141] libmachine: found compatible host: buildroot
	I0501 03:31:09.163905   66006 main.go:141] libmachine: Provisioning with buildroot...
	I0501 03:31:09.163913   66006 main.go:141] libmachine: (no-preload-892672) Calling .GetMachineName
	I0501 03:31:09.164192   66006 buildroot.go:166] provisioning hostname "no-preload-892672"
	I0501 03:31:09.164224   66006 main.go:141] libmachine: (no-preload-892672) Calling .GetMachineName
	I0501 03:31:09.164418   66006 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:31:09.167155   66006 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:31:09.167518   66006 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:31:00 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:31:09.167538   66006 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:31:09.167699   66006 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHPort
	I0501 03:31:09.167887   66006 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:31:09.168054   66006 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:31:09.168180   66006 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHUsername
	I0501 03:31:09.168349   66006 main.go:141] libmachine: Using SSH client type: native
	I0501 03:31:09.168497   66006 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I0501 03:31:09.168509   66006 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-892672 && echo "no-preload-892672" | sudo tee /etc/hostname
	I0501 03:31:09.304809   66006 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-892672
	
	I0501 03:31:09.304838   66006 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:31:09.307309   66006 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:31:09.307657   66006 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:31:00 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:31:09.307681   66006 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:31:09.307894   66006 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHPort
	I0501 03:31:09.308091   66006 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:31:09.308244   66006 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:31:09.308359   66006 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHUsername
	I0501 03:31:09.308488   66006 main.go:141] libmachine: Using SSH client type: native
	I0501 03:31:09.309032   66006 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I0501 03:31:09.309071   66006 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-892672' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-892672/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-892672' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0501 03:31:09.437496   66006 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0501 03:31:09.437535   66006 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18779-13391/.minikube CaCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18779-13391/.minikube}
	I0501 03:31:09.437566   66006 buildroot.go:174] setting up certificates
	I0501 03:31:09.437580   66006 provision.go:84] configureAuth start
	I0501 03:31:09.437598   66006 main.go:141] libmachine: (no-preload-892672) Calling .GetMachineName
	I0501 03:31:09.437941   66006 main.go:141] libmachine: (no-preload-892672) Calling .GetIP
	I0501 03:31:09.440442   66006 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:31:09.440773   66006 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:31:00 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:31:09.440794   66006 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:31:09.440949   66006 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:31:09.442921   66006 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:31:09.443230   66006 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:31:00 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:31:09.443272   66006 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:31:09.443364   66006 provision.go:143] copyHostCerts
	I0501 03:31:09.443419   66006 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem, removing ...
	I0501 03:31:09.443430   66006 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem
	I0501 03:31:09.443478   66006 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem (1078 bytes)
	I0501 03:31:09.443561   66006 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem, removing ...
	I0501 03:31:09.443571   66006 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem
	I0501 03:31:09.443591   66006 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem (1123 bytes)
	I0501 03:31:09.443641   66006 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem, removing ...
	I0501 03:31:09.443647   66006 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem
	I0501 03:31:09.443663   66006 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem (1675 bytes)
	I0501 03:31:09.443706   66006 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem org=jenkins.no-preload-892672 san=[127.0.0.1 192.168.39.144 localhost minikube no-preload-892672]
	I0501 03:31:09.532561   66006 provision.go:177] copyRemoteCerts
	I0501 03:31:09.532613   66006 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0501 03:31:09.532636   66006 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:31:09.535326   66006 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:31:09.535645   66006 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:31:00 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:31:09.535677   66006 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:31:09.535885   66006 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHPort
	I0501 03:31:09.536066   66006 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:31:09.536233   66006 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHUsername
	I0501 03:31:09.536397   66006 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/no-preload-892672/id_rsa Username:docker}
	I0501 03:31:09.625470   66006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0501 03:31:09.652971   66006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0501 03:31:09.680449   66006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0501 03:31:09.711235   66006 provision.go:87] duration metric: took 273.638761ms to configureAuth
	I0501 03:31:09.711262   66006 buildroot.go:189] setting minikube options for container-runtime
	I0501 03:31:09.711418   66006 config.go:182] Loaded profile config "no-preload-892672": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 03:31:09.711479   66006 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:31:09.713976   66006 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:31:09.714263   66006 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:31:00 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:31:09.714289   66006 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:31:09.714545   66006 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHPort
	I0501 03:31:09.714771   66006 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:31:09.714961   66006 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:31:09.715108   66006 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHUsername
	I0501 03:31:09.715279   66006 main.go:141] libmachine: Using SSH client type: native
	I0501 03:31:09.715442   66006 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I0501 03:31:09.715458   66006 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0501 03:31:10.004651   66006 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0501 03:31:10.004684   66006 main.go:141] libmachine: Checking connection to Docker...
	I0501 03:31:10.004696   66006 main.go:141] libmachine: (no-preload-892672) Calling .GetURL
	I0501 03:31:10.005871   66006 main.go:141] libmachine: (no-preload-892672) DBG | Using libvirt version 6000000
	I0501 03:31:10.008391   66006 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:31:10.008802   66006 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:31:00 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:31:10.008829   66006 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:31:10.009085   66006 main.go:141] libmachine: Docker is up and running!
	I0501 03:31:10.009101   66006 main.go:141] libmachine: Reticulating splines...
	I0501 03:31:10.009109   66006 client.go:171] duration metric: took 27.115905251s to LocalClient.Create
	I0501 03:31:10.009133   66006 start.go:167] duration metric: took 27.115961523s to libmachine.API.Create "no-preload-892672"
	I0501 03:31:10.009142   66006 start.go:293] postStartSetup for "no-preload-892672" (driver="kvm2")
	I0501 03:31:10.009156   66006 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0501 03:31:10.009181   66006 main.go:141] libmachine: (no-preload-892672) Calling .DriverName
	I0501 03:31:10.009465   66006 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0501 03:31:10.009496   66006 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:31:10.012094   66006 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:31:10.012470   66006 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:31:00 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:31:10.012499   66006 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:31:10.012623   66006 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHPort
	I0501 03:31:10.012854   66006 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:31:10.013009   66006 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHUsername
	I0501 03:31:10.013153   66006 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/no-preload-892672/id_rsa Username:docker}
	I0501 03:31:10.103569   66006 ssh_runner.go:195] Run: cat /etc/os-release
	I0501 03:31:10.108723   66006 info.go:137] Remote host: Buildroot 2023.02.9
	I0501 03:31:10.108747   66006 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13391/.minikube/addons for local assets ...
	I0501 03:31:10.108827   66006 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13391/.minikube/files for local assets ...
	I0501 03:31:10.108916   66006 filesync.go:149] local asset: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem -> 207242.pem in /etc/ssl/certs
	I0501 03:31:10.109015   66006 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0501 03:31:10.119838   66006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem --> /etc/ssl/certs/207242.pem (1708 bytes)
	I0501 03:31:10.148236   66006 start.go:296] duration metric: took 139.079894ms for postStartSetup
	I0501 03:31:10.148288   66006 main.go:141] libmachine: (no-preload-892672) Calling .GetConfigRaw
	I0501 03:31:10.148935   66006 main.go:141] libmachine: (no-preload-892672) Calling .GetIP
	I0501 03:31:10.151486   66006 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:31:10.151896   66006 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:31:00 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:31:10.151950   66006 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:31:10.152118   66006 profile.go:143] Saving config to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/no-preload-892672/config.json ...
	I0501 03:31:10.152320   66006 start.go:128] duration metric: took 27.279871649s to createHost
	I0501 03:31:10.152346   66006 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:31:10.154549   66006 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:31:10.154896   66006 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:31:00 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:31:10.154926   66006 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:31:10.155079   66006 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHPort
	I0501 03:31:10.155255   66006 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:31:10.155425   66006 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:31:10.155559   66006 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHUsername
	I0501 03:31:10.155698   66006 main.go:141] libmachine: Using SSH client type: native
	I0501 03:31:10.155880   66006 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I0501 03:31:10.155895   66006 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0501 03:31:10.271886   66006 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714534270.256743225
	
	I0501 03:31:10.271914   66006 fix.go:216] guest clock: 1714534270.256743225
	I0501 03:31:10.271924   66006 fix.go:229] Guest: 2024-05-01 03:31:10.256743225 +0000 UTC Remote: 2024-05-01 03:31:10.152333071 +0000 UTC m=+52.838660601 (delta=104.410154ms)
	I0501 03:31:10.271949   66006 fix.go:200] guest clock delta is within tolerance: 104.410154ms
	I0501 03:31:10.271956   66006 start.go:83] releasing machines lock for "no-preload-892672", held for 27.399747576s
	I0501 03:31:10.271981   66006 main.go:141] libmachine: (no-preload-892672) Calling .DriverName
	I0501 03:31:10.272287   66006 main.go:141] libmachine: (no-preload-892672) Calling .GetIP
	I0501 03:31:10.275153   66006 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:31:10.275518   66006 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:31:00 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:31:10.275543   66006 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:31:10.275713   66006 main.go:141] libmachine: (no-preload-892672) Calling .DriverName
	I0501 03:31:10.276262   66006 main.go:141] libmachine: (no-preload-892672) Calling .DriverName
	I0501 03:31:10.276472   66006 main.go:141] libmachine: (no-preload-892672) Calling .DriverName
	I0501 03:31:10.276564   66006 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0501 03:31:10.276606   66006 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:31:10.276695   66006 ssh_runner.go:195] Run: cat /version.json
	I0501 03:31:10.276721   66006 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:31:10.279273   66006 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:31:10.279618   66006 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:31:00 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:31:10.279647   66006 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:31:10.279666   66006 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:31:10.279772   66006 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHPort
	I0501 03:31:10.279976   66006 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:31:10.280079   66006 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:31:00 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:31:10.280104   66006 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:31:10.280119   66006 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHUsername
	I0501 03:31:10.280275   66006 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/no-preload-892672/id_rsa Username:docker}
	I0501 03:31:10.280293   66006 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHPort
	I0501 03:31:10.280462   66006 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:31:10.280579   66006 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHUsername
	I0501 03:31:10.280734   66006 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/no-preload-892672/id_rsa Username:docker}
	I0501 03:31:10.377087   66006 ssh_runner.go:195] Run: systemctl --version
	I0501 03:31:10.407383   66006 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0501 03:31:10.577363   66006 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0501 03:31:10.584997   66006 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0501 03:31:10.585075   66006 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0501 03:31:10.604197   66006 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0501 03:31:10.604228   66006 start.go:494] detecting cgroup driver to use...
	I0501 03:31:10.604301   66006 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0501 03:31:10.628660   66006 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 03:31:10.648936   66006 docker.go:217] disabling cri-docker service (if available) ...
	I0501 03:31:10.649000   66006 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0501 03:31:10.668953   66006 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0501 03:31:10.685802   66006 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0501 03:31:10.830616   66006 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0501 03:31:11.023136   66006 docker.go:233] disabling docker service ...
	I0501 03:31:11.023281   66006 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0501 03:31:11.041422   66006 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0501 03:31:11.059078   66006 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0501 03:31:11.213511   66006 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0501 03:31:11.359231   66006 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0501 03:31:11.379011   66006 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 03:31:11.400894   66006 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0501 03:31:11.400958   66006 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:31:11.414113   66006 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0501 03:31:11.414179   66006 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:31:11.427267   66006 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:31:11.440848   66006 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:31:11.456777   66006 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0501 03:31:11.473528   66006 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:31:11.488470   66006 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:31:11.514090   66006 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:31:11.530470   66006 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0501 03:31:11.544222   66006 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0501 03:31:11.544289   66006 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0501 03:31:11.563308   66006 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0501 03:31:11.575452   66006 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:31:11.721657   66006 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0501 03:31:11.882573   66006 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0501 03:31:11.882655   66006 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0501 03:31:11.888271   66006 start.go:562] Will wait 60s for crictl version
	I0501 03:31:11.888337   66006 ssh_runner.go:195] Run: which crictl
	I0501 03:31:11.893165   66006 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0501 03:31:11.940235   66006 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0501 03:31:11.940346   66006 ssh_runner.go:195] Run: crio --version
	I0501 03:31:11.978742   66006 ssh_runner.go:195] Run: crio --version
	I0501 03:31:12.015713   66006 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0501 03:31:12.017031   66006 main.go:141] libmachine: (no-preload-892672) Calling .GetIP
	I0501 03:31:12.020174   66006 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:31:12.020602   66006 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:31:00 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:31:12.020628   66006 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:31:12.020887   66006 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0501 03:31:12.026078   66006 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0501 03:31:12.043891   66006 kubeadm.go:877] updating cluster {Name:no-preload-892672 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.0 ClusterName:no-preload-892672 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.144 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:
0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0501 03:31:12.044060   66006 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0501 03:31:12.044127   66006 ssh_runner.go:195] Run: sudo crictl images --output json
	I0501 03:31:12.083280   66006 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0501 03:31:12.083310   66006 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.0 registry.k8s.io/kube-controller-manager:v1.30.0 registry.k8s.io/kube-scheduler:v1.30.0 registry.k8s.io/kube-proxy:v1.30.0 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0501 03:31:12.083366   66006 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 03:31:12.083380   66006 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0
	I0501 03:31:12.083397   66006 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0501 03:31:12.083418   66006 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0501 03:31:12.083440   66006 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0501 03:31:12.083470   66006 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0
	I0501 03:31:12.083425   66006 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0
	I0501 03:31:12.083517   66006 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0501 03:31:12.084898   66006 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0501 03:31:12.084910   66006 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0
	I0501 03:31:12.084936   66006 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0
	I0501 03:31:12.084900   66006 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0501 03:31:12.084964   66006 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0501 03:31:12.084900   66006 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 03:31:12.084994   66006 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0
	I0501 03:31:12.085152   66006 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0501 03:31:12.187428   66006 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.0
	I0501 03:31:12.192097   66006 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0501 03:31:12.197443   66006 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0501 03:31:12.199670   66006 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.0
	I0501 03:31:12.209330   66006 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0501 03:31:12.217591   66006 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.0
	I0501 03:31:12.219435   66006 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.0
	I0501 03:31:12.303408   66006 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.0" does not exist at hash "259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced" in container runtime
	I0501 03:31:12.303455   66006 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.0
	I0501 03:31:12.303509   66006 ssh_runner.go:195] Run: which crictl
	I0501 03:31:12.366194   66006 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0501 03:31:12.366240   66006 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0501 03:31:12.366292   66006 ssh_runner.go:195] Run: which crictl
	I0501 03:31:10.274486   66044 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0501 03:31:10.274635   66044 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:31:10.274686   66044 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:31:10.290926   66044 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33059
	I0501 03:31:10.291450   66044 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:31:10.292082   66044 main.go:141] libmachine: Using API Version  1
	I0501 03:31:10.292106   66044 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:31:10.292463   66044 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:31:10.292636   66044 main.go:141] libmachine: (embed-certs-277128) Calling .GetMachineName
	I0501 03:31:10.292780   66044 main.go:141] libmachine: (embed-certs-277128) Calling .DriverName
	I0501 03:31:10.292917   66044 start.go:159] libmachine.API.Create for "embed-certs-277128" (driver="kvm2")
	I0501 03:31:10.292959   66044 client.go:168] LocalClient.Create starting
	I0501 03:31:10.292988   66044 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem
	I0501 03:31:10.293017   66044 main.go:141] libmachine: Decoding PEM data...
	I0501 03:31:10.293031   66044 main.go:141] libmachine: Parsing certificate...
	I0501 03:31:10.293076   66044 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem
	I0501 03:31:10.293097   66044 main.go:141] libmachine: Decoding PEM data...
	I0501 03:31:10.293108   66044 main.go:141] libmachine: Parsing certificate...
	I0501 03:31:10.293127   66044 main.go:141] libmachine: Running pre-create checks...
	I0501 03:31:10.293135   66044 main.go:141] libmachine: (embed-certs-277128) Calling .PreCreateCheck
	I0501 03:31:10.293484   66044 main.go:141] libmachine: (embed-certs-277128) Calling .GetConfigRaw
	I0501 03:31:10.293850   66044 main.go:141] libmachine: Creating machine...
	I0501 03:31:10.293864   66044 main.go:141] libmachine: (embed-certs-277128) Calling .Create
	I0501 03:31:10.294007   66044 main.go:141] libmachine: (embed-certs-277128) Creating KVM machine...
	I0501 03:31:10.295164   66044 main.go:141] libmachine: (embed-certs-277128) DBG | found existing default KVM network
	I0501 03:31:10.296494   66044 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:31:10.296341   66592 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:64:ec:5c} reservation:<nil>}
	I0501 03:31:10.297570   66044 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:31:10.297480   66592 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002da1f0}
	I0501 03:31:10.297595   66044 main.go:141] libmachine: (embed-certs-277128) DBG | created network xml: 
	I0501 03:31:10.297608   66044 main.go:141] libmachine: (embed-certs-277128) DBG | <network>
	I0501 03:31:10.297616   66044 main.go:141] libmachine: (embed-certs-277128) DBG |   <name>mk-embed-certs-277128</name>
	I0501 03:31:10.297637   66044 main.go:141] libmachine: (embed-certs-277128) DBG |   <dns enable='no'/>
	I0501 03:31:10.297651   66044 main.go:141] libmachine: (embed-certs-277128) DBG |   
	I0501 03:31:10.297660   66044 main.go:141] libmachine: (embed-certs-277128) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0501 03:31:10.297668   66044 main.go:141] libmachine: (embed-certs-277128) DBG |     <dhcp>
	I0501 03:31:10.297675   66044 main.go:141] libmachine: (embed-certs-277128) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0501 03:31:10.297691   66044 main.go:141] libmachine: (embed-certs-277128) DBG |     </dhcp>
	I0501 03:31:10.297700   66044 main.go:141] libmachine: (embed-certs-277128) DBG |   </ip>
	I0501 03:31:10.297705   66044 main.go:141] libmachine: (embed-certs-277128) DBG |   
	I0501 03:31:10.297713   66044 main.go:141] libmachine: (embed-certs-277128) DBG | </network>
	I0501 03:31:10.297723   66044 main.go:141] libmachine: (embed-certs-277128) DBG | 
	I0501 03:31:10.303538   66044 main.go:141] libmachine: (embed-certs-277128) DBG | trying to create private KVM network mk-embed-certs-277128 192.168.50.0/24...
	I0501 03:31:10.371448   66044 main.go:141] libmachine: (embed-certs-277128) DBG | private KVM network mk-embed-certs-277128 192.168.50.0/24 created
	I0501 03:31:10.371484   66044 main.go:141] libmachine: (embed-certs-277128) Setting up store path in /home/jenkins/minikube-integration/18779-13391/.minikube/machines/embed-certs-277128 ...
	I0501 03:31:10.371497   66044 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:31:10.371413   66592 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18779-13391/.minikube
	I0501 03:31:10.371518   66044 main.go:141] libmachine: (embed-certs-277128) Building disk image from file:///home/jenkins/minikube-integration/18779-13391/.minikube/cache/iso/amd64/minikube-v1.33.0-1714498396-18779-amd64.iso
	I0501 03:31:10.371534   66044 main.go:141] libmachine: (embed-certs-277128) Downloading /home/jenkins/minikube-integration/18779-13391/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18779-13391/.minikube/cache/iso/amd64/minikube-v1.33.0-1714498396-18779-amd64.iso...
	I0501 03:31:10.594957   66044 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:31:10.594801   66592 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/embed-certs-277128/id_rsa...
	I0501 03:31:10.690615   66044 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:31:10.690521   66592 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/embed-certs-277128/embed-certs-277128.rawdisk...
	I0501 03:31:10.690644   66044 main.go:141] libmachine: (embed-certs-277128) DBG | Writing magic tar header
	I0501 03:31:10.690670   66044 main.go:141] libmachine: (embed-certs-277128) DBG | Writing SSH key tar header
	I0501 03:31:10.690825   66044 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:31:10.690721   66592 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18779-13391/.minikube/machines/embed-certs-277128 ...
	I0501 03:31:10.690872   66044 main.go:141] libmachine: (embed-certs-277128) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/embed-certs-277128
	I0501 03:31:10.690895   66044 main.go:141] libmachine: (embed-certs-277128) Setting executable bit set on /home/jenkins/minikube-integration/18779-13391/.minikube/machines/embed-certs-277128 (perms=drwx------)
	I0501 03:31:10.690906   66044 main.go:141] libmachine: (embed-certs-277128) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18779-13391/.minikube/machines
	I0501 03:31:10.690920   66044 main.go:141] libmachine: (embed-certs-277128) Setting executable bit set on /home/jenkins/minikube-integration/18779-13391/.minikube/machines (perms=drwxr-xr-x)
	I0501 03:31:10.690936   66044 main.go:141] libmachine: (embed-certs-277128) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18779-13391/.minikube
	I0501 03:31:10.690984   66044 main.go:141] libmachine: (embed-certs-277128) Setting executable bit set on /home/jenkins/minikube-integration/18779-13391/.minikube (perms=drwxr-xr-x)
	I0501 03:31:10.691017   66044 main.go:141] libmachine: (embed-certs-277128) Setting executable bit set on /home/jenkins/minikube-integration/18779-13391 (perms=drwxrwxr-x)
	I0501 03:31:10.691032   66044 main.go:141] libmachine: (embed-certs-277128) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18779-13391
	I0501 03:31:10.691045   66044 main.go:141] libmachine: (embed-certs-277128) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0501 03:31:10.691059   66044 main.go:141] libmachine: (embed-certs-277128) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0501 03:31:10.691067   66044 main.go:141] libmachine: (embed-certs-277128) Creating domain...
	I0501 03:31:10.691081   66044 main.go:141] libmachine: (embed-certs-277128) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0501 03:31:10.691091   66044 main.go:141] libmachine: (embed-certs-277128) DBG | Checking permissions on dir: /home/jenkins
	I0501 03:31:10.691100   66044 main.go:141] libmachine: (embed-certs-277128) DBG | Checking permissions on dir: /home
	I0501 03:31:10.691108   66044 main.go:141] libmachine: (embed-certs-277128) DBG | Skipping /home - not owner
	I0501 03:31:10.692582   66044 main.go:141] libmachine: (embed-certs-277128) define libvirt domain using xml: 
	I0501 03:31:10.692607   66044 main.go:141] libmachine: (embed-certs-277128) <domain type='kvm'>
	I0501 03:31:10.692617   66044 main.go:141] libmachine: (embed-certs-277128)   <name>embed-certs-277128</name>
	I0501 03:31:10.692625   66044 main.go:141] libmachine: (embed-certs-277128)   <memory unit='MiB'>2200</memory>
	I0501 03:31:10.692637   66044 main.go:141] libmachine: (embed-certs-277128)   <vcpu>2</vcpu>
	I0501 03:31:10.692643   66044 main.go:141] libmachine: (embed-certs-277128)   <features>
	I0501 03:31:10.692663   66044 main.go:141] libmachine: (embed-certs-277128)     <acpi/>
	I0501 03:31:10.692671   66044 main.go:141] libmachine: (embed-certs-277128)     <apic/>
	I0501 03:31:10.692681   66044 main.go:141] libmachine: (embed-certs-277128)     <pae/>
	I0501 03:31:10.692692   66044 main.go:141] libmachine: (embed-certs-277128)     
	I0501 03:31:10.692701   66044 main.go:141] libmachine: (embed-certs-277128)   </features>
	I0501 03:31:10.692711   66044 main.go:141] libmachine: (embed-certs-277128)   <cpu mode='host-passthrough'>
	I0501 03:31:10.692716   66044 main.go:141] libmachine: (embed-certs-277128)   
	I0501 03:31:10.692721   66044 main.go:141] libmachine: (embed-certs-277128)   </cpu>
	I0501 03:31:10.692726   66044 main.go:141] libmachine: (embed-certs-277128)   <os>
	I0501 03:31:10.692731   66044 main.go:141] libmachine: (embed-certs-277128)     <type>hvm</type>
	I0501 03:31:10.692736   66044 main.go:141] libmachine: (embed-certs-277128)     <boot dev='cdrom'/>
	I0501 03:31:10.692745   66044 main.go:141] libmachine: (embed-certs-277128)     <boot dev='hd'/>
	I0501 03:31:10.692751   66044 main.go:141] libmachine: (embed-certs-277128)     <bootmenu enable='no'/>
	I0501 03:31:10.692755   66044 main.go:141] libmachine: (embed-certs-277128)   </os>
	I0501 03:31:10.692760   66044 main.go:141] libmachine: (embed-certs-277128)   <devices>
	I0501 03:31:10.692768   66044 main.go:141] libmachine: (embed-certs-277128)     <disk type='file' device='cdrom'>
	I0501 03:31:10.692777   66044 main.go:141] libmachine: (embed-certs-277128)       <source file='/home/jenkins/minikube-integration/18779-13391/.minikube/machines/embed-certs-277128/boot2docker.iso'/>
	I0501 03:31:10.692785   66044 main.go:141] libmachine: (embed-certs-277128)       <target dev='hdc' bus='scsi'/>
	I0501 03:31:10.692790   66044 main.go:141] libmachine: (embed-certs-277128)       <readonly/>
	I0501 03:31:10.692796   66044 main.go:141] libmachine: (embed-certs-277128)     </disk>
	I0501 03:31:10.692803   66044 main.go:141] libmachine: (embed-certs-277128)     <disk type='file' device='disk'>
	I0501 03:31:10.692819   66044 main.go:141] libmachine: (embed-certs-277128)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0501 03:31:10.692843   66044 main.go:141] libmachine: (embed-certs-277128)       <source file='/home/jenkins/minikube-integration/18779-13391/.minikube/machines/embed-certs-277128/embed-certs-277128.rawdisk'/>
	I0501 03:31:10.692855   66044 main.go:141] libmachine: (embed-certs-277128)       <target dev='hda' bus='virtio'/>
	I0501 03:31:10.692864   66044 main.go:141] libmachine: (embed-certs-277128)     </disk>
	I0501 03:31:10.692884   66044 main.go:141] libmachine: (embed-certs-277128)     <interface type='network'>
	I0501 03:31:10.692933   66044 main.go:141] libmachine: (embed-certs-277128)       <source network='mk-embed-certs-277128'/>
	I0501 03:31:10.692963   66044 main.go:141] libmachine: (embed-certs-277128)       <model type='virtio'/>
	I0501 03:31:10.692974   66044 main.go:141] libmachine: (embed-certs-277128)     </interface>
	I0501 03:31:10.692991   66044 main.go:141] libmachine: (embed-certs-277128)     <interface type='network'>
	I0501 03:31:10.693005   66044 main.go:141] libmachine: (embed-certs-277128)       <source network='default'/>
	I0501 03:31:10.693017   66044 main.go:141] libmachine: (embed-certs-277128)       <model type='virtio'/>
	I0501 03:31:10.693029   66044 main.go:141] libmachine: (embed-certs-277128)     </interface>
	I0501 03:31:10.693040   66044 main.go:141] libmachine: (embed-certs-277128)     <serial type='pty'>
	I0501 03:31:10.693055   66044 main.go:141] libmachine: (embed-certs-277128)       <target port='0'/>
	I0501 03:31:10.693066   66044 main.go:141] libmachine: (embed-certs-277128)     </serial>
	I0501 03:31:10.693086   66044 main.go:141] libmachine: (embed-certs-277128)     <console type='pty'>
	I0501 03:31:10.693103   66044 main.go:141] libmachine: (embed-certs-277128)       <target type='serial' port='0'/>
	I0501 03:31:10.693115   66044 main.go:141] libmachine: (embed-certs-277128)     </console>
	I0501 03:31:10.693132   66044 main.go:141] libmachine: (embed-certs-277128)     <rng model='virtio'>
	I0501 03:31:10.693157   66044 main.go:141] libmachine: (embed-certs-277128)       <backend model='random'>/dev/random</backend>
	I0501 03:31:10.693170   66044 main.go:141] libmachine: (embed-certs-277128)     </rng>
	I0501 03:31:10.693182   66044 main.go:141] libmachine: (embed-certs-277128)     
	I0501 03:31:10.693193   66044 main.go:141] libmachine: (embed-certs-277128)     
	I0501 03:31:10.693203   66044 main.go:141] libmachine: (embed-certs-277128)   </devices>
	I0501 03:31:10.693218   66044 main.go:141] libmachine: (embed-certs-277128) </domain>
	I0501 03:31:10.693232   66044 main.go:141] libmachine: (embed-certs-277128) 
	I0501 03:31:10.701304   66044 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:2f:57:0e in network default
	I0501 03:31:10.701908   66044 main.go:141] libmachine: (embed-certs-277128) Ensuring networks are active...
	I0501 03:31:10.701930   66044 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:31:10.702607   66044 main.go:141] libmachine: (embed-certs-277128) Ensuring network default is active
	I0501 03:31:10.703012   66044 main.go:141] libmachine: (embed-certs-277128) Ensuring network mk-embed-certs-277128 is active
	I0501 03:31:10.703633   66044 main.go:141] libmachine: (embed-certs-277128) Getting domain xml...
	I0501 03:31:10.704467   66044 main.go:141] libmachine: (embed-certs-277128) Creating domain...
	I0501 03:31:11.965058   66044 main.go:141] libmachine: (embed-certs-277128) Waiting to get IP...
	I0501 03:31:11.965935   66044 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:31:11.966577   66044 main.go:141] libmachine: (embed-certs-277128) DBG | unable to find current IP address of domain embed-certs-277128 in network mk-embed-certs-277128
	I0501 03:31:11.966651   66044 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:31:11.966540   66592 retry.go:31] will retry after 295.828389ms: waiting for machine to come up
	I0501 03:31:12.265312   66044 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:31:12.266194   66044 main.go:141] libmachine: (embed-certs-277128) DBG | unable to find current IP address of domain embed-certs-277128 in network mk-embed-certs-277128
	I0501 03:31:12.266228   66044 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:31:12.266146   66592 retry.go:31] will retry after 236.189422ms: waiting for machine to come up
	I0501 03:31:12.503642   66044 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:31:12.504272   66044 main.go:141] libmachine: (embed-certs-277128) DBG | unable to find current IP address of domain embed-certs-277128 in network mk-embed-certs-277128
	I0501 03:31:12.504300   66044 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:31:12.504223   66592 retry.go:31] will retry after 481.570064ms: waiting for machine to come up
	I0501 03:31:12.987166   66044 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:31:12.987914   66044 main.go:141] libmachine: (embed-certs-277128) DBG | unable to find current IP address of domain embed-certs-277128 in network mk-embed-certs-277128
	I0501 03:31:12.987939   66044 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:31:12.987828   66592 retry.go:31] will retry after 432.825947ms: waiting for machine to come up
	I0501 03:31:12.409840   66006 cache_images.go:116] "registry.k8s.io/pause:3.9" needs transfer: "registry.k8s.io/pause:3.9" does not exist at hash "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c" in container runtime
	I0501 03:31:12.409881   66006 cri.go:218] Removing image: registry.k8s.io/pause:3.9
	I0501 03:31:12.409929   66006 ssh_runner.go:195] Run: which crictl
	I0501 03:31:12.410609   66006 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.0" does not exist at hash "c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b" in container runtime
	I0501 03:31:12.410653   66006 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0501 03:31:12.410696   66006 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0501 03:31:12.410740   66006 ssh_runner.go:195] Run: which crictl
	I0501 03:31:12.410651   66006 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0501 03:31:12.410811   66006 ssh_runner.go:195] Run: which crictl
	I0501 03:31:12.422770   66006 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.9
	I0501 03:31:12.435071   66006 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.0" needs transfer: "registry.k8s.io/kube-proxy:v1.30.0" does not exist at hash "a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b" in container runtime
	I0501 03:31:12.435119   66006 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.0
	I0501 03:31:12.435166   66006 ssh_runner.go:195] Run: which crictl
	I0501 03:31:12.435206   66006 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0
	I0501 03:31:12.435225   66006 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0501 03:31:12.435240   66006 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.0" does not exist at hash "c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0" in container runtime
	I0501 03:31:12.435271   66006 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.0
	I0501 03:31:12.435297   66006 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0501 03:31:12.435277   66006 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.0
	I0501 03:31:12.435309   66006 ssh_runner.go:195] Run: which crictl
	I0501 03:31:12.520523   66006 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9
	I0501 03:31:12.520585   66006 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.0
	I0501 03:31:12.520645   66006 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.9
	I0501 03:31:12.532668   66006 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0501 03:31:12.532780   66006 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.12-0
	I0501 03:31:12.560252   66006 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0
	I0501 03:31:12.560374   66006 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0501 03:31:12.577619   66006 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0
	I0501 03:31:12.577665   66006 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0
	I0501 03:31:12.577727   66006 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0501 03:31:12.577865   66006 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0501 03:31:12.577964   66006 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0501 03:31:12.627920   66006 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.9: stat -c "%s %y" /var/lib/minikube/images/pause_3.9: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.9': No such file or directory
	I0501 03:31:12.627963   66006 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.12-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.12-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.12-0': No such file or directory
	I0501 03:31:12.627968   66006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 --> /var/lib/minikube/images/pause_3.9 (322048 bytes)
	I0501 03:31:12.627977   66006 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0
	I0501 03:31:12.627992   66006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 --> /var/lib/minikube/images/etcd_3.5.12-0 (57244160 bytes)
	I0501 03:31:12.628038   66006 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.30.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.30.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.30.0': No such file or directory
	I0501 03:31:12.628072   66006 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.30.0
	I0501 03:31:12.628095   66006 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0
	I0501 03:31:12.628120   66006 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.11.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.11.1': No such file or directory
	I0501 03:31:12.628070   66006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0 --> /var/lib/minikube/images/kube-scheduler_v1.30.0 (19219456 bytes)
	I0501 03:31:12.628136   66006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 --> /var/lib/minikube/images/coredns_v1.11.1 (18189312 bytes)
	I0501 03:31:12.628175   66006 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0501 03:31:12.628214   66006 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.30.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.30.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.30.0': No such file or directory
	I0501 03:31:12.628244   66006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0 --> /var/lib/minikube/images/kube-controller-manager_v1.30.0 (31041024 bytes)
	I0501 03:31:12.695701   66006 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.30.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.30.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.30.0': No such file or directory
	I0501 03:31:12.695751   66006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0 --> /var/lib/minikube/images/kube-proxy_v1.30.0 (29022720 bytes)
	I0501 03:31:12.695851   66006 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.30.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.30.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.30.0': No such file or directory
	I0501 03:31:12.696054   66006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0 --> /var/lib/minikube/images/kube-apiserver_v1.30.0 (32674304 bytes)
	I0501 03:31:12.751115   66006 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.9
	I0501 03:31:12.751212   66006 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.9
	I0501 03:31:13.049222   66006 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 03:31:13.586895   66006 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 from cache
	I0501 03:31:13.586942   66006 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0501 03:31:13.586996   66006 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0501 03:31:13.587028   66006 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0501 03:31:13.587044   66006 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 03:31:13.587097   66006 ssh_runner.go:195] Run: which crictl
	I0501 03:31:13.625991   66006 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 03:31:15.863123   66006 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.276061699s)
	I0501 03:31:15.863144   66006 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.237059015s)
	I0501 03:31:15.863163   66006 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0501 03:31:15.863184   66006 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0501 03:31:15.863205   66006 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0501 03:31:15.863279   66006 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0501 03:31:15.863288   66006 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0501 03:31:13.422691   66044 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:31:13.423322   66044 main.go:141] libmachine: (embed-certs-277128) DBG | unable to find current IP address of domain embed-certs-277128 in network mk-embed-certs-277128
	I0501 03:31:13.423354   66044 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:31:13.423262   66592 retry.go:31] will retry after 490.527584ms: waiting for machine to come up
	I0501 03:31:13.915087   66044 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:31:13.915731   66044 main.go:141] libmachine: (embed-certs-277128) DBG | unable to find current IP address of domain embed-certs-277128 in network mk-embed-certs-277128
	I0501 03:31:13.915762   66044 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:31:13.915673   66592 retry.go:31] will retry after 951.286843ms: waiting for machine to come up
	I0501 03:31:14.868789   66044 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:31:14.869363   66044 main.go:141] libmachine: (embed-certs-277128) DBG | unable to find current IP address of domain embed-certs-277128 in network mk-embed-certs-277128
	I0501 03:31:14.869387   66044 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:31:14.869309   66592 retry.go:31] will retry after 1.128666589s: waiting for machine to come up
	I0501 03:31:16.000132   66044 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:31:16.000647   66044 main.go:141] libmachine: (embed-certs-277128) DBG | unable to find current IP address of domain embed-certs-277128 in network mk-embed-certs-277128
	I0501 03:31:16.000688   66044 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:31:16.000601   66592 retry.go:31] will retry after 1.13133924s: waiting for machine to come up
	I0501 03:31:17.133016   66044 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:31:17.133518   66044 main.go:141] libmachine: (embed-certs-277128) DBG | unable to find current IP address of domain embed-certs-277128 in network mk-embed-certs-277128
	I0501 03:31:17.133546   66044 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:31:17.133469   66592 retry.go:31] will retry after 1.277542551s: waiting for machine to come up
	I0501 03:31:18.730984   66006 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.867672823s)
	I0501 03:31:18.731024   66006 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0501 03:31:18.731060   66006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I0501 03:31:18.731089   66006 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0: (2.867783514s)
	I0501 03:31:18.731116   66006 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0 from cache
	I0501 03:31:18.731146   66006 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0501 03:31:18.731195   66006 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0501 03:31:21.221529   66006 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0: (2.490301999s)
	I0501 03:31:21.221560   66006 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0 from cache
	I0501 03:31:21.221593   66006 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.0
	I0501 03:31:21.221643   66006 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0
	I0501 03:31:18.412107   66044 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:31:18.412658   66044 main.go:141] libmachine: (embed-certs-277128) DBG | unable to find current IP address of domain embed-certs-277128 in network mk-embed-certs-277128
	I0501 03:31:18.412705   66044 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:31:18.412625   66592 retry.go:31] will retry after 2.0109544s: waiting for machine to come up
	I0501 03:31:20.424849   66044 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:31:20.425354   66044 main.go:141] libmachine: (embed-certs-277128) DBG | unable to find current IP address of domain embed-certs-277128 in network mk-embed-certs-277128
	I0501 03:31:20.425401   66044 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:31:20.425314   66592 retry.go:31] will retry after 2.078428851s: waiting for machine to come up
	I0501 03:31:22.506816   66044 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:31:22.507382   66044 main.go:141] libmachine: (embed-certs-277128) DBG | unable to find current IP address of domain embed-certs-277128 in network mk-embed-certs-277128
	I0501 03:31:22.507412   66044 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:31:22.507346   66592 retry.go:31] will retry after 2.8275295s: waiting for machine to come up
	I0501 03:31:23.501230   66006 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0: (2.279558587s)
	I0501 03:31:23.501266   66006 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0 from cache
	I0501 03:31:23.501297   66006 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0501 03:31:23.501350   66006 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0501 03:31:26.072311   66006 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0: (2.570917283s)
	I0501 03:31:26.072344   66006 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0 from cache
	I0501 03:31:26.072379   66006 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0501 03:31:26.072434   66006 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0501 03:31:25.336492   66044 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:31:25.337108   66044 main.go:141] libmachine: (embed-certs-277128) DBG | unable to find current IP address of domain embed-certs-277128 in network mk-embed-certs-277128
	I0501 03:31:25.337140   66044 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:31:25.337053   66592 retry.go:31] will retry after 3.113783197s: waiting for machine to come up
	I0501 03:31:29.968758   66006 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (3.896292908s)
	I0501 03:31:29.968790   66006 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0501 03:31:29.968818   66006 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0501 03:31:29.968870   66006 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0501 03:31:30.731237   66006 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0501 03:31:30.731289   66006 cache_images.go:123] Successfully loaded all cached images
	I0501 03:31:30.731296   66006 cache_images.go:92] duration metric: took 18.647972708s to LoadCachedImages
	I0501 03:31:30.731311   66006 kubeadm.go:928] updating node { 192.168.39.144 8443 v1.30.0 crio true true} ...
	I0501 03:31:30.731478   66006 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-892672 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.144
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:no-preload-892672 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0501 03:31:30.731563   66006 ssh_runner.go:195] Run: crio config
	I0501 03:31:30.786577   66006 cni.go:84] Creating CNI manager for ""
	I0501 03:31:30.786601   66006 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0501 03:31:30.786613   66006 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0501 03:31:30.786643   66006 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.144 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-892672 NodeName:no-preload-892672 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.144"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.144 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0501 03:31:30.786795   66006 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.144
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-892672"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.144
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.144"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0501 03:31:30.786855   66006 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0501 03:31:30.799250   66006 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	
	Initiating transfer...
	I0501 03:31:30.799312   66006 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0
	I0501 03:31:30.811478   66006 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256
	I0501 03:31:30.811513   66006 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256
	I0501 03:31:30.811544   66006 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 03:31:30.811484   66006 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256
	I0501 03:31:30.811584   66006 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubectl
	I0501 03:31:30.811678   66006 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0501 03:31:30.829028   66006 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0501 03:31:30.829061   66006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/cache/linux/amd64/v1.30.0/kubectl --> /var/lib/minikube/binaries/v1.30.0/kubectl (51454104 bytes)
	I0501 03:31:30.829102   66006 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0501 03:31:30.829122   66006 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubelet
	I0501 03:31:30.829129   66006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/cache/linux/amd64/v1.30.0/kubeadm --> /var/lib/minikube/binaries/v1.30.0/kubeadm (50249880 bytes)
	I0501 03:31:30.858858   66006 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0501 03:31:30.858899   66006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/cache/linux/amd64/v1.30.0/kubelet --> /var/lib/minikube/binaries/v1.30.0/kubelet (100100024 bytes)
	I0501 03:31:31.717601   66006 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0501 03:31:31.728148   66006 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0501 03:31:31.747324   66006 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0501 03:31:31.765702   66006 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0501 03:31:31.784146   66006 ssh_runner.go:195] Run: grep 192.168.39.144	control-plane.minikube.internal$ /etc/hosts
	I0501 03:31:31.788535   66006 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.144	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0501 03:31:31.802016   66006 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:31:31.926515   66006 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 03:31:31.945969   66006 certs.go:68] Setting up /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/no-preload-892672 for IP: 192.168.39.144
	I0501 03:31:31.945997   66006 certs.go:194] generating shared ca certs ...
	I0501 03:31:31.946019   66006 certs.go:226] acquiring lock for ca certs: {Name:mk85a75d14c00780d4bf09cd9bdaf0f96f1b8fce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:31:31.946205   66006 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key
	I0501 03:31:31.946265   66006 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key
	I0501 03:31:31.946280   66006 certs.go:256] generating profile certs ...
	I0501 03:31:31.946350   66006 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/no-preload-892672/client.key
	I0501 03:31:31.946369   66006 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/no-preload-892672/client.crt with IP's: []
	I0501 03:31:32.032521   66006 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/no-preload-892672/client.crt ...
	I0501 03:31:32.032550   66006 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/no-preload-892672/client.crt: {Name:mk6aa5ab302db9345de8c52a5cd2755a4bb70f26 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:31:32.032712   66006 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/no-preload-892672/client.key ...
	I0501 03:31:32.032723   66006 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/no-preload-892672/client.key: {Name:mk892ba3f08b6aaad4e016ca995331dc987e7466 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:31:32.032802   66006 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/no-preload-892672/apiserver.key.3644a8af
	I0501 03:31:32.032817   66006 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/no-preload-892672/apiserver.crt.3644a8af with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.144]
	I0501 03:31:32.320368   66006 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/no-preload-892672/apiserver.crt.3644a8af ...
	I0501 03:31:32.320398   66006 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/no-preload-892672/apiserver.crt.3644a8af: {Name:mkad4160594232e3128e45a34447aba242d3ae7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:31:32.320542   66006 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/no-preload-892672/apiserver.key.3644a8af ...
	I0501 03:31:32.320556   66006 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/no-preload-892672/apiserver.key.3644a8af: {Name:mkb50911ae80ff504441ea83ff5c8e26d1e25a71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:31:32.320629   66006 certs.go:381] copying /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/no-preload-892672/apiserver.crt.3644a8af -> /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/no-preload-892672/apiserver.crt
	I0501 03:31:32.320710   66006 certs.go:385] copying /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/no-preload-892672/apiserver.key.3644a8af -> /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/no-preload-892672/apiserver.key
	I0501 03:31:32.320773   66006 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/no-preload-892672/proxy-client.key
	I0501 03:31:32.320788   66006 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/no-preload-892672/proxy-client.crt with IP's: []
	I0501 03:31:28.452608   66044 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:31:28.453074   66044 main.go:141] libmachine: (embed-certs-277128) DBG | unable to find current IP address of domain embed-certs-277128 in network mk-embed-certs-277128
	I0501 03:31:28.453104   66044 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:31:28.453026   66592 retry.go:31] will retry after 3.965232669s: waiting for machine to come up
	I0501 03:31:32.419515   66044 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:31:32.419988   66044 main.go:141] libmachine: (embed-certs-277128) Found IP for machine: 192.168.50.218
	I0501 03:31:32.420030   66044 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has current primary IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:31:32.420041   66044 main.go:141] libmachine: (embed-certs-277128) Reserving static IP address...
	I0501 03:31:32.420358   66044 main.go:141] libmachine: (embed-certs-277128) DBG | unable to find host DHCP lease matching {name: "embed-certs-277128", mac: "52:54:00:96:11:7d", ip: "192.168.50.218"} in network mk-embed-certs-277128
	I0501 03:31:32.496352   66044 main.go:141] libmachine: (embed-certs-277128) Reserved static IP address: 192.168.50.218
	I0501 03:31:32.496383   66044 main.go:141] libmachine: (embed-certs-277128) Waiting for SSH to be available...
	I0501 03:31:32.496394   66044 main.go:141] libmachine: (embed-certs-277128) DBG | Getting to WaitForSSH function...
	I0501 03:31:32.499178   66044 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:31:32.499709   66044 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:31:26 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:minikube Clientid:01:52:54:00:96:11:7d}
	I0501 03:31:32.499760   66044 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:31:32.499858   66044 main.go:141] libmachine: (embed-certs-277128) DBG | Using SSH client type: external
	I0501 03:31:32.499879   66044 main.go:141] libmachine: (embed-certs-277128) DBG | Using SSH private key: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/embed-certs-277128/id_rsa (-rw-------)
	I0501 03:31:32.499940   66044 main.go:141] libmachine: (embed-certs-277128) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.218 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18779-13391/.minikube/machines/embed-certs-277128/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0501 03:31:32.499960   66044 main.go:141] libmachine: (embed-certs-277128) DBG | About to run SSH command:
	I0501 03:31:32.499973   66044 main.go:141] libmachine: (embed-certs-277128) DBG | exit 0
	I0501 03:31:32.631274   66044 main.go:141] libmachine: (embed-certs-277128) DBG | SSH cmd err, output: <nil>: 
	I0501 03:31:32.631542   66044 main.go:141] libmachine: (embed-certs-277128) KVM machine creation complete!
	I0501 03:31:32.631943   66044 main.go:141] libmachine: (embed-certs-277128) Calling .GetConfigRaw
	I0501 03:31:32.632662   66044 main.go:141] libmachine: (embed-certs-277128) Calling .DriverName
	I0501 03:31:32.632912   66044 main.go:141] libmachine: (embed-certs-277128) Calling .DriverName
	I0501 03:31:32.633108   66044 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0501 03:31:32.633124   66044 main.go:141] libmachine: (embed-certs-277128) Calling .GetState
	I0501 03:31:32.634474   66044 main.go:141] libmachine: Detecting operating system of created instance...
	I0501 03:31:32.634492   66044 main.go:141] libmachine: Waiting for SSH to be available...
	I0501 03:31:32.634499   66044 main.go:141] libmachine: Getting to WaitForSSH function...
	I0501 03:31:32.634515   66044 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHHostname
	I0501 03:31:32.637129   66044 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:31:32.637636   66044 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:31:26 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:31:32.637666   66044 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:31:32.637873   66044 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHPort
	I0501 03:31:32.638060   66044 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:31:32.638260   66044 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:31:32.638447   66044 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHUsername
	I0501 03:31:32.638618   66044 main.go:141] libmachine: Using SSH client type: native
	I0501 03:31:32.638850   66044 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.218 22 <nil> <nil>}
	I0501 03:31:32.638862   66044 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0501 03:31:32.758027   66044 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0501 03:31:32.758056   66044 main.go:141] libmachine: Detecting the provisioner...
	I0501 03:31:32.758064   66044 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHHostname
	I0501 03:31:32.760887   66044 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:31:32.761329   66044 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:31:26 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:31:32.761375   66044 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:31:32.761571   66044 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHPort
	I0501 03:31:32.761784   66044 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:31:32.761964   66044 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:31:32.762091   66044 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHUsername
	I0501 03:31:32.762280   66044 main.go:141] libmachine: Using SSH client type: native
	I0501 03:31:32.762531   66044 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.218 22 <nil> <nil>}
	I0501 03:31:32.762548   66044 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0501 03:31:32.883924   66044 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0501 03:31:32.884020   66044 main.go:141] libmachine: found compatible host: buildroot
	I0501 03:31:32.884035   66044 main.go:141] libmachine: Provisioning with buildroot...
	I0501 03:31:32.884049   66044 main.go:141] libmachine: (embed-certs-277128) Calling .GetMachineName
	I0501 03:31:32.884309   66044 buildroot.go:166] provisioning hostname "embed-certs-277128"
	I0501 03:31:32.884337   66044 main.go:141] libmachine: (embed-certs-277128) Calling .GetMachineName
	I0501 03:31:32.884565   66044 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHHostname
	I0501 03:31:32.887568   66044 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:31:32.887985   66044 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:31:26 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:31:32.888010   66044 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:31:32.888207   66044 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHPort
	I0501 03:31:32.888372   66044 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:31:32.888560   66044 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:31:32.888720   66044 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHUsername
	I0501 03:31:32.888933   66044 main.go:141] libmachine: Using SSH client type: native
	I0501 03:31:32.889151   66044 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.218 22 <nil> <nil>}
	I0501 03:31:32.889169   66044 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-277128 && echo "embed-certs-277128" | sudo tee /etc/hostname
	I0501 03:31:33.021128   66044 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-277128
	
	I0501 03:31:33.021155   66044 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHHostname
	I0501 03:31:33.024053   66044 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:31:33.024454   66044 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:31:26 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:31:33.024485   66044 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:31:33.024619   66044 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHPort
	I0501 03:31:33.024817   66044 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:31:33.024998   66044 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:31:33.025129   66044 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHUsername
	I0501 03:31:33.025294   66044 main.go:141] libmachine: Using SSH client type: native
	I0501 03:31:33.025488   66044 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.218 22 <nil> <nil>}
	I0501 03:31:33.025514   66044 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-277128' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-277128/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-277128' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0501 03:31:32.456408   66006 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/no-preload-892672/proxy-client.crt ...
	I0501 03:31:32.456434   66006 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/no-preload-892672/proxy-client.crt: {Name:mka33cf5f191a20cc9d0f29a17a639230001d65b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:31:32.456569   66006 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/no-preload-892672/proxy-client.key ...
	I0501 03:31:32.456582   66006 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/no-preload-892672/proxy-client.key: {Name:mka8eef76d8317fb98376cabc9369dc42c6f6cc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:31:32.456767   66006 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem (1338 bytes)
	W0501 03:31:32.456802   66006 certs.go:480] ignoring /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724_empty.pem, impossibly tiny 0 bytes
	I0501 03:31:32.456815   66006 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem (1679 bytes)
	I0501 03:31:32.456838   66006 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem (1078 bytes)
	I0501 03:31:32.456859   66006 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem (1123 bytes)
	I0501 03:31:32.456879   66006 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem (1675 bytes)
	I0501 03:31:32.456915   66006 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem (1708 bytes)
	I0501 03:31:32.457490   66006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0501 03:31:32.489589   66006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0501 03:31:32.520295   66006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0501 03:31:32.548084   66006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0501 03:31:32.574731   66006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/no-preload-892672/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0501 03:31:32.600823   66006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/no-preload-892672/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0501 03:31:32.629416   66006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/no-preload-892672/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0501 03:31:32.661828   66006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/no-preload-892672/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0501 03:31:32.690391   66006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem --> /usr/share/ca-certificates/207242.pem (1708 bytes)
	I0501 03:31:32.721433   66006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0501 03:31:32.746522   66006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem --> /usr/share/ca-certificates/20724.pem (1338 bytes)
	I0501 03:31:32.773450   66006 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0501 03:31:32.793741   66006 ssh_runner.go:195] Run: openssl version
	I0501 03:31:32.800936   66006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0501 03:31:32.816196   66006 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:31:32.821539   66006 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  1 02:08 /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:31:32.821582   66006 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:31:32.828112   66006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0501 03:31:32.842479   66006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20724.pem && ln -fs /usr/share/ca-certificates/20724.pem /etc/ssl/certs/20724.pem"
	I0501 03:31:32.855720   66006 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20724.pem
	I0501 03:31:32.860834   66006 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  1 02:20 /usr/share/ca-certificates/20724.pem
	I0501 03:31:32.860882   66006 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20724.pem
	I0501 03:31:32.867225   66006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20724.pem /etc/ssl/certs/51391683.0"
	I0501 03:31:32.881221   66006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/207242.pem && ln -fs /usr/share/ca-certificates/207242.pem /etc/ssl/certs/207242.pem"
	I0501 03:31:32.896383   66006 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/207242.pem
	I0501 03:31:32.901493   66006 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  1 02:20 /usr/share/ca-certificates/207242.pem
	I0501 03:31:32.901551   66006 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/207242.pem
	I0501 03:31:32.908311   66006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/207242.pem /etc/ssl/certs/3ec20f2e.0"
	I0501 03:31:32.922459   66006 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0501 03:31:32.927188   66006 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0501 03:31:32.927255   66006 kubeadm.go:391] StartCluster: {Name:no-preload-892672 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:no-preload-892672 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.144 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 03:31:32.927336   66006 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0501 03:31:32.927395   66006 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0501 03:31:32.972035   66006 cri.go:89] found id: ""
	I0501 03:31:32.972096   66006 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0501 03:31:32.983950   66006 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0501 03:31:32.996998   66006 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0501 03:31:33.009962   66006 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0501 03:31:33.009993   66006 kubeadm.go:156] found existing configuration files:
	
	I0501 03:31:33.010047   66006 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0501 03:31:33.020757   66006 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0501 03:31:33.020831   66006 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0501 03:31:33.035165   66006 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0501 03:31:33.047425   66006 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0501 03:31:33.047483   66006 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0501 03:31:33.059039   66006 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0501 03:31:33.070352   66006 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0501 03:31:33.070446   66006 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0501 03:31:33.081638   66006 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0501 03:31:33.091827   66006 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0501 03:31:33.091881   66006 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0501 03:31:33.104596   66006 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0501 03:31:33.163210   66006 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0501 03:31:33.163320   66006 kubeadm.go:309] [preflight] Running pre-flight checks
	I0501 03:31:33.302844   66006 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0501 03:31:33.303009   66006 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0501 03:31:33.303153   66006 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0501 03:31:33.567389   66006 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0501 03:31:34.180186   66261 start.go:364] duration metric: took 58.340518755s to acquireMachinesLock for "kubernetes-upgrade-046243"
	I0501 03:31:34.180245   66261 start.go:96] Skipping create...Using existing machine configuration
	I0501 03:31:34.180256   66261 fix.go:54] fixHost starting: 
	I0501 03:31:34.180803   66261 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:31:34.180867   66261 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:31:34.198150   66261 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44429
	I0501 03:31:34.198685   66261 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:31:34.199283   66261 main.go:141] libmachine: Using API Version  1
	I0501 03:31:34.199310   66261 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:31:34.199668   66261 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:31:34.199852   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .DriverName
	I0501 03:31:34.200003   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetState
	I0501 03:31:34.201604   66261 fix.go:112] recreateIfNeeded on kubernetes-upgrade-046243: state=Running err=<nil>
	W0501 03:31:34.201627   66261 fix.go:138] unexpected machine state, will restart: <nil>
	I0501 03:31:34.205291   66261 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-046243" VM ...
	I0501 03:31:34.206690   66261 machine.go:94] provisionDockerMachine start ...
	I0501 03:31:34.206710   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .DriverName
	I0501 03:31:34.206883   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetSSHHostname
	I0501 03:31:34.209610   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | domain kubernetes-upgrade-046243 has defined MAC address 52:54:00:ac:ba:ac in network mk-kubernetes-upgrade-046243
	I0501 03:31:34.210114   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:ba:ac", ip: ""} in network mk-kubernetes-upgrade-046243: {Iface:virbr3 ExpiryTime:2024-05-01 04:30:07 +0000 UTC Type:0 Mac:52:54:00:ac:ba:ac Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:kubernetes-upgrade-046243 Clientid:01:52:54:00:ac:ba:ac}
	I0501 03:31:34.210143   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | domain kubernetes-upgrade-046243 has defined IP address 192.168.72.134 and MAC address 52:54:00:ac:ba:ac in network mk-kubernetes-upgrade-046243
	I0501 03:31:34.210273   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetSSHPort
	I0501 03:31:34.210445   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetSSHKeyPath
	I0501 03:31:34.210642   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetSSHKeyPath
	I0501 03:31:34.210785   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetSSHUsername
	I0501 03:31:34.210934   66261 main.go:141] libmachine: Using SSH client type: native
	I0501 03:31:34.211163   66261 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.134 22 <nil> <nil>}
	I0501 03:31:34.211177   66261 main.go:141] libmachine: About to run SSH command:
	hostname
	I0501 03:31:34.327690   66261 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-046243
	
	I0501 03:31:34.327730   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetMachineName
	I0501 03:31:34.327992   66261 buildroot.go:166] provisioning hostname "kubernetes-upgrade-046243"
	I0501 03:31:34.328025   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetMachineName
	I0501 03:31:34.328326   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetSSHHostname
	I0501 03:31:34.331419   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | domain kubernetes-upgrade-046243 has defined MAC address 52:54:00:ac:ba:ac in network mk-kubernetes-upgrade-046243
	I0501 03:31:34.331833   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:ba:ac", ip: ""} in network mk-kubernetes-upgrade-046243: {Iface:virbr3 ExpiryTime:2024-05-01 04:30:07 +0000 UTC Type:0 Mac:52:54:00:ac:ba:ac Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:kubernetes-upgrade-046243 Clientid:01:52:54:00:ac:ba:ac}
	I0501 03:31:34.331914   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | domain kubernetes-upgrade-046243 has defined IP address 192.168.72.134 and MAC address 52:54:00:ac:ba:ac in network mk-kubernetes-upgrade-046243
	I0501 03:31:34.332041   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetSSHPort
	I0501 03:31:34.332256   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetSSHKeyPath
	I0501 03:31:34.332428   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetSSHKeyPath
	I0501 03:31:34.332558   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetSSHUsername
	I0501 03:31:34.332705   66261 main.go:141] libmachine: Using SSH client type: native
	I0501 03:31:34.332993   66261 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.134 22 <nil> <nil>}
	I0501 03:31:34.333016   66261 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-046243 && echo "kubernetes-upgrade-046243" | sudo tee /etc/hostname
	I0501 03:31:34.475078   66261 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-046243
	
	I0501 03:31:34.475108   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetSSHHostname
	I0501 03:31:34.477773   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | domain kubernetes-upgrade-046243 has defined MAC address 52:54:00:ac:ba:ac in network mk-kubernetes-upgrade-046243
	I0501 03:31:34.478095   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:ba:ac", ip: ""} in network mk-kubernetes-upgrade-046243: {Iface:virbr3 ExpiryTime:2024-05-01 04:30:07 +0000 UTC Type:0 Mac:52:54:00:ac:ba:ac Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:kubernetes-upgrade-046243 Clientid:01:52:54:00:ac:ba:ac}
	I0501 03:31:34.478129   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | domain kubernetes-upgrade-046243 has defined IP address 192.168.72.134 and MAC address 52:54:00:ac:ba:ac in network mk-kubernetes-upgrade-046243
	I0501 03:31:34.478453   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetSSHPort
	I0501 03:31:34.478683   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetSSHKeyPath
	I0501 03:31:34.478897   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetSSHKeyPath
	I0501 03:31:34.479039   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetSSHUsername
	I0501 03:31:34.479227   66261 main.go:141] libmachine: Using SSH client type: native
	I0501 03:31:34.479457   66261 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.134 22 <nil> <nil>}
	I0501 03:31:34.479484   66261 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-046243' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-046243/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-046243' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0501 03:31:34.600914   66261 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0501 03:31:34.600948   66261 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18779-13391/.minikube CaCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18779-13391/.minikube}
	I0501 03:31:34.600988   66261 buildroot.go:174] setting up certificates
	I0501 03:31:34.601001   66261 provision.go:84] configureAuth start
	I0501 03:31:34.601020   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetMachineName
	I0501 03:31:34.601316   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetIP
	I0501 03:31:34.604200   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | domain kubernetes-upgrade-046243 has defined MAC address 52:54:00:ac:ba:ac in network mk-kubernetes-upgrade-046243
	I0501 03:31:34.604611   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:ba:ac", ip: ""} in network mk-kubernetes-upgrade-046243: {Iface:virbr3 ExpiryTime:2024-05-01 04:30:07 +0000 UTC Type:0 Mac:52:54:00:ac:ba:ac Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:kubernetes-upgrade-046243 Clientid:01:52:54:00:ac:ba:ac}
	I0501 03:31:34.604665   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | domain kubernetes-upgrade-046243 has defined IP address 192.168.72.134 and MAC address 52:54:00:ac:ba:ac in network mk-kubernetes-upgrade-046243
	I0501 03:31:34.604823   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetSSHHostname
	I0501 03:31:34.607060   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | domain kubernetes-upgrade-046243 has defined MAC address 52:54:00:ac:ba:ac in network mk-kubernetes-upgrade-046243
	I0501 03:31:34.607416   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:ba:ac", ip: ""} in network mk-kubernetes-upgrade-046243: {Iface:virbr3 ExpiryTime:2024-05-01 04:30:07 +0000 UTC Type:0 Mac:52:54:00:ac:ba:ac Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:kubernetes-upgrade-046243 Clientid:01:52:54:00:ac:ba:ac}
	I0501 03:31:34.607446   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | domain kubernetes-upgrade-046243 has defined IP address 192.168.72.134 and MAC address 52:54:00:ac:ba:ac in network mk-kubernetes-upgrade-046243
	I0501 03:31:34.607547   66261 provision.go:143] copyHostCerts
	I0501 03:31:34.607603   66261 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem, removing ...
	I0501 03:31:34.607615   66261 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem
	I0501 03:31:34.607677   66261 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem (1078 bytes)
	I0501 03:31:34.607808   66261 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem, removing ...
	I0501 03:31:34.607819   66261 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem
	I0501 03:31:34.607844   66261 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem (1123 bytes)
	I0501 03:31:34.607927   66261 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem, removing ...
	I0501 03:31:34.607948   66261 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem
	I0501 03:31:34.607983   66261 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem (1675 bytes)
	I0501 03:31:34.608064   66261 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-046243 san=[127.0.0.1 192.168.72.134 kubernetes-upgrade-046243 localhost minikube]
	I0501 03:31:34.695806   66261 provision.go:177] copyRemoteCerts
	I0501 03:31:34.695871   66261 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0501 03:31:34.695894   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetSSHHostname
	I0501 03:31:34.699014   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | domain kubernetes-upgrade-046243 has defined MAC address 52:54:00:ac:ba:ac in network mk-kubernetes-upgrade-046243
	I0501 03:31:34.699400   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:ba:ac", ip: ""} in network mk-kubernetes-upgrade-046243: {Iface:virbr3 ExpiryTime:2024-05-01 04:30:07 +0000 UTC Type:0 Mac:52:54:00:ac:ba:ac Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:kubernetes-upgrade-046243 Clientid:01:52:54:00:ac:ba:ac}
	I0501 03:31:34.699425   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | domain kubernetes-upgrade-046243 has defined IP address 192.168.72.134 and MAC address 52:54:00:ac:ba:ac in network mk-kubernetes-upgrade-046243
	I0501 03:31:34.699673   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetSSHPort
	I0501 03:31:34.699887   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetSSHKeyPath
	I0501 03:31:34.700057   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetSSHUsername
	I0501 03:31:34.700213   66261 sshutil.go:53] new ssh client: &{IP:192.168.72.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/kubernetes-upgrade-046243/id_rsa Username:docker}
	I0501 03:31:34.794179   66261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0501 03:31:34.835848   66261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0501 03:31:34.879430   66261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0501 03:31:34.917027   66261 provision.go:87] duration metric: took 316.004529ms to configureAuth
	I0501 03:31:34.917066   66261 buildroot.go:189] setting minikube options for container-runtime
	I0501 03:31:34.917288   66261 config.go:182] Loaded profile config "kubernetes-upgrade-046243": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 03:31:34.917410   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetSSHHostname
	I0501 03:31:34.919973   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | domain kubernetes-upgrade-046243 has defined MAC address 52:54:00:ac:ba:ac in network mk-kubernetes-upgrade-046243
	I0501 03:31:34.920469   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:ba:ac", ip: ""} in network mk-kubernetes-upgrade-046243: {Iface:virbr3 ExpiryTime:2024-05-01 04:30:07 +0000 UTC Type:0 Mac:52:54:00:ac:ba:ac Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:kubernetes-upgrade-046243 Clientid:01:52:54:00:ac:ba:ac}
	I0501 03:31:34.920492   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | domain kubernetes-upgrade-046243 has defined IP address 192.168.72.134 and MAC address 52:54:00:ac:ba:ac in network mk-kubernetes-upgrade-046243
	I0501 03:31:34.920680   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetSSHPort
	I0501 03:31:34.920923   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetSSHKeyPath
	I0501 03:31:34.921098   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetSSHKeyPath
	I0501 03:31:34.921276   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetSSHUsername
	I0501 03:31:34.921557   66261 main.go:141] libmachine: Using SSH client type: native
	I0501 03:31:34.921792   66261 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.134 22 <nil> <nil>}
	I0501 03:31:34.921822   66261 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0501 03:31:33.156709   66044 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0501 03:31:33.156737   66044 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18779-13391/.minikube CaCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18779-13391/.minikube}
	I0501 03:31:33.156761   66044 buildroot.go:174] setting up certificates
	I0501 03:31:33.156771   66044 provision.go:84] configureAuth start
	I0501 03:31:33.156796   66044 main.go:141] libmachine: (embed-certs-277128) Calling .GetMachineName
	I0501 03:31:33.157117   66044 main.go:141] libmachine: (embed-certs-277128) Calling .GetIP
	I0501 03:31:33.159847   66044 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:31:33.160248   66044 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:31:26 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:31:33.160281   66044 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:31:33.160409   66044 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHHostname
	I0501 03:31:33.162840   66044 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:31:33.163252   66044 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:31:26 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:31:33.163275   66044 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:31:33.163490   66044 provision.go:143] copyHostCerts
	I0501 03:31:33.163538   66044 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem, removing ...
	I0501 03:31:33.163551   66044 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem
	I0501 03:31:33.163612   66044 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem (1123 bytes)
	I0501 03:31:33.163711   66044 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem, removing ...
	I0501 03:31:33.163720   66044 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem
	I0501 03:31:33.163739   66044 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem (1675 bytes)
	I0501 03:31:33.163824   66044 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem, removing ...
	I0501 03:31:33.163840   66044 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem
	I0501 03:31:33.163869   66044 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem (1078 bytes)
	I0501 03:31:33.163953   66044 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem org=jenkins.embed-certs-277128 san=[127.0.0.1 192.168.50.218 embed-certs-277128 localhost minikube]
	I0501 03:31:33.448706   66044 provision.go:177] copyRemoteCerts
	I0501 03:31:33.448761   66044 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0501 03:31:33.448789   66044 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHHostname
	I0501 03:31:33.451877   66044 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:31:33.452271   66044 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:31:26 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:31:33.452338   66044 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:31:33.452439   66044 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHPort
	I0501 03:31:33.452626   66044 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:31:33.452841   66044 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHUsername
	I0501 03:31:33.452990   66044 sshutil.go:53] new ssh client: &{IP:192.168.50.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/embed-certs-277128/id_rsa Username:docker}
	I0501 03:31:33.543138   66044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0501 03:31:33.571961   66044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0501 03:31:33.604209   66044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0501 03:31:33.632771   66044 provision.go:87] duration metric: took 475.958791ms to configureAuth
	I0501 03:31:33.632807   66044 buildroot.go:189] setting minikube options for container-runtime
	I0501 03:31:33.632979   66044 config.go:182] Loaded profile config "embed-certs-277128": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 03:31:33.633042   66044 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHHostname
	I0501 03:31:33.635789   66044 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:31:33.636230   66044 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:31:26 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:31:33.636262   66044 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:31:33.636390   66044 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHPort
	I0501 03:31:33.636597   66044 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:31:33.636768   66044 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:31:33.636971   66044 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHUsername
	I0501 03:31:33.637205   66044 main.go:141] libmachine: Using SSH client type: native
	I0501 03:31:33.637406   66044 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.218 22 <nil> <nil>}
	I0501 03:31:33.637425   66044 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0501 03:31:33.920241   66044 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0501 03:31:33.920270   66044 main.go:141] libmachine: Checking connection to Docker...
	I0501 03:31:33.920283   66044 main.go:141] libmachine: (embed-certs-277128) Calling .GetURL
	I0501 03:31:33.921480   66044 main.go:141] libmachine: (embed-certs-277128) DBG | Using libvirt version 6000000
	I0501 03:31:33.924050   66044 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:31:33.924371   66044 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:31:26 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:31:33.924397   66044 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:31:33.924676   66044 main.go:141] libmachine: Docker is up and running!
	I0501 03:31:33.924692   66044 main.go:141] libmachine: Reticulating splines...
	I0501 03:31:33.924699   66044 client.go:171] duration metric: took 23.631730173s to LocalClient.Create
	I0501 03:31:33.924723   66044 start.go:167] duration metric: took 23.631807665s to libmachine.API.Create "embed-certs-277128"
	I0501 03:31:33.924733   66044 start.go:293] postStartSetup for "embed-certs-277128" (driver="kvm2")
	I0501 03:31:33.924742   66044 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0501 03:31:33.924757   66044 main.go:141] libmachine: (embed-certs-277128) Calling .DriverName
	I0501 03:31:33.924993   66044 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0501 03:31:33.925023   66044 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHHostname
	I0501 03:31:33.927375   66044 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:31:33.927763   66044 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:31:26 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:31:33.927788   66044 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:31:33.927931   66044 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHPort
	I0501 03:31:33.928084   66044 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:31:33.928259   66044 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHUsername
	I0501 03:31:33.928393   66044 sshutil.go:53] new ssh client: &{IP:192.168.50.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/embed-certs-277128/id_rsa Username:docker}
	I0501 03:31:34.014568   66044 ssh_runner.go:195] Run: cat /etc/os-release
	I0501 03:31:34.019339   66044 info.go:137] Remote host: Buildroot 2023.02.9
	I0501 03:31:34.019363   66044 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13391/.minikube/addons for local assets ...
	I0501 03:31:34.019422   66044 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13391/.minikube/files for local assets ...
	I0501 03:31:34.019500   66044 filesync.go:149] local asset: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem -> 207242.pem in /etc/ssl/certs
	I0501 03:31:34.019595   66044 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0501 03:31:34.031088   66044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem --> /etc/ssl/certs/207242.pem (1708 bytes)
	I0501 03:31:34.057766   66044 start.go:296] duration metric: took 133.021401ms for postStartSetup
	I0501 03:31:34.057810   66044 main.go:141] libmachine: (embed-certs-277128) Calling .GetConfigRaw
	I0501 03:31:34.058381   66044 main.go:141] libmachine: (embed-certs-277128) Calling .GetIP
	I0501 03:31:34.060953   66044 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:31:34.061328   66044 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:31:26 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:31:34.061358   66044 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:31:34.061598   66044 profile.go:143] Saving config to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/embed-certs-277128/config.json ...
	I0501 03:31:34.061784   66044 start.go:128] duration metric: took 23.789541458s to createHost
	I0501 03:31:34.061806   66044 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHHostname
	I0501 03:31:34.063940   66044 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:31:34.064203   66044 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:31:26 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:31:34.064239   66044 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:31:34.064362   66044 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHPort
	I0501 03:31:34.064540   66044 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:31:34.064692   66044 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:31:34.064845   66044 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHUsername
	I0501 03:31:34.065021   66044 main.go:141] libmachine: Using SSH client type: native
	I0501 03:31:34.065193   66044 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.218 22 <nil> <nil>}
	I0501 03:31:34.065207   66044 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0501 03:31:34.179988   66044 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714534294.155600787
	
	I0501 03:31:34.180024   66044 fix.go:216] guest clock: 1714534294.155600787
	I0501 03:31:34.180034   66044 fix.go:229] Guest: 2024-05-01 03:31:34.155600787 +0000 UTC Remote: 2024-05-01 03:31:34.061795974 +0000 UTC m=+76.018484109 (delta=93.804813ms)
	I0501 03:31:34.180073   66044 fix.go:200] guest clock delta is within tolerance: 93.804813ms
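The clock check above runs `date` inside the guest and diffs the result against the host's wall clock; the logged "date +%!s(MISSING).%!N(MISSING)" is presumably `date +%s.%N` with the format verbs eaten by the logger. With the values from this run:

date +%s.%N   # guest: 1714534294.155600787
# delta = guest - host = 1714534294.155600787 - 1714534294.061795974
#       = 0.093804813s (93.804813ms), within tolerance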
	I0501 03:31:34.180081   66044 start.go:83] releasing machines lock for "embed-certs-277128", held for 23.908010742s
	I0501 03:31:34.180129   66044 main.go:141] libmachine: (embed-certs-277128) Calling .DriverName
	I0501 03:31:34.180389   66044 main.go:141] libmachine: (embed-certs-277128) Calling .GetIP
	I0501 03:31:34.183164   66044 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:31:34.183556   66044 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:31:26 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:31:34.183602   66044 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:31:34.183861   66044 main.go:141] libmachine: (embed-certs-277128) Calling .DriverName
	I0501 03:31:34.184388   66044 main.go:141] libmachine: (embed-certs-277128) Calling .DriverName
	I0501 03:31:34.184614   66044 main.go:141] libmachine: (embed-certs-277128) Calling .DriverName
	I0501 03:31:34.184701   66044 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0501 03:31:34.184739   66044 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHHostname
	I0501 03:31:34.184853   66044 ssh_runner.go:195] Run: cat /version.json
	I0501 03:31:34.184877   66044 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHHostname
	I0501 03:31:34.187558   66044 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:31:34.187947   66044 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:31:34.188079   66044 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:31:26 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:31:34.188106   66044 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:31:34.188290   66044 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHPort
	I0501 03:31:34.188409   66044 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:31:26 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:31:34.188441   66044 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:31:34.188453   66044 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:31:34.188596   66044 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHUsername
	I0501 03:31:34.188633   66044 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHPort
	I0501 03:31:34.188819   66044 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:31:34.188808   66044 sshutil.go:53] new ssh client: &{IP:192.168.50.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/embed-certs-277128/id_rsa Username:docker}
	I0501 03:31:34.188973   66044 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHUsername
	I0501 03:31:34.189076   66044 sshutil.go:53] new ssh client: &{IP:192.168.50.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/embed-certs-277128/id_rsa Username:docker}
	I0501 03:31:34.272730   66044 ssh_runner.go:195] Run: systemctl --version
	I0501 03:31:34.299785   66044 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0501 03:31:34.471343   66044 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0501 03:31:34.480208   66044 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0501 03:31:34.480288   66044 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0501 03:31:34.500210   66044 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
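The `find` above moves any pre-existing bridge/podman CNI configs out of the way (renaming them to *.mk_disabled) so they do not conflict with the CNI config minikube lays down itself; here it disabled 87-podman-bridge.conflist. The "%!p(MISSING)" is presumably a swallowed `-printf "%p, "` verb, so a shell-quoted reconstruction of the command looks roughly like:

sudo find /etc/cni/net.d -maxdepth 1 -type f \
  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
  -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" \;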
	I0501 03:31:34.500240   66044 start.go:494] detecting cgroup driver to use...
	I0501 03:31:34.500313   66044 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0501 03:31:34.523161   66044 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 03:31:34.542584   66044 docker.go:217] disabling cri-docker service (if available) ...
	I0501 03:31:34.542663   66044 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0501 03:31:34.560500   66044 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0501 03:31:34.577021   66044 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0501 03:31:34.732620   66044 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0501 03:31:34.922428   66044 docker.go:233] disabling docker service ...
	I0501 03:31:34.922485   66044 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0501 03:31:34.943270   66044 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0501 03:31:34.958851   66044 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0501 03:31:35.129423   66044 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0501 03:31:35.280577   66044 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0501 03:31:35.299168   66044 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 03:31:35.323585   66044 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0501 03:31:35.323674   66044 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:31:35.337820   66044 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0501 03:31:35.337882   66044 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:31:35.349516   66044 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:31:35.361142   66044 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:31:35.372308   66044 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0501 03:31:35.384041   66044 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:31:35.395878   66044 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:31:35.415685   66044 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
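Taken together, the sed edits above pin the pause image to registry.k8s.io/pause:3.9, force the cgroupfs cgroup manager, put conmon into the "pod" cgroup, and add net.ipv4.ip_unprivileged_port_start=0 to default_sysctls. A quick way to confirm the drop-in ended up as intended (file path taken from the commands above; the surrounding TOML sections in the ISO image are not shown in this log):

sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
  /etc/crio/crio.conf.d/02-crio.conf
# expected, roughly:
#   pause_image = "registry.k8s.io/pause:3.9"
#   cgroup_manager = "cgroupfs"
#   conmon_cgroup = "pod"
#   "net.ipv4.ip_unprivileged_port_start=0",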
	I0501 03:31:35.427946   66044 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0501 03:31:35.437803   66044 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0501 03:31:35.437863   66044 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0501 03:31:35.453019   66044 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
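The sysctl probe fails with status 255 simply because br_netfilter is not loaded yet, so /proc/sys/net/bridge/ does not exist; the follow-up modprobe and the ip_forward write are the fallback. Checking the result by hand on the guest would look like:

sudo modprobe br_netfilter
sysctl net.bridge.bridge-nf-call-iptables   # resolves once the module is loaded
cat /proc/sys/net/ipv4/ip_forward           # should print 1 after the echo above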
	I0501 03:31:35.469117   66044 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:31:35.598343   66044 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0501 03:31:35.768275   66044 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0501 03:31:35.768335   66044 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0501 03:31:35.774314   66044 start.go:562] Will wait 60s for crictl version
	I0501 03:31:35.774360   66044 ssh_runner.go:195] Run: which crictl
	I0501 03:31:35.778652   66044 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0501 03:31:35.821243   66044 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0501 03:31:35.821336   66044 ssh_runner.go:195] Run: crio --version
	I0501 03:31:35.853072   66044 ssh_runner.go:195] Run: crio --version
	I0501 03:31:35.884018   66044 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0501 03:31:33.570271   66006 out.go:204]   - Generating certificates and keys ...
	I0501 03:31:33.570370   66006 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0501 03:31:33.570473   66006 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0501 03:31:33.829172   66006 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0501 03:31:33.982841   66006 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0501 03:31:34.334964   66006 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0501 03:31:34.576083   66006 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0501 03:31:35.048684   66006 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0501 03:31:35.049117   66006 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-892672] and IPs [192.168.39.144 127.0.0.1 ::1]
	I0501 03:31:35.198307   66006 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0501 03:31:35.198514   66006 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-892672] and IPs [192.168.39.144 127.0.0.1 ::1]
	I0501 03:31:35.398999   66006 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0501 03:31:35.570431   66006 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0501 03:31:35.645099   66006 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0501 03:31:35.645655   66006 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0501 03:31:35.792972   66006 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0501 03:31:35.993721   66006 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0501 03:31:36.255030   66006 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0501 03:31:36.467832   66006 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0501 03:31:36.620473   66006 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0501 03:31:36.621505   66006 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0501 03:31:36.624680   66006 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0501 03:31:36.638913   66006 out.go:204]   - Booting up control plane ...
	I0501 03:31:36.639039   66006 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0501 03:31:36.639111   66006 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0501 03:31:36.639244   66006 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0501 03:31:36.654208   66006 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0501 03:31:36.654363   66006 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0501 03:31:36.654442   66006 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0501 03:31:36.853699   66006 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0501 03:31:36.853823   66006 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0501 03:31:37.368379   66006 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 515.262365ms
	I0501 03:31:37.368499   66006 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0501 03:31:35.949222   65502 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0501 03:31:35.949620   65502 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0501 03:31:35.949913   65502 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
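For the 65502 run the kubelet health endpoint keeps refusing connections, which usually means nothing is listening on 10248 yet (the kubelet has not come up) rather than a network problem. The same probe kubeadm uses, plus the unit's journal, can be checked by hand on that node:

curl -sSL http://localhost:10248/healthz        # kubeadm's own health probe
sudo journalctl -u kubelet --no-pager -n 50     # recent kubelet logs, if the unit has started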
	I0501 03:31:35.885311   66044 main.go:141] libmachine: (embed-certs-277128) Calling .GetIP
	I0501 03:31:35.887975   66044 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:31:35.888340   66044 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:31:26 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:31:35.888366   66044 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:31:35.888593   66044 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0501 03:31:35.893790   66044 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0501 03:31:35.910777   66044 kubeadm.go:877] updating cluster {Name:embed-certs-277128 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.0 ClusterName:embed-certs-277128 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.218 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort
:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0501 03:31:35.910868   66044 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0501 03:31:35.910913   66044 ssh_runner.go:195] Run: sudo crictl images --output json
	I0501 03:31:35.951818   66044 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0501 03:31:35.951907   66044 ssh_runner.go:195] Run: which lz4
	I0501 03:31:35.956425   66044 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0501 03:31:35.960901   66044 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0501 03:31:35.960931   66044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0501 03:31:37.698116   66044 crio.go:462] duration metric: took 1.741727445s to copy over tarball
	I0501 03:31:37.698205   66044 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0501 03:31:40.950624   65502 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0501 03:31:40.950904   65502 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0501 03:31:40.429639   66044 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.731402047s)
	I0501 03:31:40.429671   66044 crio.go:469] duration metric: took 2.731524299s to extract the tarball
	I0501 03:31:40.429681   66044 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0501 03:31:40.472234   66044 ssh_runner.go:195] Run: sudo crictl images --output json
	I0501 03:31:40.534625   66044 crio.go:514] all images are preloaded for cri-o runtime.
	I0501 03:31:40.534655   66044 cache_images.go:84] Images are preloaded, skipping loading
	I0501 03:31:40.534664   66044 kubeadm.go:928] updating node { 192.168.50.218 8443 v1.30.0 crio true true} ...
	I0501 03:31:40.534791   66044 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-277128 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.218
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:embed-certs-277128 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0501 03:31:40.534874   66044 ssh_runner.go:195] Run: crio config
	I0501 03:31:40.594372   66044 cni.go:84] Creating CNI manager for ""
	I0501 03:31:40.594417   66044 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0501 03:31:40.594434   66044 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0501 03:31:40.594462   66044 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.218 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-277128 NodeName:embed-certs-277128 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.218"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.218 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0501 03:31:40.594648   66044 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.218
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-277128"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.218
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.218"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0501 03:31:40.594724   66044 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0501 03:31:40.606642   66044 binaries.go:44] Found k8s binaries, skipping transfer
	I0501 03:31:40.606718   66044 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0501 03:31:40.617857   66044 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0501 03:31:40.639836   66044 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0501 03:31:40.661800   66044 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
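The 2162-byte payload written to /var/tmp/minikube/kubeadm.yaml.new is the rendered kubeadm config shown above; it is promoted to kubeadm.yaml just before `kubeadm init` runs later in this log. A config like this can also be sanity-checked by hand with kubeadm's dry-run mode, which renders the manifests without touching the node (hypothetical invocation using the paths from this run):

sudo /var/lib/minikube/binaries/v1.30.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run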
	I0501 03:31:40.683999   66044 ssh_runner.go:195] Run: grep 192.168.50.218	control-plane.minikube.internal$ /etc/hosts
	I0501 03:31:40.692843   66044 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.218	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0501 03:31:40.711673   66044 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:31:40.884912   66044 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 03:31:40.908016   66044 certs.go:68] Setting up /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/embed-certs-277128 for IP: 192.168.50.218
	I0501 03:31:40.908044   66044 certs.go:194] generating shared ca certs ...
	I0501 03:31:40.908066   66044 certs.go:226] acquiring lock for ca certs: {Name:mk85a75d14c00780d4bf09cd9bdaf0f96f1b8fce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:31:40.908256   66044 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key
	I0501 03:31:40.908325   66044 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key
	I0501 03:31:40.908341   66044 certs.go:256] generating profile certs ...
	I0501 03:31:40.908419   66044 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/embed-certs-277128/client.key
	I0501 03:31:40.908439   66044 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/embed-certs-277128/client.crt with IP's: []
	I0501 03:31:41.111343   66044 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/embed-certs-277128/client.crt ...
	I0501 03:31:41.111388   66044 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/embed-certs-277128/client.crt: {Name:mk49476428436386cf933896336f0fae6e4a9fa4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:31:41.111619   66044 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/embed-certs-277128/client.key ...
	I0501 03:31:41.111639   66044 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/embed-certs-277128/client.key: {Name:mk043d54bd4030b5cc78d86fd39f59226dc35e4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:31:41.111760   66044 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/embed-certs-277128/apiserver.key.65584253
	I0501 03:31:41.111780   66044 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/embed-certs-277128/apiserver.crt.65584253 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.218]
	I0501 03:31:41.201870   66044 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/embed-certs-277128/apiserver.crt.65584253 ...
	I0501 03:31:41.201909   66044 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/embed-certs-277128/apiserver.crt.65584253: {Name:mk7900b364428a294ea20aba624bddc3c891fe8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:31:41.202073   66044 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/embed-certs-277128/apiserver.key.65584253 ...
	I0501 03:31:41.202088   66044 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/embed-certs-277128/apiserver.key.65584253: {Name:mk6eda5add973625dae64fc5fe17902e9b9d88e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:31:41.202163   66044 certs.go:381] copying /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/embed-certs-277128/apiserver.crt.65584253 -> /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/embed-certs-277128/apiserver.crt
	I0501 03:31:41.202282   66044 certs.go:385] copying /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/embed-certs-277128/apiserver.key.65584253 -> /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/embed-certs-277128/apiserver.key
	I0501 03:31:41.202353   66044 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/embed-certs-277128/proxy-client.key
	I0501 03:31:41.202369   66044 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/embed-certs-277128/proxy-client.crt with IP's: []
	I0501 03:31:41.483304   66044 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/embed-certs-277128/proxy-client.crt ...
	I0501 03:31:41.483333   66044 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/embed-certs-277128/proxy-client.crt: {Name:mk18ac5cb08af324919c30d97ad2fbe4040731c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:31:41.483485   66044 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/embed-certs-277128/proxy-client.key ...
	I0501 03:31:41.483497   66044 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/embed-certs-277128/proxy-client.key: {Name:mk3639f48a2c8bbf6433944fc9592b24c0592aac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
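At this point the profile has three key pairs: the "minikube-user" client cert used in the kubeconfig, the apiserver serving cert signed for the SANs logged above (10.96.0.1, 127.0.0.1, 10.0.0.1 and the node IP 192.168.50.218), and the "aggregator" front-proxy client cert. The SAN list on the generated serving cert can be inspected directly with openssl:

openssl x509 -noout -text \
  -in /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/embed-certs-277128/apiserver.crt \
  | grep -A1 'Subject Alternative Name'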
	I0501 03:31:41.483661   66044 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem (1338 bytes)
	W0501 03:31:41.483704   66044 certs.go:480] ignoring /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724_empty.pem, impossibly tiny 0 bytes
	I0501 03:31:41.483714   66044 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem (1679 bytes)
	I0501 03:31:41.483734   66044 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem (1078 bytes)
	I0501 03:31:41.483754   66044 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem (1123 bytes)
	I0501 03:31:41.483783   66044 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem (1675 bytes)
	I0501 03:31:41.483820   66044 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem (1708 bytes)
	I0501 03:31:41.484449   66044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0501 03:31:41.519342   66044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0501 03:31:41.552053   66044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0501 03:31:41.584468   66044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0501 03:31:41.618099   66044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/embed-certs-277128/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0501 03:31:41.644757   66044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/embed-certs-277128/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0501 03:31:41.674610   66044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/embed-certs-277128/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0501 03:31:41.718460   66044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/embed-certs-277128/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0501 03:31:41.765478   66044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem --> /usr/share/ca-certificates/20724.pem (1338 bytes)
	I0501 03:31:41.804566   66044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem --> /usr/share/ca-certificates/207242.pem (1708 bytes)
	I0501 03:31:41.833507   66044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0501 03:31:41.865709   66044 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0501 03:31:41.887742   66044 ssh_runner.go:195] Run: openssl version
	I0501 03:31:41.894556   66044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20724.pem && ln -fs /usr/share/ca-certificates/20724.pem /etc/ssl/certs/20724.pem"
	I0501 03:31:41.909697   66044 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20724.pem
	I0501 03:31:41.915319   66044 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  1 02:20 /usr/share/ca-certificates/20724.pem
	I0501 03:31:41.915390   66044 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20724.pem
	I0501 03:31:41.924455   66044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20724.pem /etc/ssl/certs/51391683.0"
	I0501 03:31:41.938152   66044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/207242.pem && ln -fs /usr/share/ca-certificates/207242.pem /etc/ssl/certs/207242.pem"
	I0501 03:31:41.958789   66044 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/207242.pem
	I0501 03:31:41.964519   66044 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  1 02:20 /usr/share/ca-certificates/207242.pem
	I0501 03:31:41.964588   66044 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/207242.pem
	I0501 03:31:41.972165   66044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/207242.pem /etc/ssl/certs/3ec20f2e.0"
	I0501 03:31:41.986732   66044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0501 03:31:42.003665   66044 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:31:42.009105   66044 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  1 02:08 /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:31:42.009161   66044 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:31:42.015755   66044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
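Each CA installed above gets two entries: the PEM itself under /usr/share/ca-certificates and a symlink in /etc/ssl/certs named after the OpenSSL subject hash, which is what the `openssl x509 -hash -noout` calls compute (51391683, 3ec20f2e and b5213941 in this run). The same pattern, written out for one certificate:

CERT=/usr/share/ca-certificates/minikubeCA.pem
HASH=$(openssl x509 -hash -noout -in "$CERT")   # e.g. b5213941 for this CA
sudo ln -fs "$CERT" "/etc/ssl/certs/$HASH.0"    # .0 suffix = first cert with this subject hash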
	I0501 03:31:42.030095   66044 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0501 03:31:42.035036   66044 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0501 03:31:42.035089   66044 kubeadm.go:391] StartCluster: {Name:embed-certs-277128 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.0 ClusterName:embed-certs-277128 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.218 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 03:31:42.035185   66044 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0501 03:31:42.035234   66044 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0501 03:31:42.080694   66044 cri.go:89] found id: ""
	I0501 03:31:42.080757   66044 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0501 03:31:42.094633   66044 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0501 03:31:42.108291   66044 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0501 03:31:42.121704   66044 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0501 03:31:42.121732   66044 kubeadm.go:156] found existing configuration files:
	
	I0501 03:31:42.121782   66044 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0501 03:31:42.134779   66044 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0501 03:31:42.134870   66044 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0501 03:31:42.147416   66044 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0501 03:31:42.160785   66044 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0501 03:31:42.160851   66044 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0501 03:31:42.173324   66044 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0501 03:31:42.185742   66044 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0501 03:31:42.185807   66044 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0501 03:31:42.198623   66044 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0501 03:31:42.210603   66044 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0501 03:31:42.210678   66044 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
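	The cleanup pass above (kubeadm.go:154-162) checks each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes any file that does not contain it, so kubeadm can regenerate it. A minimal sketch of that loop, assuming a plain local shell via os/exec rather than minikube's ssh_runner:

```go
package main

import (
	"fmt"
	"os/exec"
)

// cleanupStaleConfigs mirrors the loop in the log: for each kubeconfig,
// check whether it references the expected endpoint; if the grep fails
// (file missing or endpoint absent), remove the file so kubeadm can
// regenerate it. Paths and the endpoint come from the log; running the
// commands locally instead of over SSH is an assumption of this sketch.
func cleanupStaleConfigs() {
	endpoint := "https://control-plane.minikube.internal:8443"
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, conf := range confs {
		if err := exec.Command("sudo", "grep", endpoint, conf).Run(); err != nil {
			fmt.Printf("%q not found in %s - removing\n", endpoint, conf)
			// Ignore the removal error, just as the log does not abort on it.
			_ = exec.Command("sudo", "rm", "-f", conf).Run()
		}
	}
}

func main() { cleanupStaleConfigs() }
```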
	I0501 03:31:42.223307   66044 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0501 03:31:42.523216   66044 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0501 03:31:44.370484   66006 kubeadm.go:309] [api-check] The API server is healthy after 7.002593965s
	I0501 03:31:44.502867   66006 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0501 03:31:44.992958   66006 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0501 03:31:45.059504   66006 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0501 03:31:45.059771   66006 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-892672 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0501 03:31:45.077937   66006 kubeadm.go:309] [bootstrap-token] Using token: 96j2on.4foj0u82daa56sa0
	I0501 03:31:41.563949   66261 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0501 03:31:41.563980   66261 machine.go:97] duration metric: took 7.357273878s to provisionDockerMachine
	I0501 03:31:41.563993   66261 start.go:293] postStartSetup for "kubernetes-upgrade-046243" (driver="kvm2")
	I0501 03:31:41.564007   66261 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0501 03:31:41.564028   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .DriverName
	I0501 03:31:41.564403   66261 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0501 03:31:41.564438   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetSSHHostname
	I0501 03:31:41.567535   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | domain kubernetes-upgrade-046243 has defined MAC address 52:54:00:ac:ba:ac in network mk-kubernetes-upgrade-046243
	I0501 03:31:41.567874   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:ba:ac", ip: ""} in network mk-kubernetes-upgrade-046243: {Iface:virbr3 ExpiryTime:2024-05-01 04:30:07 +0000 UTC Type:0 Mac:52:54:00:ac:ba:ac Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:kubernetes-upgrade-046243 Clientid:01:52:54:00:ac:ba:ac}
	I0501 03:31:41.567903   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | domain kubernetes-upgrade-046243 has defined IP address 192.168.72.134 and MAC address 52:54:00:ac:ba:ac in network mk-kubernetes-upgrade-046243
	I0501 03:31:41.568092   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetSSHPort
	I0501 03:31:41.568309   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetSSHKeyPath
	I0501 03:31:41.568542   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetSSHUsername
	I0501 03:31:41.568695   66261 sshutil.go:53] new ssh client: &{IP:192.168.72.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/kubernetes-upgrade-046243/id_rsa Username:docker}
	I0501 03:31:41.657542   66261 ssh_runner.go:195] Run: cat /etc/os-release
	I0501 03:31:41.662666   66261 info.go:137] Remote host: Buildroot 2023.02.9
	I0501 03:31:41.662697   66261 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13391/.minikube/addons for local assets ...
	I0501 03:31:41.662780   66261 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13391/.minikube/files for local assets ...
	I0501 03:31:41.662880   66261 filesync.go:149] local asset: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem -> 207242.pem in /etc/ssl/certs
	I0501 03:31:41.663024   66261 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0501 03:31:41.673861   66261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem --> /etc/ssl/certs/207242.pem (1708 bytes)
	I0501 03:31:41.706858   66261 start.go:296] duration metric: took 142.84948ms for postStartSetup
	I0501 03:31:41.706899   66261 fix.go:56] duration metric: took 7.526643531s for fixHost
	I0501 03:31:41.706924   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetSSHHostname
	I0501 03:31:41.709970   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | domain kubernetes-upgrade-046243 has defined MAC address 52:54:00:ac:ba:ac in network mk-kubernetes-upgrade-046243
	I0501 03:31:41.710443   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:ba:ac", ip: ""} in network mk-kubernetes-upgrade-046243: {Iface:virbr3 ExpiryTime:2024-05-01 04:30:07 +0000 UTC Type:0 Mac:52:54:00:ac:ba:ac Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:kubernetes-upgrade-046243 Clientid:01:52:54:00:ac:ba:ac}
	I0501 03:31:41.710476   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | domain kubernetes-upgrade-046243 has defined IP address 192.168.72.134 and MAC address 52:54:00:ac:ba:ac in network mk-kubernetes-upgrade-046243
	I0501 03:31:41.710628   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetSSHPort
	I0501 03:31:41.710800   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetSSHKeyPath
	I0501 03:31:41.710994   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetSSHKeyPath
	I0501 03:31:41.711183   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetSSHUsername
	I0501 03:31:41.711428   66261 main.go:141] libmachine: Using SSH client type: native
	I0501 03:31:41.711648   66261 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.134 22 <nil> <nil>}
	I0501 03:31:41.711663   66261 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0501 03:31:41.823844   66261 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714534301.798354390
	
	I0501 03:31:41.823867   66261 fix.go:216] guest clock: 1714534301.798354390
	I0501 03:31:41.823883   66261 fix.go:229] Guest: 2024-05-01 03:31:41.79835439 +0000 UTC Remote: 2024-05-01 03:31:41.706904391 +0000 UTC m=+66.013922851 (delta=91.449999ms)
	I0501 03:31:41.823925   66261 fix.go:200] guest clock delta is within tolerance: 91.449999ms
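	Here fix.go reads the guest clock over SSH (`date +%s.%N`), compares it with the host clock, and only resyncs when the drift exceeds a tolerance; the run above measured a 91.449999ms delta. A rough sketch of that comparison, where the 1s tolerance and the float parsing are illustrative choices, not necessarily what minikube itself uses:

```go
package main

import (
	"fmt"
	"strconv"
	"time"
)

// guestClockDelta parses the "seconds.nanoseconds" string produced by
// `date +%s.%N` on the guest and returns how far it drifts from the
// host clock. Float parsing loses some nanosecond precision, which is
// fine for a sketch.
func guestClockDelta(guestOutput string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestOutput, 64)
	if err != nil {
		return 0, fmt.Errorf("parsing guest clock %q: %w", guestOutput, err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(host), nil
}

func main() {
	delta, err := guestClockDelta("1714534301.798354390", time.Now())
	if err != nil {
		panic(err)
	}
	const tolerance = time.Second // illustrative threshold
	if delta > -tolerance && delta < tolerance {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
	} else {
		fmt.Printf("guest clock drifted by %v, would resync\n", delta)
	}
}
```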
	I0501 03:31:41.823932   66261 start.go:83] releasing machines lock for "kubernetes-upgrade-046243", held for 7.643711184s
	I0501 03:31:41.823956   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .DriverName
	I0501 03:31:41.824205   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetIP
	I0501 03:31:41.826906   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | domain kubernetes-upgrade-046243 has defined MAC address 52:54:00:ac:ba:ac in network mk-kubernetes-upgrade-046243
	I0501 03:31:41.827285   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:ba:ac", ip: ""} in network mk-kubernetes-upgrade-046243: {Iface:virbr3 ExpiryTime:2024-05-01 04:30:07 +0000 UTC Type:0 Mac:52:54:00:ac:ba:ac Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:kubernetes-upgrade-046243 Clientid:01:52:54:00:ac:ba:ac}
	I0501 03:31:41.827322   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | domain kubernetes-upgrade-046243 has defined IP address 192.168.72.134 and MAC address 52:54:00:ac:ba:ac in network mk-kubernetes-upgrade-046243
	I0501 03:31:41.827511   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .DriverName
	I0501 03:31:41.828139   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .DriverName
	I0501 03:31:41.828306   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .DriverName
	I0501 03:31:41.828425   66261 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0501 03:31:41.828474   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetSSHHostname
	I0501 03:31:41.828499   66261 ssh_runner.go:195] Run: cat /version.json
	I0501 03:31:41.828526   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetSSHHostname
	I0501 03:31:41.831383   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | domain kubernetes-upgrade-046243 has defined MAC address 52:54:00:ac:ba:ac in network mk-kubernetes-upgrade-046243
	I0501 03:31:41.831510   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | domain kubernetes-upgrade-046243 has defined MAC address 52:54:00:ac:ba:ac in network mk-kubernetes-upgrade-046243
	I0501 03:31:41.831759   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:ba:ac", ip: ""} in network mk-kubernetes-upgrade-046243: {Iface:virbr3 ExpiryTime:2024-05-01 04:30:07 +0000 UTC Type:0 Mac:52:54:00:ac:ba:ac Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:kubernetes-upgrade-046243 Clientid:01:52:54:00:ac:ba:ac}
	I0501 03:31:41.831792   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:ba:ac", ip: ""} in network mk-kubernetes-upgrade-046243: {Iface:virbr3 ExpiryTime:2024-05-01 04:30:07 +0000 UTC Type:0 Mac:52:54:00:ac:ba:ac Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:kubernetes-upgrade-046243 Clientid:01:52:54:00:ac:ba:ac}
	I0501 03:31:41.831813   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | domain kubernetes-upgrade-046243 has defined IP address 192.168.72.134 and MAC address 52:54:00:ac:ba:ac in network mk-kubernetes-upgrade-046243
	I0501 03:31:41.831827   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | domain kubernetes-upgrade-046243 has defined IP address 192.168.72.134 and MAC address 52:54:00:ac:ba:ac in network mk-kubernetes-upgrade-046243
	I0501 03:31:41.831956   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetSSHPort
	I0501 03:31:41.831975   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetSSHPort
	I0501 03:31:41.832165   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetSSHKeyPath
	I0501 03:31:41.832178   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetSSHKeyPath
	I0501 03:31:41.832372   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetSSHUsername
	I0501 03:31:41.832411   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetSSHUsername
	I0501 03:31:41.832560   66261 sshutil.go:53] new ssh client: &{IP:192.168.72.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/kubernetes-upgrade-046243/id_rsa Username:docker}
	I0501 03:31:41.832550   66261 sshutil.go:53] new ssh client: &{IP:192.168.72.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/kubernetes-upgrade-046243/id_rsa Username:docker}
	I0501 03:31:41.916834   66261 ssh_runner.go:195] Run: systemctl --version
	I0501 03:31:41.941569   66261 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0501 03:31:42.109488   66261 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0501 03:31:42.117522   66261 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0501 03:31:42.117594   66261 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0501 03:31:42.149211   66261 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0501 03:31:42.149239   66261 start.go:494] detecting cgroup driver to use...
	I0501 03:31:42.149309   66261 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0501 03:31:42.234474   66261 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 03:31:42.275982   66261 docker.go:217] disabling cri-docker service (if available) ...
	I0501 03:31:42.276038   66261 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0501 03:31:42.322431   66261 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0501 03:31:42.345004   66261 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0501 03:31:42.520907   66261 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0501 03:31:42.737443   66261 docker.go:233] disabling docker service ...
	I0501 03:31:42.737517   66261 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0501 03:31:42.760238   66261 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0501 03:31:42.776957   66261 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0501 03:31:42.985484   66261 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0501 03:31:43.313197   66261 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0501 03:31:43.352953   66261 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 03:31:43.522435   66261 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0501 03:31:43.522577   66261 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:31:43.712373   66261 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0501 03:31:43.712437   66261 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:31:43.883029   66261 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:31:44.012403   66261 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:31:44.062599   66261 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0501 03:31:44.135612   66261 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:31:44.169489   66261 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:31:44.195443   66261 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:31:44.224445   66261 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0501 03:31:44.240718   66261 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0501 03:31:44.258512   66261 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:31:44.575786   66261 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0501 03:31:45.745222   66261 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.169391797s)
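	The block above edits /etc/crio/crio.conf.d/02-crio.conf in place with sed (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl) and then restarts CRI-O. A condensed sketch of the core edits, assuming local shell access instead of minikube's ssh_runner and omitting the sysctl-related lines:

```go
package main

import (
	"log"
	"os/exec"
)

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	// The sed expressions mirror the ones in the log; the pause image and
	// cgroup driver are the values minikube chose for this particular run.
	cmds := []string{
		`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' ` + conf,
		`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' ` + conf,
		`sudo sed -i '/conmon_cgroup = .*/d' ` + conf,
		`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' ` + conf,
		`sudo systemctl daemon-reload`,
		`sudo systemctl restart crio`,
	}
	for _, c := range cmds {
		if out, err := exec.Command("sh", "-c", c).CombinedOutput(); err != nil {
			log.Fatalf("%s: %v\n%s", c, err, out)
		}
	}
}
```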
	I0501 03:31:45.745256   66261 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0501 03:31:45.745310   66261 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0501 03:31:45.751816   66261 start.go:562] Will wait 60s for crictl version
	I0501 03:31:45.751872   66261 ssh_runner.go:195] Run: which crictl
	I0501 03:31:45.756973   66261 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0501 03:31:45.797463   66261 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0501 03:31:45.797550   66261 ssh_runner.go:195] Run: crio --version
	I0501 03:31:45.835833   66261 ssh_runner.go:195] Run: crio --version
	I0501 03:31:45.874892   66261 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
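	The runtime probe just above waits for the CRI socket and then reads the key/value block that `crictl version` prints (Version, RuntimeName, RuntimeVersion, RuntimeApiVersion). A trivial parser for that output, written against the exact format shown in the log rather than minikube's own parsing code:

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseCrictlVersion turns the "Key:  value" lines emitted by
// `crictl version` into a map, e.g. RuntimeVersion -> 1.29.1.
func parseCrictlVersion(output string) map[string]string {
	fields := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(output))
	for sc.Scan() {
		key, value, ok := strings.Cut(sc.Text(), ":")
		if !ok {
			continue
		}
		fields[strings.TrimSpace(key)] = strings.TrimSpace(value)
	}
	return fields
}

func main() {
	sample := "Version:  0.1.0\nRuntimeName:  cri-o\nRuntimeVersion:  1.29.1\nRuntimeApiVersion:  v1\n"
	v := parseCrictlVersion(sample)
	fmt.Printf("runtime %s %s (API %s)\n", v["RuntimeName"], v["RuntimeVersion"], v["RuntimeApiVersion"])
}
```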
	I0501 03:31:45.079247   66006 out.go:204]   - Configuring RBAC rules ...
	I0501 03:31:45.079373   66006 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0501 03:31:45.090345   66006 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0501 03:31:45.102188   66006 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0501 03:31:45.112137   66006 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0501 03:31:45.117377   66006 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0501 03:31:45.123420   66006 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0501 03:31:45.143764   66006 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0501 03:31:45.486958   66006 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0501 03:31:45.959970   66006 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0501 03:31:45.960681   66006 kubeadm.go:309] 
	I0501 03:31:45.960766   66006 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0501 03:31:45.960798   66006 kubeadm.go:309] 
	I0501 03:31:45.960939   66006 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0501 03:31:45.960953   66006 kubeadm.go:309] 
	I0501 03:31:45.960986   66006 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0501 03:31:45.961074   66006 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0501 03:31:45.961165   66006 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0501 03:31:45.961184   66006 kubeadm.go:309] 
	I0501 03:31:45.961266   66006 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0501 03:31:45.961283   66006 kubeadm.go:309] 
	I0501 03:31:45.961334   66006 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0501 03:31:45.961342   66006 kubeadm.go:309] 
	I0501 03:31:45.961399   66006 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0501 03:31:45.961489   66006 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0501 03:31:45.961575   66006 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0501 03:31:45.961585   66006 kubeadm.go:309] 
	I0501 03:31:45.961683   66006 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0501 03:31:45.961777   66006 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0501 03:31:45.961805   66006 kubeadm.go:309] 
	I0501 03:31:45.961969   66006 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 96j2on.4foj0u82daa56sa0 \
	I0501 03:31:45.962125   66006 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:bd94cc6acfb95fe36f70b12c6a1f2980601b8235e7c4dea65cebdf35eb514754 \
	I0501 03:31:45.962157   66006 kubeadm.go:309] 	--control-plane 
	I0501 03:31:45.962164   66006 kubeadm.go:309] 
	I0501 03:31:45.962284   66006 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0501 03:31:45.962296   66006 kubeadm.go:309] 
	I0501 03:31:45.962416   66006 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 96j2on.4foj0u82daa56sa0 \
	I0501 03:31:45.962557   66006 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:bd94cc6acfb95fe36f70b12c6a1f2980601b8235e7c4dea65cebdf35eb514754 
	I0501 03:31:45.964365   66006 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0501 03:31:45.964393   66006 cni.go:84] Creating CNI manager for ""
	I0501 03:31:45.964408   66006 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0501 03:31:45.966155   66006 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0501 03:31:45.967632   66006 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0501 03:31:45.984713   66006 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0501 03:31:46.012127   66006 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0501 03:31:46.012203   66006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:31:46.012223   66006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-892672 minikube.k8s.io/updated_at=2024_05_01T03_31_46_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e minikube.k8s.io/name=no-preload-892672 minikube.k8s.io/primary=true
	I0501 03:31:46.175455   66006 ops.go:34] apiserver oom_adj: -16
	I0501 03:31:46.175599   66006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:31:46.676585   66006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:31:47.176343   66006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:31:45.876855   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .GetIP
	I0501 03:31:45.880045   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | domain kubernetes-upgrade-046243 has defined MAC address 52:54:00:ac:ba:ac in network mk-kubernetes-upgrade-046243
	I0501 03:31:45.880571   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:ba:ac", ip: ""} in network mk-kubernetes-upgrade-046243: {Iface:virbr3 ExpiryTime:2024-05-01 04:30:07 +0000 UTC Type:0 Mac:52:54:00:ac:ba:ac Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:kubernetes-upgrade-046243 Clientid:01:52:54:00:ac:ba:ac}
	I0501 03:31:45.880601   66261 main.go:141] libmachine: (kubernetes-upgrade-046243) DBG | domain kubernetes-upgrade-046243 has defined IP address 192.168.72.134 and MAC address 52:54:00:ac:ba:ac in network mk-kubernetes-upgrade-046243
	I0501 03:31:45.880853   66261 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0501 03:31:45.886312   66261 kubeadm.go:877] updating cluster {Name:kubernetes-upgrade-046243 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.0 ClusterName:kubernetes-upgrade-046243 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.134 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0501 03:31:45.886426   66261 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0501 03:31:45.886480   66261 ssh_runner.go:195] Run: sudo crictl images --output json
	I0501 03:31:45.953936   66261 crio.go:514] all images are preloaded for cri-o runtime.
	I0501 03:31:45.953967   66261 crio.go:433] Images already preloaded, skipping extraction
	I0501 03:31:45.954022   66261 ssh_runner.go:195] Run: sudo crictl images --output json
	I0501 03:31:45.998870   66261 crio.go:514] all images are preloaded for cri-o runtime.
	I0501 03:31:45.998901   66261 cache_images.go:84] Images are preloaded, skipping loading
	I0501 03:31:45.998921   66261 kubeadm.go:928] updating node { 192.168.72.134 8443 v1.30.0 crio true true} ...
	I0501 03:31:45.999088   66261 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-046243 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.134
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:kubernetes-upgrade-046243 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
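	The kubelet drop-in above is rendered from the node's settings (Kubernetes version, hostname override, node IP). A small text/template sketch that produces an equivalent unit; the template text is reconstructed from the log output, not copied from minikube's source, and the field names are chosen for this sketch:

```go
package main

import (
	"os"
	"text/template"
)

// kubeletUnit renders a systemd drop-in like the one shown in the log.
const kubeletUnit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	tmpl := template.Must(template.New("kubelet").Parse(kubeletUnit))
	data := struct{ KubernetesVersion, NodeName, NodeIP string }{
		KubernetesVersion: "v1.30.0",
		NodeName:          "kubernetes-upgrade-046243",
		NodeIP:            "192.168.72.134",
	}
	if err := tmpl.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}
```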
	I0501 03:31:45.999193   66261 ssh_runner.go:195] Run: crio config
	I0501 03:31:46.064969   66261 cni.go:84] Creating CNI manager for ""
	I0501 03:31:46.064997   66261 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0501 03:31:46.065011   66261 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0501 03:31:46.065046   66261 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.134 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-046243 NodeName:kubernetes-upgrade-046243 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.134"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.134 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0501 03:31:46.065264   66261 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.134
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-046243"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.134
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.134"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
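	The rendered kubeadm config above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) written to /var/tmp/minikube/kubeadm.yaml.new. One quick way to sanity-check it before handing it to `kubeadm init` is to decode the stream document by document; the snippet below uses gopkg.in/yaml.v3 and only inspects `kind` and `kubernetesVersion`, which is an illustrative check rather than anything minikube itself performs:

```go
package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err != nil {
			break // io.EOF ends the multi-document stream
		}
		// Print each document's kind and, where present, kubernetesVersion,
		// so a typo in the rendered config is obvious before kubeadm runs.
		fmt.Printf("kind=%v", doc["kind"])
		if v, ok := doc["kubernetesVersion"]; ok {
			fmt.Printf(" kubernetesVersion=%v", v)
		}
		fmt.Println()
	}
}
```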
	
	I0501 03:31:46.065334   66261 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0501 03:31:46.082877   66261 binaries.go:44] Found k8s binaries, skipping transfer
	I0501 03:31:46.082983   66261 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0501 03:31:46.100097   66261 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (325 bytes)
	I0501 03:31:46.125435   66261 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0501 03:31:46.147790   66261 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0501 03:31:46.172984   66261 ssh_runner.go:195] Run: grep 192.168.72.134	control-plane.minikube.internal$ /etc/hosts
	I0501 03:31:46.178885   66261 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:31:46.347068   66261 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 03:31:46.439175   66261 certs.go:68] Setting up /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/kubernetes-upgrade-046243 for IP: 192.168.72.134
	I0501 03:31:46.439207   66261 certs.go:194] generating shared ca certs ...
	I0501 03:31:46.439228   66261 certs.go:226] acquiring lock for ca certs: {Name:mk85a75d14c00780d4bf09cd9bdaf0f96f1b8fce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:31:46.439402   66261 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key
	I0501 03:31:46.439470   66261 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key
	I0501 03:31:46.439485   66261 certs.go:256] generating profile certs ...
	I0501 03:31:46.439641   66261 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/kubernetes-upgrade-046243/client.key
	I0501 03:31:46.439718   66261 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/kubernetes-upgrade-046243/apiserver.key.cae910e0
	I0501 03:31:46.439772   66261 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/kubernetes-upgrade-046243/proxy-client.key
	I0501 03:31:46.439907   66261 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem (1338 bytes)
	W0501 03:31:46.439961   66261 certs.go:480] ignoring /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724_empty.pem, impossibly tiny 0 bytes
	I0501 03:31:46.439971   66261 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem (1679 bytes)
	I0501 03:31:46.440013   66261 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem (1078 bytes)
	I0501 03:31:46.440046   66261 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem (1123 bytes)
	I0501 03:31:46.440075   66261 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem (1675 bytes)
	I0501 03:31:46.440130   66261 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem (1708 bytes)
	I0501 03:31:46.441008   66261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0501 03:31:46.705518   66261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0501 03:31:46.979432   66261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0501 03:31:47.078746   66261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0501 03:31:47.159751   66261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/kubernetes-upgrade-046243/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0501 03:31:47.223607   66261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/kubernetes-upgrade-046243/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0501 03:31:47.312314   66261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/kubernetes-upgrade-046243/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0501 03:31:47.377367   66261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/kubernetes-upgrade-046243/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0501 03:31:47.459324   66261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0501 03:31:47.526101   66261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem --> /usr/share/ca-certificates/20724.pem (1338 bytes)
	I0501 03:31:47.582600   66261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem --> /usr/share/ca-certificates/207242.pem (1708 bytes)
	I0501 03:31:47.717831   66261 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0501 03:31:47.762978   66261 ssh_runner.go:195] Run: openssl version
	I0501 03:31:47.770191   66261 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20724.pem && ln -fs /usr/share/ca-certificates/20724.pem /etc/ssl/certs/20724.pem"
	I0501 03:31:47.784703   66261 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20724.pem
	I0501 03:31:47.790679   66261 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  1 02:20 /usr/share/ca-certificates/20724.pem
	I0501 03:31:47.790751   66261 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20724.pem
	I0501 03:31:47.797967   66261 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20724.pem /etc/ssl/certs/51391683.0"
	I0501 03:31:47.810264   66261 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/207242.pem && ln -fs /usr/share/ca-certificates/207242.pem /etc/ssl/certs/207242.pem"
	I0501 03:31:47.824070   66261 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/207242.pem
	I0501 03:31:47.830013   66261 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  1 02:20 /usr/share/ca-certificates/207242.pem
	I0501 03:31:47.830080   66261 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/207242.pem
	I0501 03:31:47.839414   66261 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/207242.pem /etc/ssl/certs/3ec20f2e.0"
	I0501 03:31:47.854817   66261 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0501 03:31:47.871235   66261 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:31:47.877344   66261 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  1 02:08 /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:31:47.877411   66261 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:31:47.884642   66261 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0501 03:31:47.899967   66261 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0501 03:31:47.906349   66261 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0501 03:31:47.915796   66261 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0501 03:31:47.923021   66261 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0501 03:31:47.929667   66261 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0501 03:31:47.936857   66261 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0501 03:31:47.945932   66261 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
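	The certificate handling above has two parts: each CA bundle copied to /usr/share/ca-certificates gets a hash-named symlink under /etc/ssl/certs (the link name comes from `openssl x509 -hash`), and each existing cluster certificate is tested with `openssl x509 -checkend 86400` so anything expiring within a day would be regenerated. A compact sketch of both checks, shelling out to openssl the same way the log does; the paths are taken from this run:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// subjectHash returns the short OpenSSL subject hash used to name the
// /etc/ssl/certs/<hash>.0 symlink for a CA certificate.
func subjectHash(certPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

// expiresWithin reports whether the certificate expires within the given
// number of seconds; openssl exits non-zero when it will.
func expiresWithin(certPath string, seconds int) bool {
	err := exec.Command("openssl", "x509", "-noout", "-in", certPath,
		"-checkend", fmt.Sprint(seconds)).Run()
	return err != nil
}

func main() {
	ca := "/usr/share/ca-certificates/minikubeCA.pem"
	if hash, err := subjectHash(ca); err == nil {
		fmt.Printf("would link %s -> /etc/ssl/certs/%s.0\n", ca, hash)
	}
	crt := "/var/lib/minikube/certs/apiserver-kubelet-client.crt"
	fmt.Printf("%s expires within 24h: %v\n", crt, expiresWithin(crt, 86400))
}
```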
	I0501 03:31:47.954529   66261 kubeadm.go:391] StartCluster: {Name:kubernetes-upgrade-046243 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.30.0 ClusterName:kubernetes-upgrade-046243 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.134 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 03:31:47.954625   66261 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0501 03:31:47.954686   66261 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0501 03:31:47.996099   66261 cri.go:89] found id: "295664cc72d15c2742b0f5af393d6b5de6dab013626c9f739b50552bcf5541c4"
	I0501 03:31:47.996130   66261 cri.go:89] found id: "335fc6d78d8ba089d2e7a358c4b4e082ec02efef38ce2a16fbbf4746f68851d0"
	I0501 03:31:47.996137   66261 cri.go:89] found id: "a157d612fec91cef216896672d792cf369d97e05faec4c46c7823794e66cdd32"
	I0501 03:31:47.996142   66261 cri.go:89] found id: "8ebff6abace980aea6297d402fd9b13259a6aa09404e83196198dbb279656627"
	I0501 03:31:47.996149   66261 cri.go:89] found id: "45ba288bf6bb94bd91736a77ddd07ca63de73ef1ff396f9b3bc22c9d108c4763"
	I0501 03:31:47.996154   66261 cri.go:89] found id: "e949b0af9f56f9d705e772905897ed96d4f15c7df6011a60e52f7df0dc3e1144"
	I0501 03:31:47.996158   66261 cri.go:89] found id: "c39102292565ed9e4776f00f2e627efad6c1f08ba704a7d9b7bfb040ed10e948"
	I0501 03:31:47.996162   66261 cri.go:89] found id: "27961d1eae4cba321babd7d34860ca5097a8710cf52a2e7dab31283402c2c1b8"
	I0501 03:31:47.996166   66261 cri.go:89] found id: "02948a2cf9596ad831add64f08c749313ecbf05fe8701851d77d2580dabff68d"
	I0501 03:31:47.996173   66261 cri.go:89] found id: ""
	I0501 03:31:47.996225   66261 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	May 01 03:32:06 kubernetes-upgrade-046243 crio[2955]: time="2024-05-01 03:32:06.156013976Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a80e16d3-e436-45b6-a3b2-cfb97128c2db name=/runtime.v1.RuntimeService/Version
	May 01 03:32:06 kubernetes-upgrade-046243 crio[2955]: time="2024-05-01 03:32:06.157547064Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=55970ab4-99c6-48e2-87de-e3389a4f6bd8 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 03:32:06 kubernetes-upgrade-046243 crio[2955]: time="2024-05-01 03:32:06.158200266Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714534326158165957,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=55970ab4-99c6-48e2-87de-e3389a4f6bd8 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 03:32:06 kubernetes-upgrade-046243 crio[2955]: time="2024-05-01 03:32:06.159073980Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c49fe7f5-af7c-4a27-9860-365418fdb74e name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:32:06 kubernetes-upgrade-046243 crio[2955]: time="2024-05-01 03:32:06.159158979Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c49fe7f5-af7c-4a27-9860-365418fdb74e name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:32:06 kubernetes-upgrade-046243 crio[2955]: time="2024-05-01 03:32:06.159619254Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d2a2dee5fec37a94d78639e43ab18d58b7d2bdde16b4e664f34e9806264bf27a,PodSandboxId:4b7cb9609c8500fe7e1801099ca7ba851ef001ddde2b8fbef411ba639a3424b3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714534321594605852,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d0654f0-3dbc-4fbe-acb6-08d8d6123629,},Annotations:map[string]string{io.kubernetes.container.hash: 4cf146d0,io.kubernetes.container.restartCount: 3,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52839cda351a564943c10d1f069e2e7eac32c18a72cd1ec1524ff48d88aca03c,PodSandboxId:50f729bc7facbcd68e21f3eba71c0bb148aef4bdc18f1787faf140d0ab0d8786,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714534321659396531,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bmnd4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47055de2-8962-4852-8282-32ddc4093cfa,},Annotations:map[string]string{io.kubernetes.container.hash: b938d7f9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoco
l\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01870a8385d76b9d6cb5c6c0cc0cfd62b8bba21c4a1fb91e475aa8e253e5b4a5,PodSandboxId:5b892dc6246633aeb4f1b2937dd6837bb93a58fb144223ced289f7fb9facc81d,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714534321590455675,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-c5jc5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 482582a7-083a-4531-b5b3-94ed36133aea,},Annotations:map[string]string{io.kubernetes.container.hash: bfa3dd91,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99b75c3b3b23b48274c1fb0b68c4053e517b01adbb431b15ece5480584234c54,PodSandboxId:7725ca3dd5b383cb105265459382a800347e36820d311cf3d69d3d39282d4a6f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,C
reatedAt:1714534321571915411,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gngl4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b5672a-e926-4a7b-b868-60ccd6a64635,},Annotations:map[string]string{io.kubernetes.container.hash: c1bab6d6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ccabc06303833deca36ac99622e90d228e12eaa689b9265642adf430e2e1fc7,PodSandboxId:23ba656b219f152fa3ce577160ee7567953f8d17e03751f7415b5157dcccdb20,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:17145343210
31282154,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-046243,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b0e15334e5236f5ab586c4e177194b4,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d4374b54ef182b09d0e9ce4b5d0e8b243b3b7f3c2219ce996cde9ae2d04a9dc,PodSandboxId:70de4e4726381b61deaccde28237384affbe5a36345e431167d598fc0b04714a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedA
t:1714534321007695236,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-046243,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17d9581aec97abd738252f64bb65a5ea,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:875c25114a129d0c57c73599db5ed65aa5ec707d51b2edeeb7dfe817d094e5a4,PodSandboxId:87b9325cbe5ad1ceb8ba904b815d471239dbf5492a89e8a425ccf6cbae6e9a25,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714534316117
202510,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-046243,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc5e9dc6b6785053df6374cdd3f25621,},Annotations:map[string]string{io.kubernetes.container.hash: 339d0358,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b950d0cf344155184e15a92f30e81cdb59dfe78e36925c2b3abb8690a53f697,PodSandboxId:db55da0033513ea65d48a300c8ac8fd5911b3b5534407316b11e07457ff27097,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714534315750554616,Labels:map[string]
string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-046243,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f06ab202de6165be4081a57cbe54c02,},Annotations:map[string]string{io.kubernetes.container.hash: f63b3fa1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:295664cc72d15c2742b0f5af393d6b5de6dab013626c9f739b50552bcf5541c4,PodSandboxId:5b892dc6246633aeb4f1b2937dd6837bb93a58fb144223ced289f7fb9facc81d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714534307480371965,Labels:map[string]string{io.kub
ernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-c5jc5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 482582a7-083a-4531-b5b3-94ed36133aea,},Annotations:map[string]string{io.kubernetes.container.hash: bfa3dd91,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:335fc6d78d8ba089d2e7a358c4b4e082ec02efef38ce2a16fbbf4746f68851d0,PodSandboxId:4b7cb9609c8500fe7e1801099ca7ba851ef001ddde2b8fbef411ba639a3424b3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string
]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714534307109680201,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d0654f0-3dbc-4fbe-acb6-08d8d6123629,},Annotations:map[string]string{io.kubernetes.container.hash: 4cf146d0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ebff6abace980aea6297d402fd9b13259a6aa09404e83196198dbb279656627,PodSandboxId:7502780a6a3409a08e9445b2177c7b8693031e06270681865ebd1b83a2e75c5c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifie
dImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714534303693262848,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gngl4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b5672a-e926-4a7b-b868-60ccd6a64635,},Annotations:map[string]string{io.kubernetes.container.hash: c1bab6d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a157d612fec91cef216896672d792cf369d97e05faec4c46c7823794e66cdd32,PodSandboxId:ba079dad2fbba3f772a8a6fe882cd9294658db598bcd244fb3ff43766639afe0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Imag
eRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714534303781579612,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-046243,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17d9581aec97abd738252f64bb65a5ea,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45ba288bf6bb94bd91736a77ddd07ca63de73ef1ff396f9b3bc22c9d108c4763,PodSandboxId:d1e5a289154a1c6e6fa5aa4fbb2296dbb917a1a7db767fb4e99b26e00789d0d5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c0
4ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714534303572509256,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-046243,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc5e9dc6b6785053df6374cdd3f25621,},Annotations:map[string]string{io.kubernetes.container.hash: 339d0358,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e949b0af9f56f9d705e772905897ed96d4f15c7df6011a60e52f7df0dc3e1144,PodSandboxId:4d6f9d86e982a2a7db1e0c288a5c2aaa4435b62f23a364189a26d68d1f3e5ebc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b2
4fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714534303496392368,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-046243,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f06ab202de6165be4081a57cbe54c02,},Annotations:map[string]string{io.kubernetes.container.hash: f63b3fa1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c39102292565ed9e4776f00f2e627efad6c1f08ba704a7d9b7bfb040ed10e948,PodSandboxId:41ccee652e532d95556fc1fae7d6f21fb0e60b7f09fbfbc896bcbcda9cd2a841,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9
451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714534302989180294,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-046243,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b0e15334e5236f5ab586c4e177194b4,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27961d1eae4cba321babd7d34860ca5097a8710cf52a2e7dab31283402c2c1b8,PodSandboxId:0a3997084296ae4c5ff4122274a2fe48384464e2db9ee18126e53a101ad42e94,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909
a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714534246265393540,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bmnd4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47055de2-8962-4852-8282-32ddc4093cfa,},Annotations:map[string]string{io.kubernetes.container.hash: b938d7f9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c49fe7f5-af7c-4a27-9860-365418fdb74e name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:32:06 kubernetes-upgrade-046243 crio[2955]: time="2024-05-01 03:32:06.224439349Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f2f5cbf0-3852-4cbf-a361-1a50dd7b2f61 name=/runtime.v1.RuntimeService/Version
	May 01 03:32:06 kubernetes-upgrade-046243 crio[2955]: time="2024-05-01 03:32:06.224515095Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f2f5cbf0-3852-4cbf-a361-1a50dd7b2f61 name=/runtime.v1.RuntimeService/Version
	May 01 03:32:06 kubernetes-upgrade-046243 crio[2955]: time="2024-05-01 03:32:06.226343724Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c909bbb0-d02d-46fb-a26c-41a05b8915c1 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 03:32:06 kubernetes-upgrade-046243 crio[2955]: time="2024-05-01 03:32:06.226773691Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714534326226747697,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c909bbb0-d02d-46fb-a26c-41a05b8915c1 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 03:32:06 kubernetes-upgrade-046243 crio[2955]: time="2024-05-01 03:32:06.227690713Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=590e944d-7780-456e-a79b-f8961e722fb7 name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:32:06 kubernetes-upgrade-046243 crio[2955]: time="2024-05-01 03:32:06.227743865Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=590e944d-7780-456e-a79b-f8961e722fb7 name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:32:06 kubernetes-upgrade-046243 crio[2955]: time="2024-05-01 03:32:06.228157641Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d2a2dee5fec37a94d78639e43ab18d58b7d2bdde16b4e664f34e9806264bf27a,PodSandboxId:4b7cb9609c8500fe7e1801099ca7ba851ef001ddde2b8fbef411ba639a3424b3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714534321594605852,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d0654f0-3dbc-4fbe-acb6-08d8d6123629,},Annotations:map[string]string{io.kubernetes.container.hash: 4cf146d0,io.kubernetes.container.restartCount: 3,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52839cda351a564943c10d1f069e2e7eac32c18a72cd1ec1524ff48d88aca03c,PodSandboxId:50f729bc7facbcd68e21f3eba71c0bb148aef4bdc18f1787faf140d0ab0d8786,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714534321659396531,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bmnd4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47055de2-8962-4852-8282-32ddc4093cfa,},Annotations:map[string]string{io.kubernetes.container.hash: b938d7f9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoco
l\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01870a8385d76b9d6cb5c6c0cc0cfd62b8bba21c4a1fb91e475aa8e253e5b4a5,PodSandboxId:5b892dc6246633aeb4f1b2937dd6837bb93a58fb144223ced289f7fb9facc81d,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714534321590455675,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-c5jc5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 482582a7-083a-4531-b5b3-94ed36133aea,},Annotations:map[string]string{io.kubernetes.container.hash: bfa3dd91,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99b75c3b3b23b48274c1fb0b68c4053e517b01adbb431b15ece5480584234c54,PodSandboxId:7725ca3dd5b383cb105265459382a800347e36820d311cf3d69d3d39282d4a6f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,C
reatedAt:1714534321571915411,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gngl4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b5672a-e926-4a7b-b868-60ccd6a64635,},Annotations:map[string]string{io.kubernetes.container.hash: c1bab6d6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ccabc06303833deca36ac99622e90d228e12eaa689b9265642adf430e2e1fc7,PodSandboxId:23ba656b219f152fa3ce577160ee7567953f8d17e03751f7415b5157dcccdb20,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:17145343210
31282154,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-046243,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b0e15334e5236f5ab586c4e177194b4,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d4374b54ef182b09d0e9ce4b5d0e8b243b3b7f3c2219ce996cde9ae2d04a9dc,PodSandboxId:70de4e4726381b61deaccde28237384affbe5a36345e431167d598fc0b04714a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedA
t:1714534321007695236,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-046243,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17d9581aec97abd738252f64bb65a5ea,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:875c25114a129d0c57c73599db5ed65aa5ec707d51b2edeeb7dfe817d094e5a4,PodSandboxId:87b9325cbe5ad1ceb8ba904b815d471239dbf5492a89e8a425ccf6cbae6e9a25,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714534316117
202510,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-046243,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc5e9dc6b6785053df6374cdd3f25621,},Annotations:map[string]string{io.kubernetes.container.hash: 339d0358,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b950d0cf344155184e15a92f30e81cdb59dfe78e36925c2b3abb8690a53f697,PodSandboxId:db55da0033513ea65d48a300c8ac8fd5911b3b5534407316b11e07457ff27097,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714534315750554616,Labels:map[string]
string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-046243,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f06ab202de6165be4081a57cbe54c02,},Annotations:map[string]string{io.kubernetes.container.hash: f63b3fa1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:295664cc72d15c2742b0f5af393d6b5de6dab013626c9f739b50552bcf5541c4,PodSandboxId:5b892dc6246633aeb4f1b2937dd6837bb93a58fb144223ced289f7fb9facc81d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714534307480371965,Labels:map[string]string{io.kub
ernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-c5jc5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 482582a7-083a-4531-b5b3-94ed36133aea,},Annotations:map[string]string{io.kubernetes.container.hash: bfa3dd91,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:335fc6d78d8ba089d2e7a358c4b4e082ec02efef38ce2a16fbbf4746f68851d0,PodSandboxId:4b7cb9609c8500fe7e1801099ca7ba851ef001ddde2b8fbef411ba639a3424b3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string
]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714534307109680201,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d0654f0-3dbc-4fbe-acb6-08d8d6123629,},Annotations:map[string]string{io.kubernetes.container.hash: 4cf146d0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ebff6abace980aea6297d402fd9b13259a6aa09404e83196198dbb279656627,PodSandboxId:7502780a6a3409a08e9445b2177c7b8693031e06270681865ebd1b83a2e75c5c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifie
dImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714534303693262848,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gngl4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b5672a-e926-4a7b-b868-60ccd6a64635,},Annotations:map[string]string{io.kubernetes.container.hash: c1bab6d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a157d612fec91cef216896672d792cf369d97e05faec4c46c7823794e66cdd32,PodSandboxId:ba079dad2fbba3f772a8a6fe882cd9294658db598bcd244fb3ff43766639afe0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Imag
eRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714534303781579612,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-046243,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17d9581aec97abd738252f64bb65a5ea,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45ba288bf6bb94bd91736a77ddd07ca63de73ef1ff396f9b3bc22c9d108c4763,PodSandboxId:d1e5a289154a1c6e6fa5aa4fbb2296dbb917a1a7db767fb4e99b26e00789d0d5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c0
4ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714534303572509256,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-046243,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc5e9dc6b6785053df6374cdd3f25621,},Annotations:map[string]string{io.kubernetes.container.hash: 339d0358,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e949b0af9f56f9d705e772905897ed96d4f15c7df6011a60e52f7df0dc3e1144,PodSandboxId:4d6f9d86e982a2a7db1e0c288a5c2aaa4435b62f23a364189a26d68d1f3e5ebc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b2
4fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714534303496392368,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-046243,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f06ab202de6165be4081a57cbe54c02,},Annotations:map[string]string{io.kubernetes.container.hash: f63b3fa1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c39102292565ed9e4776f00f2e627efad6c1f08ba704a7d9b7bfb040ed10e948,PodSandboxId:41ccee652e532d95556fc1fae7d6f21fb0e60b7f09fbfbc896bcbcda9cd2a841,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9
451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714534302989180294,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-046243,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b0e15334e5236f5ab586c4e177194b4,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27961d1eae4cba321babd7d34860ca5097a8710cf52a2e7dab31283402c2c1b8,PodSandboxId:0a3997084296ae4c5ff4122274a2fe48384464e2db9ee18126e53a101ad42e94,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909
a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714534246265393540,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bmnd4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47055de2-8962-4852-8282-32ddc4093cfa,},Annotations:map[string]string{io.kubernetes.container.hash: b938d7f9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=590e944d-7780-456e-a79b-f8961e722fb7 name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:32:06 kubernetes-upgrade-046243 crio[2955]: time="2024-05-01 03:32:06.278690975Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d65642a9-04fa-4580-88d3-c53b2fecea79 name=/runtime.v1.RuntimeService/Version
	May 01 03:32:06 kubernetes-upgrade-046243 crio[2955]: time="2024-05-01 03:32:06.278826307Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d65642a9-04fa-4580-88d3-c53b2fecea79 name=/runtime.v1.RuntimeService/Version
	May 01 03:32:06 kubernetes-upgrade-046243 crio[2955]: time="2024-05-01 03:32:06.281259288Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a18728c3-21a3-4500-ad6f-e9920c8f2f34 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 03:32:06 kubernetes-upgrade-046243 crio[2955]: time="2024-05-01 03:32:06.281727373Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714534326281693911,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a18728c3-21a3-4500-ad6f-e9920c8f2f34 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 03:32:06 kubernetes-upgrade-046243 crio[2955]: time="2024-05-01 03:32:06.283159444Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d96d5443-17d6-403d-923b-1bb086dd7543 name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:32:06 kubernetes-upgrade-046243 crio[2955]: time="2024-05-01 03:32:06.283224873Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d96d5443-17d6-403d-923b-1bb086dd7543 name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:32:06 kubernetes-upgrade-046243 crio[2955]: time="2024-05-01 03:32:06.283972936Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d2a2dee5fec37a94d78639e43ab18d58b7d2bdde16b4e664f34e9806264bf27a,PodSandboxId:4b7cb9609c8500fe7e1801099ca7ba851ef001ddde2b8fbef411ba639a3424b3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714534321594605852,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d0654f0-3dbc-4fbe-acb6-08d8d6123629,},Annotations:map[string]string{io.kubernetes.container.hash: 4cf146d0,io.kubernetes.container.restartCount: 3,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52839cda351a564943c10d1f069e2e7eac32c18a72cd1ec1524ff48d88aca03c,PodSandboxId:50f729bc7facbcd68e21f3eba71c0bb148aef4bdc18f1787faf140d0ab0d8786,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714534321659396531,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bmnd4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47055de2-8962-4852-8282-32ddc4093cfa,},Annotations:map[string]string{io.kubernetes.container.hash: b938d7f9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoco
l\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01870a8385d76b9d6cb5c6c0cc0cfd62b8bba21c4a1fb91e475aa8e253e5b4a5,PodSandboxId:5b892dc6246633aeb4f1b2937dd6837bb93a58fb144223ced289f7fb9facc81d,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714534321590455675,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-c5jc5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 482582a7-083a-4531-b5b3-94ed36133aea,},Annotations:map[string]string{io.kubernetes.container.hash: bfa3dd91,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99b75c3b3b23b48274c1fb0b68c4053e517b01adbb431b15ece5480584234c54,PodSandboxId:7725ca3dd5b383cb105265459382a800347e36820d311cf3d69d3d39282d4a6f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,C
reatedAt:1714534321571915411,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gngl4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b5672a-e926-4a7b-b868-60ccd6a64635,},Annotations:map[string]string{io.kubernetes.container.hash: c1bab6d6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ccabc06303833deca36ac99622e90d228e12eaa689b9265642adf430e2e1fc7,PodSandboxId:23ba656b219f152fa3ce577160ee7567953f8d17e03751f7415b5157dcccdb20,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:17145343210
31282154,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-046243,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b0e15334e5236f5ab586c4e177194b4,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d4374b54ef182b09d0e9ce4b5d0e8b243b3b7f3c2219ce996cde9ae2d04a9dc,PodSandboxId:70de4e4726381b61deaccde28237384affbe5a36345e431167d598fc0b04714a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedA
t:1714534321007695236,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-046243,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17d9581aec97abd738252f64bb65a5ea,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:875c25114a129d0c57c73599db5ed65aa5ec707d51b2edeeb7dfe817d094e5a4,PodSandboxId:87b9325cbe5ad1ceb8ba904b815d471239dbf5492a89e8a425ccf6cbae6e9a25,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714534316117
202510,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-046243,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc5e9dc6b6785053df6374cdd3f25621,},Annotations:map[string]string{io.kubernetes.container.hash: 339d0358,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b950d0cf344155184e15a92f30e81cdb59dfe78e36925c2b3abb8690a53f697,PodSandboxId:db55da0033513ea65d48a300c8ac8fd5911b3b5534407316b11e07457ff27097,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714534315750554616,Labels:map[string]
string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-046243,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f06ab202de6165be4081a57cbe54c02,},Annotations:map[string]string{io.kubernetes.container.hash: f63b3fa1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:295664cc72d15c2742b0f5af393d6b5de6dab013626c9f739b50552bcf5541c4,PodSandboxId:5b892dc6246633aeb4f1b2937dd6837bb93a58fb144223ced289f7fb9facc81d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714534307480371965,Labels:map[string]string{io.kub
ernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-c5jc5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 482582a7-083a-4531-b5b3-94ed36133aea,},Annotations:map[string]string{io.kubernetes.container.hash: bfa3dd91,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:335fc6d78d8ba089d2e7a358c4b4e082ec02efef38ce2a16fbbf4746f68851d0,PodSandboxId:4b7cb9609c8500fe7e1801099ca7ba851ef001ddde2b8fbef411ba639a3424b3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string
]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714534307109680201,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d0654f0-3dbc-4fbe-acb6-08d8d6123629,},Annotations:map[string]string{io.kubernetes.container.hash: 4cf146d0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ebff6abace980aea6297d402fd9b13259a6aa09404e83196198dbb279656627,PodSandboxId:7502780a6a3409a08e9445b2177c7b8693031e06270681865ebd1b83a2e75c5c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifie
dImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714534303693262848,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gngl4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b5672a-e926-4a7b-b868-60ccd6a64635,},Annotations:map[string]string{io.kubernetes.container.hash: c1bab6d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a157d612fec91cef216896672d792cf369d97e05faec4c46c7823794e66cdd32,PodSandboxId:ba079dad2fbba3f772a8a6fe882cd9294658db598bcd244fb3ff43766639afe0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Imag
eRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714534303781579612,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-046243,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17d9581aec97abd738252f64bb65a5ea,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45ba288bf6bb94bd91736a77ddd07ca63de73ef1ff396f9b3bc22c9d108c4763,PodSandboxId:d1e5a289154a1c6e6fa5aa4fbb2296dbb917a1a7db767fb4e99b26e00789d0d5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c0
4ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714534303572509256,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-046243,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc5e9dc6b6785053df6374cdd3f25621,},Annotations:map[string]string{io.kubernetes.container.hash: 339d0358,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e949b0af9f56f9d705e772905897ed96d4f15c7df6011a60e52f7df0dc3e1144,PodSandboxId:4d6f9d86e982a2a7db1e0c288a5c2aaa4435b62f23a364189a26d68d1f3e5ebc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b2
4fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714534303496392368,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-046243,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f06ab202de6165be4081a57cbe54c02,},Annotations:map[string]string{io.kubernetes.container.hash: f63b3fa1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c39102292565ed9e4776f00f2e627efad6c1f08ba704a7d9b7bfb040ed10e948,PodSandboxId:41ccee652e532d95556fc1fae7d6f21fb0e60b7f09fbfbc896bcbcda9cd2a841,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9
451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714534302989180294,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-046243,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b0e15334e5236f5ab586c4e177194b4,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27961d1eae4cba321babd7d34860ca5097a8710cf52a2e7dab31283402c2c1b8,PodSandboxId:0a3997084296ae4c5ff4122274a2fe48384464e2db9ee18126e53a101ad42e94,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909
a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714534246265393540,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bmnd4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47055de2-8962-4852-8282-32ddc4093cfa,},Annotations:map[string]string{io.kubernetes.container.hash: b938d7f9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d96d5443-17d6-403d-923b-1bb086dd7543 name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:32:06 kubernetes-upgrade-046243 crio[2955]: time="2024-05-01 03:32:06.319806587Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=17b809c0-1387-4ba9-a6d3-13cf1927a29f name=/runtime.v1.RuntimeService/ListPodSandbox
	May 01 03:32:06 kubernetes-upgrade-046243 crio[2955]: time="2024-05-01 03:32:06.320504119Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:50f729bc7facbcd68e21f3eba71c0bb148aef4bdc18f1787faf140d0ab0d8786,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-bmnd4,Uid:47055de2-8962-4852-8282-32ddc4093cfa,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1714534307058689354,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-bmnd4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47055de2-8962-4852-8282-32ddc4093cfa,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-01T03:30:44.794242269Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5b892dc6246633aeb4f1b2937dd6837bb93a58fb144223ced289f7fb9facc81d,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-c5jc5,Uid:482582a7-083a-4531-b5b3-94ed36133aea,Namespac
e:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1714534306754363230,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-c5jc5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 482582a7-083a-4531-b5b3-94ed36133aea,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-01T03:30:44.720525785Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7725ca3dd5b383cb105265459382a800347e36820d311cf3d69d3d39282d4a6f,Metadata:&PodSandboxMetadata{Name:kube-proxy-gngl4,Uid:93b5672a-e926-4a7b-b868-60ccd6a64635,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1714534306705150037,Labels:map[string]string{controller-revision-hash: 79cf874c65,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-gngl4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b5672a-e926-4a7b-b868-60ccd6a64635,k8s-app: kube-proxy,pod-template-generation: 1,},Annot
ations:map[string]string{kubernetes.io/config.seen: 2024-05-01T03:30:44.445448588Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4b7cb9609c8500fe7e1801099ca7ba851ef001ddde2b8fbef411ba639a3424b3,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:6d0654f0-3dbc-4fbe-acb6-08d8d6123629,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1714534306617172757,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d0654f0-3dbc-4fbe-acb6-08d8d6123629,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"conta
iners\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-05-01T03:30:43.789813865Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:23ba656b219f152fa3ce577160ee7567953f8d17e03751f7415b5157dcccdb20,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-kubernetes-upgrade-046243,Uid:0b0e15334e5236f5ab586c4e177194b4,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1714534306600120559,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-046243,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 0b0e15334e5236f5ab586c4e177194b4,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 0b0e15334e5236f5ab586c4e177194b4,kubernetes.io/config.seen: 2024-05-01T03:30:24.641119225Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:87b9325cbe5ad1ceb8ba904b815d471239dbf5492a89e8a425ccf6cbae6e9a25,Metadata:&PodSandboxMetadata{Name:etcd-kubernetes-upgrade-046243,Uid:fc5e9dc6b6785053df6374cdd3f25621,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1714534306467814995,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-kubernetes-upgrade-046243,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc5e9dc6b6785053df6374cdd3f25621,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.72.134:2379,kubernetes.io/config.hash: fc5e9dc6b6785053df6374cdd3f25621,kubernetes.io/config.seen: 2024-05-01T03:30:24.732753662Z,kubernetes.io/config.s
ource: file,},RuntimeHandler:,},&PodSandbox{Id:db55da0033513ea65d48a300c8ac8fd5911b3b5534407316b11e07457ff27097,Metadata:&PodSandboxMetadata{Name:kube-apiserver-kubernetes-upgrade-046243,Uid:7f06ab202de6165be4081a57cbe54c02,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1714534306431849384,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-046243,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f06ab202de6165be4081a57cbe54c02,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.72.134:8443,kubernetes.io/config.hash: 7f06ab202de6165be4081a57cbe54c02,kubernetes.io/config.seen: 2024-05-01T03:30:24.641124009Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:70de4e4726381b61deaccde28237384affbe5a36345e431167d598fc0b04714a,Metadata:&PodSandboxMetadata{Name:kube-scheduler-kubernetes-upgrade-046243,Uid:1
7d9581aec97abd738252f64bb65a5ea,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1714534306392228328,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-046243,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17d9581aec97abd738252f64bb65a5ea,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 17d9581aec97abd738252f64bb65a5ea,kubernetes.io/config.seen: 2024-05-01T03:30:24.641122998Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=17b809c0-1387-4ba9-a6d3-13cf1927a29f name=/runtime.v1.RuntimeService/ListPodSandbox
	May 01 03:32:06 kubernetes-upgrade-046243 crio[2955]: time="2024-05-01 03:32:06.321674393Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fc6b06b9-3a55-4e80-b5b0-e800c21fb7a1 name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:32:06 kubernetes-upgrade-046243 crio[2955]: time="2024-05-01 03:32:06.321757874Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fc6b06b9-3a55-4e80-b5b0-e800c21fb7a1 name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:32:06 kubernetes-upgrade-046243 crio[2955]: time="2024-05-01 03:32:06.322258458Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d2a2dee5fec37a94d78639e43ab18d58b7d2bdde16b4e664f34e9806264bf27a,PodSandboxId:4b7cb9609c8500fe7e1801099ca7ba851ef001ddde2b8fbef411ba639a3424b3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714534321594605852,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d0654f0-3dbc-4fbe-acb6-08d8d6123629,},Annotations:map[string]string{io.kubernetes.container.hash: 4cf146d0,io.kubernetes.container.restartCount: 3,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52839cda351a564943c10d1f069e2e7eac32c18a72cd1ec1524ff48d88aca03c,PodSandboxId:50f729bc7facbcd68e21f3eba71c0bb148aef4bdc18f1787faf140d0ab0d8786,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714534321659396531,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bmnd4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47055de2-8962-4852-8282-32ddc4093cfa,},Annotations:map[string]string{io.kubernetes.container.hash: b938d7f9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoco
l\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01870a8385d76b9d6cb5c6c0cc0cfd62b8bba21c4a1fb91e475aa8e253e5b4a5,PodSandboxId:5b892dc6246633aeb4f1b2937dd6837bb93a58fb144223ced289f7fb9facc81d,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714534321590455675,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-c5jc5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 482582a7-083a-4531-b5b3-94ed36133aea,},Annotations:map[string]string{io.kubernetes.container.hash: bfa3dd91,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99b75c3b3b23b48274c1fb0b68c4053e517b01adbb431b15ece5480584234c54,PodSandboxId:7725ca3dd5b383cb105265459382a800347e36820d311cf3d69d3d39282d4a6f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,C
reatedAt:1714534321571915411,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gngl4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b5672a-e926-4a7b-b868-60ccd6a64635,},Annotations:map[string]string{io.kubernetes.container.hash: c1bab6d6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ccabc06303833deca36ac99622e90d228e12eaa689b9265642adf430e2e1fc7,PodSandboxId:23ba656b219f152fa3ce577160ee7567953f8d17e03751f7415b5157dcccdb20,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:17145343210
31282154,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-046243,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b0e15334e5236f5ab586c4e177194b4,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d4374b54ef182b09d0e9ce4b5d0e8b243b3b7f3c2219ce996cde9ae2d04a9dc,PodSandboxId:70de4e4726381b61deaccde28237384affbe5a36345e431167d598fc0b04714a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedA
t:1714534321007695236,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-046243,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17d9581aec97abd738252f64bb65a5ea,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:875c25114a129d0c57c73599db5ed65aa5ec707d51b2edeeb7dfe817d094e5a4,PodSandboxId:87b9325cbe5ad1ceb8ba904b815d471239dbf5492a89e8a425ccf6cbae6e9a25,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714534316117
202510,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-046243,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc5e9dc6b6785053df6374cdd3f25621,},Annotations:map[string]string{io.kubernetes.container.hash: 339d0358,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b950d0cf344155184e15a92f30e81cdb59dfe78e36925c2b3abb8690a53f697,PodSandboxId:db55da0033513ea65d48a300c8ac8fd5911b3b5534407316b11e07457ff27097,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714534315750554616,Labels:map[string]
string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-046243,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f06ab202de6165be4081a57cbe54c02,},Annotations:map[string]string{io.kubernetes.container.hash: f63b3fa1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fc6b06b9-3a55-4e80-b5b0-e800c21fb7a1 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	52839cda351a5       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   4 seconds ago        Running             coredns                   1                   50f729bc7facb       coredns-7db6d8ff4d-bmnd4
	d2a2dee5fec37       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   4 seconds ago        Running             storage-provisioner       3                   4b7cb9609c850       storage-provisioner
	01870a8385d76       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   4 seconds ago        Running             coredns                   2                   5b892dc624663       coredns-7db6d8ff4d-c5jc5
	99b75c3b3b23b       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b   4 seconds ago        Running             kube-proxy                2                   7725ca3dd5b38       kube-proxy-gngl4
	8ccabc0630383       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b   5 seconds ago        Running             kube-controller-manager   2                   23ba656b219f1       kube-controller-manager-kubernetes-upgrade-046243
	3d4374b54ef18       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced   5 seconds ago        Running             kube-scheduler            2                   70de4e4726381       kube-scheduler-kubernetes-upgrade-046243
	875c25114a129       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   10 seconds ago       Running             etcd                      2                   87b9325cbe5ad       etcd-kubernetes-upgrade-046243
	3b950d0cf3441       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0   10 seconds ago       Running             kube-apiserver            2                   db55da0033513       kube-apiserver-kubernetes-upgrade-046243
	295664cc72d15       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   18 seconds ago       Exited              coredns                   1                   5b892dc624663       coredns-7db6d8ff4d-c5jc5
	335fc6d78d8ba       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   19 seconds ago       Exited              storage-provisioner       2                   4b7cb9609c850       storage-provisioner
	a157d612fec91       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced   22 seconds ago       Exited              kube-scheduler            1                   ba079dad2fbba       kube-scheduler-kubernetes-upgrade-046243
	8ebff6abace98       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b   22 seconds ago       Exited              kube-proxy                1                   7502780a6a340       kube-proxy-gngl4
	45ba288bf6bb9       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   22 seconds ago       Exited              etcd                      1                   d1e5a289154a1       etcd-kubernetes-upgrade-046243
	e949b0af9f56f       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0   22 seconds ago       Exited              kube-apiserver            1                   4d6f9d86e982a       kube-apiserver-kubernetes-upgrade-046243
	c39102292565e       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b   23 seconds ago       Exited              kube-controller-manager   1                   41ccee652e532       kube-controller-manager-kubernetes-upgrade-046243
	27961d1eae4cb       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   About a minute ago   Exited              coredns                   0                   0a3997084296a       coredns-7db6d8ff4d-bmnd4
	
	
	==> coredns [01870a8385d76b9d6cb5c6c0cc0cfd62b8bba21c4a1fb91e475aa8e253e5b4a5] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [27961d1eae4cba321babd7d34860ca5097a8710cf52a2e7dab31283402c2c1b8] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [295664cc72d15c2742b0f5af393d6b5de6dab013626c9f739b50552bcf5541c4] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [52839cda351a564943c10d1f069e2e7eac32c18a72cd1ec1524ff48d88aca03c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-046243
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-046243
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 May 2024 03:30:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-046243
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 May 2024 03:32:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 May 2024 03:32:00 +0000   Wed, 01 May 2024 03:30:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 May 2024 03:32:00 +0000   Wed, 01 May 2024 03:30:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 May 2024 03:32:00 +0000   Wed, 01 May 2024 03:30:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 May 2024 03:32:00 +0000   Wed, 01 May 2024 03:30:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.134
	  Hostname:    kubernetes-upgrade-046243
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 282aa550e7504fe6bc7b2ddbd5da7c1f
	  System UUID:                282aa550-e750-4fe6-bc7b-2ddbd5da7c1f
	  Boot ID:                    defbb406-12f3-40b7-9c9f-64c6659164ef
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-bmnd4                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     82s
	  kube-system                 coredns-7db6d8ff4d-c5jc5                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     82s
	  kube-system                 etcd-kubernetes-upgrade-046243                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         89s
	  kube-system                 kube-apiserver-kubernetes-upgrade-046243             250m (12%)    0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-046243    200m (10%)    0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 kube-proxy-gngl4                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 kube-scheduler-kubernetes-upgrade-046243             100m (5%)     0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 81s                  kube-proxy       
	  Normal  Starting                 4s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  102s (x8 over 102s)  kubelet          Node kubernetes-upgrade-046243 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    102s (x8 over 102s)  kubelet          Node kubernetes-upgrade-046243 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     102s (x7 over 102s)  kubelet          Node kubernetes-upgrade-046243 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  102s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           83s                  node-controller  Node kubernetes-upgrade-046243 event: Registered Node kubernetes-upgrade-046243 in Controller
	  Normal  Starting                 6s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6s                   kubelet          Node kubernetes-upgrade-046243 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6s                   kubelet          Node kubernetes-upgrade-046243 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6s                   kubelet          Node kubernetes-upgrade-046243 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6s                   kubelet          Updated Node Allocatable limit across pods
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.198809] systemd-fstab-generator[567]: Ignoring "noauto" option for root device
	[  +0.082202] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.076639] systemd-fstab-generator[579]: Ignoring "noauto" option for root device
	[  +0.203220] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.154332] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.377244] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +5.798645] systemd-fstab-generator[730]: Ignoring "noauto" option for root device
	[  +0.062711] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.446764] systemd-fstab-generator[853]: Ignoring "noauto" option for root device
	[ +10.318736] systemd-fstab-generator[1251]: Ignoring "noauto" option for root device
	[  +0.106741] kauditd_printk_skb: 97 callbacks suppressed
	[ +10.023844] kauditd_printk_skb: 21 callbacks suppressed
	[ +10.157649] kauditd_printk_skb: 78 callbacks suppressed
	[May 1 03:31] systemd-fstab-generator[2324]: Ignoring "noauto" option for root device
	[  +0.204899] systemd-fstab-generator[2341]: Ignoring "noauto" option for root device
	[  +0.260392] systemd-fstab-generator[2382]: Ignoring "noauto" option for root device
	[  +0.251840] systemd-fstab-generator[2485]: Ignoring "noauto" option for root device
	[  +1.274052] systemd-fstab-generator[2885]: Ignoring "noauto" option for root device
	[  +1.853603] systemd-fstab-generator[3246]: Ignoring "noauto" option for root device
	[  +1.015824] kauditd_printk_skb: 278 callbacks suppressed
	[  +8.557004] kauditd_printk_skb: 8 callbacks suppressed
	[  +4.013388] systemd-fstab-generator[4032]: Ignoring "noauto" option for root device
	[May 1 03:32] kauditd_printk_skb: 33 callbacks suppressed
	[  +2.861099] systemd-fstab-generator[4497]: Ignoring "noauto" option for root device
	
	
	==> etcd [45ba288bf6bb94bd91736a77ddd07ca63de73ef1ff396f9b3bc22c9d108c4763] <==
	{"level":"warn","ts":"2024-05-01T03:31:44.400042Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2024-05-01T03:31:44.400136Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.72.134:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://192.168.72.134:2380","--initial-cluster=kubernetes-upgrade-046243=https://192.168.72.134:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.72.134:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.72.134:2380","--name=kubernetes-upgrade-046243","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--sna
pshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	{"level":"info","ts":"2024-05-01T03:31:44.400223Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	{"level":"warn","ts":"2024-05-01T03:31:44.400263Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2024-05-01T03:31:44.400277Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.72.134:2380"]}
	{"level":"info","ts":"2024-05-01T03:31:44.400314Z","caller":"embed/etcd.go:494","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-05-01T03:31:44.401254Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.134:2379"]}
	{"level":"info","ts":"2024-05-01T03:31:44.401438Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.12","git-sha":"e7b3bb6cc","go-version":"go1.20.13","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"kubernetes-upgrade-046243","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.72.134:2380"],"listen-peer-urls":["https://192.168.72.134:2380"],"advertise-client-urls":["https://192.168.72.134:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.134:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new
","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	{"level":"info","ts":"2024-05-01T03:31:44.459838Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"58.205824ms"}
	{"level":"info","ts":"2024-05-01T03:31:44.538939Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-05-01T03:31:44.583448Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"e05c7f9c7688aa0f","local-member-id":"b97e97327d189999","commit-index":438}
	{"level":"info","ts":"2024-05-01T03:31:44.583571Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b97e97327d189999 switched to configuration voters=()"}
	{"level":"info","ts":"2024-05-01T03:31:44.58361Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b97e97327d189999 became follower at term 2"}
	{"level":"info","ts":"2024-05-01T03:31:44.583626Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft b97e97327d189999 [peers: [], term: 2, commit: 438, applied: 0, lastindex: 438, lastterm: 2]"}
	
	
	==> etcd [875c25114a129d0c57c73599db5ed65aa5ec707d51b2edeeb7dfe817d094e5a4] <==
	{"level":"info","ts":"2024-05-01T03:31:56.29738Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-01T03:31:56.297459Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-01T03:31:56.298082Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b97e97327d189999 switched to configuration voters=(13366286987185133977)"}
	{"level":"info","ts":"2024-05-01T03:31:56.29816Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"e05c7f9c7688aa0f","local-member-id":"b97e97327d189999","added-peer-id":"b97e97327d189999","added-peer-peer-urls":["https://192.168.72.134:2380"]}
	{"level":"info","ts":"2024-05-01T03:31:56.29827Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"e05c7f9c7688aa0f","local-member-id":"b97e97327d189999","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-01T03:31:56.298315Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-01T03:31:56.302839Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-05-01T03:31:56.302985Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.72.134:2380"}
	{"level":"info","ts":"2024-05-01T03:31:56.303172Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.72.134:2380"}
	{"level":"info","ts":"2024-05-01T03:31:56.304654Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-05-01T03:31:56.304584Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"b97e97327d189999","initial-advertise-peer-urls":["https://192.168.72.134:2380"],"listen-peer-urls":["https://192.168.72.134:2380"],"advertise-client-urls":["https://192.168.72.134:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.134:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-05-01T03:31:58.087281Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b97e97327d189999 is starting a new election at term 2"}
	{"level":"info","ts":"2024-05-01T03:31:58.087332Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b97e97327d189999 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-05-01T03:31:58.087353Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b97e97327d189999 received MsgPreVoteResp from b97e97327d189999 at term 2"}
	{"level":"info","ts":"2024-05-01T03:31:58.087365Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b97e97327d189999 became candidate at term 3"}
	{"level":"info","ts":"2024-05-01T03:31:58.087373Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b97e97327d189999 received MsgVoteResp from b97e97327d189999 at term 3"}
	{"level":"info","ts":"2024-05-01T03:31:58.087384Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b97e97327d189999 became leader at term 3"}
	{"level":"info","ts":"2024-05-01T03:31:58.087416Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b97e97327d189999 elected leader b97e97327d189999 at term 3"}
	{"level":"info","ts":"2024-05-01T03:31:58.090341Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"b97e97327d189999","local-member-attributes":"{Name:kubernetes-upgrade-046243 ClientURLs:[https://192.168.72.134:2379]}","request-path":"/0/members/b97e97327d189999/attributes","cluster-id":"e05c7f9c7688aa0f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-01T03:31:58.090417Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-01T03:31:58.090714Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-01T03:31:58.092458Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-05-01T03:31:58.094092Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.134:2379"}
	{"level":"info","ts":"2024-05-01T03:31:58.096997Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-01T03:31:58.097087Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 03:32:06 up 2 min,  0 users,  load average: 2.38, 0.78, 0.28
	Linux kubernetes-upgrade-046243 5.10.207 #1 SMP Tue Apr 30 22:38:43 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [3b950d0cf344155184e15a92f30e81cdb59dfe78e36925c2b3abb8690a53f697] <==
	I0501 03:32:00.388791       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0501 03:32:00.388807       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0501 03:32:00.534151       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0501 03:32:00.537256       1 aggregator.go:165] initial CRD sync complete...
	I0501 03:32:00.537269       1 autoregister_controller.go:141] Starting autoregister controller
	I0501 03:32:00.537275       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0501 03:32:00.537597       1 shared_informer.go:320] Caches are synced for configmaps
	I0501 03:32:00.541668       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0501 03:32:00.545022       1 policy_source.go:224] refreshing policies
	I0501 03:32:00.550637       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0501 03:32:00.551554       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0501 03:32:00.611056       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0501 03:32:00.611075       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0501 03:32:00.612940       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0501 03:32:00.614694       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0501 03:32:00.616765       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0501 03:32:00.626445       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0501 03:32:00.641279       1 cache.go:39] Caches are synced for autoregister controller
	I0501 03:32:01.352050       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0501 03:32:03.608232       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0501 03:32:03.626586       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0501 03:32:03.683176       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0501 03:32:03.776106       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0501 03:32:03.782567       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0501 03:32:04.888484       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [e949b0af9f56f9d705e772905897ed96d4f15c7df6011a60e52f7df0dc3e1144] <==
	I0501 03:31:44.529116       1 options.go:221] external host was not specified, using 192.168.72.134
	I0501 03:31:44.530093       1 server.go:148] Version: v1.30.0
	I0501 03:31:44.530151       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	
	
	==> kube-controller-manager [8ccabc06303833deca36ac99622e90d228e12eaa689b9265642adf430e2e1fc7] <==
	I0501 03:32:03.329212       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0501 03:32:03.330218       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0501 03:32:03.332068       1 controllermanager.go:759] "Started controller" controller="persistentvolume-expander-controller"
	I0501 03:32:03.332185       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0501 03:32:03.332345       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0501 03:32:03.334045       1 controllermanager.go:759] "Started controller" controller="serviceaccount-controller"
	I0501 03:32:03.334373       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0501 03:32:03.335221       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0501 03:32:03.337184       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0501 03:32:03.337535       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0501 03:32:03.337837       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0501 03:32:03.337551       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0501 03:32:03.337576       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0501 03:32:03.338101       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0501 03:32:03.337584       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0501 03:32:03.337605       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0501 03:32:03.338510       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0501 03:32:03.337612       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0501 03:32:03.337649       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0501 03:32:03.338770       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0501 03:32:03.337657       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0501 03:32:03.341524       1 shared_informer.go:320] Caches are synced for tokens
	I0501 03:32:03.341787       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0501 03:32:03.342202       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0501 03:32:03.343361       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	
	
	==> kube-controller-manager [c39102292565ed9e4776f00f2e627efad6c1f08ba704a7d9b7bfb040ed10e948] <==
	
	
	==> kube-proxy [8ebff6abace980aea6297d402fd9b13259a6aa09404e83196198dbb279656627] <==
	
	
	==> kube-proxy [99b75c3b3b23b48274c1fb0b68c4053e517b01adbb431b15ece5480584234c54] <==
	I0501 03:32:02.147165       1 server_linux.go:69] "Using iptables proxy"
	I0501 03:32:02.175336       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.72.134"]
	I0501 03:32:02.292824       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0501 03:32:02.292938       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0501 03:32:02.292957       1 server_linux.go:165] "Using iptables Proxier"
	I0501 03:32:02.303009       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0501 03:32:02.303215       1 server.go:872] "Version info" version="v1.30.0"
	I0501 03:32:02.303256       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 03:32:02.306700       1 config.go:192] "Starting service config controller"
	I0501 03:32:02.306750       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0501 03:32:02.306771       1 config.go:101] "Starting endpoint slice config controller"
	I0501 03:32:02.306775       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0501 03:32:02.307272       1 config.go:319] "Starting node config controller"
	I0501 03:32:02.307313       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0501 03:32:02.408288       1 shared_informer.go:320] Caches are synced for node config
	I0501 03:32:02.408355       1 shared_informer.go:320] Caches are synced for service config
	I0501 03:32:02.408394       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [3d4374b54ef182b09d0e9ce4b5d0e8b243b3b7f3c2219ce996cde9ae2d04a9dc] <==
	I0501 03:32:02.905657       1 serving.go:380] Generated self-signed cert in-memory
	I0501 03:32:03.990323       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0501 03:32:03.990438       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 03:32:03.995101       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0501 03:32:03.995351       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0501 03:32:03.995575       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0501 03:32:03.995644       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0501 03:32:03.995677       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0501 03:32:03.995801       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0501 03:32:04.001051       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0501 03:32:04.002356       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0501 03:32:04.096564       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0501 03:32:04.097013       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0501 03:32:04.097131       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	
	==> kube-scheduler [a157d612fec91cef216896672d792cf369d97e05faec4c46c7823794e66cdd32] <==
	
	
	==> kubelet <==
	May 01 03:32:00 kubernetes-upgrade-046243 kubelet[4039]: I0501 03:32:00.749206    4039 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0b0e15334e5236f5ab586c4e177194b4-k8s-certs\") pod \"kube-controller-manager-kubernetes-upgrade-046243\" (UID: \"0b0e15334e5236f5ab586c4e177194b4\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-046243"
	May 01 03:32:00 kubernetes-upgrade-046243 kubelet[4039]: I0501 03:32:00.749225    4039 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/fc5e9dc6b6785053df6374cdd3f25621-etcd-certs\") pod \"etcd-kubernetes-upgrade-046243\" (UID: \"fc5e9dc6b6785053df6374cdd3f25621\") " pod="kube-system/etcd-kubernetes-upgrade-046243"
	May 01 03:32:00 kubernetes-upgrade-046243 kubelet[4039]: I0501 03:32:00.749246    4039 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/fc5e9dc6b6785053df6374cdd3f25621-etcd-data\") pod \"etcd-kubernetes-upgrade-046243\" (UID: \"fc5e9dc6b6785053df6374cdd3f25621\") " pod="kube-system/etcd-kubernetes-upgrade-046243"
	May 01 03:32:00 kubernetes-upgrade-046243 kubelet[4039]: I0501 03:32:00.749259    4039 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7f06ab202de6165be4081a57cbe54c02-ca-certs\") pod \"kube-apiserver-kubernetes-upgrade-046243\" (UID: \"7f06ab202de6165be4081a57cbe54c02\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-046243"
	May 01 03:32:00 kubernetes-upgrade-046243 kubelet[4039]: I0501 03:32:00.749274    4039 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7f06ab202de6165be4081a57cbe54c02-k8s-certs\") pod \"kube-apiserver-kubernetes-upgrade-046243\" (UID: \"7f06ab202de6165be4081a57cbe54c02\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-046243"
	May 01 03:32:00 kubernetes-upgrade-046243 kubelet[4039]: I0501 03:32:00.749288    4039 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7f06ab202de6165be4081a57cbe54c02-usr-share-ca-certificates\") pod \"kube-apiserver-kubernetes-upgrade-046243\" (UID: \"7f06ab202de6165be4081a57cbe54c02\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-046243"
	May 01 03:32:00 kubernetes-upgrade-046243 kubelet[4039]: I0501 03:32:00.749302    4039 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0b0e15334e5236f5ab586c4e177194b4-ca-certs\") pod \"kube-controller-manager-kubernetes-upgrade-046243\" (UID: \"0b0e15334e5236f5ab586c4e177194b4\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-046243"
	May 01 03:32:00 kubernetes-upgrade-046243 kubelet[4039]: I0501 03:32:00.749317    4039 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b0e15334e5236f5ab586c4e177194b4-kubeconfig\") pod \"kube-controller-manager-kubernetes-upgrade-046243\" (UID: \"0b0e15334e5236f5ab586c4e177194b4\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-046243"
	May 01 03:32:00 kubernetes-upgrade-046243 kubelet[4039]: I0501 03:32:00.749335    4039 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0b0e15334e5236f5ab586c4e177194b4-usr-share-ca-certificates\") pod \"kube-controller-manager-kubernetes-upgrade-046243\" (UID: \"0b0e15334e5236f5ab586c4e177194b4\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-046243"
	May 01 03:32:00 kubernetes-upgrade-046243 kubelet[4039]: I0501 03:32:00.749349    4039 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/17d9581aec97abd738252f64bb65a5ea-kubeconfig\") pod \"kube-scheduler-kubernetes-upgrade-046243\" (UID: \"17d9581aec97abd738252f64bb65a5ea\") " pod="kube-system/kube-scheduler-kubernetes-upgrade-046243"
	May 01 03:32:00 kubernetes-upgrade-046243 kubelet[4039]: I0501 03:32:00.993558    4039 scope.go:117] "RemoveContainer" containerID="a157d612fec91cef216896672d792cf369d97e05faec4c46c7823794e66cdd32"
	May 01 03:32:00 kubernetes-upgrade-046243 kubelet[4039]: I0501 03:32:00.993977    4039 scope.go:117] "RemoveContainer" containerID="c39102292565ed9e4776f00f2e627efad6c1f08ba704a7d9b7bfb040ed10e948"
	May 01 03:32:01 kubernetes-upgrade-046243 kubelet[4039]: I0501 03:32:01.225415    4039 apiserver.go:52] "Watching apiserver"
	May 01 03:32:01 kubernetes-upgrade-046243 kubelet[4039]: I0501 03:32:01.230027    4039 topology_manager.go:215] "Topology Admit Handler" podUID="6d0654f0-3dbc-4fbe-acb6-08d8d6123629" podNamespace="kube-system" podName="storage-provisioner"
	May 01 03:32:01 kubernetes-upgrade-046243 kubelet[4039]: I0501 03:32:01.230176    4039 topology_manager.go:215] "Topology Admit Handler" podUID="47055de2-8962-4852-8282-32ddc4093cfa" podNamespace="kube-system" podName="coredns-7db6d8ff4d-bmnd4"
	May 01 03:32:01 kubernetes-upgrade-046243 kubelet[4039]: I0501 03:32:01.230278    4039 topology_manager.go:215] "Topology Admit Handler" podUID="482582a7-083a-4531-b5b3-94ed36133aea" podNamespace="kube-system" podName="coredns-7db6d8ff4d-c5jc5"
	May 01 03:32:01 kubernetes-upgrade-046243 kubelet[4039]: I0501 03:32:01.230330    4039 topology_manager.go:215] "Topology Admit Handler" podUID="93b5672a-e926-4a7b-b868-60ccd6a64635" podNamespace="kube-system" podName="kube-proxy-gngl4"
	May 01 03:32:01 kubernetes-upgrade-046243 kubelet[4039]: I0501 03:32:01.282481    4039 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	May 01 03:32:01 kubernetes-upgrade-046243 kubelet[4039]: I0501 03:32:01.358968    4039 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/6d0654f0-3dbc-4fbe-acb6-08d8d6123629-tmp\") pod \"storage-provisioner\" (UID: \"6d0654f0-3dbc-4fbe-acb6-08d8d6123629\") " pod="kube-system/storage-provisioner"
	May 01 03:32:01 kubernetes-upgrade-046243 kubelet[4039]: I0501 03:32:01.359327    4039 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/93b5672a-e926-4a7b-b868-60ccd6a64635-lib-modules\") pod \"kube-proxy-gngl4\" (UID: \"93b5672a-e926-4a7b-b868-60ccd6a64635\") " pod="kube-system/kube-proxy-gngl4"
	May 01 03:32:01 kubernetes-upgrade-046243 kubelet[4039]: I0501 03:32:01.359584    4039 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/93b5672a-e926-4a7b-b868-60ccd6a64635-xtables-lock\") pod \"kube-proxy-gngl4\" (UID: \"93b5672a-e926-4a7b-b868-60ccd6a64635\") " pod="kube-system/kube-proxy-gngl4"
	May 01 03:32:01 kubernetes-upgrade-046243 kubelet[4039]: I0501 03:32:01.531800    4039 scope.go:117] "RemoveContainer" containerID="295664cc72d15c2742b0f5af393d6b5de6dab013626c9f739b50552bcf5541c4"
	May 01 03:32:01 kubernetes-upgrade-046243 kubelet[4039]: I0501 03:32:01.532102    4039 scope.go:117] "RemoveContainer" containerID="8ebff6abace980aea6297d402fd9b13259a6aa09404e83196198dbb279656627"
	May 01 03:32:01 kubernetes-upgrade-046243 kubelet[4039]: I0501 03:32:01.532361    4039 scope.go:117] "RemoveContainer" containerID="27961d1eae4cba321babd7d34860ca5097a8710cf52a2e7dab31283402c2c1b8"
	May 01 03:32:01 kubernetes-upgrade-046243 kubelet[4039]: I0501 03:32:01.532662    4039 scope.go:117] "RemoveContainer" containerID="335fc6d78d8ba089d2e7a358c4b4e082ec02efef38ce2a16fbbf4746f68851d0"
	
	
	==> storage-provisioner [335fc6d78d8ba089d2e7a358c4b4e082ec02efef38ce2a16fbbf4746f68851d0] <==
	I0501 03:31:47.248093       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0501 03:31:47.252293       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [d2a2dee5fec37a94d78639e43ab18d58b7d2bdde16b4e664f34e9806264bf27a] <==
	I0501 03:32:02.078322       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0501 03:32:02.140917       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0501 03:32:02.141059       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0501 03:32:05.639957   67180 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18779-13391/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-046243 -n kubernetes-upgrade-046243
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-046243 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-046243" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-046243
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-046243: (1.176144527s)
--- FAIL: TestKubernetesUpgrade (474.18s)
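
A side note on the "bufio.Scanner: token too long" line in the stderr block above: Go's bufio.Scanner rejects any token larger than its buffer, which defaults to bufio.MaxScanTokenSize (64 KiB), so a single CRI-O log line the size of the ListContainers responses shown earlier is enough to trip it. The sketch below is a minimal, hypothetical illustration of reading such a file with an enlarged scanner buffer; it is not the minikube logs.go implementation, and the file name is only a placeholder.

package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	// Placeholder path for illustration only.
	f, err := os.Open("lastStart.txt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// The default limit is bufio.MaxScanTokenSize (64 KiB); raise it so a
	// single very long log line does not fail with "token too long".
	sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)

	for sc.Scan() {
		fmt.Println(sc.Text())
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "scan error:", err)
	}
}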

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (54.74s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-542495 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-542495 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (49.86567777s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-542495] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18779
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18779-13391/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18779-13391/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-542495" primary control-plane node in "pause-542495" cluster
	* Updating the running kvm2 "pause-542495" VM ...
	* Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-542495" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0501 03:29:21.960205   64715 out.go:291] Setting OutFile to fd 1 ...
	I0501 03:29:21.960469   64715 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 03:29:21.960479   64715 out.go:304] Setting ErrFile to fd 2...
	I0501 03:29:21.960484   64715 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 03:29:21.960692   64715 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18779-13391/.minikube/bin
	I0501 03:29:21.961315   64715 out.go:298] Setting JSON to false
	I0501 03:29:21.962602   64715 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7905,"bootTime":1714526257,"procs":225,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0501 03:29:21.962688   64715 start.go:139] virtualization: kvm guest
	I0501 03:29:21.965049   64715 out.go:177] * [pause-542495] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0501 03:29:21.966665   64715 out.go:177]   - MINIKUBE_LOCATION=18779
	I0501 03:29:21.966625   64715 notify.go:220] Checking for updates...
	I0501 03:29:21.968173   64715 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0501 03:29:21.969783   64715 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18779-13391/kubeconfig
	I0501 03:29:21.971155   64715 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18779-13391/.minikube
	I0501 03:29:21.972560   64715 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0501 03:29:21.973937   64715 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0501 03:29:21.975857   64715 config.go:182] Loaded profile config "pause-542495": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 03:29:21.976517   64715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:29:21.976576   64715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:29:21.993234   64715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38013
	I0501 03:29:21.993668   64715 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:29:21.994232   64715 main.go:141] libmachine: Using API Version  1
	I0501 03:29:21.994255   64715 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:29:21.994676   64715 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:29:21.994929   64715 main.go:141] libmachine: (pause-542495) Calling .DriverName
	I0501 03:29:21.995210   64715 driver.go:392] Setting default libvirt URI to qemu:///system
	I0501 03:29:21.995496   64715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:29:21.995538   64715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:29:22.012325   64715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34087
	I0501 03:29:22.012729   64715 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:29:22.013284   64715 main.go:141] libmachine: Using API Version  1
	I0501 03:29:22.013301   64715 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:29:22.013652   64715 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:29:22.013871   64715 main.go:141] libmachine: (pause-542495) Calling .DriverName
	I0501 03:29:22.541849   64715 out.go:177] * Using the kvm2 driver based on existing profile
	I0501 03:29:22.542871   64715 start.go:297] selected driver: kvm2
	I0501 03:29:22.542885   64715 start.go:901] validating driver "kvm2" against &{Name:pause-542495 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.30.0 ClusterName:pause-542495 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.4 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-devic
e-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 03:29:22.543014   64715 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0501 03:29:22.543314   64715 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0501 03:29:22.543370   64715 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18779-13391/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0501 03:29:22.559437   64715 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0501 03:29:22.560230   64715 cni.go:84] Creating CNI manager for ""
	I0501 03:29:22.560251   64715 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0501 03:29:22.560326   64715 start.go:340] cluster config:
	{Name:pause-542495 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:pause-542495 Namespace:default APIServerHAVIP: API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.4 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:fa
lse registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 03:29:22.560482   64715 iso.go:125] acquiring lock: {Name:mkcd2496aadb29931e179193d707c731bfda98b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0501 03:29:22.562544   64715 out.go:177] * Starting "pause-542495" primary control-plane node in "pause-542495" cluster
	I0501 03:29:22.563662   64715 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0501 03:29:22.563696   64715 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0501 03:29:22.563710   64715 cache.go:56] Caching tarball of preloaded images
	I0501 03:29:22.563774   64715 preload.go:173] Found /home/jenkins/minikube-integration/18779-13391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0501 03:29:22.563787   64715 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0501 03:29:22.563922   64715 profile.go:143] Saving config to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/pause-542495/config.json ...
	I0501 03:29:22.564128   64715 start.go:360] acquireMachinesLock for pause-542495: {Name:mkc4548e5afe1d4b0a833f6c522103562b0cefff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0501 03:29:22.564172   64715 start.go:364] duration metric: took 25.009µs to acquireMachinesLock for "pause-542495"
	I0501 03:29:22.564191   64715 start.go:96] Skipping create...Using existing machine configuration
	I0501 03:29:22.564200   64715 fix.go:54] fixHost starting: 
	I0501 03:29:22.564487   64715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:29:22.564520   64715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:29:22.579413   64715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46351
	I0501 03:29:22.579829   64715 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:29:22.580279   64715 main.go:141] libmachine: Using API Version  1
	I0501 03:29:22.580298   64715 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:29:22.580645   64715 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:29:22.580837   64715 main.go:141] libmachine: (pause-542495) Calling .DriverName
	I0501 03:29:22.580999   64715 main.go:141] libmachine: (pause-542495) Calling .GetState
	I0501 03:29:22.582735   64715 fix.go:112] recreateIfNeeded on pause-542495: state=Running err=<nil>
	W0501 03:29:22.582760   64715 fix.go:138] unexpected machine state, will restart: <nil>
	I0501 03:29:22.584499   64715 out.go:177] * Updating the running kvm2 "pause-542495" VM ...
	I0501 03:29:22.585714   64715 machine.go:94] provisionDockerMachine start ...
	I0501 03:29:22.585732   64715 main.go:141] libmachine: (pause-542495) Calling .DriverName
	I0501 03:29:22.585921   64715 main.go:141] libmachine: (pause-542495) Calling .GetSSHHostname
	I0501 03:29:22.588797   64715 main.go:141] libmachine: (pause-542495) DBG | domain pause-542495 has defined MAC address 52:54:00:fa:a1:7f in network mk-pause-542495
	I0501 03:29:22.589280   64715 main.go:141] libmachine: (pause-542495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:a1:7f", ip: ""} in network mk-pause-542495: {Iface:virbr1 ExpiryTime:2024-05-01 04:27:53 +0000 UTC Type:0 Mac:52:54:00:fa:a1:7f Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:pause-542495 Clientid:01:52:54:00:fa:a1:7f}
	I0501 03:29:22.589313   64715 main.go:141] libmachine: (pause-542495) DBG | domain pause-542495 has defined IP address 192.168.39.4 and MAC address 52:54:00:fa:a1:7f in network mk-pause-542495
	I0501 03:29:22.589372   64715 main.go:141] libmachine: (pause-542495) Calling .GetSSHPort
	I0501 03:29:22.589553   64715 main.go:141] libmachine: (pause-542495) Calling .GetSSHKeyPath
	I0501 03:29:22.589713   64715 main.go:141] libmachine: (pause-542495) Calling .GetSSHKeyPath
	I0501 03:29:22.589843   64715 main.go:141] libmachine: (pause-542495) Calling .GetSSHUsername
	I0501 03:29:22.589972   64715 main.go:141] libmachine: Using SSH client type: native
	I0501 03:29:22.590588   64715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I0501 03:29:22.590604   64715 main.go:141] libmachine: About to run SSH command:
	hostname
	I0501 03:29:22.712521   64715 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-542495
	
	I0501 03:29:22.712555   64715 main.go:141] libmachine: (pause-542495) Calling .GetMachineName
	I0501 03:29:22.712926   64715 buildroot.go:166] provisioning hostname "pause-542495"
	I0501 03:29:22.712954   64715 main.go:141] libmachine: (pause-542495) Calling .GetMachineName
	I0501 03:29:22.713194   64715 main.go:141] libmachine: (pause-542495) Calling .GetSSHHostname
	I0501 03:29:22.716341   64715 main.go:141] libmachine: (pause-542495) DBG | domain pause-542495 has defined MAC address 52:54:00:fa:a1:7f in network mk-pause-542495
	I0501 03:29:22.716791   64715 main.go:141] libmachine: (pause-542495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:a1:7f", ip: ""} in network mk-pause-542495: {Iface:virbr1 ExpiryTime:2024-05-01 04:27:53 +0000 UTC Type:0 Mac:52:54:00:fa:a1:7f Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:pause-542495 Clientid:01:52:54:00:fa:a1:7f}
	I0501 03:29:22.716815   64715 main.go:141] libmachine: (pause-542495) DBG | domain pause-542495 has defined IP address 192.168.39.4 and MAC address 52:54:00:fa:a1:7f in network mk-pause-542495
	I0501 03:29:22.716951   64715 main.go:141] libmachine: (pause-542495) Calling .GetSSHPort
	I0501 03:29:22.717168   64715 main.go:141] libmachine: (pause-542495) Calling .GetSSHKeyPath
	I0501 03:29:22.717350   64715 main.go:141] libmachine: (pause-542495) Calling .GetSSHKeyPath
	I0501 03:29:22.717552   64715 main.go:141] libmachine: (pause-542495) Calling .GetSSHUsername
	I0501 03:29:22.717733   64715 main.go:141] libmachine: Using SSH client type: native
	I0501 03:29:22.717934   64715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I0501 03:29:22.717950   64715 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-542495 && echo "pause-542495" | sudo tee /etc/hostname
	I0501 03:29:22.854328   64715 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-542495
	
	I0501 03:29:22.854365   64715 main.go:141] libmachine: (pause-542495) Calling .GetSSHHostname
	I0501 03:29:22.857483   64715 main.go:141] libmachine: (pause-542495) DBG | domain pause-542495 has defined MAC address 52:54:00:fa:a1:7f in network mk-pause-542495
	I0501 03:29:22.857960   64715 main.go:141] libmachine: (pause-542495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:a1:7f", ip: ""} in network mk-pause-542495: {Iface:virbr1 ExpiryTime:2024-05-01 04:27:53 +0000 UTC Type:0 Mac:52:54:00:fa:a1:7f Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:pause-542495 Clientid:01:52:54:00:fa:a1:7f}
	I0501 03:29:22.857990   64715 main.go:141] libmachine: (pause-542495) DBG | domain pause-542495 has defined IP address 192.168.39.4 and MAC address 52:54:00:fa:a1:7f in network mk-pause-542495
	I0501 03:29:22.858209   64715 main.go:141] libmachine: (pause-542495) Calling .GetSSHPort
	I0501 03:29:22.858422   64715 main.go:141] libmachine: (pause-542495) Calling .GetSSHKeyPath
	I0501 03:29:22.858600   64715 main.go:141] libmachine: (pause-542495) Calling .GetSSHKeyPath
	I0501 03:29:22.858780   64715 main.go:141] libmachine: (pause-542495) Calling .GetSSHUsername
	I0501 03:29:22.858954   64715 main.go:141] libmachine: Using SSH client type: native
	I0501 03:29:22.859135   64715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I0501 03:29:22.859162   64715 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-542495' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-542495/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-542495' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0501 03:29:22.977469   64715 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0501 03:29:22.977503   64715 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18779-13391/.minikube CaCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18779-13391/.minikube}
	I0501 03:29:22.977542   64715 buildroot.go:174] setting up certificates
	I0501 03:29:22.977554   64715 provision.go:84] configureAuth start
	I0501 03:29:22.977568   64715 main.go:141] libmachine: (pause-542495) Calling .GetMachineName
	I0501 03:29:22.977825   64715 main.go:141] libmachine: (pause-542495) Calling .GetIP
	I0501 03:29:22.980517   64715 main.go:141] libmachine: (pause-542495) DBG | domain pause-542495 has defined MAC address 52:54:00:fa:a1:7f in network mk-pause-542495
	I0501 03:29:22.980846   64715 main.go:141] libmachine: (pause-542495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:a1:7f", ip: ""} in network mk-pause-542495: {Iface:virbr1 ExpiryTime:2024-05-01 04:27:53 +0000 UTC Type:0 Mac:52:54:00:fa:a1:7f Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:pause-542495 Clientid:01:52:54:00:fa:a1:7f}
	I0501 03:29:22.980866   64715 main.go:141] libmachine: (pause-542495) DBG | domain pause-542495 has defined IP address 192.168.39.4 and MAC address 52:54:00:fa:a1:7f in network mk-pause-542495
	I0501 03:29:22.981020   64715 main.go:141] libmachine: (pause-542495) Calling .GetSSHHostname
	I0501 03:29:22.983482   64715 main.go:141] libmachine: (pause-542495) DBG | domain pause-542495 has defined MAC address 52:54:00:fa:a1:7f in network mk-pause-542495
	I0501 03:29:22.983839   64715 main.go:141] libmachine: (pause-542495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:a1:7f", ip: ""} in network mk-pause-542495: {Iface:virbr1 ExpiryTime:2024-05-01 04:27:53 +0000 UTC Type:0 Mac:52:54:00:fa:a1:7f Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:pause-542495 Clientid:01:52:54:00:fa:a1:7f}
	I0501 03:29:22.983871   64715 main.go:141] libmachine: (pause-542495) DBG | domain pause-542495 has defined IP address 192.168.39.4 and MAC address 52:54:00:fa:a1:7f in network mk-pause-542495
	I0501 03:29:22.983977   64715 provision.go:143] copyHostCerts
	I0501 03:29:22.984036   64715 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem, removing ...
	I0501 03:29:22.984050   64715 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem
	I0501 03:29:22.984118   64715 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem (1078 bytes)
	I0501 03:29:22.984235   64715 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem, removing ...
	I0501 03:29:22.984247   64715 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem
	I0501 03:29:22.984276   64715 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem (1123 bytes)
	I0501 03:29:22.984362   64715 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem, removing ...
	I0501 03:29:22.984373   64715 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem
	I0501 03:29:22.984398   64715 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem (1675 bytes)
	I0501 03:29:22.984485   64715 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem org=jenkins.pause-542495 san=[127.0.0.1 192.168.39.4 localhost minikube pause-542495]
	I0501 03:29:23.171485   64715 provision.go:177] copyRemoteCerts
	I0501 03:29:23.171559   64715 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0501 03:29:23.171586   64715 main.go:141] libmachine: (pause-542495) Calling .GetSSHHostname
	I0501 03:29:23.174448   64715 main.go:141] libmachine: (pause-542495) DBG | domain pause-542495 has defined MAC address 52:54:00:fa:a1:7f in network mk-pause-542495
	I0501 03:29:23.174860   64715 main.go:141] libmachine: (pause-542495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:a1:7f", ip: ""} in network mk-pause-542495: {Iface:virbr1 ExpiryTime:2024-05-01 04:27:53 +0000 UTC Type:0 Mac:52:54:00:fa:a1:7f Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:pause-542495 Clientid:01:52:54:00:fa:a1:7f}
	I0501 03:29:23.174894   64715 main.go:141] libmachine: (pause-542495) DBG | domain pause-542495 has defined IP address 192.168.39.4 and MAC address 52:54:00:fa:a1:7f in network mk-pause-542495
	I0501 03:29:23.175168   64715 main.go:141] libmachine: (pause-542495) Calling .GetSSHPort
	I0501 03:29:23.175401   64715 main.go:141] libmachine: (pause-542495) Calling .GetSSHKeyPath
	I0501 03:29:23.175574   64715 main.go:141] libmachine: (pause-542495) Calling .GetSSHUsername
	I0501 03:29:23.175720   64715 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/pause-542495/id_rsa Username:docker}
	I0501 03:29:23.273798   64715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0501 03:29:23.309997   64715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0501 03:29:23.342439   64715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0501 03:29:23.375642   64715 provision.go:87] duration metric: took 398.073594ms to configureAuth
	I0501 03:29:23.375669   64715 buildroot.go:189] setting minikube options for container-runtime
	I0501 03:29:23.375946   64715 config.go:182] Loaded profile config "pause-542495": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 03:29:23.376041   64715 main.go:141] libmachine: (pause-542495) Calling .GetSSHHostname
	I0501 03:29:23.379548   64715 main.go:141] libmachine: (pause-542495) DBG | domain pause-542495 has defined MAC address 52:54:00:fa:a1:7f in network mk-pause-542495
	I0501 03:29:23.380086   64715 main.go:141] libmachine: (pause-542495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:a1:7f", ip: ""} in network mk-pause-542495: {Iface:virbr1 ExpiryTime:2024-05-01 04:27:53 +0000 UTC Type:0 Mac:52:54:00:fa:a1:7f Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:pause-542495 Clientid:01:52:54:00:fa:a1:7f}
	I0501 03:29:23.380171   64715 main.go:141] libmachine: (pause-542495) DBG | domain pause-542495 has defined IP address 192.168.39.4 and MAC address 52:54:00:fa:a1:7f in network mk-pause-542495
	I0501 03:29:23.380310   64715 main.go:141] libmachine: (pause-542495) Calling .GetSSHPort
	I0501 03:29:23.380526   64715 main.go:141] libmachine: (pause-542495) Calling .GetSSHKeyPath
	I0501 03:29:23.380713   64715 main.go:141] libmachine: (pause-542495) Calling .GetSSHKeyPath
	I0501 03:29:23.380896   64715 main.go:141] libmachine: (pause-542495) Calling .GetSSHUsername
	I0501 03:29:23.381076   64715 main.go:141] libmachine: Using SSH client type: native
	I0501 03:29:23.381288   64715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I0501 03:29:23.381310   64715 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0501 03:29:28.974831   64715 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0501 03:29:28.974858   64715 machine.go:97] duration metric: took 6.389128312s to provisionDockerMachine
	I0501 03:29:28.974871   64715 start.go:293] postStartSetup for "pause-542495" (driver="kvm2")
	I0501 03:29:28.974884   64715 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0501 03:29:28.974901   64715 main.go:141] libmachine: (pause-542495) Calling .DriverName
	I0501 03:29:28.975298   64715 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0501 03:29:28.975332   64715 main.go:141] libmachine: (pause-542495) Calling .GetSSHHostname
	I0501 03:29:28.978176   64715 main.go:141] libmachine: (pause-542495) DBG | domain pause-542495 has defined MAC address 52:54:00:fa:a1:7f in network mk-pause-542495
	I0501 03:29:28.978576   64715 main.go:141] libmachine: (pause-542495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:a1:7f", ip: ""} in network mk-pause-542495: {Iface:virbr1 ExpiryTime:2024-05-01 04:27:53 +0000 UTC Type:0 Mac:52:54:00:fa:a1:7f Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:pause-542495 Clientid:01:52:54:00:fa:a1:7f}
	I0501 03:29:28.978606   64715 main.go:141] libmachine: (pause-542495) DBG | domain pause-542495 has defined IP address 192.168.39.4 and MAC address 52:54:00:fa:a1:7f in network mk-pause-542495
	I0501 03:29:28.978776   64715 main.go:141] libmachine: (pause-542495) Calling .GetSSHPort
	I0501 03:29:28.978985   64715 main.go:141] libmachine: (pause-542495) Calling .GetSSHKeyPath
	I0501 03:29:28.979157   64715 main.go:141] libmachine: (pause-542495) Calling .GetSSHUsername
	I0501 03:29:28.979292   64715 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/pause-542495/id_rsa Username:docker}
	I0501 03:29:29.066681   64715 ssh_runner.go:195] Run: cat /etc/os-release
	I0501 03:29:29.071911   64715 info.go:137] Remote host: Buildroot 2023.02.9
	I0501 03:29:29.071936   64715 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13391/.minikube/addons for local assets ...
	I0501 03:29:29.072024   64715 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13391/.minikube/files for local assets ...
	I0501 03:29:29.072124   64715 filesync.go:149] local asset: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem -> 207242.pem in /etc/ssl/certs
	I0501 03:29:29.072244   64715 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0501 03:29:29.083501   64715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem --> /etc/ssl/certs/207242.pem (1708 bytes)
	I0501 03:29:29.112065   64715 start.go:296] duration metric: took 137.180012ms for postStartSetup
	I0501 03:29:29.112112   64715 fix.go:56] duration metric: took 6.54791154s for fixHost
	I0501 03:29:29.112137   64715 main.go:141] libmachine: (pause-542495) Calling .GetSSHHostname
	I0501 03:29:29.115058   64715 main.go:141] libmachine: (pause-542495) DBG | domain pause-542495 has defined MAC address 52:54:00:fa:a1:7f in network mk-pause-542495
	I0501 03:29:29.115455   64715 main.go:141] libmachine: (pause-542495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:a1:7f", ip: ""} in network mk-pause-542495: {Iface:virbr1 ExpiryTime:2024-05-01 04:27:53 +0000 UTC Type:0 Mac:52:54:00:fa:a1:7f Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:pause-542495 Clientid:01:52:54:00:fa:a1:7f}
	I0501 03:29:29.115491   64715 main.go:141] libmachine: (pause-542495) DBG | domain pause-542495 has defined IP address 192.168.39.4 and MAC address 52:54:00:fa:a1:7f in network mk-pause-542495
	I0501 03:29:29.115652   64715 main.go:141] libmachine: (pause-542495) Calling .GetSSHPort
	I0501 03:29:29.115854   64715 main.go:141] libmachine: (pause-542495) Calling .GetSSHKeyPath
	I0501 03:29:29.116063   64715 main.go:141] libmachine: (pause-542495) Calling .GetSSHKeyPath
	I0501 03:29:29.116278   64715 main.go:141] libmachine: (pause-542495) Calling .GetSSHUsername
	I0501 03:29:29.116456   64715 main.go:141] libmachine: Using SSH client type: native
	I0501 03:29:29.116664   64715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I0501 03:29:29.116677   64715 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0501 03:29:29.231736   64715 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714534169.225250429
	
	I0501 03:29:29.231766   64715 fix.go:216] guest clock: 1714534169.225250429
	I0501 03:29:29.231776   64715 fix.go:229] Guest: 2024-05-01 03:29:29.225250429 +0000 UTC Remote: 2024-05-01 03:29:29.112117034 +0000 UTC m=+7.203712306 (delta=113.133395ms)
	I0501 03:29:29.231797   64715 fix.go:200] guest clock delta is within tolerance: 113.133395ms
	I0501 03:29:29.231802   64715 start.go:83] releasing machines lock for "pause-542495", held for 6.667618547s
	I0501 03:29:29.231823   64715 main.go:141] libmachine: (pause-542495) Calling .DriverName
	I0501 03:29:29.232089   64715 main.go:141] libmachine: (pause-542495) Calling .GetIP
	I0501 03:29:29.234927   64715 main.go:141] libmachine: (pause-542495) DBG | domain pause-542495 has defined MAC address 52:54:00:fa:a1:7f in network mk-pause-542495
	I0501 03:29:29.235290   64715 main.go:141] libmachine: (pause-542495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:a1:7f", ip: ""} in network mk-pause-542495: {Iface:virbr1 ExpiryTime:2024-05-01 04:27:53 +0000 UTC Type:0 Mac:52:54:00:fa:a1:7f Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:pause-542495 Clientid:01:52:54:00:fa:a1:7f}
	I0501 03:29:29.235321   64715 main.go:141] libmachine: (pause-542495) DBG | domain pause-542495 has defined IP address 192.168.39.4 and MAC address 52:54:00:fa:a1:7f in network mk-pause-542495
	I0501 03:29:29.235475   64715 main.go:141] libmachine: (pause-542495) Calling .DriverName
	I0501 03:29:29.236021   64715 main.go:141] libmachine: (pause-542495) Calling .DriverName
	I0501 03:29:29.236195   64715 main.go:141] libmachine: (pause-542495) Calling .DriverName
	I0501 03:29:29.236267   64715 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0501 03:29:29.236308   64715 main.go:141] libmachine: (pause-542495) Calling .GetSSHHostname
	I0501 03:29:29.236374   64715 ssh_runner.go:195] Run: cat /version.json
	I0501 03:29:29.236399   64715 main.go:141] libmachine: (pause-542495) Calling .GetSSHHostname
	I0501 03:29:29.238940   64715 main.go:141] libmachine: (pause-542495) DBG | domain pause-542495 has defined MAC address 52:54:00:fa:a1:7f in network mk-pause-542495
	I0501 03:29:29.239241   64715 main.go:141] libmachine: (pause-542495) DBG | domain pause-542495 has defined MAC address 52:54:00:fa:a1:7f in network mk-pause-542495
	I0501 03:29:29.239278   64715 main.go:141] libmachine: (pause-542495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:a1:7f", ip: ""} in network mk-pause-542495: {Iface:virbr1 ExpiryTime:2024-05-01 04:27:53 +0000 UTC Type:0 Mac:52:54:00:fa:a1:7f Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:pause-542495 Clientid:01:52:54:00:fa:a1:7f}
	I0501 03:29:29.239297   64715 main.go:141] libmachine: (pause-542495) DBG | domain pause-542495 has defined IP address 192.168.39.4 and MAC address 52:54:00:fa:a1:7f in network mk-pause-542495
	I0501 03:29:29.239485   64715 main.go:141] libmachine: (pause-542495) Calling .GetSSHPort
	I0501 03:29:29.239663   64715 main.go:141] libmachine: (pause-542495) Calling .GetSSHKeyPath
	I0501 03:29:29.239695   64715 main.go:141] libmachine: (pause-542495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:a1:7f", ip: ""} in network mk-pause-542495: {Iface:virbr1 ExpiryTime:2024-05-01 04:27:53 +0000 UTC Type:0 Mac:52:54:00:fa:a1:7f Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:pause-542495 Clientid:01:52:54:00:fa:a1:7f}
	I0501 03:29:29.239718   64715 main.go:141] libmachine: (pause-542495) DBG | domain pause-542495 has defined IP address 192.168.39.4 and MAC address 52:54:00:fa:a1:7f in network mk-pause-542495
	I0501 03:29:29.239845   64715 main.go:141] libmachine: (pause-542495) Calling .GetSSHUsername
	I0501 03:29:29.239920   64715 main.go:141] libmachine: (pause-542495) Calling .GetSSHPort
	I0501 03:29:29.240082   64715 main.go:141] libmachine: (pause-542495) Calling .GetSSHKeyPath
	I0501 03:29:29.240104   64715 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/pause-542495/id_rsa Username:docker}
	I0501 03:29:29.240480   64715 main.go:141] libmachine: (pause-542495) Calling .GetSSHUsername
	I0501 03:29:29.240634   64715 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/pause-542495/id_rsa Username:docker}
	I0501 03:29:29.342521   64715 ssh_runner.go:195] Run: systemctl --version
	I0501 03:29:29.352890   64715 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0501 03:29:29.519793   64715 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0501 03:29:29.526678   64715 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0501 03:29:29.526759   64715 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0501 03:29:29.538444   64715 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0501 03:29:29.538464   64715 start.go:494] detecting cgroup driver to use...
	I0501 03:29:29.538527   64715 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0501 03:29:29.558301   64715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 03:29:29.574767   64715 docker.go:217] disabling cri-docker service (if available) ...
	I0501 03:29:29.574827   64715 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0501 03:29:29.589334   64715 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0501 03:29:29.603648   64715 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0501 03:29:29.743024   64715 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0501 03:29:29.884030   64715 docker.go:233] disabling docker service ...
	I0501 03:29:29.884107   64715 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0501 03:29:29.903883   64715 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0501 03:29:29.919946   64715 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0501 03:29:30.062322   64715 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0501 03:29:30.198534   64715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0501 03:29:30.217580   64715 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 03:29:30.241941   64715 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0501 03:29:30.242035   64715 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:29:30.254236   64715 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0501 03:29:30.254304   64715 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:29:30.268963   64715 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:29:30.283661   64715 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:29:30.296973   64715 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0501 03:29:30.311289   64715 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:29:30.323519   64715 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:29:30.337932   64715 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:29:30.351054   64715 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0501 03:29:30.362472   64715 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0501 03:29:30.373359   64715 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:29:30.508827   64715 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0501 03:29:36.151625   64715 ssh_runner.go:235] Completed: sudo systemctl restart crio: (5.642758921s)
	I0501 03:29:36.151652   64715 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0501 03:29:36.151696   64715 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0501 03:29:36.157504   64715 start.go:562] Will wait 60s for crictl version
	I0501 03:29:36.157569   64715 ssh_runner.go:195] Run: which crictl
	I0501 03:29:36.162127   64715 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0501 03:29:36.200904   64715 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0501 03:29:36.201002   64715 ssh_runner.go:195] Run: crio --version
	I0501 03:29:36.237631   64715 ssh_runner.go:195] Run: crio --version
	I0501 03:29:36.272693   64715 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0501 03:29:36.274120   64715 main.go:141] libmachine: (pause-542495) Calling .GetIP
	I0501 03:29:36.276741   64715 main.go:141] libmachine: (pause-542495) DBG | domain pause-542495 has defined MAC address 52:54:00:fa:a1:7f in network mk-pause-542495
	I0501 03:29:36.277119   64715 main.go:141] libmachine: (pause-542495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:a1:7f", ip: ""} in network mk-pause-542495: {Iface:virbr1 ExpiryTime:2024-05-01 04:27:53 +0000 UTC Type:0 Mac:52:54:00:fa:a1:7f Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:pause-542495 Clientid:01:52:54:00:fa:a1:7f}
	I0501 03:29:36.277148   64715 main.go:141] libmachine: (pause-542495) DBG | domain pause-542495 has defined IP address 192.168.39.4 and MAC address 52:54:00:fa:a1:7f in network mk-pause-542495
	I0501 03:29:36.277332   64715 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0501 03:29:36.282598   64715 kubeadm.go:877] updating cluster {Name:pause-542495 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0
ClusterName:pause-542495 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.4 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false
olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0501 03:29:36.282734   64715 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0501 03:29:36.282796   64715 ssh_runner.go:195] Run: sudo crictl images --output json
	I0501 03:29:36.341790   64715 crio.go:514] all images are preloaded for cri-o runtime.
	I0501 03:29:36.341824   64715 crio.go:433] Images already preloaded, skipping extraction
	I0501 03:29:36.341882   64715 ssh_runner.go:195] Run: sudo crictl images --output json
	I0501 03:29:36.382029   64715 crio.go:514] all images are preloaded for cri-o runtime.
	I0501 03:29:36.382051   64715 cache_images.go:84] Images are preloaded, skipping loading
	I0501 03:29:36.382058   64715 kubeadm.go:928] updating node { 192.168.39.4 8443 v1.30.0 crio true true} ...
	I0501 03:29:36.382146   64715 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-542495 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:pause-542495 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0501 03:29:36.382209   64715 ssh_runner.go:195] Run: crio config
	I0501 03:29:36.434958   64715 cni.go:84] Creating CNI manager for ""
	I0501 03:29:36.434980   64715 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0501 03:29:36.434990   64715 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0501 03:29:36.435009   64715 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.4 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-542495 NodeName:pause-542495 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.4"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.4 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0501 03:29:36.435129   64715 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.4
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-542495"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.4
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.4"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0501 03:29:36.435189   64715 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0501 03:29:36.447500   64715 binaries.go:44] Found k8s binaries, skipping transfer
	I0501 03:29:36.447567   64715 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0501 03:29:36.458832   64715 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (310 bytes)
	I0501 03:29:36.477871   64715 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0501 03:29:36.496516   64715 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0501 03:29:36.515101   64715 ssh_runner.go:195] Run: grep 192.168.39.4	control-plane.minikube.internal$ /etc/hosts
	I0501 03:29:36.519795   64715 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:29:36.651396   64715 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 03:29:36.667654   64715 certs.go:68] Setting up /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/pause-542495 for IP: 192.168.39.4
	I0501 03:29:36.667700   64715 certs.go:194] generating shared ca certs ...
	I0501 03:29:36.667721   64715 certs.go:226] acquiring lock for ca certs: {Name:mk85a75d14c00780d4bf09cd9bdaf0f96f1b8fce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:29:36.667888   64715 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key
	I0501 03:29:36.667954   64715 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key
	I0501 03:29:36.667968   64715 certs.go:256] generating profile certs ...
	I0501 03:29:36.668068   64715 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/pause-542495/client.key
	I0501 03:29:36.668151   64715 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/pause-542495/apiserver.key.ce566cf6
	I0501 03:29:36.668203   64715 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/pause-542495/proxy-client.key
	I0501 03:29:36.668344   64715 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem (1338 bytes)
	W0501 03:29:36.668380   64715 certs.go:480] ignoring /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724_empty.pem, impossibly tiny 0 bytes
	I0501 03:29:36.668393   64715 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem (1679 bytes)
	I0501 03:29:36.668430   64715 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem (1078 bytes)
	I0501 03:29:36.668462   64715 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem (1123 bytes)
	I0501 03:29:36.668495   64715 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem (1675 bytes)
	I0501 03:29:36.668561   64715 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem (1708 bytes)
	I0501 03:29:36.669425   64715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0501 03:29:36.701621   64715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0501 03:29:36.731304   64715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0501 03:29:36.763516   64715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0501 03:29:36.794017   64715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/pause-542495/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0501 03:29:36.830181   64715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/pause-542495/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0501 03:29:36.860317   64715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/pause-542495/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0501 03:29:36.895078   64715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/pause-542495/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0501 03:29:36.927588   64715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem --> /usr/share/ca-certificates/207242.pem (1708 bytes)
	I0501 03:29:36.957909   64715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0501 03:29:36.988626   64715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem --> /usr/share/ca-certificates/20724.pem (1338 bytes)
	I0501 03:29:37.018354   64715 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0501 03:29:37.037952   64715 ssh_runner.go:195] Run: openssl version
	I0501 03:29:37.044468   64715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20724.pem && ln -fs /usr/share/ca-certificates/20724.pem /etc/ssl/certs/20724.pem"
	I0501 03:29:37.056673   64715 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20724.pem
	I0501 03:29:37.061961   64715 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  1 02:20 /usr/share/ca-certificates/20724.pem
	I0501 03:29:37.062019   64715 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20724.pem
	I0501 03:29:37.068657   64715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20724.pem /etc/ssl/certs/51391683.0"
	I0501 03:29:37.079295   64715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/207242.pem && ln -fs /usr/share/ca-certificates/207242.pem /etc/ssl/certs/207242.pem"
	I0501 03:29:37.091655   64715 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/207242.pem
	I0501 03:29:37.098582   64715 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  1 02:20 /usr/share/ca-certificates/207242.pem
	I0501 03:29:37.098640   64715 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/207242.pem
	I0501 03:29:37.107182   64715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/207242.pem /etc/ssl/certs/3ec20f2e.0"
	I0501 03:29:37.123294   64715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0501 03:29:37.143433   64715 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:29:37.177472   64715 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  1 02:08 /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:29:37.177540   64715 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:29:37.216181   64715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0501 03:29:37.245512   64715 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0501 03:29:37.283624   64715 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0501 03:29:37.358156   64715 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0501 03:29:37.396903   64715 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0501 03:29:37.443765   64715 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0501 03:29:37.540713   64715 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0501 03:29:37.582184   64715 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0501 03:29:37.683676   64715 kubeadm.go:391] StartCluster: {Name:pause-542495 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:pause-542495 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.4 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 03:29:37.683816   64715 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0501 03:29:37.683881   64715 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0501 03:29:37.918108   64715 cri.go:89] found id: "92a924bfd194ca856d32145f1b416a6feaeb97526d5e7d1f3806a954427b41cc"
	I0501 03:29:37.918130   64715 cri.go:89] found id: "ec898beec5e3d2f8d855ee92cc720b0d9dca6366cf80da972ab7c460dbaebf7b"
	I0501 03:29:37.918135   64715 cri.go:89] found id: "09b229cb407320d3c750597c319b318b1b9632dbff2346820d3561439a3a3115"
	I0501 03:29:37.918139   64715 cri.go:89] found id: "bef611c533a535e2bfd6a2122f40ce235697abe33bff274ad85257f799b3948f"
	I0501 03:29:37.918141   64715 cri.go:89] found id: "1abcf9a16dd7ce05e00be8f82a9c6d7b732ba8a404f903bbc13c658fe6596f99"
	I0501 03:29:37.918144   64715 cri.go:89] found id: "7369fbe67db99f3aecc21062700054a8de9b2c9f0a544c30c58fdb823b3260f3"
	I0501 03:29:37.918146   64715 cri.go:89] found id: "234285fef15e8f904fde8efeae5cdd5a3b89deed909a8d2078b72eb6ac39e7db"
	I0501 03:29:37.918149   64715 cri.go:89] found id: "ffc59df7261fe630a23f5b1e9eff4148c7ab446a1ba00903e19b3dfd9e2e6fea"
	I0501 03:29:37.918151   64715 cri.go:89] found id: ""
	I0501 03:29:37.918220   64715 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
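The stderr capture above ends at the final step, Run: sudo runc list -f json, with no result recorded. As a purely illustrative sketch (not part of the test run), those last two checks could be repeated by hand against the same profile, assuming the pause-542495 VM is still up and using the same out/minikube-linux-amd64 binary invoked elsewhere in these logs:

    out/minikube-linux-amd64 -p pause-542495 ssh "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
    out/minikube-linux-amd64 -p pause-542495 ssh "sudo runc list -f json"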
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-542495 -n pause-542495
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-542495 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-542495 logs -n 25: (1.674952733s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-731347 sudo docker                         | cilium-731347             | jenkins | v1.33.0 | 01 May 24 03:28 UTC |                     |
	|         | system info                                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-731347 sudo                                | cilium-731347             | jenkins | v1.33.0 | 01 May 24 03:28 UTC |                     |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-731347 sudo                                | cilium-731347             | jenkins | v1.33.0 | 01 May 24 03:28 UTC |                     |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-731347 sudo cat                            | cilium-731347             | jenkins | v1.33.0 | 01 May 24 03:28 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p cilium-731347 sudo cat                            | cilium-731347             | jenkins | v1.33.0 | 01 May 24 03:28 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p cilium-731347 sudo                                | cilium-731347             | jenkins | v1.33.0 | 01 May 24 03:28 UTC |                     |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p cilium-731347 sudo                                | cilium-731347             | jenkins | v1.33.0 | 01 May 24 03:28 UTC |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-731347 sudo                                | cilium-731347             | jenkins | v1.33.0 | 01 May 24 03:28 UTC |                     |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-731347 sudo cat                            | cilium-731347             | jenkins | v1.33.0 | 01 May 24 03:28 UTC |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p cilium-731347 sudo cat                            | cilium-731347             | jenkins | v1.33.0 | 01 May 24 03:28 UTC |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-731347 sudo                                | cilium-731347             | jenkins | v1.33.0 | 01 May 24 03:28 UTC |                     |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p cilium-731347 sudo                                | cilium-731347             | jenkins | v1.33.0 | 01 May 24 03:28 UTC |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-731347 sudo                                | cilium-731347             | jenkins | v1.33.0 | 01 May 24 03:28 UTC |                     |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p cilium-731347 sudo find                           | cilium-731347             | jenkins | v1.33.0 | 01 May 24 03:28 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-731347 sudo crio                           | cilium-731347             | jenkins | v1.33.0 | 01 May 24 03:28 UTC |                     |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p cilium-731347                                     | cilium-731347             | jenkins | v1.33.0 | 01 May 24 03:28 UTC | 01 May 24 03:28 UTC |
	| start   | -p force-systemd-flag-616131                         | force-systemd-flag-616131 | jenkins | v1.33.0 | 01 May 24 03:28 UTC | 01 May 24 03:29 UTC |
	|         | --memory=2048 --force-systemd                        |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p running-upgrade-179111                            | running-upgrade-179111    | jenkins | v1.33.0 | 01 May 24 03:29 UTC | 01 May 24 03:30 UTC |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-616131 ssh cat                    | force-systemd-flag-616131 | jenkins | v1.33.0 | 01 May 24 03:29 UTC | 01 May 24 03:29 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf                   |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-616131                         | force-systemd-flag-616131 | jenkins | v1.33.0 | 01 May 24 03:29 UTC | 01 May 24 03:29 UTC |
	| start   | -p pause-542495                                      | pause-542495              | jenkins | v1.33.0 | 01 May 24 03:29 UTC | 01 May 24 03:30 UTC |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p cert-options-582976                               | cert-options-582976       | jenkins | v1.33.0 | 01 May 24 03:29 UTC |                     |
	|         | --memory=2048                                        |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                            |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                        |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost                          |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                     |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                                |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-046243                         | kubernetes-upgrade-046243 | jenkins | v1.33.0 | 01 May 24 03:29 UTC | 01 May 24 03:29 UTC |
	| start   | -p kubernetes-upgrade-046243                         | kubernetes-upgrade-046243 | jenkins | v1.33.0 | 01 May 24 03:29 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-179111                            | running-upgrade-179111    | jenkins | v1.33.0 | 01 May 24 03:30 UTC | 01 May 24 03:30 UTC |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/01 03:29:35
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0501 03:29:35.335675   65002 out.go:291] Setting OutFile to fd 1 ...
	I0501 03:29:35.335795   65002 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 03:29:35.335804   65002 out.go:304] Setting ErrFile to fd 2...
	I0501 03:29:35.335808   65002 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 03:29:35.336030   65002 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18779-13391/.minikube/bin
	I0501 03:29:35.336584   65002 out.go:298] Setting JSON to false
	I0501 03:29:35.337509   65002 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7918,"bootTime":1714526257,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0501 03:29:35.337570   65002 start.go:139] virtualization: kvm guest
	I0501 03:29:35.339934   65002 out.go:177] * [kubernetes-upgrade-046243] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0501 03:29:35.341362   65002 out.go:177]   - MINIKUBE_LOCATION=18779
	I0501 03:29:35.341409   65002 notify.go:220] Checking for updates...
	I0501 03:29:35.342740   65002 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0501 03:29:35.344333   65002 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18779-13391/kubeconfig
	I0501 03:29:35.345917   65002 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18779-13391/.minikube
	I0501 03:29:35.347491   65002 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0501 03:29:35.348805   65002 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0501 03:29:35.350768   65002 config.go:182] Loaded profile config "kubernetes-upgrade-046243": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0501 03:29:35.351401   65002 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:29:35.351468   65002 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:29:35.366922   65002 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43795
	I0501 03:29:35.367322   65002 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:29:35.367868   65002 main.go:141] libmachine: Using API Version  1
	I0501 03:29:35.367902   65002 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:29:35.368195   65002 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:29:35.368351   65002 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .DriverName
	I0501 03:29:35.368631   65002 driver.go:392] Setting default libvirt URI to qemu:///system
	I0501 03:29:35.368897   65002 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:29:35.368929   65002 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:29:35.383212   65002 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33601
	I0501 03:29:35.383659   65002 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:29:35.384239   65002 main.go:141] libmachine: Using API Version  1
	I0501 03:29:35.384275   65002 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:29:35.384565   65002 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:29:35.384758   65002 main.go:141] libmachine: (kubernetes-upgrade-046243) Calling .DriverName
	I0501 03:29:35.420961   65002 out.go:177] * Using the kvm2 driver based on existing profile
	I0501 03:29:35.422527   65002 start.go:297] selected driver: kvm2
	I0501 03:29:35.422540   65002 start.go:901] validating driver "kvm2" against &{Name:kubernetes-upgrade-046243 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-046243 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.134 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 03:29:35.422651   65002 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0501 03:29:35.423327   65002 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0501 03:29:35.423400   65002 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18779-13391/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0501 03:29:35.437827   65002 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0501 03:29:35.438197   65002 cni.go:84] Creating CNI manager for ""
	I0501 03:29:35.438215   65002 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0501 03:29:35.438255   65002 start.go:340] cluster config:
	{Name:kubernetes-upgrade-046243 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:kubernetes-upgrade-046243 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.134 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 03:29:35.438367   65002 iso.go:125] acquiring lock: {Name:mkcd2496aadb29931e179193d707c731bfda98b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0501 03:29:35.440268   65002 out.go:177] * Starting "kubernetes-upgrade-046243" primary control-plane node in "kubernetes-upgrade-046243" cluster
	I0501 03:29:36.151625   64715 ssh_runner.go:235] Completed: sudo systemctl restart crio: (5.642758921s)
	I0501 03:29:36.151652   64715 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0501 03:29:36.151696   64715 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0501 03:29:36.157504   64715 start.go:562] Will wait 60s for crictl version
	I0501 03:29:36.157569   64715 ssh_runner.go:195] Run: which crictl
	I0501 03:29:36.162127   64715 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0501 03:29:36.200904   64715 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0501 03:29:36.201002   64715 ssh_runner.go:195] Run: crio --version
	I0501 03:29:36.237631   64715 ssh_runner.go:195] Run: crio --version
	I0501 03:29:36.272693   64715 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0501 03:29:36.274120   64715 main.go:141] libmachine: (pause-542495) Calling .GetIP
	I0501 03:29:36.276741   64715 main.go:141] libmachine: (pause-542495) DBG | domain pause-542495 has defined MAC address 52:54:00:fa:a1:7f in network mk-pause-542495
	I0501 03:29:36.277119   64715 main.go:141] libmachine: (pause-542495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:a1:7f", ip: ""} in network mk-pause-542495: {Iface:virbr1 ExpiryTime:2024-05-01 04:27:53 +0000 UTC Type:0 Mac:52:54:00:fa:a1:7f Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:pause-542495 Clientid:01:52:54:00:fa:a1:7f}
	I0501 03:29:36.277148   64715 main.go:141] libmachine: (pause-542495) DBG | domain pause-542495 has defined IP address 192.168.39.4 and MAC address 52:54:00:fa:a1:7f in network mk-pause-542495
	I0501 03:29:36.277332   64715 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0501 03:29:36.282598   64715 kubeadm.go:877] updating cluster {Name:pause-542495 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:pause-542495 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.4 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0501 03:29:36.282734   64715 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0501 03:29:36.282796   64715 ssh_runner.go:195] Run: sudo crictl images --output json
	I0501 03:29:36.341790   64715 crio.go:514] all images are preloaded for cri-o runtime.
	I0501 03:29:36.341824   64715 crio.go:433] Images already preloaded, skipping extraction
	I0501 03:29:36.341882   64715 ssh_runner.go:195] Run: sudo crictl images --output json
	I0501 03:29:36.382029   64715 crio.go:514] all images are preloaded for cri-o runtime.
	I0501 03:29:36.382051   64715 cache_images.go:84] Images are preloaded, skipping loading
	I0501 03:29:36.382058   64715 kubeadm.go:928] updating node { 192.168.39.4 8443 v1.30.0 crio true true} ...
	I0501 03:29:36.382146   64715 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-542495 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:pause-542495 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0501 03:29:36.382209   64715 ssh_runner.go:195] Run: crio config
	I0501 03:29:36.434958   64715 cni.go:84] Creating CNI manager for ""
	I0501 03:29:36.434980   64715 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0501 03:29:36.434990   64715 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0501 03:29:36.435009   64715 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.4 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-542495 NodeName:pause-542495 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.4"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.4 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0501 03:29:36.435129   64715 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.4
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-542495"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.4
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.4"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0501 03:29:36.435189   64715 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0501 03:29:36.447500   64715 binaries.go:44] Found k8s binaries, skipping transfer
	I0501 03:29:36.447567   64715 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0501 03:29:36.458832   64715 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (310 bytes)
	I0501 03:29:36.477871   64715 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0501 03:29:36.496516   64715 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0501 03:29:36.515101   64715 ssh_runner.go:195] Run: grep 192.168.39.4	control-plane.minikube.internal$ /etc/hosts
	I0501 03:29:36.519795   64715 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:29:36.651396   64715 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 03:29:36.667654   64715 certs.go:68] Setting up /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/pause-542495 for IP: 192.168.39.4
	I0501 03:29:36.667700   64715 certs.go:194] generating shared ca certs ...
	I0501 03:29:36.667721   64715 certs.go:226] acquiring lock for ca certs: {Name:mk85a75d14c00780d4bf09cd9bdaf0f96f1b8fce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:29:36.667888   64715 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key
	I0501 03:29:36.667954   64715 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key
	I0501 03:29:36.667968   64715 certs.go:256] generating profile certs ...
	I0501 03:29:36.668068   64715 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/pause-542495/client.key
	I0501 03:29:36.668151   64715 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/pause-542495/apiserver.key.ce566cf6
	I0501 03:29:36.668203   64715 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/pause-542495/proxy-client.key
	I0501 03:29:36.668344   64715 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem (1338 bytes)
	W0501 03:29:36.668380   64715 certs.go:480] ignoring /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724_empty.pem, impossibly tiny 0 bytes
	I0501 03:29:36.668393   64715 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem (1679 bytes)
	I0501 03:29:36.668430   64715 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem (1078 bytes)
	I0501 03:29:36.668462   64715 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem (1123 bytes)
	I0501 03:29:36.668495   64715 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem (1675 bytes)
	I0501 03:29:36.668561   64715 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem (1708 bytes)
	I0501 03:29:36.669425   64715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0501 03:29:36.701621   64715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0501 03:29:36.731304   64715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0501 03:29:36.763516   64715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0501 03:29:36.794017   64715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/pause-542495/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0501 03:29:36.830181   64715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/pause-542495/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0501 03:29:36.860317   64715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/pause-542495/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0501 03:29:36.895078   64715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/pause-542495/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0501 03:29:36.927588   64715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem --> /usr/share/ca-certificates/207242.pem (1708 bytes)
	I0501 03:29:36.957909   64715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0501 03:29:34.364799   64474 api_server.go:279] https://192.168.61.116:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:29:34.364915   64474 api_server.go:103] status: https://192.168.61.116:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:29:34.364962   64474 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0501 03:29:36.371890   64474 api_server.go:279] https://192.168.61.116:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:29:36.371931   64474 api_server.go:103] status: https://192.168.61.116:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:29:36.371949   64474 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0501 03:29:33.195543   64773 main.go:141] libmachine: (cert-options-582976) DBG | domain cert-options-582976 has defined MAC address 52:54:00:0c:8c:a4 in network mk-cert-options-582976
	I0501 03:29:33.195984   64773 main.go:141] libmachine: (cert-options-582976) DBG | unable to find current IP address of domain cert-options-582976 in network mk-cert-options-582976
	I0501 03:29:33.196002   64773 main.go:141] libmachine: (cert-options-582976) DBG | I0501 03:29:33.195938   64818 retry.go:31] will retry after 855.127125ms: waiting for machine to come up
	I0501 03:29:34.052814   64773 main.go:141] libmachine: (cert-options-582976) DBG | domain cert-options-582976 has defined MAC address 52:54:00:0c:8c:a4 in network mk-cert-options-582976
	I0501 03:29:34.053179   64773 main.go:141] libmachine: (cert-options-582976) DBG | unable to find current IP address of domain cert-options-582976 in network mk-cert-options-582976
	I0501 03:29:34.053204   64773 main.go:141] libmachine: (cert-options-582976) DBG | I0501 03:29:34.053120   64818 retry.go:31] will retry after 823.772767ms: waiting for machine to come up
	I0501 03:29:34.879875   64773 main.go:141] libmachine: (cert-options-582976) DBG | domain cert-options-582976 has defined MAC address 52:54:00:0c:8c:a4 in network mk-cert-options-582976
	I0501 03:29:34.880244   64773 main.go:141] libmachine: (cert-options-582976) DBG | unable to find current IP address of domain cert-options-582976 in network mk-cert-options-582976
	I0501 03:29:34.880264   64773 main.go:141] libmachine: (cert-options-582976) DBG | I0501 03:29:34.880206   64818 retry.go:31] will retry after 1.148406705s: waiting for machine to come up
	I0501 03:29:36.030550   64773 main.go:141] libmachine: (cert-options-582976) DBG | domain cert-options-582976 has defined MAC address 52:54:00:0c:8c:a4 in network mk-cert-options-582976
	I0501 03:29:36.031120   64773 main.go:141] libmachine: (cert-options-582976) DBG | unable to find current IP address of domain cert-options-582976 in network mk-cert-options-582976
	I0501 03:29:36.031145   64773 main.go:141] libmachine: (cert-options-582976) DBG | I0501 03:29:36.031066   64818 retry.go:31] will retry after 1.243358419s: waiting for machine to come up
	I0501 03:29:37.275937   64773 main.go:141] libmachine: (cert-options-582976) DBG | domain cert-options-582976 has defined MAC address 52:54:00:0c:8c:a4 in network mk-cert-options-582976
	I0501 03:29:37.276578   64773 main.go:141] libmachine: (cert-options-582976) DBG | unable to find current IP address of domain cert-options-582976 in network mk-cert-options-582976
	I0501 03:29:37.276602   64773 main.go:141] libmachine: (cert-options-582976) DBG | I0501 03:29:37.276528   64818 retry.go:31] will retry after 1.44528743s: waiting for machine to come up
	I0501 03:29:35.441442   65002 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0501 03:29:35.441472   65002 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0501 03:29:35.441479   65002 cache.go:56] Caching tarball of preloaded images
	I0501 03:29:35.441558   65002 preload.go:173] Found /home/jenkins/minikube-integration/18779-13391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0501 03:29:35.441569   65002 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0501 03:29:35.441655   65002 profile.go:143] Saving config to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/kubernetes-upgrade-046243/config.json ...
	I0501 03:29:35.441818   65002 start.go:360] acquireMachinesLock for kubernetes-upgrade-046243: {Name:mkc4548e5afe1d4b0a833f6c522103562b0cefff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0501 03:29:36.988626   64715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem --> /usr/share/ca-certificates/20724.pem (1338 bytes)
	I0501 03:29:37.018354   64715 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0501 03:29:37.037952   64715 ssh_runner.go:195] Run: openssl version
	I0501 03:29:37.044468   64715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20724.pem && ln -fs /usr/share/ca-certificates/20724.pem /etc/ssl/certs/20724.pem"
	I0501 03:29:37.056673   64715 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20724.pem
	I0501 03:29:37.061961   64715 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  1 02:20 /usr/share/ca-certificates/20724.pem
	I0501 03:29:37.062019   64715 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20724.pem
	I0501 03:29:37.068657   64715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20724.pem /etc/ssl/certs/51391683.0"
	I0501 03:29:37.079295   64715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/207242.pem && ln -fs /usr/share/ca-certificates/207242.pem /etc/ssl/certs/207242.pem"
	I0501 03:29:37.091655   64715 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/207242.pem
	I0501 03:29:37.098582   64715 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  1 02:20 /usr/share/ca-certificates/207242.pem
	I0501 03:29:37.098640   64715 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/207242.pem
	I0501 03:29:37.107182   64715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/207242.pem /etc/ssl/certs/3ec20f2e.0"
	I0501 03:29:37.123294   64715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0501 03:29:37.143433   64715 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:29:37.177472   64715 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  1 02:08 /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:29:37.177540   64715 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:29:37.216181   64715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
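A minimal sketch of what the CA-install steps above are doing, assuming only the commands visible in the log: each .pem under /usr/share/ca-certificates is hashed with `openssl x509 -hash -noout -in ...` and then force-symlinked as /etc/ssl/certs/<hash>.0. The Go helper below shells out to the same openssl invocation; linkCACert and the hard-coded paths are hypothetical illustrations, not minikube source.

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkCACert computes the OpenSSL subject hash of pemPath and symlinks the
    // file into certsDir as <hash>.0, mirroring the ln -fs calls in the log.
    func linkCACert(pemPath, certsDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join(certsDir, hash+".0")
    	_ = os.Remove(link) // emulate the force (-f) behaviour of ln -fs
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	// Paths taken from the log; adjust for a local experiment.
    	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }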
	I0501 03:29:37.245512   64715 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0501 03:29:37.283624   64715 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0501 03:29:37.358156   64715 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0501 03:29:37.396903   64715 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0501 03:29:37.443765   64715 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0501 03:29:37.540713   64715 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0501 03:29:37.582184   64715 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
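The `-checkend 86400` runs above ask openssl whether each control-plane certificate expires within the next 24 hours (non-zero exit status if it does). A rough standard-library Go equivalent of that check, assuming a readable PEM file, could look like the sketch below; expiresWithin is a hypothetical helper, not minikube's implementation.

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the certificate at path expires inside the
    // given window, the same question `openssl x509 -checkend` answers.
    func expiresWithin(path string, window time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM data in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
    	// Path taken from the log; 86400s == 24h, matching -checkend 86400.
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("expires within 24h:", soon)
    }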
	I0501 03:29:37.683676   64715 kubeadm.go:391] StartCluster: {Name:pause-542495 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 Cl
usterName:pause-542495 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.4 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm
:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 03:29:37.683816   64715 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0501 03:29:37.683881   64715 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0501 03:29:37.918108   64715 cri.go:89] found id: "92a924bfd194ca856d32145f1b416a6feaeb97526d5e7d1f3806a954427b41cc"
	I0501 03:29:37.918130   64715 cri.go:89] found id: "ec898beec5e3d2f8d855ee92cc720b0d9dca6366cf80da972ab7c460dbaebf7b"
	I0501 03:29:37.918135   64715 cri.go:89] found id: "09b229cb407320d3c750597c319b318b1b9632dbff2346820d3561439a3a3115"
	I0501 03:29:37.918139   64715 cri.go:89] found id: "bef611c533a535e2bfd6a2122f40ce235697abe33bff274ad85257f799b3948f"
	I0501 03:29:37.918141   64715 cri.go:89] found id: "1abcf9a16dd7ce05e00be8f82a9c6d7b732ba8a404f903bbc13c658fe6596f99"
	I0501 03:29:37.918144   64715 cri.go:89] found id: "7369fbe67db99f3aecc21062700054a8de9b2c9f0a544c30c58fdb823b3260f3"
	I0501 03:29:37.918146   64715 cri.go:89] found id: "234285fef15e8f904fde8efeae5cdd5a3b89deed909a8d2078b72eb6ac39e7db"
	I0501 03:29:37.918149   64715 cri.go:89] found id: "ffc59df7261fe630a23f5b1e9eff4148c7ab446a1ba00903e19b3dfd9e2e6fea"
	I0501 03:29:37.918151   64715 cri.go:89] found id: ""
	I0501 03:29:37.918220   64715 ssh_runner.go:195] Run: sudo runc list -f json
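The crictl call above collects the IDs of every kube-system container (running or exited) before kubeadm work starts; the "found id" lines are its parsed output. A small, assumed-equivalent Go sketch that runs the same filter locally via os/exec (crictl on PATH, CRI socket already configured) is shown below; it is illustrative only and simply prints each ID in the same style.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Same filter as the ssh_runner command in the log.
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
    		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
    	if err != nil {
    		fmt.Println("crictl failed:", err)
    		return
    	}
    	for _, id := range strings.Fields(string(out)) {
    		fmt.Println("found id:", id)
    	}
    }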
	
	
	==> CRI-O <==
	May 01 03:30:12 pause-542495 crio[2464]: time="2024-05-01 03:30:12.601346467Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e8c663ed-29c2-4d8d-89f0-5f8a73ce64fa name=/runtime.v1.RuntimeService/Version
	May 01 03:30:12 pause-542495 crio[2464]: time="2024-05-01 03:30:12.603425890Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=07605155-7dc1-474e-855d-c88b1e3e9ecc name=/runtime.v1.ImageService/ImageFsInfo
	May 01 03:30:12 pause-542495 crio[2464]: time="2024-05-01 03:30:12.604421408Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714534212604392949,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=07605155-7dc1-474e-855d-c88b1e3e9ecc name=/runtime.v1.ImageService/ImageFsInfo
	May 01 03:30:12 pause-542495 crio[2464]: time="2024-05-01 03:30:12.604906488Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c596e5e9-e6a0-4c56-883c-0d62d5d2105e name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:30:12 pause-542495 crio[2464]: time="2024-05-01 03:30:12.604992476Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c596e5e9-e6a0-4c56-883c-0d62d5d2105e name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:30:12 pause-542495 crio[2464]: time="2024-05-01 03:30:12.605432418Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:16a476e1af3d92a5d2bb28639cec07f414b874fd246a7d9bf61dd4c5a84048ba,PodSandboxId:bb823534975d53f5e5287a5c82dd4018eb82dcced02c7a9d5e28f2713ff893de,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714534194783978189,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x7vrf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f44ac199-32c4-4977-8f63-564a23e4b83e,},Annotations:map[string]string{io.kubernetes.container.hash: 95de7746,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8631b6ae7b505faf1e9286be8a8009e27a2f5cecd55bff0b3b8ae07a794faefa,PodSandboxId:5e6092e45f59702db958d15a6972fa250e320a2d415982ab858d4b5871a4b6fa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714534190957587306,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-542495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24d8d34219349d61e1fe05674be00f92,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38f0cafe7b1950e0819feeddbbc6000076623a8f23896a0272ce7823ff05494d,PodSandboxId:77cd8d76d21dbd1776a97c72d57594093dda71b4b61e6b105c5bb346120ca827,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714534190941467331,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-542495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c29c848e99ee9d176d08ac5bc565db5,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dffc93cfd52f5af64e714cb776f0ac72bf1b49ce8b4419ef46d6adf7e2208982,PodSandboxId:7e2b26855b74308131028514c95e0dd8e69a69b07f3c6ef175c4f57e76021bb6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714534190917113562,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-542495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21ae9f772b4d01fd8d1605b312e4e87d,},Annotations:map[string]string{io.kubernetes.container.hash: f2d18315,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64f8f19ad8d82fc60e6c9b9c36b6e195dc4558956b053319e657b96d9b27b92d,PodSandboxId:ea903bee7754ec901582e9c5121d776d3ee92f9fe23c26c09540543887466778,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714534178464684246,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lz5kj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f2c4209-14df-46c4-abc7-cf93b398a872,},Annotations:map[string]string{io.kubernetes.container.hash: 80d52e3b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"pro
tocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a781211fa39e0e2823e25a0eab0d5a0e06393b10b52df90a08e6b872d5cc505,PodSandboxId:b065fb5908d6a38e1668d1beba7c0050c6ae711205bb22d474ac4a0a2bf82dc5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714534177742854650,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-542495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ff0b7b04ff9d65173c3e81efe74f8d5,},Annotations:map[string]string{io
.kubernetes.container.hash: bde3beb4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d30e5e711023dd3054c30c78cddbaba96016c1d461380e99168233181b94590,PodSandboxId:bb823534975d53f5e5287a5c82dd4018eb82dcced02c7a9d5e28f2713ff893de,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714534177756082963,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x7vrf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f44ac199-32c4-4977-8f63-564a23e4b83e,},Annotations:map[string]string{io.kubernetes.container.hash: 95de77
46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f8f7cca080d5d6cd94a98511e14b930e6b8bee5eee4b99ed932d0626be72bc0,PodSandboxId:7e2b26855b74308131028514c95e0dd8e69a69b07f3c6ef175c4f57e76021bb6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714534177673100995,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-542495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21ae9f772b4d01fd8d1605b312e4e87d,},Annotations:map[string]string{io.kubernetes.container.hash: f2d18315,io.kubernetes.co
ntainer.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92a924bfd194ca856d32145f1b416a6feaeb97526d5e7d1f3806a954427b41cc,PodSandboxId:5e6092e45f59702db958d15a6972fa250e320a2d415982ab858d4b5871a4b6fa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714534177610228665,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-542495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24d8d34219349d61e1fe05674be00f92,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kuber
netes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec898beec5e3d2f8d855ee92cc720b0d9dca6366cf80da972ab7c460dbaebf7b,PodSandboxId:77cd8d76d21dbd1776a97c72d57594093dda71b4b61e6b105c5bb346120ca827,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714534177474387272,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-542495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c29c848e99ee9d176d08ac5bc565db5,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09b229cb407320d3c750597c319b318b1b9632dbff2346820d3561439a3a3115,PodSandboxId:e6a055773e04974c174545c30cc9a5af87b31843e34a1187fb8b87b88e46510f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714534120518515910,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lz5kj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f2c4209-14df-46c4-abc7-cf93b398a872,},Annotations:map[string]string{io.kubernetes.container.hash: 80d52e3b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:234285fef15e8f904fde8efeae5cdd5a3b89deed909a8d2078b72eb6ac39e7db,PodSandboxId:d4a6ff79cf15f1b0d4c08a0f699e9f74220986b8378c356650946fbfda438dd2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714534100017825751,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-542495,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 6ff0b7b04ff9d65173c3e81efe74f8d5,},Annotations:map[string]string{io.kubernetes.container.hash: bde3beb4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c596e5e9-e6a0-4c56-883c-0d62d5d2105e name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:30:12 pause-542495 crio[2464]: time="2024-05-01 03:30:12.663046905Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b9360ddc-a3da-46f0-bffa-bcb6a438a808 name=/runtime.v1.RuntimeService/Version
	May 01 03:30:12 pause-542495 crio[2464]: time="2024-05-01 03:30:12.663238565Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b9360ddc-a3da-46f0-bffa-bcb6a438a808 name=/runtime.v1.RuntimeService/Version
	May 01 03:30:12 pause-542495 crio[2464]: time="2024-05-01 03:30:12.665441433Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c669b749-1662-4587-9d5a-a7dc49b1919e name=/runtime.v1.ImageService/ImageFsInfo
	May 01 03:30:12 pause-542495 crio[2464]: time="2024-05-01 03:30:12.666054258Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714534212665980855,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c669b749-1662-4587-9d5a-a7dc49b1919e name=/runtime.v1.ImageService/ImageFsInfo
	May 01 03:30:12 pause-542495 crio[2464]: time="2024-05-01 03:30:12.666757840Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1a0fcf83-0b50-4aca-a964-aeba881555e5 name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:30:12 pause-542495 crio[2464]: time="2024-05-01 03:30:12.666858932Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1a0fcf83-0b50-4aca-a964-aeba881555e5 name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:30:12 pause-542495 crio[2464]: time="2024-05-01 03:30:12.667483737Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:16a476e1af3d92a5d2bb28639cec07f414b874fd246a7d9bf61dd4c5a84048ba,PodSandboxId:bb823534975d53f5e5287a5c82dd4018eb82dcced02c7a9d5e28f2713ff893de,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714534194783978189,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x7vrf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f44ac199-32c4-4977-8f63-564a23e4b83e,},Annotations:map[string]string{io.kubernetes.container.hash: 95de7746,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8631b6ae7b505faf1e9286be8a8009e27a2f5cecd55bff0b3b8ae07a794faefa,PodSandboxId:5e6092e45f59702db958d15a6972fa250e320a2d415982ab858d4b5871a4b6fa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714534190957587306,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-542495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24d8d34219349d61e1fe05674be00f92,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38f0cafe7b1950e0819feeddbbc6000076623a8f23896a0272ce7823ff05494d,PodSandboxId:77cd8d76d21dbd1776a97c72d57594093dda71b4b61e6b105c5bb346120ca827,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714534190941467331,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-542495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c29c848e99ee9d176d08ac5bc565db5,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dffc93cfd52f5af64e714cb776f0ac72bf1b49ce8b4419ef46d6adf7e2208982,PodSandboxId:7e2b26855b74308131028514c95e0dd8e69a69b07f3c6ef175c4f57e76021bb6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714534190917113562,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-542495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21ae9f772b4d01fd8d1605b312e4e87d,},Annotations:map[string]string{io.kubernetes.container.hash: f2d18315,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64f8f19ad8d82fc60e6c9b9c36b6e195dc4558956b053319e657b96d9b27b92d,PodSandboxId:ea903bee7754ec901582e9c5121d776d3ee92f9fe23c26c09540543887466778,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714534178464684246,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lz5kj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f2c4209-14df-46c4-abc7-cf93b398a872,},Annotations:map[string]string{io.kubernetes.container.hash: 80d52e3b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"pro
tocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a781211fa39e0e2823e25a0eab0d5a0e06393b10b52df90a08e6b872d5cc505,PodSandboxId:b065fb5908d6a38e1668d1beba7c0050c6ae711205bb22d474ac4a0a2bf82dc5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714534177742854650,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-542495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ff0b7b04ff9d65173c3e81efe74f8d5,},Annotations:map[string]string{io
.kubernetes.container.hash: bde3beb4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d30e5e711023dd3054c30c78cddbaba96016c1d461380e99168233181b94590,PodSandboxId:bb823534975d53f5e5287a5c82dd4018eb82dcced02c7a9d5e28f2713ff893de,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714534177756082963,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x7vrf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f44ac199-32c4-4977-8f63-564a23e4b83e,},Annotations:map[string]string{io.kubernetes.container.hash: 95de77
46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f8f7cca080d5d6cd94a98511e14b930e6b8bee5eee4b99ed932d0626be72bc0,PodSandboxId:7e2b26855b74308131028514c95e0dd8e69a69b07f3c6ef175c4f57e76021bb6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714534177673100995,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-542495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21ae9f772b4d01fd8d1605b312e4e87d,},Annotations:map[string]string{io.kubernetes.container.hash: f2d18315,io.kubernetes.co
ntainer.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92a924bfd194ca856d32145f1b416a6feaeb97526d5e7d1f3806a954427b41cc,PodSandboxId:5e6092e45f59702db958d15a6972fa250e320a2d415982ab858d4b5871a4b6fa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714534177610228665,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-542495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24d8d34219349d61e1fe05674be00f92,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kuber
netes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec898beec5e3d2f8d855ee92cc720b0d9dca6366cf80da972ab7c460dbaebf7b,PodSandboxId:77cd8d76d21dbd1776a97c72d57594093dda71b4b61e6b105c5bb346120ca827,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714534177474387272,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-542495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c29c848e99ee9d176d08ac5bc565db5,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09b229cb407320d3c750597c319b318b1b9632dbff2346820d3561439a3a3115,PodSandboxId:e6a055773e04974c174545c30cc9a5af87b31843e34a1187fb8b87b88e46510f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714534120518515910,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lz5kj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f2c4209-14df-46c4-abc7-cf93b398a872,},Annotations:map[string]string{io.kubernetes.container.hash: 80d52e3b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:234285fef15e8f904fde8efeae5cdd5a3b89deed909a8d2078b72eb6ac39e7db,PodSandboxId:d4a6ff79cf15f1b0d4c08a0f699e9f74220986b8378c356650946fbfda438dd2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714534100017825751,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-542495,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 6ff0b7b04ff9d65173c3e81efe74f8d5,},Annotations:map[string]string{io.kubernetes.container.hash: bde3beb4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1a0fcf83-0b50-4aca-a964-aeba881555e5 name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:30:12 pause-542495 crio[2464]: time="2024-05-01 03:30:12.680871084Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=3ba650d8-b03b-4daf-b9f4-352d8f33aafc name=/runtime.v1.RuntimeService/ListPodSandbox
	May 01 03:30:12 pause-542495 crio[2464]: time="2024-05-01 03:30:12.681329925Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:ea903bee7754ec901582e9c5121d776d3ee92f9fe23c26c09540543887466778,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-lz5kj,Uid:1f2c4209-14df-46c4-abc7-cf93b398a872,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1714534177452431691,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-lz5kj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f2c4209-14df-46c4-abc7-cf93b398a872,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-01T03:28:39.468614935Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:bb823534975d53f5e5287a5c82dd4018eb82dcced02c7a9d5e28f2713ff893de,Metadata:&PodSandboxMetadata{Name:kube-proxy-x7vrf,Uid:f44ac199-32c4-4977-8f63-564a23e4b83e,Namespace:kube-system,Attempt
:1,},State:SANDBOX_READY,CreatedAt:1714534177231626602,Labels:map[string]string{controller-revision-hash: 79cf874c65,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-x7vrf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f44ac199-32c4-4977-8f63-564a23e4b83e,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-01T03:28:39.130392083Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5e6092e45f59702db958d15a6972fa250e320a2d415982ab858d4b5871a4b6fa,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-542495,Uid:24d8d34219349d61e1fe05674be00f92,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1714534177206753532,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-542495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24d8d34219349d61e1fe05674be00f92,tier: control-pla
ne,},Annotations:map[string]string{kubernetes.io/config.hash: 24d8d34219349d61e1fe05674be00f92,kubernetes.io/config.seen: 2024-05-01T03:28:25.700751265Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b065fb5908d6a38e1668d1beba7c0050c6ae711205bb22d474ac4a0a2bf82dc5,Metadata:&PodSandboxMetadata{Name:etcd-pause-542495,Uid:6ff0b7b04ff9d65173c3e81efe74f8d5,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1714534177198454552,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-542495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ff0b7b04ff9d65173c3e81efe74f8d5,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.4:2379,kubernetes.io/config.hash: 6ff0b7b04ff9d65173c3e81efe74f8d5,kubernetes.io/config.seen: 2024-05-01T03:28:25.700746206Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:7e2b26855b74308131028514c95e0dd8e69
a69b07f3c6ef175c4f57e76021bb6,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-542495,Uid:21ae9f772b4d01fd8d1605b312e4e87d,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1714534177175949676,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-542495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21ae9f772b4d01fd8d1605b312e4e87d,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.4:8443,kubernetes.io/config.hash: 21ae9f772b4d01fd8d1605b312e4e87d,kubernetes.io/config.seen: 2024-05-01T03:28:25.700750150Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:77cd8d76d21dbd1776a97c72d57594093dda71b4b61e6b105c5bb346120ca827,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-542495,Uid:3c29c848e99ee9d176d08ac5bc565db5,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1714534177169356694,Labels:
map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-542495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c29c848e99ee9d176d08ac5bc565db5,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 3c29c848e99ee9d176d08ac5bc565db5,kubernetes.io/config.seen: 2024-05-01T03:28:25.700752070Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a30560dea241f13b0060f5635a2789bf9946b452ee477d59d24add789d027998,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-wt45j,Uid:2ddabe27-aea2-41e4-b3b7-dcb59e5a4ca8,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1714534120090012802,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-wt45j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ddabe27-aea2-41e4-b3b7-dcb59e5a4ca8,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/con
fig.seen: 2024-05-01T03:28:39.435928208Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e6a055773e04974c174545c30cc9a5af87b31843e34a1187fb8b87b88e46510f,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-lz5kj,Uid:1f2c4209-14df-46c4-abc7-cf93b398a872,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1714534119787479784,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-lz5kj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f2c4209-14df-46c4-abc7-cf93b398a872,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-01T03:28:39.468614935Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d4a6ff79cf15f1b0d4c08a0f699e9f74220986b8378c356650946fbfda438dd2,Metadata:&PodSandboxMetadata{Name:etcd-pause-542495,Uid:6ff0b7b04ff9d65173c3e81efe74f8d5,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1714534099692664773,Labels:
map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-542495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ff0b7b04ff9d65173c3e81efe74f8d5,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.4:2379,kubernetes.io/config.hash: 6ff0b7b04ff9d65173c3e81efe74f8d5,kubernetes.io/config.seen: 2024-05-01T03:28:19.179419399Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=3ba650d8-b03b-4daf-b9f4-352d8f33aafc name=/runtime.v1.RuntimeService/ListPodSandbox
	May 01 03:30:12 pause-542495 crio[2464]: time="2024-05-01 03:30:12.683087569Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7b2bd4f6-6e6d-481d-9aab-a4fb9aae80a0 name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:30:12 pause-542495 crio[2464]: time="2024-05-01 03:30:12.683260047Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7b2bd4f6-6e6d-481d-9aab-a4fb9aae80a0 name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:30:12 pause-542495 crio[2464]: time="2024-05-01 03:30:12.684258831Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:16a476e1af3d92a5d2bb28639cec07f414b874fd246a7d9bf61dd4c5a84048ba,PodSandboxId:bb823534975d53f5e5287a5c82dd4018eb82dcced02c7a9d5e28f2713ff893de,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714534194783978189,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x7vrf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f44ac199-32c4-4977-8f63-564a23e4b83e,},Annotations:map[string]string{io.kubernetes.container.hash: 95de7746,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8631b6ae7b505faf1e9286be8a8009e27a2f5cecd55bff0b3b8ae07a794faefa,PodSandboxId:5e6092e45f59702db958d15a6972fa250e320a2d415982ab858d4b5871a4b6fa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714534190957587306,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-542495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24d8d34219349d61e1fe05674be00f92,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38f0cafe7b1950e0819feeddbbc6000076623a8f23896a0272ce7823ff05494d,PodSandboxId:77cd8d76d21dbd1776a97c72d57594093dda71b4b61e6b105c5bb346120ca827,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714534190941467331,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-542495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c29c848e99ee9d176d08ac5bc565db5,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dffc93cfd52f5af64e714cb776f0ac72bf1b49ce8b4419ef46d6adf7e2208982,PodSandboxId:7e2b26855b74308131028514c95e0dd8e69a69b07f3c6ef175c4f57e76021bb6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714534190917113562,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-542495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21ae9f772b4d01fd8d1605b312e4e87d,},Annotations:map[string]string{io.kubernetes.container.hash: f2d18315,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64f8f19ad8d82fc60e6c9b9c36b6e195dc4558956b053319e657b96d9b27b92d,PodSandboxId:ea903bee7754ec901582e9c5121d776d3ee92f9fe23c26c09540543887466778,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714534178464684246,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lz5kj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f2c4209-14df-46c4-abc7-cf93b398a872,},Annotations:map[string]string{io.kubernetes.container.hash: 80d52e3b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"pro
tocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a781211fa39e0e2823e25a0eab0d5a0e06393b10b52df90a08e6b872d5cc505,PodSandboxId:b065fb5908d6a38e1668d1beba7c0050c6ae711205bb22d474ac4a0a2bf82dc5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714534177742854650,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-542495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ff0b7b04ff9d65173c3e81efe74f8d5,},Annotations:map[string]string{io
.kubernetes.container.hash: bde3beb4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d30e5e711023dd3054c30c78cddbaba96016c1d461380e99168233181b94590,PodSandboxId:bb823534975d53f5e5287a5c82dd4018eb82dcced02c7a9d5e28f2713ff893de,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714534177756082963,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x7vrf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f44ac199-32c4-4977-8f63-564a23e4b83e,},Annotations:map[string]string{io.kubernetes.container.hash: 95de77
46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f8f7cca080d5d6cd94a98511e14b930e6b8bee5eee4b99ed932d0626be72bc0,PodSandboxId:7e2b26855b74308131028514c95e0dd8e69a69b07f3c6ef175c4f57e76021bb6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714534177673100995,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-542495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21ae9f772b4d01fd8d1605b312e4e87d,},Annotations:map[string]string{io.kubernetes.container.hash: f2d18315,io.kubernetes.co
ntainer.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92a924bfd194ca856d32145f1b416a6feaeb97526d5e7d1f3806a954427b41cc,PodSandboxId:5e6092e45f59702db958d15a6972fa250e320a2d415982ab858d4b5871a4b6fa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714534177610228665,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-542495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24d8d34219349d61e1fe05674be00f92,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kuber
netes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec898beec5e3d2f8d855ee92cc720b0d9dca6366cf80da972ab7c460dbaebf7b,PodSandboxId:77cd8d76d21dbd1776a97c72d57594093dda71b4b61e6b105c5bb346120ca827,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714534177474387272,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-542495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c29c848e99ee9d176d08ac5bc565db5,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09b229cb407320d3c750597c319b318b1b9632dbff2346820d3561439a3a3115,PodSandboxId:e6a055773e04974c174545c30cc9a5af87b31843e34a1187fb8b87b88e46510f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714534120518515910,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lz5kj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f2c4209-14df-46c4-abc7-cf93b398a872,},Annotations:map[string]string{io.kubernetes.container.hash: 80d52e3b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:234285fef15e8f904fde8efeae5cdd5a3b89deed909a8d2078b72eb6ac39e7db,PodSandboxId:d4a6ff79cf15f1b0d4c08a0f699e9f74220986b8378c356650946fbfda438dd2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714534100017825751,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-542495,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 6ff0b7b04ff9d65173c3e81efe74f8d5,},Annotations:map[string]string{io.kubernetes.container.hash: bde3beb4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7b2bd4f6-6e6d-481d-9aab-a4fb9aae80a0 name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:30:12 pause-542495 crio[2464]: time="2024-05-01 03:30:12.729039743Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b5eb684f-3e92-4d68-af4a-b77636c018aa name=/runtime.v1.RuntimeService/Version
	May 01 03:30:12 pause-542495 crio[2464]: time="2024-05-01 03:30:12.729136116Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b5eb684f-3e92-4d68-af4a-b77636c018aa name=/runtime.v1.RuntimeService/Version
	May 01 03:30:12 pause-542495 crio[2464]: time="2024-05-01 03:30:12.731673031Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6268fab7-5136-42a0-bd95-3a11c1363768 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 03:30:12 pause-542495 crio[2464]: time="2024-05-01 03:30:12.732260931Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714534212732225038,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6268fab7-5136-42a0-bd95-3a11c1363768 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 03:30:12 pause-542495 crio[2464]: time="2024-05-01 03:30:12.733291084Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c2ed8053-f4bb-4a2c-8361-c772be627973 name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:30:12 pause-542495 crio[2464]: time="2024-05-01 03:30:12.733365009Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c2ed8053-f4bb-4a2c-8361-c772be627973 name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:30:12 pause-542495 crio[2464]: time="2024-05-01 03:30:12.733771831Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:16a476e1af3d92a5d2bb28639cec07f414b874fd246a7d9bf61dd4c5a84048ba,PodSandboxId:bb823534975d53f5e5287a5c82dd4018eb82dcced02c7a9d5e28f2713ff893de,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714534194783978189,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x7vrf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f44ac199-32c4-4977-8f63-564a23e4b83e,},Annotations:map[string]string{io.kubernetes.container.hash: 95de7746,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8631b6ae7b505faf1e9286be8a8009e27a2f5cecd55bff0b3b8ae07a794faefa,PodSandboxId:5e6092e45f59702db958d15a6972fa250e320a2d415982ab858d4b5871a4b6fa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714534190957587306,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-542495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24d8d34219349d61e1fe05674be00f92,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38f0cafe7b1950e0819feeddbbc6000076623a8f23896a0272ce7823ff05494d,PodSandboxId:77cd8d76d21dbd1776a97c72d57594093dda71b4b61e6b105c5bb346120ca827,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714534190941467331,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-542495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c29c848e99ee9d176d08ac5bc565db5,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dffc93cfd52f5af64e714cb776f0ac72bf1b49ce8b4419ef46d6adf7e2208982,PodSandboxId:7e2b26855b74308131028514c95e0dd8e69a69b07f3c6ef175c4f57e76021bb6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714534190917113562,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-542495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21ae9f772b4d01fd8d1605b312e4e87d,},Annotations:map[string]string{io.kubernetes.container.hash: f2d18315,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64f8f19ad8d82fc60e6c9b9c36b6e195dc4558956b053319e657b96d9b27b92d,PodSandboxId:ea903bee7754ec901582e9c5121d776d3ee92f9fe23c26c09540543887466778,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714534178464684246,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lz5kj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f2c4209-14df-46c4-abc7-cf93b398a872,},Annotations:map[string]string{io.kubernetes.container.hash: 80d52e3b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"pro
tocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a781211fa39e0e2823e25a0eab0d5a0e06393b10b52df90a08e6b872d5cc505,PodSandboxId:b065fb5908d6a38e1668d1beba7c0050c6ae711205bb22d474ac4a0a2bf82dc5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714534177742854650,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-542495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ff0b7b04ff9d65173c3e81efe74f8d5,},Annotations:map[string]string{io
.kubernetes.container.hash: bde3beb4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d30e5e711023dd3054c30c78cddbaba96016c1d461380e99168233181b94590,PodSandboxId:bb823534975d53f5e5287a5c82dd4018eb82dcced02c7a9d5e28f2713ff893de,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714534177756082963,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x7vrf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f44ac199-32c4-4977-8f63-564a23e4b83e,},Annotations:map[string]string{io.kubernetes.container.hash: 95de77
46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f8f7cca080d5d6cd94a98511e14b930e6b8bee5eee4b99ed932d0626be72bc0,PodSandboxId:7e2b26855b74308131028514c95e0dd8e69a69b07f3c6ef175c4f57e76021bb6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714534177673100995,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-542495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21ae9f772b4d01fd8d1605b312e4e87d,},Annotations:map[string]string{io.kubernetes.container.hash: f2d18315,io.kubernetes.co
ntainer.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92a924bfd194ca856d32145f1b416a6feaeb97526d5e7d1f3806a954427b41cc,PodSandboxId:5e6092e45f59702db958d15a6972fa250e320a2d415982ab858d4b5871a4b6fa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714534177610228665,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-542495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24d8d34219349d61e1fe05674be00f92,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kuber
netes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec898beec5e3d2f8d855ee92cc720b0d9dca6366cf80da972ab7c460dbaebf7b,PodSandboxId:77cd8d76d21dbd1776a97c72d57594093dda71b4b61e6b105c5bb346120ca827,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714534177474387272,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-542495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c29c848e99ee9d176d08ac5bc565db5,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09b229cb407320d3c750597c319b318b1b9632dbff2346820d3561439a3a3115,PodSandboxId:e6a055773e04974c174545c30cc9a5af87b31843e34a1187fb8b87b88e46510f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714534120518515910,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lz5kj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f2c4209-14df-46c4-abc7-cf93b398a872,},Annotations:map[string]string{io.kubernetes.container.hash: 80d52e3b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:234285fef15e8f904fde8efeae5cdd5a3b89deed909a8d2078b72eb6ac39e7db,PodSandboxId:d4a6ff79cf15f1b0d4c08a0f699e9f74220986b8378c356650946fbfda438dd2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714534100017825751,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-542495,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 6ff0b7b04ff9d65173c3e81efe74f8d5,},Annotations:map[string]string{io.kubernetes.container.hash: bde3beb4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c2ed8053-f4bb-4a2c-8361-c772be627973 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	16a476e1af3d9       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b   18 seconds ago       Running             kube-proxy                2                   bb823534975d5       kube-proxy-x7vrf
	8631b6ae7b505       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b   21 seconds ago       Running             kube-controller-manager   2                   5e6092e45f597       kube-controller-manager-pause-542495
	38f0cafe7b195       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced   21 seconds ago       Running             kube-scheduler            2                   77cd8d76d21db       kube-scheduler-pause-542495
	dffc93cfd52f5       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0   21 seconds ago       Running             kube-apiserver            2                   7e2b26855b743       kube-apiserver-pause-542495
	64f8f19ad8d82       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   34 seconds ago       Running             coredns                   1                   ea903bee7754e       coredns-7db6d8ff4d-lz5kj
	4d30e5e711023       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b   35 seconds ago       Exited              kube-proxy                1                   bb823534975d5       kube-proxy-x7vrf
	4a781211fa39e       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   35 seconds ago       Running             etcd                      1                   b065fb5908d6a       etcd-pause-542495
	4f8f7cca080d5       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0   35 seconds ago       Exited              kube-apiserver            1                   7e2b26855b743       kube-apiserver-pause-542495
	92a924bfd194c       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b   35 seconds ago       Exited              kube-controller-manager   1                   5e6092e45f597       kube-controller-manager-pause-542495
	ec898beec5e3d       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced   35 seconds ago       Exited              kube-scheduler            1                   77cd8d76d21db       kube-scheduler-pause-542495
	09b229cb40732       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   About a minute ago   Exited              coredns                   0                   e6a055773e049       coredns-7db6d8ff4d-lz5kj
	234285fef15e8       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   About a minute ago   Exited              etcd                      0                   d4a6ff79cf15f       etcd-pause-542495
	
	
	==> coredns [09b229cb407320d3c750597c319b318b1b9632dbff2346820d3561439a3a3115] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] Reloading
	[INFO] plugin/kubernetes: Trace[1734290231]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (01-May-2024 03:28:40.932) (total time: 29625ms):
	Trace[1734290231]: ---"Objects listed" error:<nil> 29625ms (03:29:10.558)
	Trace[1734290231]: [29.625622789s] [29.625622789s] END
	[INFO] plugin/kubernetes: Trace[1299550197]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (01-May-2024 03:28:40.936) (total time: 29624ms):
	Trace[1299550197]: ---"Objects listed" error:<nil> 29624ms (03:29:10.560)
	Trace[1299550197]: [29.624339339s] [29.624339339s] END
	[INFO] plugin/kubernetes: Trace[353635498]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (01-May-2024 03:28:40.930) (total time: 29629ms):
	Trace[353635498]: ---"Objects listed" error:<nil> 29629ms (03:29:10.559)
	Trace[353635498]: [29.629179957s] [29.629179957s] END
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	[INFO] Reloading complete
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [64f8f19ad8d82fc60e6c9b9c36b6e195dc4558956b053319e657b96d9b27b92d] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:41508 - 171 "HINFO IN 4228483881059704764.9065488545482063048. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014385551s
	
	
	==> describe nodes <==
	Name:               pause-542495
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-542495
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e
	                    minikube.k8s.io/name=pause-542495
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_01T03_28_26_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 May 2024 03:28:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-542495
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 May 2024 03:30:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 May 2024 03:29:53 +0000   Wed, 01 May 2024 03:28:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 May 2024 03:29:53 +0000   Wed, 01 May 2024 03:28:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 May 2024 03:29:53 +0000   Wed, 01 May 2024 03:28:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 May 2024 03:29:53 +0000   Wed, 01 May 2024 03:28:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.4
	  Hostname:    pause-542495
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 33eec01f441f4461b3781ce5458f5f42
	  System UUID:                33eec01f-441f-4461-b378-1ce5458f5f42
	  Boot ID:                    d7068520-f459-4ab5-a6ed-cf8b2d2001c2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-lz5kj                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     94s
	  kube-system                 etcd-pause-542495                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         108s
	  kube-system                 kube-apiserver-pause-542495             250m (12%)    0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-controller-manager-pause-542495    200m (10%)    0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-proxy-x7vrf                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 kube-scheduler-pause-542495             100m (5%)     0 (0%)      0 (0%)           0 (0%)         108s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 92s                kube-proxy       
	  Normal  Starting                 18s                kube-proxy       
	  Normal  NodeHasSufficientPID     108s               kubelet          Node pause-542495 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  108s               kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  108s               kubelet          Node pause-542495 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    108s               kubelet          Node pause-542495 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 108s               kubelet          Starting kubelet.
	  Normal  NodeReady                107s               kubelet          Node pause-542495 status is now: NodeReady
	  Normal  RegisteredNode           95s                node-controller  Node pause-542495 event: Registered Node pause-542495 in Controller
	  Normal  Starting                 23s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  23s (x8 over 23s)  kubelet          Node pause-542495 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s (x8 over 23s)  kubelet          Node pause-542495 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s (x7 over 23s)  kubelet          Node pause-542495 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7s                 node-controller  Node pause-542495 event: Registered Node pause-542495 in Controller
	
	
	==> dmesg <==
	[  +0.067651] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.073784] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.210561] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.162726] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.349448] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +5.417409] systemd-fstab-generator[762]: Ignoring "noauto" option for root device
	[  +0.069911] kauditd_printk_skb: 130 callbacks suppressed
	[  +5.098175] systemd-fstab-generator[945]: Ignoring "noauto" option for root device
	[  +0.081018] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.519234] kauditd_printk_skb: 69 callbacks suppressed
	[  +1.494240] systemd-fstab-generator[1280]: Ignoring "noauto" option for root device
	[ +14.053708] systemd-fstab-generator[1493]: Ignoring "noauto" option for root device
	[  +0.106677] kauditd_printk_skb: 15 callbacks suppressed
	[ +11.422587] kauditd_printk_skb: 88 callbacks suppressed
	[May 1 03:29] systemd-fstab-generator[2382]: Ignoring "noauto" option for root device
	[  +0.146431] systemd-fstab-generator[2394]: Ignoring "noauto" option for root device
	[  +0.172592] systemd-fstab-generator[2408]: Ignoring "noauto" option for root device
	[  +0.145862] systemd-fstab-generator[2420]: Ignoring "noauto" option for root device
	[  +0.302508] systemd-fstab-generator[2448]: Ignoring "noauto" option for root device
	[  +6.146125] systemd-fstab-generator[2574]: Ignoring "noauto" option for root device
	[  +0.073776] kauditd_printk_skb: 100 callbacks suppressed
	[ +13.544413] systemd-fstab-generator[3295]: Ignoring "noauto" option for root device
	[  +0.081654] kauditd_printk_skb: 86 callbacks suppressed
	[May 1 03:30] kauditd_printk_skb: 41 callbacks suppressed
	[  +2.032217] systemd-fstab-generator[3664]: Ignoring "noauto" option for root device
	
	
	==> etcd [234285fef15e8f904fde8efeae5cdd5a3b89deed909a8d2078b72eb6ac39e7db] <==
	{"level":"warn","ts":"2024-05-01T03:29:20.607766Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-01T03:29:20.14579Z","time spent":"461.921134ms","remote":"127.0.0.1:36648","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3830,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/replicasets/kube-system/coredns-7db6d8ff4d\" mod_revision:354 > success:<request_put:<key:\"/registry/replicasets/kube-system/coredns-7db6d8ff4d\" value_size:3770 >> failure:<request_range:<key:\"/registry/replicasets/kube-system/coredns-7db6d8ff4d\" > >"}
	{"level":"info","ts":"2024-05-01T03:29:20.607974Z","caller":"traceutil/trace.go:171","msg":"trace[1165779626] linearizableReadLoop","detail":"{readStateIndex:419; appliedIndex:418; }","duration":"503.581925ms","start":"2024-05-01T03:29:20.104378Z","end":"2024-05-01T03:29:20.60796Z","steps":["trace[1165779626] 'read index received'  (duration: 31.168329ms)","trace[1165779626] 'applied index is now lower than readState.Index'  (duration: 472.412614ms)"],"step_count":2}
	{"level":"warn","ts":"2024-05-01T03:29:20.608125Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"503.737256ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-7db6d8ff4d-lz5kj\" ","response":"range_response_count:1 size:4727"}
	{"level":"info","ts":"2024-05-01T03:29:20.608146Z","caller":"traceutil/trace.go:171","msg":"trace[141507368] range","detail":"{range_begin:/registry/pods/kube-system/coredns-7db6d8ff4d-lz5kj; range_end:; response_count:1; response_revision:401; }","duration":"503.785708ms","start":"2024-05-01T03:29:20.104354Z","end":"2024-05-01T03:29:20.60814Z","steps":["trace[141507368] 'agreement among raft nodes before linearized reading'  (duration: 503.675211ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-01T03:29:20.60823Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-01T03:29:20.104339Z","time spent":"503.880146ms","remote":"127.0.0.1:36388","response type":"/etcdserverpb.KV/Range","request count":0,"request size":53,"response count":1,"response size":4749,"request content":"key:\"/registry/pods/kube-system/coredns-7db6d8ff4d-lz5kj\" "}
	{"level":"info","ts":"2024-05-01T03:29:20.60841Z","caller":"traceutil/trace.go:171","msg":"trace[485444998] transaction","detail":"{read_only:false; response_revision:399; number_of_response:1; }","duration":"464.73813ms","start":"2024-05-01T03:29:20.143665Z","end":"2024-05-01T03:29:20.608403Z","steps":["trace[485444998] 'process raft request'  (duration: 463.738342ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-01T03:29:20.608458Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-01T03:29:20.143644Z","time spent":"464.782963ms","remote":"127.0.0.1:36500","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1298,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/endpointslices/kube-system/kube-dns-5mmmp\" mod_revision:383 > success:<request_put:<key:\"/registry/endpointslices/kube-system/kube-dns-5mmmp\" value_size:1239 >> failure:<request_range:<key:\"/registry/endpointslices/kube-system/kube-dns-5mmmp\" > >"}
	{"level":"info","ts":"2024-05-01T03:29:20.608631Z","caller":"traceutil/trace.go:171","msg":"trace[264506834] transaction","detail":"{read_only:false; response_revision:400; number_of_response:1; }","duration":"464.91765ms","start":"2024-05-01T03:29:20.143707Z","end":"2024-05-01T03:29:20.608625Z","steps":["trace[264506834] 'process raft request'  (duration: 463.785301ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-01T03:29:20.608706Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-01T03:29:20.143644Z","time spent":"465.033903ms","remote":"127.0.0.1:36372","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":785,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/kube-dns\" mod_revision:372 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/kube-dns\" value_size:728 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/kube-dns\" > >"}
	{"level":"info","ts":"2024-05-01T03:29:20.814361Z","caller":"traceutil/trace.go:171","msg":"trace[1450256460] linearizableReadLoop","detail":"{readStateIndex:423; appliedIndex:422; }","duration":"182.704497ms","start":"2024-05-01T03:29:20.63164Z","end":"2024-05-01T03:29:20.814344Z","steps":["trace[1450256460] 'read index received'  (duration: 177.268719ms)","trace[1450256460] 'applied index is now lower than readState.Index'  (duration: 5.434973ms)"],"step_count":2}
	{"level":"warn","ts":"2024-05-01T03:29:20.814718Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"169.859676ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-05-01T03:29:20.814792Z","caller":"traceutil/trace.go:171","msg":"trace[507891174] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:402; }","duration":"169.968198ms","start":"2024-05-01T03:29:20.644809Z","end":"2024-05-01T03:29:20.814777Z","steps":["trace[507891174] 'agreement among raft nodes before linearized reading'  (duration: 169.864156ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-01T03:29:20.815093Z","caller":"traceutil/trace.go:171","msg":"trace[1846494894] transaction","detail":"{read_only:false; response_revision:402; number_of_response:1; }","duration":"187.987166ms","start":"2024-05-01T03:29:20.627095Z","end":"2024-05-01T03:29:20.815082Z","steps":["trace[1846494894] 'process raft request'  (duration: 181.869076ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-01T03:29:20.814718Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"183.06177ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/pause-542495\" ","response":"range_response_count:1 size:5425"}
	{"level":"info","ts":"2024-05-01T03:29:20.815464Z","caller":"traceutil/trace.go:171","msg":"trace[1732650769] range","detail":"{range_begin:/registry/minions/pause-542495; range_end:; response_count:1; response_revision:402; }","duration":"183.832956ms","start":"2024-05-01T03:29:20.63162Z","end":"2024-05-01T03:29:20.815453Z","steps":["trace[1732650769] 'agreement among raft nodes before linearized reading'  (duration: 183.032006ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-01T03:29:23.521796Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-05-01T03:29:23.521859Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"pause-542495","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.4:2380"],"advertise-client-urls":["https://192.168.39.4:2379"]}
	{"level":"warn","ts":"2024-05-01T03:29:23.52195Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-01T03:29:23.52207Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-01T03:29:23.596132Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-01T03:29:23.59625Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.4:2379: use of closed network connection"}
	{"level":"info","ts":"2024-05-01T03:29:23.596333Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7ab0973fa604e492","current-leader-member-id":"7ab0973fa604e492"}
	{"level":"info","ts":"2024-05-01T03:29:23.598822Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.4:2380"}
	{"level":"info","ts":"2024-05-01T03:29:23.59893Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.4:2380"}
	{"level":"info","ts":"2024-05-01T03:29:23.598942Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"pause-542495","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.4:2380"],"advertise-client-urls":["https://192.168.39.4:2379"]}
	
	
	==> etcd [4a781211fa39e0e2823e25a0eab0d5a0e06393b10b52df90a08e6b872d5cc505] <==
	{"level":"info","ts":"2024-05-01T03:29:38.851859Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-01T03:29:38.846412Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.4:2380"}
	{"level":"info","ts":"2024-05-01T03:29:38.857482Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.4:2380"}
	{"level":"info","ts":"2024-05-01T03:29:38.849924Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-05-01T03:29:40.29435Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7ab0973fa604e492 is starting a new election at term 2"}
	{"level":"info","ts":"2024-05-01T03:29:40.294384Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7ab0973fa604e492 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-05-01T03:29:40.294412Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7ab0973fa604e492 received MsgPreVoteResp from 7ab0973fa604e492 at term 2"}
	{"level":"info","ts":"2024-05-01T03:29:40.294425Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7ab0973fa604e492 became candidate at term 3"}
	{"level":"info","ts":"2024-05-01T03:29:40.29443Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7ab0973fa604e492 received MsgVoteResp from 7ab0973fa604e492 at term 3"}
	{"level":"info","ts":"2024-05-01T03:29:40.294438Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7ab0973fa604e492 became leader at term 3"}
	{"level":"info","ts":"2024-05-01T03:29:40.294445Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7ab0973fa604e492 elected leader 7ab0973fa604e492 at term 3"}
	{"level":"info","ts":"2024-05-01T03:29:40.30096Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"7ab0973fa604e492","local-member-attributes":"{Name:pause-542495 ClientURLs:[https://192.168.39.4:2379]}","request-path":"/0/members/7ab0973fa604e492/attributes","cluster-id":"6b117bdc86acb526","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-01T03:29:40.300995Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-01T03:29:40.301318Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-01T03:29:40.302967Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-05-01T03:29:40.304506Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.4:2379"}
	{"level":"info","ts":"2024-05-01T03:29:40.304652Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-01T03:29:40.304668Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-01T03:30:01.50598Z","caller":"traceutil/trace.go:171","msg":"trace[37036908] transaction","detail":"{read_only:false; response_revision:470; number_of_response:1; }","duration":"356.715839ms","start":"2024-05-01T03:30:01.149244Z","end":"2024-05-01T03:30:01.505959Z","steps":["trace[37036908] 'process raft request'  (duration: 356.411614ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-01T03:30:01.510311Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-01T03:30:01.149139Z","time spent":"358.246103ms","remote":"127.0.0.1:37650","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":6576,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-controller-manager-pause-542495\" mod_revision:419 > success:<request_put:<key:\"/registry/pods/kube-system/kube-controller-manager-pause-542495\" value_size:6505 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-controller-manager-pause-542495\" > >"}
	{"level":"warn","ts":"2024-05-01T03:30:02.005223Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"375.221798ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-controller-manager-pause-542495\" ","response":"range_response_count:1 size:6591"}
	{"level":"info","ts":"2024-05-01T03:30:02.005451Z","caller":"traceutil/trace.go:171","msg":"trace[1835108747] range","detail":"{range_begin:/registry/pods/kube-system/kube-controller-manager-pause-542495; range_end:; response_count:1; response_revision:470; }","duration":"375.533196ms","start":"2024-05-01T03:30:01.629887Z","end":"2024-05-01T03:30:02.00542Z","steps":["trace[1835108747] 'range keys from in-memory index tree'  (duration: 375.138384ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-01T03:30:02.00552Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-01T03:30:01.629868Z","time spent":"375.633384ms","remote":"127.0.0.1:37650","response type":"/etcdserverpb.KV/Range","request count":0,"request size":65,"response count":1,"response size":6613,"request content":"key:\"/registry/pods/kube-system/kube-controller-manager-pause-542495\" "}
	{"level":"info","ts":"2024-05-01T03:30:02.203883Z","caller":"traceutil/trace.go:171","msg":"trace[204837190] transaction","detail":"{read_only:false; response_revision:471; number_of_response:1; }","duration":"181.362279ms","start":"2024-05-01T03:30:02.022499Z","end":"2024-05-01T03:30:02.203861Z","steps":["trace[204837190] 'process raft request'  (duration: 181.219461ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-01T03:30:04.485058Z","caller":"traceutil/trace.go:171","msg":"trace[416552853] transaction","detail":"{read_only:false; response_revision:473; number_of_response:1; }","duration":"240.101055ms","start":"2024-05-01T03:30:04.244936Z","end":"2024-05-01T03:30:04.485037Z","steps":["trace[416552853] 'process raft request'  (duration: 172.088536ms)","trace[416552853] 'compare'  (duration: 67.893847ms)"],"step_count":2}
	
	
	==> kernel <==
	 03:30:13 up 2 min,  0 users,  load average: 0.87, 0.32, 0.12
	Linux pause-542495 5.10.207 #1 SMP Tue Apr 30 22:38:43 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [4f8f7cca080d5d6cd94a98511e14b930e6b8bee5eee4b99ed932d0626be72bc0] <==
	I0501 03:29:41.911802       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I0501 03:29:41.927500       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0501 03:29:41.927565       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0501 03:29:41.927620       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0501 03:29:41.929310       1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0501 03:29:41.937912       1 controller.go:157] Shutting down quota evaluator
	I0501 03:29:41.938129       1 controller.go:176] quota evaluator worker shutdown
	I0501 03:29:41.938293       1 controller.go:176] quota evaluator worker shutdown
	I0501 03:29:41.938353       1 controller.go:176] quota evaluator worker shutdown
	I0501 03:29:41.938460       1 controller.go:176] quota evaluator worker shutdown
	I0501 03:29:41.938606       1 controller.go:176] quota evaluator worker shutdown
	E0501 03:29:42.663784       1 storage_rbac.go:187] unable to initialize clusterroles: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles": dial tcp 127.0.0.1:8443: connect: connection refused
	W0501 03:29:42.666660       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0501 03:29:43.662925       1 storage_rbac.go:187] unable to initialize clusterroles: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles": dial tcp 127.0.0.1:8443: connect: connection refused
	W0501 03:29:43.665466       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0501 03:29:44.663373       1 storage_rbac.go:187] unable to initialize clusterroles: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles": dial tcp 127.0.0.1:8443: connect: connection refused
	W0501 03:29:44.665744       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0501 03:29:45.662707       1 storage_rbac.go:187] unable to initialize clusterroles: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles": dial tcp 127.0.0.1:8443: connect: connection refused
	W0501 03:29:45.665731       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0501 03:29:46.662822       1 storage_rbac.go:187] unable to initialize clusterroles: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles": dial tcp 127.0.0.1:8443: connect: connection refused
	W0501 03:29:46.666578       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0501 03:29:47.663081       1 storage_rbac.go:187] unable to initialize clusterroles: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles": dial tcp 127.0.0.1:8443: connect: connection refused
	W0501 03:29:47.665860       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0501 03:29:48.662793       1 storage_rbac.go:187] unable to initialize clusterroles: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles": dial tcp 127.0.0.1:8443: connect: connection refused
	W0501 03:29:48.665768       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	
	
	==> kube-apiserver [dffc93cfd52f5af64e714cb776f0ac72bf1b49ce8b4419ef46d6adf7e2208982] <==
	I0501 03:29:53.780481       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0501 03:29:53.780510       1 aggregator.go:165] initial CRD sync complete...
	I0501 03:29:53.780516       1 autoregister_controller.go:141] Starting autoregister controller
	I0501 03:29:53.780522       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0501 03:29:53.780526       1 cache.go:39] Caches are synced for autoregister controller
	I0501 03:29:53.812281       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0501 03:29:53.830344       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0501 03:29:53.830397       1 policy_source.go:224] refreshing policies
	I0501 03:29:53.832123       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0501 03:29:53.832624       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0501 03:29:53.833930       1 shared_informer.go:320] Caches are synced for configmaps
	I0501 03:29:53.835476       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0501 03:29:53.836118       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0501 03:29:53.836239       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0501 03:29:53.837694       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0501 03:29:53.842312       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0501 03:29:54.647508       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0501 03:29:55.055884       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.4]
	I0501 03:29:55.057691       1 controller.go:615] quota admission added evaluator for: endpoints
	I0501 03:29:55.068095       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0501 03:29:55.750292       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0501 03:29:55.789128       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0501 03:29:55.872276       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0501 03:29:55.932017       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0501 03:29:55.944148       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-controller-manager [8631b6ae7b505faf1e9286be8a8009e27a2f5cecd55bff0b3b8ae07a794faefa] <==
	I0501 03:30:06.348265       1 shared_informer.go:320] Caches are synced for taint
	I0501 03:30:06.348378       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0501 03:30:06.348456       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-542495"
	I0501 03:30:06.348530       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0501 03:30:06.350092       1 shared_informer.go:320] Caches are synced for node
	I0501 03:30:06.350241       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0501 03:30:06.350291       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0501 03:30:06.350315       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0501 03:30:06.350339       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0501 03:30:06.353642       1 shared_informer.go:320] Caches are synced for expand
	I0501 03:30:06.357024       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0501 03:30:06.375349       1 shared_informer.go:320] Caches are synced for crt configmap
	I0501 03:30:06.418022       1 shared_informer.go:320] Caches are synced for persistent volume
	I0501 03:30:06.422970       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0501 03:30:06.458075       1 shared_informer.go:320] Caches are synced for cronjob
	I0501 03:30:06.459422       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0501 03:30:06.470892       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0501 03:30:06.508390       1 shared_informer.go:320] Caches are synced for endpoint
	I0501 03:30:06.511524       1 shared_informer.go:320] Caches are synced for HPA
	I0501 03:30:06.511627       1 shared_informer.go:320] Caches are synced for resource quota
	I0501 03:30:06.531266       1 shared_informer.go:320] Caches are synced for job
	I0501 03:30:06.550043       1 shared_informer.go:320] Caches are synced for resource quota
	I0501 03:30:06.995308       1 shared_informer.go:320] Caches are synced for garbage collector
	I0501 03:30:06.995435       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0501 03:30:06.997842       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-controller-manager [92a924bfd194ca856d32145f1b416a6feaeb97526d5e7d1f3806a954427b41cc] <==
	I0501 03:29:39.259411       1 serving.go:380] Generated self-signed cert in-memory
	I0501 03:29:39.688411       1 controllermanager.go:189] "Starting" version="v1.30.0"
	I0501 03:29:39.688493       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 03:29:39.690500       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0501 03:29:39.690613       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0501 03:29:39.691110       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0501 03:29:39.691249       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	
	
	==> kube-proxy [16a476e1af3d92a5d2bb28639cec07f414b874fd246a7d9bf61dd4c5a84048ba] <==
	I0501 03:29:54.991300       1 server_linux.go:69] "Using iptables proxy"
	I0501 03:29:55.001475       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.4"]
	I0501 03:29:55.055292       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0501 03:29:55.055375       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0501 03:29:55.055400       1 server_linux.go:165] "Using iptables Proxier"
	I0501 03:29:55.061480       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0501 03:29:55.061707       1 server.go:872] "Version info" version="v1.30.0"
	I0501 03:29:55.061759       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 03:29:55.063582       1 config.go:192] "Starting service config controller"
	I0501 03:29:55.063631       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0501 03:29:55.063664       1 config.go:101] "Starting endpoint slice config controller"
	I0501 03:29:55.063670       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0501 03:29:55.066029       1 config.go:319] "Starting node config controller"
	I0501 03:29:55.066076       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0501 03:29:55.164489       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0501 03:29:55.164598       1 shared_informer.go:320] Caches are synced for service config
	I0501 03:29:55.166115       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [4d30e5e711023dd3054c30c78cddbaba96016c1d461380e99168233181b94590] <==
	
	
	==> kube-scheduler [38f0cafe7b1950e0819feeddbbc6000076623a8f23896a0272ce7823ff05494d] <==
	W0501 03:29:53.769762       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	E0501 03:29:53.769812       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	W0501 03:29:53.769884       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	E0501 03:29:53.769895       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	W0501 03:29:53.770007       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	E0501 03:29:53.770056       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	W0501 03:29:53.770119       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found]
	E0501 03:29:53.770238       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found]
	W0501 03:29:53.770322       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	E0501 03:29:53.770359       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	W0501 03:29:53.770487       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found]
	E0501 03:29:53.770521       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found]
	W0501 03:29:53.770584       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	E0501 03:29:53.770593       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	W0501 03:29:53.770645       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	E0501 03:29:53.770681       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	W0501 03:29:53.770738       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found]
	E0501 03:29:53.770747       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found]
	W0501 03:29:53.770952       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	E0501 03:29:53.770990       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	W0501 03:29:53.771002       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	E0501 03:29:53.771010       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	W0501 03:29:53.772434       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found]
	E0501 03:29:53.772478       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found]
	I0501 03:29:53.849868       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [ec898beec5e3d2f8d855ee92cc720b0d9dca6366cf80da972ab7c460dbaebf7b] <==
	I0501 03:29:39.244027       1 serving.go:380] Generated self-signed cert in-memory
	W0501 03:29:41.711546       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0501 03:29:41.711596       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0501 03:29:41.711606       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0501 03:29:41.711612       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0501 03:29:41.758649       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0501 03:29:41.759856       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 03:29:41.771653       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0501 03:29:41.771812       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0501 03:29:41.774380       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0501 03:29:41.775615       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0501 03:29:41.875095       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0501 03:29:48.933436       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0501 03:29:48.933966       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	May 01 03:29:50 pause-542495 kubelet[3302]: I0501 03:29:50.648845    3302 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/24d8d34219349d61e1fe05674be00f92-ca-certs\") pod \"kube-controller-manager-pause-542495\" (UID: \"24d8d34219349d61e1fe05674be00f92\") " pod="kube-system/kube-controller-manager-pause-542495"
	May 01 03:29:50 pause-542495 kubelet[3302]: I0501 03:29:50.648862    3302 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/24d8d34219349d61e1fe05674be00f92-k8s-certs\") pod \"kube-controller-manager-pause-542495\" (UID: \"24d8d34219349d61e1fe05674be00f92\") " pod="kube-system/kube-controller-manager-pause-542495"
	May 01 03:29:50 pause-542495 kubelet[3302]: I0501 03:29:50.648876    3302 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/24d8d34219349d61e1fe05674be00f92-kubeconfig\") pod \"kube-controller-manager-pause-542495\" (UID: \"24d8d34219349d61e1fe05674be00f92\") " pod="kube-system/kube-controller-manager-pause-542495"
	May 01 03:29:50 pause-542495 kubelet[3302]: I0501 03:29:50.648892    3302 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/24d8d34219349d61e1fe05674be00f92-usr-share-ca-certificates\") pod \"kube-controller-manager-pause-542495\" (UID: \"24d8d34219349d61e1fe05674be00f92\") " pod="kube-system/kube-controller-manager-pause-542495"
	May 01 03:29:50 pause-542495 kubelet[3302]: E0501 03:29:50.648966    3302 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-542495?timeout=10s\": dial tcp 192.168.39.4:8443: connect: connection refused" interval="400ms"
	May 01 03:29:50 pause-542495 kubelet[3302]: I0501 03:29:50.747389    3302 kubelet_node_status.go:73] "Attempting to register node" node="pause-542495"
	May 01 03:29:50 pause-542495 kubelet[3302]: E0501 03:29:50.748317    3302 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.4:8443: connect: connection refused" node="pause-542495"
	May 01 03:29:50 pause-542495 kubelet[3302]: I0501 03:29:50.905795    3302 scope.go:117] "RemoveContainer" containerID="4f8f7cca080d5d6cd94a98511e14b930e6b8bee5eee4b99ed932d0626be72bc0"
	May 01 03:29:50 pause-542495 kubelet[3302]: I0501 03:29:50.906524    3302 scope.go:117] "RemoveContainer" containerID="92a924bfd194ca856d32145f1b416a6feaeb97526d5e7d1f3806a954427b41cc"
	May 01 03:29:50 pause-542495 kubelet[3302]: I0501 03:29:50.908533    3302 scope.go:117] "RemoveContainer" containerID="ec898beec5e3d2f8d855ee92cc720b0d9dca6366cf80da972ab7c460dbaebf7b"
	May 01 03:29:51 pause-542495 kubelet[3302]: E0501 03:29:51.050332    3302 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-542495?timeout=10s\": dial tcp 192.168.39.4:8443: connect: connection refused" interval="800ms"
	May 01 03:29:51 pause-542495 kubelet[3302]: I0501 03:29:51.150760    3302 kubelet_node_status.go:73] "Attempting to register node" node="pause-542495"
	May 01 03:29:51 pause-542495 kubelet[3302]: E0501 03:29:51.151687    3302 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.4:8443: connect: connection refused" node="pause-542495"
	May 01 03:29:51 pause-542495 kubelet[3302]: I0501 03:29:51.953334    3302 kubelet_node_status.go:73] "Attempting to register node" node="pause-542495"
	May 01 03:29:53 pause-542495 kubelet[3302]: I0501 03:29:53.902679    3302 kubelet_node_status.go:112] "Node was previously registered" node="pause-542495"
	May 01 03:29:53 pause-542495 kubelet[3302]: I0501 03:29:53.903238    3302 kubelet_node_status.go:76] "Successfully registered node" node="pause-542495"
	May 01 03:29:53 pause-542495 kubelet[3302]: I0501 03:29:53.908112    3302 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	May 01 03:29:53 pause-542495 kubelet[3302]: I0501 03:29:53.909117    3302 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	May 01 03:29:54 pause-542495 kubelet[3302]: I0501 03:29:54.431068    3302 apiserver.go:52] "Watching apiserver"
	May 01 03:29:54 pause-542495 kubelet[3302]: I0501 03:29:54.440358    3302 topology_manager.go:215] "Topology Admit Handler" podUID="f44ac199-32c4-4977-8f63-564a23e4b83e" podNamespace="kube-system" podName="kube-proxy-x7vrf"
	May 01 03:29:54 pause-542495 kubelet[3302]: I0501 03:29:54.443359    3302 topology_manager.go:215] "Topology Admit Handler" podUID="1f2c4209-14df-46c4-abc7-cf93b398a872" podNamespace="kube-system" podName="coredns-7db6d8ff4d-lz5kj"
	May 01 03:29:54 pause-542495 kubelet[3302]: I0501 03:29:54.543858    3302 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	May 01 03:29:54 pause-542495 kubelet[3302]: I0501 03:29:54.544291    3302 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f44ac199-32c4-4977-8f63-564a23e4b83e-lib-modules\") pod \"kube-proxy-x7vrf\" (UID: \"f44ac199-32c4-4977-8f63-564a23e4b83e\") " pod="kube-system/kube-proxy-x7vrf"
	May 01 03:29:54 pause-542495 kubelet[3302]: I0501 03:29:54.545115    3302 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f44ac199-32c4-4977-8f63-564a23e4b83e-xtables-lock\") pod \"kube-proxy-x7vrf\" (UID: \"f44ac199-32c4-4977-8f63-564a23e4b83e\") " pod="kube-system/kube-proxy-x7vrf"
	May 01 03:29:54 pause-542495 kubelet[3302]: I0501 03:29:54.745426    3302 scope.go:117] "RemoveContainer" containerID="4d30e5e711023dd3054c30c78cddbaba96016c1d461380e99168233181b94590"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0501 03:30:12.155171   65464 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18779-13391/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
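Note on the stderr above: "bufio.Scanner: token too long" is Go's bufio.ErrTooLong, returned when a single line exceeds the Scanner's buffer (64 KiB by default, bufio.MaxScanTokenSize), so lastStart.txt evidently contains longer lines. The sketch below is illustrative only, not the minikube logs.go implementation; it shows how a reader can enlarge the buffer so such a file scans cleanly. The file path and the 10 MiB cap are assumptions for the example.

package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	// Hypothetical path, standing in for .minikube/logs/lastStart.txt.
	f, err := os.Open("lastStart.txt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	scanner := bufio.NewScanner(f)
	// The default max token size is bufio.MaxScanTokenSize (64 KiB);
	// allow lines up to 10 MiB instead (assumed limit for illustration).
	scanner.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)

	for scanner.Scan() {
		fmt.Println(scanner.Text())
	}
	if err := scanner.Err(); err != nil {
		// With the default buffer, bufio.ErrTooLong ("bufio.Scanner:
		// token too long") would surface here, as in the stderr above.
		fmt.Fprintln(os.Stderr, "scan error:", err)
	}
}

With the default buffer the same loop stops at the first over-long line and scanner.Err() reports bufio.ErrTooLong, which matches the "failed to output last start logs" message in the stderr block.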
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-542495 -n pause-542495
helpers_test.go:261: (dbg) Run:  kubectl --context pause-542495 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-542495 -n pause-542495
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-542495 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-542495 logs -n 25: (1.752390431s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-731347 sudo                                | cilium-731347             | jenkins | v1.33.0 | 01 May 24 03:28 UTC |                     |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-731347 sudo                                | cilium-731347             | jenkins | v1.33.0 | 01 May 24 03:28 UTC |                     |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-731347 sudo cat                            | cilium-731347             | jenkins | v1.33.0 | 01 May 24 03:28 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p cilium-731347 sudo cat                            | cilium-731347             | jenkins | v1.33.0 | 01 May 24 03:28 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p cilium-731347 sudo                                | cilium-731347             | jenkins | v1.33.0 | 01 May 24 03:28 UTC |                     |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p cilium-731347 sudo                                | cilium-731347             | jenkins | v1.33.0 | 01 May 24 03:28 UTC |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-731347 sudo                                | cilium-731347             | jenkins | v1.33.0 | 01 May 24 03:28 UTC |                     |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-731347 sudo cat                            | cilium-731347             | jenkins | v1.33.0 | 01 May 24 03:28 UTC |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p cilium-731347 sudo cat                            | cilium-731347             | jenkins | v1.33.0 | 01 May 24 03:28 UTC |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-731347 sudo                                | cilium-731347             | jenkins | v1.33.0 | 01 May 24 03:28 UTC |                     |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p cilium-731347 sudo                                | cilium-731347             | jenkins | v1.33.0 | 01 May 24 03:28 UTC |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-731347 sudo                                | cilium-731347             | jenkins | v1.33.0 | 01 May 24 03:28 UTC |                     |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p cilium-731347 sudo find                           | cilium-731347             | jenkins | v1.33.0 | 01 May 24 03:28 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-731347 sudo crio                           | cilium-731347             | jenkins | v1.33.0 | 01 May 24 03:28 UTC |                     |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p cilium-731347                                     | cilium-731347             | jenkins | v1.33.0 | 01 May 24 03:28 UTC | 01 May 24 03:28 UTC |
	| start   | -p force-systemd-flag-616131                         | force-systemd-flag-616131 | jenkins | v1.33.0 | 01 May 24 03:28 UTC | 01 May 24 03:29 UTC |
	|         | --memory=2048 --force-systemd                        |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p running-upgrade-179111                            | running-upgrade-179111    | jenkins | v1.33.0 | 01 May 24 03:29 UTC | 01 May 24 03:30 UTC |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-616131 ssh cat                    | force-systemd-flag-616131 | jenkins | v1.33.0 | 01 May 24 03:29 UTC | 01 May 24 03:29 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf                   |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-616131                         | force-systemd-flag-616131 | jenkins | v1.33.0 | 01 May 24 03:29 UTC | 01 May 24 03:29 UTC |
	| start   | -p pause-542495                                      | pause-542495              | jenkins | v1.33.0 | 01 May 24 03:29 UTC | 01 May 24 03:30 UTC |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p cert-options-582976                               | cert-options-582976       | jenkins | v1.33.0 | 01 May 24 03:29 UTC |                     |
	|         | --memory=2048                                        |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                            |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                        |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost                          |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                     |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                                |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-046243                         | kubernetes-upgrade-046243 | jenkins | v1.33.0 | 01 May 24 03:29 UTC | 01 May 24 03:29 UTC |
	| start   | -p kubernetes-upgrade-046243                         | kubernetes-upgrade-046243 | jenkins | v1.33.0 | 01 May 24 03:29 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-179111                            | running-upgrade-179111    | jenkins | v1.33.0 | 01 May 24 03:30 UTC | 01 May 24 03:30 UTC |
	| start   | -p old-k8s-version-503971                            | old-k8s-version-503971    | jenkins | v1.33.0 | 01 May 24 03:30 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --kvm-network=default                                |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                        |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                              |                           |         |         |                     |                     |
	|         | --keep-context=false                                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                         |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/01 03:30:12
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0501 03:30:12.547276   65502 out.go:291] Setting OutFile to fd 1 ...
	I0501 03:30:12.547417   65502 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 03:30:12.547434   65502 out.go:304] Setting ErrFile to fd 2...
	I0501 03:30:12.547450   65502 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 03:30:12.547718   65502 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18779-13391/.minikube/bin
	I0501 03:30:12.548366   65502 out.go:298] Setting JSON to false
	I0501 03:30:12.549428   65502 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7956,"bootTime":1714526257,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0501 03:30:12.549491   65502 start.go:139] virtualization: kvm guest
	I0501 03:30:12.551613   65502 out.go:177] * [old-k8s-version-503971] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0501 03:30:12.552973   65502 out.go:177]   - MINIKUBE_LOCATION=18779
	I0501 03:30:12.554272   65502 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0501 03:30:12.553057   65502 notify.go:220] Checking for updates...
	I0501 03:30:12.556671   65502 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18779-13391/kubeconfig
	I0501 03:30:12.557966   65502 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18779-13391/.minikube
	I0501 03:30:12.559376   65502 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0501 03:30:12.560810   65502 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0501 03:30:12.562481   65502 config.go:182] Loaded profile config "cert-options-582976": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 03:30:12.562600   65502 config.go:182] Loaded profile config "kubernetes-upgrade-046243": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 03:30:12.562731   65502 config.go:182] Loaded profile config "pause-542495": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 03:30:12.562841   65502 driver.go:392] Setting default libvirt URI to qemu:///system
	I0501 03:30:12.601023   65502 out.go:177] * Using the kvm2 driver based on user configuration
	I0501 03:30:12.602106   65502 start.go:297] selected driver: kvm2
	I0501 03:30:12.602117   65502 start.go:901] validating driver "kvm2" against <nil>
	I0501 03:30:12.602127   65502 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0501 03:30:12.602818   65502 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0501 03:30:12.602893   65502 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18779-13391/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0501 03:30:12.618994   65502 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0501 03:30:12.619061   65502 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0501 03:30:12.619303   65502 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0501 03:30:12.619358   65502 cni.go:84] Creating CNI manager for ""
	I0501 03:30:12.619368   65502 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0501 03:30:12.619379   65502 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0501 03:30:12.619429   65502 start.go:340] cluster config:
	{Name:old-k8s-version-503971 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-503971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 03:30:12.619519   65502 iso.go:125] acquiring lock: {Name:mkcd2496aadb29931e179193d707c731bfda98b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0501 03:30:12.621066   65502 out.go:177] * Starting "old-k8s-version-503971" primary control-plane node in "old-k8s-version-503971" cluster
	I0501 03:30:12.935380   64773 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0501 03:30:12.935472   64773 kubeadm.go:309] [preflight] Running pre-flight checks
	I0501 03:30:12.935591   64773 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0501 03:30:12.935744   64773 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0501 03:30:12.935886   64773 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0501 03:30:12.935982   64773 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0501 03:30:12.937324   64773 out.go:204]   - Generating certificates and keys ...
	I0501 03:30:12.937452   64773 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0501 03:30:12.937540   64773 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0501 03:30:12.937633   64773 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0501 03:30:12.937714   64773 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0501 03:30:12.937772   64773 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0501 03:30:12.937812   64773 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0501 03:30:12.937854   64773 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0501 03:30:12.937955   64773 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [cert-options-582976 localhost] and IPs [192.168.50.20 127.0.0.1 ::1]
	I0501 03:30:12.937996   64773 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0501 03:30:12.938108   64773 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [cert-options-582976 localhost] and IPs [192.168.50.20 127.0.0.1 ::1]
	I0501 03:30:12.938163   64773 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0501 03:30:12.938218   64773 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0501 03:30:12.938255   64773 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0501 03:30:12.938306   64773 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0501 03:30:12.938352   64773 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0501 03:30:12.938412   64773 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0501 03:30:12.938485   64773 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0501 03:30:12.938550   64773 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0501 03:30:12.938608   64773 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0501 03:30:12.938709   64773 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0501 03:30:12.938800   64773 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0501 03:30:12.940184   64773 out.go:204]   - Booting up control plane ...
	I0501 03:30:12.940286   64773 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0501 03:30:12.940390   64773 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0501 03:30:12.940474   64773 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0501 03:30:12.940590   64773 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0501 03:30:12.940707   64773 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0501 03:30:12.940759   64773 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0501 03:30:12.940898   64773 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0501 03:30:12.940954   64773 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0501 03:30:12.941000   64773 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.002349903s
	I0501 03:30:12.941055   64773 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0501 03:30:12.941099   64773 kubeadm.go:309] [api-check] The API server is healthy after 5.001414892s
	I0501 03:30:12.941196   64773 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0501 03:30:12.941304   64773 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0501 03:30:12.941349   64773 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0501 03:30:12.941514   64773 kubeadm.go:309] [mark-control-plane] Marking the node cert-options-582976 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0501 03:30:12.941559   64773 kubeadm.go:309] [bootstrap-token] Using token: s4bz1w.bk7bk8qon5oo5bfh
	I0501 03:30:12.942842   64773 out.go:204]   - Configuring RBAC rules ...
	I0501 03:30:12.942949   64773 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0501 03:30:12.943044   64773 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0501 03:30:12.943235   64773 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0501 03:30:12.943378   64773 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0501 03:30:12.943528   64773 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0501 03:30:12.943604   64773 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0501 03:30:12.943742   64773 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0501 03:30:12.943822   64773 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0501 03:30:12.943880   64773 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0501 03:30:12.943884   64773 kubeadm.go:309] 
	I0501 03:30:12.943952   64773 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0501 03:30:12.943956   64773 kubeadm.go:309] 
	I0501 03:30:12.944024   64773 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0501 03:30:12.944027   64773 kubeadm.go:309] 
	I0501 03:30:12.944047   64773 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0501 03:30:12.944100   64773 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0501 03:30:12.944140   64773 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0501 03:30:12.944142   64773 kubeadm.go:309] 
	I0501 03:30:12.944217   64773 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0501 03:30:12.944223   64773 kubeadm.go:309] 
	I0501 03:30:12.944287   64773 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0501 03:30:12.944292   64773 kubeadm.go:309] 
	I0501 03:30:12.944354   64773 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0501 03:30:12.944451   64773 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0501 03:30:12.944546   64773 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0501 03:30:12.944551   64773 kubeadm.go:309] 
	I0501 03:30:12.944643   64773 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0501 03:30:12.944741   64773 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0501 03:30:12.944745   64773 kubeadm.go:309] 
	I0501 03:30:12.944819   64773 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8555 --token s4bz1w.bk7bk8qon5oo5bfh \
	I0501 03:30:12.944901   64773 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:bd94cc6acfb95fe36f70b12c6a1f2980601b8235e7c4dea65cebdf35eb514754 \
	I0501 03:30:12.944916   64773 kubeadm.go:309] 	--control-plane 
	I0501 03:30:12.944919   64773 kubeadm.go:309] 
	I0501 03:30:12.944986   64773 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0501 03:30:12.944993   64773 kubeadm.go:309] 
	I0501 03:30:12.945058   64773 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8555 --token s4bz1w.bk7bk8qon5oo5bfh \
	I0501 03:30:12.945157   64773 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:bd94cc6acfb95fe36f70b12c6a1f2980601b8235e7c4dea65cebdf35eb514754 
	I0501 03:30:12.945180   64773 cni.go:84] Creating CNI manager for ""
	I0501 03:30:12.945187   64773 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0501 03:30:12.946704   64773 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0501 03:30:12.947946   64773 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0501 03:30:12.963553   64773 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0501 03:30:12.984264   64773 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0501 03:30:12.984345   64773 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:30:12.984371   64773 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes cert-options-582976 minikube.k8s.io/updated_at=2024_05_01T03_30_12_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e minikube.k8s.io/name=cert-options-582976 minikube.k8s.io/primary=true
	I0501 03:30:13.241364   64773 ops.go:34] apiserver oom_adj: -16
	I0501 03:30:13.241418   64773 kubeadm.go:1107] duration metric: took 257.132595ms to wait for elevateKubeSystemPrivileges
	W0501 03:30:13.241448   64773 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0501 03:30:13.241455   64773 kubeadm.go:393] duration metric: took 11.520711301s to StartCluster
	I0501 03:30:13.241472   64773 settings.go:142] acquiring lock: {Name:mkcfe08bcadd45d99c00ed1eaf4e9c226a489fe2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:30:13.241554   64773 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18779-13391/kubeconfig
	I0501 03:30:13.242846   64773 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/kubeconfig: {Name:mk5d1131557f885800877622151c0915337cb23a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:30:13.243082   64773 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0501 03:30:13.243096   64773 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.20 Port:8555 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0501 03:30:13.244809   64773 out.go:177] * Verifying Kubernetes components...
	I0501 03:30:13.243166   64773 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0501 03:30:13.243315   64773 config.go:182] Loaded profile config "cert-options-582976": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 03:30:13.246222   64773 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:30:13.246226   64773 addons.go:69] Setting storage-provisioner=true in profile "cert-options-582976"
	I0501 03:30:13.246234   64773 addons.go:69] Setting default-storageclass=true in profile "cert-options-582976"
	I0501 03:30:13.246258   64773 addons.go:234] Setting addon storage-provisioner=true in "cert-options-582976"
	I0501 03:30:13.246257   64773 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "cert-options-582976"
	I0501 03:30:13.246293   64773 host.go:66] Checking if "cert-options-582976" exists ...
	I0501 03:30:13.246705   64773 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:30:13.246726   64773 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:30:13.246744   64773 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:30:13.246748   64773 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:30:13.263693   64773 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39813
	I0501 03:30:13.264259   64773 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:30:13.264862   64773 main.go:141] libmachine: Using API Version  1
	I0501 03:30:13.264881   64773 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:30:13.265242   64773 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:30:13.265309   64773 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45443
	I0501 03:30:13.265734   64773 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:30:13.265816   64773 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:30:13.265858   64773 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:30:13.266197   64773 main.go:141] libmachine: Using API Version  1
	I0501 03:30:13.266209   64773 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:30:13.266751   64773 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:30:13.267679   64773 main.go:141] libmachine: (cert-options-582976) Calling .GetState
	I0501 03:30:13.271247   64773 addons.go:234] Setting addon default-storageclass=true in "cert-options-582976"
	I0501 03:30:13.271277   64773 host.go:66] Checking if "cert-options-582976" exists ...
	I0501 03:30:13.271644   64773 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:30:13.271668   64773 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:30:13.288397   64773 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44047
	I0501 03:30:13.288927   64773 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:30:13.289436   64773 main.go:141] libmachine: Using API Version  1
	I0501 03:30:13.289447   64773 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:30:13.289881   64773 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:30:13.290271   64773 main.go:141] libmachine: (cert-options-582976) Calling .GetState
	I0501 03:30:13.292449   64773 main.go:141] libmachine: (cert-options-582976) Calling .DriverName
	I0501 03:30:13.294614   64773 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 03:30:13.293038   64773 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34961
	I0501 03:30:13.296055   64773 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0501 03:30:13.296066   64773 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0501 03:30:13.296083   64773 main.go:141] libmachine: (cert-options-582976) Calling .GetSSHHostname
	I0501 03:30:13.296398   64773 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:30:13.296905   64773 main.go:141] libmachine: Using API Version  1
	I0501 03:30:13.296915   64773 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:30:13.297358   64773 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:30:13.297975   64773 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:30:13.298013   64773 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:30:13.299150   64773 main.go:141] libmachine: (cert-options-582976) DBG | domain cert-options-582976 has defined MAC address 52:54:00:0c:8c:a4 in network mk-cert-options-582976
	I0501 03:30:13.299532   64773 main.go:141] libmachine: (cert-options-582976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:8c:a4", ip: ""} in network mk-cert-options-582976: {Iface:virbr2 ExpiryTime:2024-05-01 04:29:45 +0000 UTC Type:0 Mac:52:54:00:0c:8c:a4 Iaid: IPaddr:192.168.50.20 Prefix:24 Hostname:cert-options-582976 Clientid:01:52:54:00:0c:8c:a4}
	I0501 03:30:13.299548   64773 main.go:141] libmachine: (cert-options-582976) DBG | domain cert-options-582976 has defined IP address 192.168.50.20 and MAC address 52:54:00:0c:8c:a4 in network mk-cert-options-582976
	I0501 03:30:13.299781   64773 main.go:141] libmachine: (cert-options-582976) Calling .GetSSHPort
	I0501 03:30:13.299930   64773 main.go:141] libmachine: (cert-options-582976) Calling .GetSSHKeyPath
	I0501 03:30:13.300054   64773 main.go:141] libmachine: (cert-options-582976) Calling .GetSSHUsername
	I0501 03:30:13.300188   64773 sshutil.go:53] new ssh client: &{IP:192.168.50.20 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/cert-options-582976/id_rsa Username:docker}
	I0501 03:30:13.318603   64773 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44555
	I0501 03:30:13.319122   64773 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:30:13.319510   64773 main.go:141] libmachine: Using API Version  1
	I0501 03:30:13.319520   64773 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:30:13.319896   64773 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:30:13.320059   64773 main.go:141] libmachine: (cert-options-582976) Calling .GetState
	I0501 03:30:13.321759   64773 main.go:141] libmachine: (cert-options-582976) Calling .DriverName
	I0501 03:30:13.322127   64773 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0501 03:30:13.322134   64773 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0501 03:30:13.322144   64773 main.go:141] libmachine: (cert-options-582976) Calling .GetSSHHostname
	I0501 03:30:13.324872   64773 main.go:141] libmachine: (cert-options-582976) DBG | domain cert-options-582976 has defined MAC address 52:54:00:0c:8c:a4 in network mk-cert-options-582976
	I0501 03:30:13.325288   64773 main.go:141] libmachine: (cert-options-582976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:8c:a4", ip: ""} in network mk-cert-options-582976: {Iface:virbr2 ExpiryTime:2024-05-01 04:29:45 +0000 UTC Type:0 Mac:52:54:00:0c:8c:a4 Iaid: IPaddr:192.168.50.20 Prefix:24 Hostname:cert-options-582976 Clientid:01:52:54:00:0c:8c:a4}
	I0501 03:30:13.325305   64773 main.go:141] libmachine: (cert-options-582976) DBG | domain cert-options-582976 has defined IP address 192.168.50.20 and MAC address 52:54:00:0c:8c:a4 in network mk-cert-options-582976
	I0501 03:30:13.325552   64773 main.go:141] libmachine: (cert-options-582976) Calling .GetSSHPort
	I0501 03:30:13.326009   64773 main.go:141] libmachine: (cert-options-582976) Calling .GetSSHKeyPath
	I0501 03:30:13.326796   64773 main.go:141] libmachine: (cert-options-582976) Calling .GetSSHUsername
	I0501 03:30:13.326995   64773 sshutil.go:53] new ssh client: &{IP:192.168.50.20 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/cert-options-582976/id_rsa Username:docker}
	I0501 03:30:13.510025   64773 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 03:30:13.510082   64773 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0501 03:30:13.604747   64773 api_server.go:52] waiting for apiserver process to appear ...
	I0501 03:30:13.604805   64773 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:30:13.650954   64773 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0501 03:30:13.691663   64773 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0501 03:30:14.072270   64773 start.go:946] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
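The long sed pipeline run at 03:30:13.510082 rewrites the CoreDNS Corefile so that host.minikube.internal resolves to the host-side gateway address of the libvirt network (192.168.50.1 on this run), which the line above confirms was injected. The result could be inspected after the fact with something like (context name taken from this test profile):

    kubectl --context cert-options-582976 -n kube-system \
      get configmap coredns -o jsonpath='{.data.Corefile}'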
	I0501 03:30:14.072331   64773 api_server.go:72] duration metric: took 829.208166ms to wait for apiserver process to appear ...
	I0501 03:30:14.072347   64773 api_server.go:88] waiting for apiserver healthz status ...
	I0501 03:30:14.072366   64773 api_server.go:253] Checking apiserver healthz at https://192.168.50.20:8555/healthz ...
	I0501 03:30:14.072382   64773 main.go:141] libmachine: Making call to close driver server
	I0501 03:30:14.072395   64773 main.go:141] libmachine: (cert-options-582976) Calling .Close
	I0501 03:30:14.072711   64773 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:30:14.072721   64773 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:30:14.072730   64773 main.go:141] libmachine: Making call to close driver server
	I0501 03:30:14.072739   64773 main.go:141] libmachine: (cert-options-582976) Calling .Close
	I0501 03:30:14.072946   64773 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:30:14.072954   64773 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:30:14.072958   64773 main.go:141] libmachine: (cert-options-582976) DBG | Closing plugin on server side
	I0501 03:30:14.086055   64773 api_server.go:279] https://192.168.50.20:8555/healthz returned 200:
	ok
	I0501 03:30:14.090219   64773 api_server.go:141] control plane version: v1.30.0
	I0501 03:30:14.090229   64773 api_server.go:131] duration metric: took 17.877595ms to wait for apiserver health ...
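The health wait above polls the apiserver's /healthz endpoint on the non-default secure port 8555 configured for this cert-options profile. The same probe can be reproduced by hand; the IP and port are taken from this run, and /healthz is readable anonymously on a default kubeadm-built cluster:

    curl -sk https://192.168.50.20:8555/healthz
    # expected output on a healthy control plane: ok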
	I0501 03:30:14.090235   64773 system_pods.go:43] waiting for kube-system pods to appear ...
	I0501 03:30:14.108642   64773 main.go:141] libmachine: Making call to close driver server
	I0501 03:30:14.108660   64773 main.go:141] libmachine: (cert-options-582976) Calling .Close
	I0501 03:30:14.109757   64773 main.go:141] libmachine: (cert-options-582976) DBG | Closing plugin on server side
	I0501 03:30:14.109799   64773 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:30:14.109805   64773 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:30:14.112886   64773 system_pods.go:59] 4 kube-system pods found
	I0501 03:30:14.112910   64773 system_pods.go:61] "etcd-cert-options-582976" [042bef29-bfcb-435a-bd92-0b17aba24b4e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0501 03:30:14.112917   64773 system_pods.go:61] "kube-apiserver-cert-options-582976" [fb03c609-46d6-40f4-91e9-aa621a34ae91] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0501 03:30:14.112923   64773 system_pods.go:61] "kube-controller-manager-cert-options-582976" [b7e872d0-4081-445e-a9a6-a64953112f8c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0501 03:30:14.112928   64773 system_pods.go:61] "kube-scheduler-cert-options-582976" [d0929230-45e8-4ec3-a360-8e67646753f8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0501 03:30:14.112934   64773 system_pods.go:74] duration metric: took 22.695089ms to wait for pod list to return data ...
	I0501 03:30:14.112944   64773 kubeadm.go:576] duration metric: took 869.824599ms to wait for: map[apiserver:true system_pods:true]
	I0501 03:30:14.112953   64773 node_conditions.go:102] verifying NodePressure condition ...
	I0501 03:30:14.116598   64773 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 03:30:14.116618   64773 node_conditions.go:123] node cpu capacity is 2
	I0501 03:30:14.116633   64773 node_conditions.go:105] duration metric: took 3.674661ms to run NodePressure ...
	I0501 03:30:14.116648   64773 start.go:240] waiting for startup goroutines ...
	I0501 03:30:14.286877   64773 main.go:141] libmachine: Making call to close driver server
	I0501 03:30:14.286892   64773 main.go:141] libmachine: (cert-options-582976) Calling .Close
	I0501 03:30:14.290394   64773 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:30:14.290419   64773 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:30:14.290423   64773 main.go:141] libmachine: (cert-options-582976) DBG | Closing plugin on server side
	I0501 03:30:14.290427   64773 main.go:141] libmachine: Making call to close driver server
	I0501 03:30:14.290436   64773 main.go:141] libmachine: (cert-options-582976) Calling .Close
	I0501 03:30:14.290646   64773 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:30:14.290656   64773 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:30:14.290673   64773 main.go:141] libmachine: (cert-options-582976) DBG | Closing plugin on server side
	I0501 03:30:14.292397   64773 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
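Only the two default addons end up enabled for this profile, matching the toEnable map logged at 03:30:13.243166. A quick way to confirm from the host would be (commands illustrative; the binary path mirrors the one used elsewhere in this job):

    out/minikube-linux-amd64 -p cert-options-582976 addons list
    kubectl --context cert-options-582976 -n kube-system get pods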
	
	
	==> CRI-O <==
	May 01 03:30:14 pause-542495 crio[2464]: time="2024-05-01 03:30:14.982138914Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714534214982100455,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b51e1a61-40ef-4861-b007-fa39d4a9018c name=/runtime.v1.ImageService/ImageFsInfo
	May 01 03:30:14 pause-542495 crio[2464]: time="2024-05-01 03:30:14.983055741Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=21ba625e-7cb4-46a5-ab01-8e72a98f5e92 name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:30:14 pause-542495 crio[2464]: time="2024-05-01 03:30:14.983242727Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=21ba625e-7cb4-46a5-ab01-8e72a98f5e92 name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:30:14 pause-542495 crio[2464]: time="2024-05-01 03:30:14.983978998Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:16a476e1af3d92a5d2bb28639cec07f414b874fd246a7d9bf61dd4c5a84048ba,PodSandboxId:bb823534975d53f5e5287a5c82dd4018eb82dcced02c7a9d5e28f2713ff893de,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714534194783978189,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x7vrf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f44ac199-32c4-4977-8f63-564a23e4b83e,},Annotations:map[string]string{io.kubernetes.container.hash: 95de7746,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8631b6ae7b505faf1e9286be8a8009e27a2f5cecd55bff0b3b8ae07a794faefa,PodSandboxId:5e6092e45f59702db958d15a6972fa250e320a2d415982ab858d4b5871a4b6fa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714534190957587306,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-542495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24d8d34219349d61e1fe05674be00f92,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38f0cafe7b1950e0819feeddbbc6000076623a8f23896a0272ce7823ff05494d,PodSandboxId:77cd8d76d21dbd1776a97c72d57594093dda71b4b61e6b105c5bb346120ca827,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714534190941467331,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-542495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c29c848e99ee9d176d08ac5bc565db5,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dffc93cfd52f5af64e714cb776f0ac72bf1b49ce8b4419ef46d6adf7e2208982,PodSandboxId:7e2b26855b74308131028514c95e0dd8e69a69b07f3c6ef175c4f57e76021bb6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714534190917113562,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-542495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21ae9f772b4d01fd8d1605b312e4e87d,},Annotations:map[string]string{io.kubernetes.container.hash: f2d18315,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64f8f19ad8d82fc60e6c9b9c36b6e195dc4558956b053319e657b96d9b27b92d,PodSandboxId:ea903bee7754ec901582e9c5121d776d3ee92f9fe23c26c09540543887466778,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714534178464684246,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lz5kj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f2c4209-14df-46c4-abc7-cf93b398a872,},Annotations:map[string]string{io.kubernetes.container.hash: 80d52e3b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"pro
tocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a781211fa39e0e2823e25a0eab0d5a0e06393b10b52df90a08e6b872d5cc505,PodSandboxId:b065fb5908d6a38e1668d1beba7c0050c6ae711205bb22d474ac4a0a2bf82dc5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714534177742854650,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-542495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ff0b7b04ff9d65173c3e81efe74f8d5,},Annotations:map[string]string{io
.kubernetes.container.hash: bde3beb4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d30e5e711023dd3054c30c78cddbaba96016c1d461380e99168233181b94590,PodSandboxId:bb823534975d53f5e5287a5c82dd4018eb82dcced02c7a9d5e28f2713ff893de,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714534177756082963,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x7vrf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f44ac199-32c4-4977-8f63-564a23e4b83e,},Annotations:map[string]string{io.kubernetes.container.hash: 95de77
46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f8f7cca080d5d6cd94a98511e14b930e6b8bee5eee4b99ed932d0626be72bc0,PodSandboxId:7e2b26855b74308131028514c95e0dd8e69a69b07f3c6ef175c4f57e76021bb6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714534177673100995,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-542495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21ae9f772b4d01fd8d1605b312e4e87d,},Annotations:map[string]string{io.kubernetes.container.hash: f2d18315,io.kubernetes.co
ntainer.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92a924bfd194ca856d32145f1b416a6feaeb97526d5e7d1f3806a954427b41cc,PodSandboxId:5e6092e45f59702db958d15a6972fa250e320a2d415982ab858d4b5871a4b6fa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714534177610228665,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-542495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24d8d34219349d61e1fe05674be00f92,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kuber
netes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec898beec5e3d2f8d855ee92cc720b0d9dca6366cf80da972ab7c460dbaebf7b,PodSandboxId:77cd8d76d21dbd1776a97c72d57594093dda71b4b61e6b105c5bb346120ca827,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714534177474387272,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-542495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c29c848e99ee9d176d08ac5bc565db5,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09b229cb407320d3c750597c319b318b1b9632dbff2346820d3561439a3a3115,PodSandboxId:e6a055773e04974c174545c30cc9a5af87b31843e34a1187fb8b87b88e46510f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714534120518515910,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lz5kj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f2c4209-14df-46c4-abc7-cf93b398a872,},Annotations:map[string]string{io.kubernetes.container.hash: 80d52e3b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:234285fef15e8f904fde8efeae5cdd5a3b89deed909a8d2078b72eb6ac39e7db,PodSandboxId:d4a6ff79cf15f1b0d4c08a0f699e9f74220986b8378c356650946fbfda438dd2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714534100017825751,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-542495,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 6ff0b7b04ff9d65173c3e81efe74f8d5,},Annotations:map[string]string{io.kubernetes.container.hash: bde3beb4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=21ba625e-7cb4-46a5-ab01-8e72a98f5e92 name=/runtime.v1.RuntimeService/ListContainers
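The CRI-O entries in this section are debug traces of CRI RPCs (Version, ImageFsInfo, ListContainers) issued while polling the runtime on the pause-542495 node. The same information can be pulled interactively inside the node with crictl, for example:

    sudo crictl version        # RuntimeService/Version
    sudo crictl imagefsinfo    # ImageService/ImageFsInfo
    sudo crictl ps -a          # RuntimeService/ListContainers (running and exited)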
	May 01 03:30:15 pause-542495 crio[2464]: time="2024-05-01 03:30:15.060653849Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6083b005-1e1a-4d92-9c71-112bff489055 name=/runtime.v1.RuntimeService/Version
	May 01 03:30:15 pause-542495 crio[2464]: time="2024-05-01 03:30:15.060727884Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6083b005-1e1a-4d92-9c71-112bff489055 name=/runtime.v1.RuntimeService/Version
	May 01 03:30:15 pause-542495 crio[2464]: time="2024-05-01 03:30:15.062852787Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=76613016-cff3-4c4b-b68a-948689164266 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 03:30:15 pause-542495 crio[2464]: time="2024-05-01 03:30:15.063360141Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714534215063329975,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=76613016-cff3-4c4b-b68a-948689164266 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 03:30:15 pause-542495 crio[2464]: time="2024-05-01 03:30:15.064777716Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=49726388-b58a-41dd-bda4-513c8e52d783 name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:30:15 pause-542495 crio[2464]: time="2024-05-01 03:30:15.064837530Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=49726388-b58a-41dd-bda4-513c8e52d783 name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:30:15 pause-542495 crio[2464]: time="2024-05-01 03:30:15.065146737Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:16a476e1af3d92a5d2bb28639cec07f414b874fd246a7d9bf61dd4c5a84048ba,PodSandboxId:bb823534975d53f5e5287a5c82dd4018eb82dcced02c7a9d5e28f2713ff893de,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714534194783978189,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x7vrf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f44ac199-32c4-4977-8f63-564a23e4b83e,},Annotations:map[string]string{io.kubernetes.container.hash: 95de7746,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8631b6ae7b505faf1e9286be8a8009e27a2f5cecd55bff0b3b8ae07a794faefa,PodSandboxId:5e6092e45f59702db958d15a6972fa250e320a2d415982ab858d4b5871a4b6fa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714534190957587306,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-542495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24d8d34219349d61e1fe05674be00f92,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38f0cafe7b1950e0819feeddbbc6000076623a8f23896a0272ce7823ff05494d,PodSandboxId:77cd8d76d21dbd1776a97c72d57594093dda71b4b61e6b105c5bb346120ca827,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714534190941467331,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-542495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c29c848e99ee9d176d08ac5bc565db5,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dffc93cfd52f5af64e714cb776f0ac72bf1b49ce8b4419ef46d6adf7e2208982,PodSandboxId:7e2b26855b74308131028514c95e0dd8e69a69b07f3c6ef175c4f57e76021bb6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714534190917113562,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-542495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21ae9f772b4d01fd8d1605b312e4e87d,},Annotations:map[string]string{io.kubernetes.container.hash: f2d18315,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64f8f19ad8d82fc60e6c9b9c36b6e195dc4558956b053319e657b96d9b27b92d,PodSandboxId:ea903bee7754ec901582e9c5121d776d3ee92f9fe23c26c09540543887466778,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714534178464684246,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lz5kj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f2c4209-14df-46c4-abc7-cf93b398a872,},Annotations:map[string]string{io.kubernetes.container.hash: 80d52e3b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"pro
tocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a781211fa39e0e2823e25a0eab0d5a0e06393b10b52df90a08e6b872d5cc505,PodSandboxId:b065fb5908d6a38e1668d1beba7c0050c6ae711205bb22d474ac4a0a2bf82dc5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714534177742854650,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-542495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ff0b7b04ff9d65173c3e81efe74f8d5,},Annotations:map[string]string{io
.kubernetes.container.hash: bde3beb4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d30e5e711023dd3054c30c78cddbaba96016c1d461380e99168233181b94590,PodSandboxId:bb823534975d53f5e5287a5c82dd4018eb82dcced02c7a9d5e28f2713ff893de,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714534177756082963,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x7vrf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f44ac199-32c4-4977-8f63-564a23e4b83e,},Annotations:map[string]string{io.kubernetes.container.hash: 95de77
46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f8f7cca080d5d6cd94a98511e14b930e6b8bee5eee4b99ed932d0626be72bc0,PodSandboxId:7e2b26855b74308131028514c95e0dd8e69a69b07f3c6ef175c4f57e76021bb6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714534177673100995,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-542495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21ae9f772b4d01fd8d1605b312e4e87d,},Annotations:map[string]string{io.kubernetes.container.hash: f2d18315,io.kubernetes.co
ntainer.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92a924bfd194ca856d32145f1b416a6feaeb97526d5e7d1f3806a954427b41cc,PodSandboxId:5e6092e45f59702db958d15a6972fa250e320a2d415982ab858d4b5871a4b6fa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714534177610228665,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-542495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24d8d34219349d61e1fe05674be00f92,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kuber
netes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec898beec5e3d2f8d855ee92cc720b0d9dca6366cf80da972ab7c460dbaebf7b,PodSandboxId:77cd8d76d21dbd1776a97c72d57594093dda71b4b61e6b105c5bb346120ca827,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714534177474387272,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-542495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c29c848e99ee9d176d08ac5bc565db5,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09b229cb407320d3c750597c319b318b1b9632dbff2346820d3561439a3a3115,PodSandboxId:e6a055773e04974c174545c30cc9a5af87b31843e34a1187fb8b87b88e46510f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714534120518515910,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lz5kj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f2c4209-14df-46c4-abc7-cf93b398a872,},Annotations:map[string]string{io.kubernetes.container.hash: 80d52e3b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:234285fef15e8f904fde8efeae5cdd5a3b89deed909a8d2078b72eb6ac39e7db,PodSandboxId:d4a6ff79cf15f1b0d4c08a0f699e9f74220986b8378c356650946fbfda438dd2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714534100017825751,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-542495,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 6ff0b7b04ff9d65173c3e81efe74f8d5,},Annotations:map[string]string{io.kubernetes.container.hash: bde3beb4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=49726388-b58a-41dd-bda4-513c8e52d783 name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:30:15 pause-542495 crio[2464]: time="2024-05-01 03:30:15.120309272Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=97add75d-12a9-43fb-bd0b-3b2b5a50cccf name=/runtime.v1.RuntimeService/Version
	May 01 03:30:15 pause-542495 crio[2464]: time="2024-05-01 03:30:15.120441045Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=97add75d-12a9-43fb-bd0b-3b2b5a50cccf name=/runtime.v1.RuntimeService/Version
	May 01 03:30:15 pause-542495 crio[2464]: time="2024-05-01 03:30:15.122801595Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=18eeb18a-c2f1-4c4f-b706-44aa86adb778 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 03:30:15 pause-542495 crio[2464]: time="2024-05-01 03:30:15.123811459Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714534215123780628,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=18eeb18a-c2f1-4c4f-b706-44aa86adb778 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 03:30:15 pause-542495 crio[2464]: time="2024-05-01 03:30:15.124882919Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cb258793-c9bf-40ee-93fe-536576b93a59 name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:30:15 pause-542495 crio[2464]: time="2024-05-01 03:30:15.124960243Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cb258793-c9bf-40ee-93fe-536576b93a59 name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:30:15 pause-542495 crio[2464]: time="2024-05-01 03:30:15.125558060Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:16a476e1af3d92a5d2bb28639cec07f414b874fd246a7d9bf61dd4c5a84048ba,PodSandboxId:bb823534975d53f5e5287a5c82dd4018eb82dcced02c7a9d5e28f2713ff893de,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714534194783978189,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x7vrf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f44ac199-32c4-4977-8f63-564a23e4b83e,},Annotations:map[string]string{io.kubernetes.container.hash: 95de7746,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8631b6ae7b505faf1e9286be8a8009e27a2f5cecd55bff0b3b8ae07a794faefa,PodSandboxId:5e6092e45f59702db958d15a6972fa250e320a2d415982ab858d4b5871a4b6fa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714534190957587306,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-542495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24d8d34219349d61e1fe05674be00f92,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38f0cafe7b1950e0819feeddbbc6000076623a8f23896a0272ce7823ff05494d,PodSandboxId:77cd8d76d21dbd1776a97c72d57594093dda71b4b61e6b105c5bb346120ca827,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714534190941467331,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-542495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c29c848e99ee9d176d08ac5bc565db5,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dffc93cfd52f5af64e714cb776f0ac72bf1b49ce8b4419ef46d6adf7e2208982,PodSandboxId:7e2b26855b74308131028514c95e0dd8e69a69b07f3c6ef175c4f57e76021bb6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714534190917113562,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-542495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21ae9f772b4d01fd8d1605b312e4e87d,},Annotations:map[string]string{io.kubernetes.container.hash: f2d18315,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64f8f19ad8d82fc60e6c9b9c36b6e195dc4558956b053319e657b96d9b27b92d,PodSandboxId:ea903bee7754ec901582e9c5121d776d3ee92f9fe23c26c09540543887466778,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714534178464684246,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lz5kj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f2c4209-14df-46c4-abc7-cf93b398a872,},Annotations:map[string]string{io.kubernetes.container.hash: 80d52e3b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"pro
tocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a781211fa39e0e2823e25a0eab0d5a0e06393b10b52df90a08e6b872d5cc505,PodSandboxId:b065fb5908d6a38e1668d1beba7c0050c6ae711205bb22d474ac4a0a2bf82dc5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714534177742854650,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-542495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ff0b7b04ff9d65173c3e81efe74f8d5,},Annotations:map[string]string{io
.kubernetes.container.hash: bde3beb4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d30e5e711023dd3054c30c78cddbaba96016c1d461380e99168233181b94590,PodSandboxId:bb823534975d53f5e5287a5c82dd4018eb82dcced02c7a9d5e28f2713ff893de,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714534177756082963,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x7vrf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f44ac199-32c4-4977-8f63-564a23e4b83e,},Annotations:map[string]string{io.kubernetes.container.hash: 95de77
46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f8f7cca080d5d6cd94a98511e14b930e6b8bee5eee4b99ed932d0626be72bc0,PodSandboxId:7e2b26855b74308131028514c95e0dd8e69a69b07f3c6ef175c4f57e76021bb6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714534177673100995,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-542495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21ae9f772b4d01fd8d1605b312e4e87d,},Annotations:map[string]string{io.kubernetes.container.hash: f2d18315,io.kubernetes.co
ntainer.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92a924bfd194ca856d32145f1b416a6feaeb97526d5e7d1f3806a954427b41cc,PodSandboxId:5e6092e45f59702db958d15a6972fa250e320a2d415982ab858d4b5871a4b6fa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714534177610228665,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-542495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24d8d34219349d61e1fe05674be00f92,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kuber
netes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec898beec5e3d2f8d855ee92cc720b0d9dca6366cf80da972ab7c460dbaebf7b,PodSandboxId:77cd8d76d21dbd1776a97c72d57594093dda71b4b61e6b105c5bb346120ca827,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714534177474387272,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-542495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c29c848e99ee9d176d08ac5bc565db5,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09b229cb407320d3c750597c319b318b1b9632dbff2346820d3561439a3a3115,PodSandboxId:e6a055773e04974c174545c30cc9a5af87b31843e34a1187fb8b87b88e46510f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714534120518515910,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lz5kj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f2c4209-14df-46c4-abc7-cf93b398a872,},Annotations:map[string]string{io.kubernetes.container.hash: 80d52e3b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:234285fef15e8f904fde8efeae5cdd5a3b89deed909a8d2078b72eb6ac39e7db,PodSandboxId:d4a6ff79cf15f1b0d4c08a0f699e9f74220986b8378c356650946fbfda438dd2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714534100017825751,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-542495,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 6ff0b7b04ff9d65173c3e81efe74f8d5,},Annotations:map[string]string{io.kubernetes.container.hash: bde3beb4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cb258793-c9bf-40ee-93fe-536576b93a59 name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:30:15 pause-542495 crio[2464]: time="2024-05-01 03:30:15.179030366Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=463fd5d5-50cf-4fc9-9a06-2916f3065c68 name=/runtime.v1.RuntimeService/Version
	May 01 03:30:15 pause-542495 crio[2464]: time="2024-05-01 03:30:15.179217651Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=463fd5d5-50cf-4fc9-9a06-2916f3065c68 name=/runtime.v1.RuntimeService/Version
	May 01 03:30:15 pause-542495 crio[2464]: time="2024-05-01 03:30:15.181570580Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c5b6d711-f3ac-42e2-9e56-1be1f5834422 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 03:30:15 pause-542495 crio[2464]: time="2024-05-01 03:30:15.182128239Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714534215182095470,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c5b6d711-f3ac-42e2-9e56-1be1f5834422 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 03:30:15 pause-542495 crio[2464]: time="2024-05-01 03:30:15.182970907Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c5b016ed-7518-420d-92c1-f37f2ff516a6 name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:30:15 pause-542495 crio[2464]: time="2024-05-01 03:30:15.183047133Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c5b016ed-7518-420d-92c1-f37f2ff516a6 name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:30:15 pause-542495 crio[2464]: time="2024-05-01 03:30:15.183588388Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:16a476e1af3d92a5d2bb28639cec07f414b874fd246a7d9bf61dd4c5a84048ba,PodSandboxId:bb823534975d53f5e5287a5c82dd4018eb82dcced02c7a9d5e28f2713ff893de,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714534194783978189,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x7vrf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f44ac199-32c4-4977-8f63-564a23e4b83e,},Annotations:map[string]string{io.kubernetes.container.hash: 95de7746,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8631b6ae7b505faf1e9286be8a8009e27a2f5cecd55bff0b3b8ae07a794faefa,PodSandboxId:5e6092e45f59702db958d15a6972fa250e320a2d415982ab858d4b5871a4b6fa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714534190957587306,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-542495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24d8d34219349d61e1fe05674be00f92,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38f0cafe7b1950e0819feeddbbc6000076623a8f23896a0272ce7823ff05494d,PodSandboxId:77cd8d76d21dbd1776a97c72d57594093dda71b4b61e6b105c5bb346120ca827,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714534190941467331,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-542495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c29c848e99ee9d176d08ac5bc565db5,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dffc93cfd52f5af64e714cb776f0ac72bf1b49ce8b4419ef46d6adf7e2208982,PodSandboxId:7e2b26855b74308131028514c95e0dd8e69a69b07f3c6ef175c4f57e76021bb6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714534190917113562,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-542495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21ae9f772b4d01fd8d1605b312e4e87d,},Annotations:map[string]string{io.kubernetes.container.hash: f2d18315,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64f8f19ad8d82fc60e6c9b9c36b6e195dc4558956b053319e657b96d9b27b92d,PodSandboxId:ea903bee7754ec901582e9c5121d776d3ee92f9fe23c26c09540543887466778,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714534178464684246,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lz5kj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f2c4209-14df-46c4-abc7-cf93b398a872,},Annotations:map[string]string{io.kubernetes.container.hash: 80d52e3b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"pro
tocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a781211fa39e0e2823e25a0eab0d5a0e06393b10b52df90a08e6b872d5cc505,PodSandboxId:b065fb5908d6a38e1668d1beba7c0050c6ae711205bb22d474ac4a0a2bf82dc5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714534177742854650,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-542495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ff0b7b04ff9d65173c3e81efe74f8d5,},Annotations:map[string]string{io
.kubernetes.container.hash: bde3beb4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d30e5e711023dd3054c30c78cddbaba96016c1d461380e99168233181b94590,PodSandboxId:bb823534975d53f5e5287a5c82dd4018eb82dcced02c7a9d5e28f2713ff893de,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714534177756082963,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x7vrf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f44ac199-32c4-4977-8f63-564a23e4b83e,},Annotations:map[string]string{io.kubernetes.container.hash: 95de77
46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f8f7cca080d5d6cd94a98511e14b930e6b8bee5eee4b99ed932d0626be72bc0,PodSandboxId:7e2b26855b74308131028514c95e0dd8e69a69b07f3c6ef175c4f57e76021bb6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714534177673100995,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-542495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21ae9f772b4d01fd8d1605b312e4e87d,},Annotations:map[string]string{io.kubernetes.container.hash: f2d18315,io.kubernetes.co
ntainer.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92a924bfd194ca856d32145f1b416a6feaeb97526d5e7d1f3806a954427b41cc,PodSandboxId:5e6092e45f59702db958d15a6972fa250e320a2d415982ab858d4b5871a4b6fa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714534177610228665,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-542495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24d8d34219349d61e1fe05674be00f92,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kuber
netes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec898beec5e3d2f8d855ee92cc720b0d9dca6366cf80da972ab7c460dbaebf7b,PodSandboxId:77cd8d76d21dbd1776a97c72d57594093dda71b4b61e6b105c5bb346120ca827,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714534177474387272,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-542495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c29c848e99ee9d176d08ac5bc565db5,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09b229cb407320d3c750597c319b318b1b9632dbff2346820d3561439a3a3115,PodSandboxId:e6a055773e04974c174545c30cc9a5af87b31843e34a1187fb8b87b88e46510f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714534120518515910,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lz5kj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f2c4209-14df-46c4-abc7-cf93b398a872,},Annotations:map[string]string{io.kubernetes.container.hash: 80d52e3b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:234285fef15e8f904fde8efeae5cdd5a3b89deed909a8d2078b72eb6ac39e7db,PodSandboxId:d4a6ff79cf15f1b0d4c08a0f699e9f74220986b8378c356650946fbfda438dd2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714534100017825751,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-542495,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 6ff0b7b04ff9d65173c3e81efe74f8d5,},Annotations:map[string]string{io.kubernetes.container.hash: bde3beb4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c5b016ed-7518-420d-92c1-f37f2ff516a6 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	16a476e1af3d9       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b   20 seconds ago       Running             kube-proxy                2                   bb823534975d5       kube-proxy-x7vrf
	8631b6ae7b505       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b   24 seconds ago       Running             kube-controller-manager   2                   5e6092e45f597       kube-controller-manager-pause-542495
	38f0cafe7b195       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced   24 seconds ago       Running             kube-scheduler            2                   77cd8d76d21db       kube-scheduler-pause-542495
	dffc93cfd52f5       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0   24 seconds ago       Running             kube-apiserver            2                   7e2b26855b743       kube-apiserver-pause-542495
	64f8f19ad8d82       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   36 seconds ago       Running             coredns                   1                   ea903bee7754e       coredns-7db6d8ff4d-lz5kj
	4d30e5e711023       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b   37 seconds ago       Exited              kube-proxy                1                   bb823534975d5       kube-proxy-x7vrf
	4a781211fa39e       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   37 seconds ago       Running             etcd                      1                   b065fb5908d6a       etcd-pause-542495
	4f8f7cca080d5       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0   37 seconds ago       Exited              kube-apiserver            1                   7e2b26855b743       kube-apiserver-pause-542495
	92a924bfd194c       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b   37 seconds ago       Exited              kube-controller-manager   1                   5e6092e45f597       kube-controller-manager-pause-542495
	ec898beec5e3d       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced   37 seconds ago       Exited              kube-scheduler            1                   77cd8d76d21db       kube-scheduler-pause-542495
	09b229cb40732       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   About a minute ago   Exited              coredns                   0                   e6a055773e049       coredns-7db6d8ff4d-lz5kj
	234285fef15e8       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   About a minute ago   Exited              etcd                      0                   d4a6ff79cf15f       etcd-pause-542495
	
	
	==> coredns [09b229cb407320d3c750597c319b318b1b9632dbff2346820d3561439a3a3115] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] Reloading
	[INFO] plugin/kubernetes: Trace[1734290231]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (01-May-2024 03:28:40.932) (total time: 29625ms):
	Trace[1734290231]: ---"Objects listed" error:<nil> 29625ms (03:29:10.558)
	Trace[1734290231]: [29.625622789s] [29.625622789s] END
	[INFO] plugin/kubernetes: Trace[1299550197]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (01-May-2024 03:28:40.936) (total time: 29624ms):
	Trace[1299550197]: ---"Objects listed" error:<nil> 29624ms (03:29:10.560)
	Trace[1299550197]: [29.624339339s] [29.624339339s] END
	[INFO] plugin/kubernetes: Trace[353635498]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (01-May-2024 03:28:40.930) (total time: 29629ms):
	Trace[353635498]: ---"Objects listed" error:<nil> 29629ms (03:29:10.559)
	Trace[353635498]: [29.629179957s] [29.629179957s] END
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	[INFO] Reloading complete
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [64f8f19ad8d82fc60e6c9b9c36b6e195dc4558956b053319e657b96d9b27b92d] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:41508 - 171 "HINFO IN 4228483881059704764.9065488545482063048. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014385551s
	
	
	==> describe nodes <==
	Name:               pause-542495
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-542495
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e
	                    minikube.k8s.io/name=pause-542495
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_01T03_28_26_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 May 2024 03:28:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-542495
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 May 2024 03:30:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 May 2024 03:29:53 +0000   Wed, 01 May 2024 03:28:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 May 2024 03:29:53 +0000   Wed, 01 May 2024 03:28:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 May 2024 03:29:53 +0000   Wed, 01 May 2024 03:28:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 May 2024 03:29:53 +0000   Wed, 01 May 2024 03:28:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.4
	  Hostname:    pause-542495
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 33eec01f441f4461b3781ce5458f5f42
	  System UUID:                33eec01f-441f-4461-b378-1ce5458f5f42
	  Boot ID:                    d7068520-f459-4ab5-a6ed-cf8b2d2001c2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-lz5kj                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     96s
	  kube-system                 etcd-pause-542495                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         110s
	  kube-system                 kube-apiserver-pause-542495             250m (12%)    0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-controller-manager-pause-542495    200m (10%)    0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-proxy-x7vrf                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 kube-scheduler-pause-542495             100m (5%)     0 (0%)      0 (0%)           0 (0%)         110s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 94s                kube-proxy       
	  Normal  Starting                 20s                kube-proxy       
	  Normal  NodeHasSufficientPID     110s               kubelet          Node pause-542495 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  110s               kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  110s               kubelet          Node pause-542495 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    110s               kubelet          Node pause-542495 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 110s               kubelet          Starting kubelet.
	  Normal  NodeReady                109s               kubelet          Node pause-542495 status is now: NodeReady
	  Normal  RegisteredNode           97s                node-controller  Node pause-542495 event: Registered Node pause-542495 in Controller
	  Normal  Starting                 25s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  25s (x8 over 25s)  kubelet          Node pause-542495 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    25s (x8 over 25s)  kubelet          Node pause-542495 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     25s (x7 over 25s)  kubelet          Node pause-542495 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  25s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9s                 node-controller  Node pause-542495 event: Registered Node pause-542495 in Controller
	
	
	==> dmesg <==
	[  +0.067651] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.073784] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.210561] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.162726] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.349448] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +5.417409] systemd-fstab-generator[762]: Ignoring "noauto" option for root device
	[  +0.069911] kauditd_printk_skb: 130 callbacks suppressed
	[  +5.098175] systemd-fstab-generator[945]: Ignoring "noauto" option for root device
	[  +0.081018] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.519234] kauditd_printk_skb: 69 callbacks suppressed
	[  +1.494240] systemd-fstab-generator[1280]: Ignoring "noauto" option for root device
	[ +14.053708] systemd-fstab-generator[1493]: Ignoring "noauto" option for root device
	[  +0.106677] kauditd_printk_skb: 15 callbacks suppressed
	[ +11.422587] kauditd_printk_skb: 88 callbacks suppressed
	[May 1 03:29] systemd-fstab-generator[2382]: Ignoring "noauto" option for root device
	[  +0.146431] systemd-fstab-generator[2394]: Ignoring "noauto" option for root device
	[  +0.172592] systemd-fstab-generator[2408]: Ignoring "noauto" option for root device
	[  +0.145862] systemd-fstab-generator[2420]: Ignoring "noauto" option for root device
	[  +0.302508] systemd-fstab-generator[2448]: Ignoring "noauto" option for root device
	[  +6.146125] systemd-fstab-generator[2574]: Ignoring "noauto" option for root device
	[  +0.073776] kauditd_printk_skb: 100 callbacks suppressed
	[ +13.544413] systemd-fstab-generator[3295]: Ignoring "noauto" option for root device
	[  +0.081654] kauditd_printk_skb: 86 callbacks suppressed
	[May 1 03:30] kauditd_printk_skb: 41 callbacks suppressed
	[  +2.032217] systemd-fstab-generator[3664]: Ignoring "noauto" option for root device
	
	
	==> etcd [234285fef15e8f904fde8efeae5cdd5a3b89deed909a8d2078b72eb6ac39e7db] <==
	{"level":"warn","ts":"2024-05-01T03:29:20.607766Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-01T03:29:20.14579Z","time spent":"461.921134ms","remote":"127.0.0.1:36648","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3830,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/replicasets/kube-system/coredns-7db6d8ff4d\" mod_revision:354 > success:<request_put:<key:\"/registry/replicasets/kube-system/coredns-7db6d8ff4d\" value_size:3770 >> failure:<request_range:<key:\"/registry/replicasets/kube-system/coredns-7db6d8ff4d\" > >"}
	{"level":"info","ts":"2024-05-01T03:29:20.607974Z","caller":"traceutil/trace.go:171","msg":"trace[1165779626] linearizableReadLoop","detail":"{readStateIndex:419; appliedIndex:418; }","duration":"503.581925ms","start":"2024-05-01T03:29:20.104378Z","end":"2024-05-01T03:29:20.60796Z","steps":["trace[1165779626] 'read index received'  (duration: 31.168329ms)","trace[1165779626] 'applied index is now lower than readState.Index'  (duration: 472.412614ms)"],"step_count":2}
	{"level":"warn","ts":"2024-05-01T03:29:20.608125Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"503.737256ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-7db6d8ff4d-lz5kj\" ","response":"range_response_count:1 size:4727"}
	{"level":"info","ts":"2024-05-01T03:29:20.608146Z","caller":"traceutil/trace.go:171","msg":"trace[141507368] range","detail":"{range_begin:/registry/pods/kube-system/coredns-7db6d8ff4d-lz5kj; range_end:; response_count:1; response_revision:401; }","duration":"503.785708ms","start":"2024-05-01T03:29:20.104354Z","end":"2024-05-01T03:29:20.60814Z","steps":["trace[141507368] 'agreement among raft nodes before linearized reading'  (duration: 503.675211ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-01T03:29:20.60823Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-01T03:29:20.104339Z","time spent":"503.880146ms","remote":"127.0.0.1:36388","response type":"/etcdserverpb.KV/Range","request count":0,"request size":53,"response count":1,"response size":4749,"request content":"key:\"/registry/pods/kube-system/coredns-7db6d8ff4d-lz5kj\" "}
	{"level":"info","ts":"2024-05-01T03:29:20.60841Z","caller":"traceutil/trace.go:171","msg":"trace[485444998] transaction","detail":"{read_only:false; response_revision:399; number_of_response:1; }","duration":"464.73813ms","start":"2024-05-01T03:29:20.143665Z","end":"2024-05-01T03:29:20.608403Z","steps":["trace[485444998] 'process raft request'  (duration: 463.738342ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-01T03:29:20.608458Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-01T03:29:20.143644Z","time spent":"464.782963ms","remote":"127.0.0.1:36500","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1298,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/endpointslices/kube-system/kube-dns-5mmmp\" mod_revision:383 > success:<request_put:<key:\"/registry/endpointslices/kube-system/kube-dns-5mmmp\" value_size:1239 >> failure:<request_range:<key:\"/registry/endpointslices/kube-system/kube-dns-5mmmp\" > >"}
	{"level":"info","ts":"2024-05-01T03:29:20.608631Z","caller":"traceutil/trace.go:171","msg":"trace[264506834] transaction","detail":"{read_only:false; response_revision:400; number_of_response:1; }","duration":"464.91765ms","start":"2024-05-01T03:29:20.143707Z","end":"2024-05-01T03:29:20.608625Z","steps":["trace[264506834] 'process raft request'  (duration: 463.785301ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-01T03:29:20.608706Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-01T03:29:20.143644Z","time spent":"465.033903ms","remote":"127.0.0.1:36372","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":785,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/kube-dns\" mod_revision:372 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/kube-dns\" value_size:728 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/kube-dns\" > >"}
	{"level":"info","ts":"2024-05-01T03:29:20.814361Z","caller":"traceutil/trace.go:171","msg":"trace[1450256460] linearizableReadLoop","detail":"{readStateIndex:423; appliedIndex:422; }","duration":"182.704497ms","start":"2024-05-01T03:29:20.63164Z","end":"2024-05-01T03:29:20.814344Z","steps":["trace[1450256460] 'read index received'  (duration: 177.268719ms)","trace[1450256460] 'applied index is now lower than readState.Index'  (duration: 5.434973ms)"],"step_count":2}
	{"level":"warn","ts":"2024-05-01T03:29:20.814718Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"169.859676ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-05-01T03:29:20.814792Z","caller":"traceutil/trace.go:171","msg":"trace[507891174] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:402; }","duration":"169.968198ms","start":"2024-05-01T03:29:20.644809Z","end":"2024-05-01T03:29:20.814777Z","steps":["trace[507891174] 'agreement among raft nodes before linearized reading'  (duration: 169.864156ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-01T03:29:20.815093Z","caller":"traceutil/trace.go:171","msg":"trace[1846494894] transaction","detail":"{read_only:false; response_revision:402; number_of_response:1; }","duration":"187.987166ms","start":"2024-05-01T03:29:20.627095Z","end":"2024-05-01T03:29:20.815082Z","steps":["trace[1846494894] 'process raft request'  (duration: 181.869076ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-01T03:29:20.814718Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"183.06177ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/pause-542495\" ","response":"range_response_count:1 size:5425"}
	{"level":"info","ts":"2024-05-01T03:29:20.815464Z","caller":"traceutil/trace.go:171","msg":"trace[1732650769] range","detail":"{range_begin:/registry/minions/pause-542495; range_end:; response_count:1; response_revision:402; }","duration":"183.832956ms","start":"2024-05-01T03:29:20.63162Z","end":"2024-05-01T03:29:20.815453Z","steps":["trace[1732650769] 'agreement among raft nodes before linearized reading'  (duration: 183.032006ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-01T03:29:23.521796Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-05-01T03:29:23.521859Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"pause-542495","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.4:2380"],"advertise-client-urls":["https://192.168.39.4:2379"]}
	{"level":"warn","ts":"2024-05-01T03:29:23.52195Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-01T03:29:23.52207Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-01T03:29:23.596132Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-01T03:29:23.59625Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.4:2379: use of closed network connection"}
	{"level":"info","ts":"2024-05-01T03:29:23.596333Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7ab0973fa604e492","current-leader-member-id":"7ab0973fa604e492"}
	{"level":"info","ts":"2024-05-01T03:29:23.598822Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.4:2380"}
	{"level":"info","ts":"2024-05-01T03:29:23.59893Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.4:2380"}
	{"level":"info","ts":"2024-05-01T03:29:23.598942Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"pause-542495","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.4:2380"],"advertise-client-urls":["https://192.168.39.4:2379"]}
	
	
	==> etcd [4a781211fa39e0e2823e25a0eab0d5a0e06393b10b52df90a08e6b872d5cc505] <==
	{"level":"info","ts":"2024-05-01T03:29:38.851859Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-01T03:29:38.846412Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.4:2380"}
	{"level":"info","ts":"2024-05-01T03:29:38.857482Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.4:2380"}
	{"level":"info","ts":"2024-05-01T03:29:38.849924Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-05-01T03:29:40.29435Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7ab0973fa604e492 is starting a new election at term 2"}
	{"level":"info","ts":"2024-05-01T03:29:40.294384Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7ab0973fa604e492 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-05-01T03:29:40.294412Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7ab0973fa604e492 received MsgPreVoteResp from 7ab0973fa604e492 at term 2"}
	{"level":"info","ts":"2024-05-01T03:29:40.294425Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7ab0973fa604e492 became candidate at term 3"}
	{"level":"info","ts":"2024-05-01T03:29:40.29443Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7ab0973fa604e492 received MsgVoteResp from 7ab0973fa604e492 at term 3"}
	{"level":"info","ts":"2024-05-01T03:29:40.294438Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7ab0973fa604e492 became leader at term 3"}
	{"level":"info","ts":"2024-05-01T03:29:40.294445Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7ab0973fa604e492 elected leader 7ab0973fa604e492 at term 3"}
	{"level":"info","ts":"2024-05-01T03:29:40.30096Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"7ab0973fa604e492","local-member-attributes":"{Name:pause-542495 ClientURLs:[https://192.168.39.4:2379]}","request-path":"/0/members/7ab0973fa604e492/attributes","cluster-id":"6b117bdc86acb526","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-01T03:29:40.300995Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-01T03:29:40.301318Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-01T03:29:40.302967Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-05-01T03:29:40.304506Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.4:2379"}
	{"level":"info","ts":"2024-05-01T03:29:40.304652Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-01T03:29:40.304668Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-01T03:30:01.50598Z","caller":"traceutil/trace.go:171","msg":"trace[37036908] transaction","detail":"{read_only:false; response_revision:470; number_of_response:1; }","duration":"356.715839ms","start":"2024-05-01T03:30:01.149244Z","end":"2024-05-01T03:30:01.505959Z","steps":["trace[37036908] 'process raft request'  (duration: 356.411614ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-01T03:30:01.510311Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-01T03:30:01.149139Z","time spent":"358.246103ms","remote":"127.0.0.1:37650","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":6576,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-controller-manager-pause-542495\" mod_revision:419 > success:<request_put:<key:\"/registry/pods/kube-system/kube-controller-manager-pause-542495\" value_size:6505 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-controller-manager-pause-542495\" > >"}
	{"level":"warn","ts":"2024-05-01T03:30:02.005223Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"375.221798ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-controller-manager-pause-542495\" ","response":"range_response_count:1 size:6591"}
	{"level":"info","ts":"2024-05-01T03:30:02.005451Z","caller":"traceutil/trace.go:171","msg":"trace[1835108747] range","detail":"{range_begin:/registry/pods/kube-system/kube-controller-manager-pause-542495; range_end:; response_count:1; response_revision:470; }","duration":"375.533196ms","start":"2024-05-01T03:30:01.629887Z","end":"2024-05-01T03:30:02.00542Z","steps":["trace[1835108747] 'range keys from in-memory index tree'  (duration: 375.138384ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-01T03:30:02.00552Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-01T03:30:01.629868Z","time spent":"375.633384ms","remote":"127.0.0.1:37650","response type":"/etcdserverpb.KV/Range","request count":0,"request size":65,"response count":1,"response size":6613,"request content":"key:\"/registry/pods/kube-system/kube-controller-manager-pause-542495\" "}
	{"level":"info","ts":"2024-05-01T03:30:02.203883Z","caller":"traceutil/trace.go:171","msg":"trace[204837190] transaction","detail":"{read_only:false; response_revision:471; number_of_response:1; }","duration":"181.362279ms","start":"2024-05-01T03:30:02.022499Z","end":"2024-05-01T03:30:02.203861Z","steps":["trace[204837190] 'process raft request'  (duration: 181.219461ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-01T03:30:04.485058Z","caller":"traceutil/trace.go:171","msg":"trace[416552853] transaction","detail":"{read_only:false; response_revision:473; number_of_response:1; }","duration":"240.101055ms","start":"2024-05-01T03:30:04.244936Z","end":"2024-05-01T03:30:04.485037Z","steps":["trace[416552853] 'process raft request'  (duration: 172.088536ms)","trace[416552853] 'compare'  (duration: 67.893847ms)"],"step_count":2}
	
	
	==> kernel <==
	 03:30:15 up 2 min,  0 users,  load average: 0.87, 0.32, 0.12
	Linux pause-542495 5.10.207 #1 SMP Tue Apr 30 22:38:43 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [4f8f7cca080d5d6cd94a98511e14b930e6b8bee5eee4b99ed932d0626be72bc0] <==
	I0501 03:29:41.911802       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I0501 03:29:41.927500       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0501 03:29:41.927565       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0501 03:29:41.927620       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0501 03:29:41.929310       1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0501 03:29:41.937912       1 controller.go:157] Shutting down quota evaluator
	I0501 03:29:41.938129       1 controller.go:176] quota evaluator worker shutdown
	I0501 03:29:41.938293       1 controller.go:176] quota evaluator worker shutdown
	I0501 03:29:41.938353       1 controller.go:176] quota evaluator worker shutdown
	I0501 03:29:41.938460       1 controller.go:176] quota evaluator worker shutdown
	I0501 03:29:41.938606       1 controller.go:176] quota evaluator worker shutdown
	E0501 03:29:42.663784       1 storage_rbac.go:187] unable to initialize clusterroles: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles": dial tcp 127.0.0.1:8443: connect: connection refused
	W0501 03:29:42.666660       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0501 03:29:43.662925       1 storage_rbac.go:187] unable to initialize clusterroles: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles": dial tcp 127.0.0.1:8443: connect: connection refused
	W0501 03:29:43.665466       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0501 03:29:44.663373       1 storage_rbac.go:187] unable to initialize clusterroles: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles": dial tcp 127.0.0.1:8443: connect: connection refused
	W0501 03:29:44.665744       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0501 03:29:45.662707       1 storage_rbac.go:187] unable to initialize clusterroles: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles": dial tcp 127.0.0.1:8443: connect: connection refused
	W0501 03:29:45.665731       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0501 03:29:46.662822       1 storage_rbac.go:187] unable to initialize clusterroles: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles": dial tcp 127.0.0.1:8443: connect: connection refused
	W0501 03:29:46.666578       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0501 03:29:47.663081       1 storage_rbac.go:187] unable to initialize clusterroles: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles": dial tcp 127.0.0.1:8443: connect: connection refused
	W0501 03:29:47.665860       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0501 03:29:48.662793       1 storage_rbac.go:187] unable to initialize clusterroles: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles": dial tcp 127.0.0.1:8443: connect: connection refused
	W0501 03:29:48.665768       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	
	
	==> kube-apiserver [dffc93cfd52f5af64e714cb776f0ac72bf1b49ce8b4419ef46d6adf7e2208982] <==
	I0501 03:29:53.780481       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0501 03:29:53.780510       1 aggregator.go:165] initial CRD sync complete...
	I0501 03:29:53.780516       1 autoregister_controller.go:141] Starting autoregister controller
	I0501 03:29:53.780522       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0501 03:29:53.780526       1 cache.go:39] Caches are synced for autoregister controller
	I0501 03:29:53.812281       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0501 03:29:53.830344       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0501 03:29:53.830397       1 policy_source.go:224] refreshing policies
	I0501 03:29:53.832123       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0501 03:29:53.832624       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0501 03:29:53.833930       1 shared_informer.go:320] Caches are synced for configmaps
	I0501 03:29:53.835476       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0501 03:29:53.836118       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0501 03:29:53.836239       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0501 03:29:53.837694       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0501 03:29:53.842312       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0501 03:29:54.647508       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0501 03:29:55.055884       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.4]
	I0501 03:29:55.057691       1 controller.go:615] quota admission added evaluator for: endpoints
	I0501 03:29:55.068095       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0501 03:29:55.750292       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0501 03:29:55.789128       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0501 03:29:55.872276       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0501 03:29:55.932017       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0501 03:29:55.944148       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-controller-manager [8631b6ae7b505faf1e9286be8a8009e27a2f5cecd55bff0b3b8ae07a794faefa] <==
	I0501 03:30:06.348265       1 shared_informer.go:320] Caches are synced for taint
	I0501 03:30:06.348378       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0501 03:30:06.348456       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-542495"
	I0501 03:30:06.348530       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0501 03:30:06.350092       1 shared_informer.go:320] Caches are synced for node
	I0501 03:30:06.350241       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0501 03:30:06.350291       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0501 03:30:06.350315       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0501 03:30:06.350339       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0501 03:30:06.353642       1 shared_informer.go:320] Caches are synced for expand
	I0501 03:30:06.357024       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0501 03:30:06.375349       1 shared_informer.go:320] Caches are synced for crt configmap
	I0501 03:30:06.418022       1 shared_informer.go:320] Caches are synced for persistent volume
	I0501 03:30:06.422970       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0501 03:30:06.458075       1 shared_informer.go:320] Caches are synced for cronjob
	I0501 03:30:06.459422       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0501 03:30:06.470892       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0501 03:30:06.508390       1 shared_informer.go:320] Caches are synced for endpoint
	I0501 03:30:06.511524       1 shared_informer.go:320] Caches are synced for HPA
	I0501 03:30:06.511627       1 shared_informer.go:320] Caches are synced for resource quota
	I0501 03:30:06.531266       1 shared_informer.go:320] Caches are synced for job
	I0501 03:30:06.550043       1 shared_informer.go:320] Caches are synced for resource quota
	I0501 03:30:06.995308       1 shared_informer.go:320] Caches are synced for garbage collector
	I0501 03:30:06.995435       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0501 03:30:06.997842       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-controller-manager [92a924bfd194ca856d32145f1b416a6feaeb97526d5e7d1f3806a954427b41cc] <==
	I0501 03:29:39.259411       1 serving.go:380] Generated self-signed cert in-memory
	I0501 03:29:39.688411       1 controllermanager.go:189] "Starting" version="v1.30.0"
	I0501 03:29:39.688493       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 03:29:39.690500       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0501 03:29:39.690613       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0501 03:29:39.691110       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0501 03:29:39.691249       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	
	
	==> kube-proxy [16a476e1af3d92a5d2bb28639cec07f414b874fd246a7d9bf61dd4c5a84048ba] <==
	I0501 03:29:54.991300       1 server_linux.go:69] "Using iptables proxy"
	I0501 03:29:55.001475       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.4"]
	I0501 03:29:55.055292       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0501 03:29:55.055375       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0501 03:29:55.055400       1 server_linux.go:165] "Using iptables Proxier"
	I0501 03:29:55.061480       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0501 03:29:55.061707       1 server.go:872] "Version info" version="v1.30.0"
	I0501 03:29:55.061759       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 03:29:55.063582       1 config.go:192] "Starting service config controller"
	I0501 03:29:55.063631       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0501 03:29:55.063664       1 config.go:101] "Starting endpoint slice config controller"
	I0501 03:29:55.063670       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0501 03:29:55.066029       1 config.go:319] "Starting node config controller"
	I0501 03:29:55.066076       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0501 03:29:55.164489       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0501 03:29:55.164598       1 shared_informer.go:320] Caches are synced for service config
	I0501 03:29:55.166115       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [4d30e5e711023dd3054c30c78cddbaba96016c1d461380e99168233181b94590] <==
	
	
	==> kube-scheduler [38f0cafe7b1950e0819feeddbbc6000076623a8f23896a0272ce7823ff05494d] <==
	W0501 03:29:53.769762       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	E0501 03:29:53.769812       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	W0501 03:29:53.769884       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	E0501 03:29:53.769895       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	W0501 03:29:53.770007       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	E0501 03:29:53.770056       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	W0501 03:29:53.770119       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found]
	E0501 03:29:53.770238       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found]
	W0501 03:29:53.770322       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	E0501 03:29:53.770359       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	W0501 03:29:53.770487       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found]
	E0501 03:29:53.770521       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found]
	W0501 03:29:53.770584       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	E0501 03:29:53.770593       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	W0501 03:29:53.770645       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	E0501 03:29:53.770681       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	W0501 03:29:53.770738       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found]
	E0501 03:29:53.770747       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found]
	W0501 03:29:53.770952       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	E0501 03:29:53.770990       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	W0501 03:29:53.771002       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	E0501 03:29:53.771010       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	W0501 03:29:53.772434       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found]
	E0501 03:29:53.772478       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found]
	I0501 03:29:53.849868       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [ec898beec5e3d2f8d855ee92cc720b0d9dca6366cf80da972ab7c460dbaebf7b] <==
	I0501 03:29:39.244027       1 serving.go:380] Generated self-signed cert in-memory
	W0501 03:29:41.711546       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0501 03:29:41.711596       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0501 03:29:41.711606       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0501 03:29:41.711612       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0501 03:29:41.758649       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0501 03:29:41.759856       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 03:29:41.771653       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0501 03:29:41.771812       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0501 03:29:41.774380       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0501 03:29:41.775615       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0501 03:29:41.875095       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0501 03:29:48.933436       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0501 03:29:48.933966       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	May 01 03:29:50 pause-542495 kubelet[3302]: I0501 03:29:50.648845    3302 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/24d8d34219349d61e1fe05674be00f92-ca-certs\") pod \"kube-controller-manager-pause-542495\" (UID: \"24d8d34219349d61e1fe05674be00f92\") " pod="kube-system/kube-controller-manager-pause-542495"
	May 01 03:29:50 pause-542495 kubelet[3302]: I0501 03:29:50.648862    3302 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/24d8d34219349d61e1fe05674be00f92-k8s-certs\") pod \"kube-controller-manager-pause-542495\" (UID: \"24d8d34219349d61e1fe05674be00f92\") " pod="kube-system/kube-controller-manager-pause-542495"
	May 01 03:29:50 pause-542495 kubelet[3302]: I0501 03:29:50.648876    3302 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/24d8d34219349d61e1fe05674be00f92-kubeconfig\") pod \"kube-controller-manager-pause-542495\" (UID: \"24d8d34219349d61e1fe05674be00f92\") " pod="kube-system/kube-controller-manager-pause-542495"
	May 01 03:29:50 pause-542495 kubelet[3302]: I0501 03:29:50.648892    3302 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/24d8d34219349d61e1fe05674be00f92-usr-share-ca-certificates\") pod \"kube-controller-manager-pause-542495\" (UID: \"24d8d34219349d61e1fe05674be00f92\") " pod="kube-system/kube-controller-manager-pause-542495"
	May 01 03:29:50 pause-542495 kubelet[3302]: E0501 03:29:50.648966    3302 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-542495?timeout=10s\": dial tcp 192.168.39.4:8443: connect: connection refused" interval="400ms"
	May 01 03:29:50 pause-542495 kubelet[3302]: I0501 03:29:50.747389    3302 kubelet_node_status.go:73] "Attempting to register node" node="pause-542495"
	May 01 03:29:50 pause-542495 kubelet[3302]: E0501 03:29:50.748317    3302 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.4:8443: connect: connection refused" node="pause-542495"
	May 01 03:29:50 pause-542495 kubelet[3302]: I0501 03:29:50.905795    3302 scope.go:117] "RemoveContainer" containerID="4f8f7cca080d5d6cd94a98511e14b930e6b8bee5eee4b99ed932d0626be72bc0"
	May 01 03:29:50 pause-542495 kubelet[3302]: I0501 03:29:50.906524    3302 scope.go:117] "RemoveContainer" containerID="92a924bfd194ca856d32145f1b416a6feaeb97526d5e7d1f3806a954427b41cc"
	May 01 03:29:50 pause-542495 kubelet[3302]: I0501 03:29:50.908533    3302 scope.go:117] "RemoveContainer" containerID="ec898beec5e3d2f8d855ee92cc720b0d9dca6366cf80da972ab7c460dbaebf7b"
	May 01 03:29:51 pause-542495 kubelet[3302]: E0501 03:29:51.050332    3302 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-542495?timeout=10s\": dial tcp 192.168.39.4:8443: connect: connection refused" interval="800ms"
	May 01 03:29:51 pause-542495 kubelet[3302]: I0501 03:29:51.150760    3302 kubelet_node_status.go:73] "Attempting to register node" node="pause-542495"
	May 01 03:29:51 pause-542495 kubelet[3302]: E0501 03:29:51.151687    3302 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.4:8443: connect: connection refused" node="pause-542495"
	May 01 03:29:51 pause-542495 kubelet[3302]: I0501 03:29:51.953334    3302 kubelet_node_status.go:73] "Attempting to register node" node="pause-542495"
	May 01 03:29:53 pause-542495 kubelet[3302]: I0501 03:29:53.902679    3302 kubelet_node_status.go:112] "Node was previously registered" node="pause-542495"
	May 01 03:29:53 pause-542495 kubelet[3302]: I0501 03:29:53.903238    3302 kubelet_node_status.go:76] "Successfully registered node" node="pause-542495"
	May 01 03:29:53 pause-542495 kubelet[3302]: I0501 03:29:53.908112    3302 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	May 01 03:29:53 pause-542495 kubelet[3302]: I0501 03:29:53.909117    3302 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	May 01 03:29:54 pause-542495 kubelet[3302]: I0501 03:29:54.431068    3302 apiserver.go:52] "Watching apiserver"
	May 01 03:29:54 pause-542495 kubelet[3302]: I0501 03:29:54.440358    3302 topology_manager.go:215] "Topology Admit Handler" podUID="f44ac199-32c4-4977-8f63-564a23e4b83e" podNamespace="kube-system" podName="kube-proxy-x7vrf"
	May 01 03:29:54 pause-542495 kubelet[3302]: I0501 03:29:54.443359    3302 topology_manager.go:215] "Topology Admit Handler" podUID="1f2c4209-14df-46c4-abc7-cf93b398a872" podNamespace="kube-system" podName="coredns-7db6d8ff4d-lz5kj"
	May 01 03:29:54 pause-542495 kubelet[3302]: I0501 03:29:54.543858    3302 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	May 01 03:29:54 pause-542495 kubelet[3302]: I0501 03:29:54.544291    3302 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f44ac199-32c4-4977-8f63-564a23e4b83e-lib-modules\") pod \"kube-proxy-x7vrf\" (UID: \"f44ac199-32c4-4977-8f63-564a23e4b83e\") " pod="kube-system/kube-proxy-x7vrf"
	May 01 03:29:54 pause-542495 kubelet[3302]: I0501 03:29:54.545115    3302 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f44ac199-32c4-4977-8f63-564a23e4b83e-xtables-lock\") pod \"kube-proxy-x7vrf\" (UID: \"f44ac199-32c4-4977-8f63-564a23e4b83e\") " pod="kube-system/kube-proxy-x7vrf"
	May 01 03:29:54 pause-542495 kubelet[3302]: I0501 03:29:54.745426    3302 scope.go:117] "RemoveContainer" containerID="4d30e5e711023dd3054c30c78cddbaba96016c1d461380e99168233181b94590"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-542495 -n pause-542495
helpers_test.go:261: (dbg) Run:  kubectl --context pause-542495 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (54.74s)
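The dump above is the captured post-mortem for profile pause-542495. As a rough sketch (assuming the profile and the pause-542495 kubectl context from this run still exist), the same state can be re-inspected by hand using the commands the helpers already run, plus minikube logs to regenerate the component log dump:

	out/minikube-linux-amd64 status -p pause-542495
	out/minikube-linux-amd64 logs -p pause-542495
	kubectl --context pause-542495 get po -A --field-selector=status.phase!=Running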

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (278.58s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-503971 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-503971 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m38.255443474s)
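For local reproduction, the failing invocation can be copied verbatim from the run above (sketch only; any writable MINIKUBE_HOME and KUBECONFIG will do, the Jenkins paths are not required):

	out/minikube-linux-amd64 start -p old-k8s-version-503971 \
	  --memory=2200 --alsologtostderr --wait=true \
	  --kvm-network=default --kvm-qemu-uri=qemu:///system \
	  --disable-driver-mounts --keep-context=false \
	  --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0

The repeated "Generating certificates and keys" / "Booting up control plane" lines in the stdout below suggest the v1.20.0 control plane was retried and never became healthy before the test gave up.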

                                                
                                                
-- stdout --
	* [old-k8s-version-503971] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18779
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18779-13391/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18779-13391/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-503971" primary control-plane node in "old-k8s-version-503971" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0501 03:30:12.547276   65502 out.go:291] Setting OutFile to fd 1 ...
	I0501 03:30:12.547417   65502 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 03:30:12.547434   65502 out.go:304] Setting ErrFile to fd 2...
	I0501 03:30:12.547450   65502 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 03:30:12.547718   65502 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18779-13391/.minikube/bin
	I0501 03:30:12.548366   65502 out.go:298] Setting JSON to false
	I0501 03:30:12.549428   65502 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7956,"bootTime":1714526257,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0501 03:30:12.549491   65502 start.go:139] virtualization: kvm guest
	I0501 03:30:12.551613   65502 out.go:177] * [old-k8s-version-503971] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0501 03:30:12.552973   65502 out.go:177]   - MINIKUBE_LOCATION=18779
	I0501 03:30:12.554272   65502 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0501 03:30:12.553057   65502 notify.go:220] Checking for updates...
	I0501 03:30:12.556671   65502 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18779-13391/kubeconfig
	I0501 03:30:12.557966   65502 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18779-13391/.minikube
	I0501 03:30:12.559376   65502 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0501 03:30:12.560810   65502 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0501 03:30:12.562481   65502 config.go:182] Loaded profile config "cert-options-582976": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 03:30:12.562600   65502 config.go:182] Loaded profile config "kubernetes-upgrade-046243": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 03:30:12.562731   65502 config.go:182] Loaded profile config "pause-542495": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 03:30:12.562841   65502 driver.go:392] Setting default libvirt URI to qemu:///system
	I0501 03:30:12.601023   65502 out.go:177] * Using the kvm2 driver based on user configuration
	I0501 03:30:12.602106   65502 start.go:297] selected driver: kvm2
	I0501 03:30:12.602117   65502 start.go:901] validating driver "kvm2" against <nil>
	I0501 03:30:12.602127   65502 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0501 03:30:12.602818   65502 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0501 03:30:12.602893   65502 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18779-13391/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0501 03:30:12.618994   65502 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0501 03:30:12.619061   65502 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0501 03:30:12.619303   65502 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0501 03:30:12.619358   65502 cni.go:84] Creating CNI manager for ""
	I0501 03:30:12.619368   65502 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0501 03:30:12.619379   65502 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0501 03:30:12.619429   65502 start.go:340] cluster config:
	{Name:old-k8s-version-503971 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-503971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: S
SHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 03:30:12.619519   65502 iso.go:125] acquiring lock: {Name:mkcd2496aadb29931e179193d707c731bfda98b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0501 03:30:12.621066   65502 out.go:177] * Starting "old-k8s-version-503971" primary control-plane node in "old-k8s-version-503971" cluster
	I0501 03:30:12.622160   65502 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0501 03:30:12.622204   65502 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0501 03:30:12.622221   65502 cache.go:56] Caching tarball of preloaded images
	I0501 03:30:12.622306   65502 preload.go:173] Found /home/jenkins/minikube-integration/18779-13391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0501 03:30:12.622323   65502 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0501 03:30:12.622458   65502 profile.go:143] Saving config to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/config.json ...
	I0501 03:30:12.622486   65502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/config.json: {Name:mk669fe3f056de7b3742f1c61796e4302042079b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:30:12.622663   65502 start.go:360] acquireMachinesLock for old-k8s-version-503971: {Name:mkc4548e5afe1d4b0a833f6c522103562b0cefff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0501 03:30:14.808875   65502 start.go:364] duration metric: took 2.186175971s to acquireMachinesLock for "old-k8s-version-503971"
	I0501 03:30:14.808968   65502 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-503971 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-503971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0501 03:30:14.809090   65502 start.go:125] createHost starting for "" (driver="kvm2")
	I0501 03:30:14.811132   65502 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0501 03:30:14.811344   65502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:30:14.811404   65502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:30:14.832570   65502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43321
	I0501 03:30:14.833923   65502 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:30:14.834627   65502 main.go:141] libmachine: Using API Version  1
	I0501 03:30:14.834658   65502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:30:14.835031   65502 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:30:14.835191   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetMachineName
	I0501 03:30:14.835288   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .DriverName
	I0501 03:30:14.835409   65502 start.go:159] libmachine.API.Create for "old-k8s-version-503971" (driver="kvm2")
	I0501 03:30:14.835439   65502 client.go:168] LocalClient.Create starting
	I0501 03:30:14.835472   65502 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem
	I0501 03:30:14.835510   65502 main.go:141] libmachine: Decoding PEM data...
	I0501 03:30:14.835539   65502 main.go:141] libmachine: Parsing certificate...
	I0501 03:30:14.835613   65502 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem
	I0501 03:30:14.835655   65502 main.go:141] libmachine: Decoding PEM data...
	I0501 03:30:14.835675   65502 main.go:141] libmachine: Parsing certificate...
	I0501 03:30:14.835703   65502 main.go:141] libmachine: Running pre-create checks...
	I0501 03:30:14.835716   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .PreCreateCheck
	I0501 03:30:14.836137   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetConfigRaw
	I0501 03:30:14.836571   65502 main.go:141] libmachine: Creating machine...
	I0501 03:30:14.836590   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .Create
	I0501 03:30:14.836699   65502 main.go:141] libmachine: (old-k8s-version-503971) Creating KVM machine...
	I0501 03:30:14.838192   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | found existing default KVM network
	I0501 03:30:14.839850   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:30:14.839672   65721 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:d3:be:79} reservation:<nil>}
	I0501 03:30:14.841237   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:30:14.841142   65721 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:a4:69:27} reservation:<nil>}
	I0501 03:30:14.842784   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:30:14.842704   65721 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002891d0}
	I0501 03:30:14.842892   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | created network xml: 
	I0501 03:30:14.842903   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | <network>
	I0501 03:30:14.842911   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG |   <name>mk-old-k8s-version-503971</name>
	I0501 03:30:14.842918   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG |   <dns enable='no'/>
	I0501 03:30:14.842925   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG |   
	I0501 03:30:14.842941   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0501 03:30:14.842950   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG |     <dhcp>
	I0501 03:30:14.842956   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0501 03:30:14.842964   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG |     </dhcp>
	I0501 03:30:14.842969   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG |   </ip>
	I0501 03:30:14.842976   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG |   
	I0501 03:30:14.842981   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | </network>
	I0501 03:30:14.842987   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | 
	I0501 03:30:14.849161   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | trying to create private KVM network mk-old-k8s-version-503971 192.168.61.0/24...
	I0501 03:30:14.935299   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | private KVM network mk-old-k8s-version-503971 192.168.61.0/24 created
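At this point libmachine has defined and started the private libvirt network from the XML shown above. As a host-side verification sketch (plain virsh usage, not taken from the test itself), assuming virsh is pointed at qemu:///system:

	virsh --connect qemu:///system net-list --all
	virsh --connect qemu:///system net-dumpxml mk-old-k8s-version-503971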
	I0501 03:30:14.935382   65502 main.go:141] libmachine: (old-k8s-version-503971) Setting up store path in /home/jenkins/minikube-integration/18779-13391/.minikube/machines/old-k8s-version-503971 ...
	I0501 03:30:14.935469   65502 main.go:141] libmachine: (old-k8s-version-503971) Building disk image from file:///home/jenkins/minikube-integration/18779-13391/.minikube/cache/iso/amd64/minikube-v1.33.0-1714498396-18779-amd64.iso
	I0501 03:30:14.935534   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:30:14.935500   65721 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18779-13391/.minikube
	I0501 03:30:14.935662   65502 main.go:141] libmachine: (old-k8s-version-503971) Downloading /home/jenkins/minikube-integration/18779-13391/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18779-13391/.minikube/cache/iso/amd64/minikube-v1.33.0-1714498396-18779-amd64.iso...
	I0501 03:30:15.188664   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:30:15.188551   65721 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/old-k8s-version-503971/id_rsa...
	I0501 03:30:15.300917   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:30:15.300789   65721 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/old-k8s-version-503971/old-k8s-version-503971.rawdisk...
	I0501 03:30:15.300952   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | Writing magic tar header
	I0501 03:30:15.300972   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | Writing SSH key tar header
	I0501 03:30:15.301057   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:30:15.301007   65721 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18779-13391/.minikube/machines/old-k8s-version-503971 ...
	I0501 03:30:15.301155   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/old-k8s-version-503971
	I0501 03:30:15.301186   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18779-13391/.minikube/machines
	I0501 03:30:15.301210   65502 main.go:141] libmachine: (old-k8s-version-503971) Setting executable bit set on /home/jenkins/minikube-integration/18779-13391/.minikube/machines/old-k8s-version-503971 (perms=drwx------)
	I0501 03:30:15.301228   65502 main.go:141] libmachine: (old-k8s-version-503971) Setting executable bit set on /home/jenkins/minikube-integration/18779-13391/.minikube/machines (perms=drwxr-xr-x)
	I0501 03:30:15.301249   65502 main.go:141] libmachine: (old-k8s-version-503971) Setting executable bit set on /home/jenkins/minikube-integration/18779-13391/.minikube (perms=drwxr-xr-x)
	I0501 03:30:15.301264   65502 main.go:141] libmachine: (old-k8s-version-503971) Setting executable bit set on /home/jenkins/minikube-integration/18779-13391 (perms=drwxrwxr-x)
	I0501 03:30:15.301281   65502 main.go:141] libmachine: (old-k8s-version-503971) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0501 03:30:15.301295   65502 main.go:141] libmachine: (old-k8s-version-503971) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0501 03:30:15.301311   65502 main.go:141] libmachine: (old-k8s-version-503971) Creating domain...
	I0501 03:30:15.301330   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18779-13391/.minikube
	I0501 03:30:15.301344   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18779-13391
	I0501 03:30:15.301359   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0501 03:30:15.301371   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | Checking permissions on dir: /home/jenkins
	I0501 03:30:15.301395   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | Checking permissions on dir: /home
	I0501 03:30:15.301409   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | Skipping /home - not owner
	I0501 03:30:15.302639   65502 main.go:141] libmachine: (old-k8s-version-503971) define libvirt domain using xml: 
	I0501 03:30:15.302657   65502 main.go:141] libmachine: (old-k8s-version-503971) <domain type='kvm'>
	I0501 03:30:15.302666   65502 main.go:141] libmachine: (old-k8s-version-503971)   <name>old-k8s-version-503971</name>
	I0501 03:30:15.302675   65502 main.go:141] libmachine: (old-k8s-version-503971)   <memory unit='MiB'>2200</memory>
	I0501 03:30:15.302683   65502 main.go:141] libmachine: (old-k8s-version-503971)   <vcpu>2</vcpu>
	I0501 03:30:15.302695   65502 main.go:141] libmachine: (old-k8s-version-503971)   <features>
	I0501 03:30:15.302705   65502 main.go:141] libmachine: (old-k8s-version-503971)     <acpi/>
	I0501 03:30:15.302712   65502 main.go:141] libmachine: (old-k8s-version-503971)     <apic/>
	I0501 03:30:15.302733   65502 main.go:141] libmachine: (old-k8s-version-503971)     <pae/>
	I0501 03:30:15.302741   65502 main.go:141] libmachine: (old-k8s-version-503971)     
	I0501 03:30:15.302750   65502 main.go:141] libmachine: (old-k8s-version-503971)   </features>
	I0501 03:30:15.302758   65502 main.go:141] libmachine: (old-k8s-version-503971)   <cpu mode='host-passthrough'>
	I0501 03:30:15.302766   65502 main.go:141] libmachine: (old-k8s-version-503971)   
	I0501 03:30:15.302773   65502 main.go:141] libmachine: (old-k8s-version-503971)   </cpu>
	I0501 03:30:15.302781   65502 main.go:141] libmachine: (old-k8s-version-503971)   <os>
	I0501 03:30:15.302788   65502 main.go:141] libmachine: (old-k8s-version-503971)     <type>hvm</type>
	I0501 03:30:15.302796   65502 main.go:141] libmachine: (old-k8s-version-503971)     <boot dev='cdrom'/>
	I0501 03:30:15.302803   65502 main.go:141] libmachine: (old-k8s-version-503971)     <boot dev='hd'/>
	I0501 03:30:15.302812   65502 main.go:141] libmachine: (old-k8s-version-503971)     <bootmenu enable='no'/>
	I0501 03:30:15.302818   65502 main.go:141] libmachine: (old-k8s-version-503971)   </os>
	I0501 03:30:15.302830   65502 main.go:141] libmachine: (old-k8s-version-503971)   <devices>
	I0501 03:30:15.302839   65502 main.go:141] libmachine: (old-k8s-version-503971)     <disk type='file' device='cdrom'>
	I0501 03:30:15.302853   65502 main.go:141] libmachine: (old-k8s-version-503971)       <source file='/home/jenkins/minikube-integration/18779-13391/.minikube/machines/old-k8s-version-503971/boot2docker.iso'/>
	I0501 03:30:15.302869   65502 main.go:141] libmachine: (old-k8s-version-503971)       <target dev='hdc' bus='scsi'/>
	I0501 03:30:15.302880   65502 main.go:141] libmachine: (old-k8s-version-503971)       <readonly/>
	I0501 03:30:15.302889   65502 main.go:141] libmachine: (old-k8s-version-503971)     </disk>
	I0501 03:30:15.302903   65502 main.go:141] libmachine: (old-k8s-version-503971)     <disk type='file' device='disk'>
	I0501 03:30:15.302913   65502 main.go:141] libmachine: (old-k8s-version-503971)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0501 03:30:15.302931   65502 main.go:141] libmachine: (old-k8s-version-503971)       <source file='/home/jenkins/minikube-integration/18779-13391/.minikube/machines/old-k8s-version-503971/old-k8s-version-503971.rawdisk'/>
	I0501 03:30:15.302950   65502 main.go:141] libmachine: (old-k8s-version-503971)       <target dev='hda' bus='virtio'/>
	I0501 03:30:15.302965   65502 main.go:141] libmachine: (old-k8s-version-503971)     </disk>
	I0501 03:30:15.302974   65502 main.go:141] libmachine: (old-k8s-version-503971)     <interface type='network'>
	I0501 03:30:15.302982   65502 main.go:141] libmachine: (old-k8s-version-503971)       <source network='mk-old-k8s-version-503971'/>
	I0501 03:30:15.302992   65502 main.go:141] libmachine: (old-k8s-version-503971)       <model type='virtio'/>
	I0501 03:30:15.303005   65502 main.go:141] libmachine: (old-k8s-version-503971)     </interface>
	I0501 03:30:15.303018   65502 main.go:141] libmachine: (old-k8s-version-503971)     <interface type='network'>
	I0501 03:30:15.303030   65502 main.go:141] libmachine: (old-k8s-version-503971)       <source network='default'/>
	I0501 03:30:15.303041   65502 main.go:141] libmachine: (old-k8s-version-503971)       <model type='virtio'/>
	I0501 03:30:15.303050   65502 main.go:141] libmachine: (old-k8s-version-503971)     </interface>
	I0501 03:30:15.303060   65502 main.go:141] libmachine: (old-k8s-version-503971)     <serial type='pty'>
	I0501 03:30:15.303066   65502 main.go:141] libmachine: (old-k8s-version-503971)       <target port='0'/>
	I0501 03:30:15.303073   65502 main.go:141] libmachine: (old-k8s-version-503971)     </serial>
	I0501 03:30:15.303085   65502 main.go:141] libmachine: (old-k8s-version-503971)     <console type='pty'>
	I0501 03:30:15.303097   65502 main.go:141] libmachine: (old-k8s-version-503971)       <target type='serial' port='0'/>
	I0501 03:30:15.303112   65502 main.go:141] libmachine: (old-k8s-version-503971)     </console>
	I0501 03:30:15.303123   65502 main.go:141] libmachine: (old-k8s-version-503971)     <rng model='virtio'>
	I0501 03:30:15.303136   65502 main.go:141] libmachine: (old-k8s-version-503971)       <backend model='random'>/dev/random</backend>
	I0501 03:30:15.303151   65502 main.go:141] libmachine: (old-k8s-version-503971)     </rng>
	I0501 03:30:15.303162   65502 main.go:141] libmachine: (old-k8s-version-503971)     
	I0501 03:30:15.303170   65502 main.go:141] libmachine: (old-k8s-version-503971)     
	I0501 03:30:15.303177   65502 main.go:141] libmachine: (old-k8s-version-503971)   </devices>
	I0501 03:30:15.303187   65502 main.go:141] libmachine: (old-k8s-version-503971) </domain>
	I0501 03:30:15.303201   65502 main.go:141] libmachine: (old-k8s-version-503971) 
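The domain XML emitted above has just been handed to libvirt. A host-side sketch for checking the resulting definition and its two network interfaces (standard virsh subcommands, not part of the test run):

	virsh --connect qemu:///system dumpxml old-k8s-version-503971
	virsh --connect qemu:///system domiflist old-k8s-version-503971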
	I0501 03:30:15.308080   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:a6:40:1a in network default
	I0501 03:30:15.308802   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:30:15.308856   65502 main.go:141] libmachine: (old-k8s-version-503971) Ensuring networks are active...
	I0501 03:30:15.310268   65502 main.go:141] libmachine: (old-k8s-version-503971) Ensuring network default is active
	I0501 03:30:15.311072   65502 main.go:141] libmachine: (old-k8s-version-503971) Ensuring network mk-old-k8s-version-503971 is active
	I0501 03:30:15.311573   65502 main.go:141] libmachine: (old-k8s-version-503971) Getting domain xml...
	I0501 03:30:15.312472   65502 main.go:141] libmachine: (old-k8s-version-503971) Creating domain...
	I0501 03:30:17.277823   65502 main.go:141] libmachine: (old-k8s-version-503971) Waiting to get IP...
	I0501 03:30:17.280395   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:30:17.281143   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:30:17.281171   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:30:17.281098   65721 retry.go:31] will retry after 267.919967ms: waiting for machine to come up
	I0501 03:30:17.818484   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:30:17.818513   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:30:17.818527   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:30:17.807964   65721 retry.go:31] will retry after 299.613311ms: waiting for machine to come up
	I0501 03:30:18.109799   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:30:18.115067   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:30:18.115097   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:30:18.114959   65721 retry.go:31] will retry after 335.426859ms: waiting for machine to come up
	I0501 03:30:18.452457   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:30:18.452995   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:30:18.453019   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:30:18.452976   65721 retry.go:31] will retry after 492.882265ms: waiting for machine to come up
	I0501 03:30:18.947462   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:30:18.948107   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:30:18.948141   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:30:18.948047   65721 retry.go:31] will retry after 554.404484ms: waiting for machine to come up
	I0501 03:30:19.504659   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:30:19.505239   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:30:19.505271   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:30:19.505183   65721 retry.go:31] will retry after 809.507018ms: waiting for machine to come up
	I0501 03:30:20.316619   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:30:20.317111   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:30:20.317141   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:30:20.317070   65721 retry.go:31] will retry after 1.17890373s: waiting for machine to come up
	I0501 03:30:21.497732   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:30:21.498209   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:30:21.498260   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:30:21.498160   65721 retry.go:31] will retry after 1.305661751s: waiting for machine to come up
	I0501 03:30:22.805627   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:30:22.806155   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:30:22.806187   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:30:22.806102   65721 retry.go:31] will retry after 1.340432437s: waiting for machine to come up
	I0501 03:30:24.148432   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:30:24.148698   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:30:24.148736   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:30:24.148658   65721 retry.go:31] will retry after 2.165346193s: waiting for machine to come up
	I0501 03:30:26.315686   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:30:26.316230   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:30:26.316265   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:30:26.316167   65721 retry.go:31] will retry after 2.091419447s: waiting for machine to come up
	I0501 03:30:28.410637   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:30:28.411224   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:30:28.411249   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:30:28.411164   65721 retry.go:31] will retry after 2.966341708s: waiting for machine to come up
	I0501 03:30:31.379569   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:30:31.379977   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:30:31.380010   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:30:31.379935   65721 retry.go:31] will retry after 2.98385883s: waiting for machine to come up
	I0501 03:30:34.365139   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:30:34.365626   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:30:34.365672   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:30:34.365581   65721 retry.go:31] will retry after 3.807407638s: waiting for machine to come up
	I0501 03:30:38.176696   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:30:38.177157   65502 main.go:141] libmachine: (old-k8s-version-503971) Found IP for machine: 192.168.61.104
	I0501 03:30:38.177181   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has current primary IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:30:38.177187   65502 main.go:141] libmachine: (old-k8s-version-503971) Reserving static IP address...
	I0501 03:30:38.177481   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-503971", mac: "52:54:00:7d:68:83", ip: "192.168.61.104"} in network mk-old-k8s-version-503971
	I0501 03:30:38.251556   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | Getting to WaitForSSH function...
	I0501 03:30:38.251590   65502 main.go:141] libmachine: (old-k8s-version-503971) Reserved static IP address: 192.168.61.104
	I0501 03:30:38.251612   65502 main.go:141] libmachine: (old-k8s-version-503971) Waiting for SSH to be available...
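
The wait above is a plain retry loop: the DHCP leases of the libvirt network are polled for the domain's MAC address with a growing backoff (roughly 268ms up to ~3.8s) until an IP shows up. A minimal Go sketch of that pattern, for illustration only; waitForIP and lookupIP are made-up names, not minikube's actual retry.go API:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // lookupIP stands in for querying the libvirt network's DHCP leases for
    // the domain's MAC address; it is hypothetical and always fails here.
    func lookupIP(mac string) (string, error) {
        return "", errors.New("unable to find current IP address")
    }

    func waitForIP(mac string, deadline time.Duration) (string, error) {
        backoff := 250 * time.Millisecond
        end := time.Now().Add(deadline)
        for time.Now().Before(end) {
            if ip, err := lookupIP(mac); err == nil {
                return ip, nil
            }
            fmt.Printf("will retry after %s: waiting for machine to come up\n", backoff)
            time.Sleep(backoff)
            if backoff < 4*time.Second {
                backoff += backoff / 2 // grow roughly like the intervals in the log above
            }
        }
        return "", errors.New("timed out waiting for machine IP")
    }

    func main() {
        if ip, err := waitForIP("52:54:00:7d:68:83", 5*time.Second); err != nil {
            fmt.Println("error:", err)
        } else {
            fmt.Println("found IP:", ip)
        }
    }
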
	I0501 03:30:38.254035   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:30:38.254324   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971
	I0501 03:30:38.254351   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find defined IP address of network mk-old-k8s-version-503971 interface with MAC address 52:54:00:7d:68:83
	I0501 03:30:38.254485   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | Using SSH client type: external
	I0501 03:30:38.254546   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | Using SSH private key: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/old-k8s-version-503971/id_rsa (-rw-------)
	I0501 03:30:38.254588   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18779-13391/.minikube/machines/old-k8s-version-503971/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0501 03:30:38.254604   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | About to run SSH command:
	I0501 03:30:38.254622   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | exit 0
	I0501 03:30:38.258117   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | SSH cmd err, output: exit status 255: 
	I0501 03:30:38.258139   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0501 03:30:38.258146   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | command : exit 0
	I0501 03:30:38.258152   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | err     : exit status 255
	I0501 03:30:38.258159   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | output  : 
	I0501 03:30:41.258384   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | Getting to WaitForSSH function...
	I0501 03:30:41.260925   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:30:41.261296   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:30:41.261330   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:30:41.261420   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | Using SSH client type: external
	I0501 03:30:41.261444   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | Using SSH private key: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/old-k8s-version-503971/id_rsa (-rw-------)
	I0501 03:30:41.261490   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.104 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18779-13391/.minikube/machines/old-k8s-version-503971/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0501 03:30:41.261514   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | About to run SSH command:
	I0501 03:30:41.261528   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | exit 0
	I0501 03:30:41.386904   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | SSH cmd err, output: <nil>: 
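
WaitForSSH simply shells out to the external /usr/bin/ssh client with the options shown above and runs "exit 0" until it returns status 0 (the first attempt above failed with exit status 255 and was retried). A rough standalone sketch of that probe; probeSSH is a made-up helper and the key path is a placeholder, not the real one from the log:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // probeSSH runs "exit 0" on the target host using the external ssh client,
    // mirroring the options visible in the log above.
    func probeSSH(host, keyPath string) error {
        args := []string{
            "-F", "/dev/null",
            "-o", "ConnectionAttempts=3",
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "IdentitiesOnly=yes",
            "-i", keyPath,
            "-p", "22",
            "docker@" + host,
            "exit 0",
        }
        return exec.Command("/usr/bin/ssh", args...).Run()
    }

    func main() {
        for i := 0; i < 10; i++ {
            if err := probeSSH("192.168.61.104", "/path/to/id_rsa"); err == nil {
                fmt.Println("SSH is available")
                return
            }
            time.Sleep(3 * time.Second) // the log retries roughly every 3s after exit status 255
        }
        fmt.Println("gave up waiting for SSH")
    }
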
	I0501 03:30:41.387199   65502 main.go:141] libmachine: (old-k8s-version-503971) KVM machine creation complete!
	I0501 03:30:41.387503   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetConfigRaw
	I0501 03:30:41.388103   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .DriverName
	I0501 03:30:41.388344   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .DriverName
	I0501 03:30:41.388512   65502 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0501 03:30:41.388529   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetState
	I0501 03:30:41.389801   65502 main.go:141] libmachine: Detecting operating system of created instance...
	I0501 03:30:41.389833   65502 main.go:141] libmachine: Waiting for SSH to be available...
	I0501 03:30:41.389844   65502 main.go:141] libmachine: Getting to WaitForSSH function...
	I0501 03:30:41.389859   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHHostname
	I0501 03:30:41.392182   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:30:41.392545   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:30:41.392587   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:30:41.392746   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHPort
	I0501 03:30:41.393036   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:30:41.393232   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:30:41.393406   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHUsername
	I0501 03:30:41.393565   65502 main.go:141] libmachine: Using SSH client type: native
	I0501 03:30:41.393767   65502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.104 22 <nil> <nil>}
	I0501 03:30:41.393781   65502 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0501 03:30:41.494111   65502 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0501 03:30:41.494136   65502 main.go:141] libmachine: Detecting the provisioner...
	I0501 03:30:41.494146   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHHostname
	I0501 03:30:41.496975   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:30:41.497297   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:30:41.497330   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:30:41.497571   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHPort
	I0501 03:30:41.497760   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:30:41.497960   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:30:41.498060   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHUsername
	I0501 03:30:41.498230   65502 main.go:141] libmachine: Using SSH client type: native
	I0501 03:30:41.498471   65502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.104 22 <nil> <nil>}
	I0501 03:30:41.498484   65502 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0501 03:30:41.599933   65502 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0501 03:30:41.600009   65502 main.go:141] libmachine: found compatible host: buildroot
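
The provisioner is detected by running "cat /etc/os-release" and matching its fields (ID=buildroot here). A small sketch of that parsing step, using the exact output captured above; detectDistro is an illustrative name, not the library's function:

    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    // detectDistro pulls the ID= field out of an /etc/os-release style blob.
    func detectDistro(osRelease string) string {
        sc := bufio.NewScanner(strings.NewReader(osRelease))
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            if strings.HasPrefix(line, "ID=") {
                return strings.Trim(strings.TrimPrefix(line, "ID="), `"`)
            }
        }
        return ""
    }

    func main() {
        out := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
        fmt.Println("found compatible host:", detectDistro(out)) // prints "buildroot"
    }
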
	I0501 03:30:41.600018   65502 main.go:141] libmachine: Provisioning with buildroot...
	I0501 03:30:41.600026   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetMachineName
	I0501 03:30:41.600282   65502 buildroot.go:166] provisioning hostname "old-k8s-version-503971"
	I0501 03:30:41.600305   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetMachineName
	I0501 03:30:41.600459   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHHostname
	I0501 03:30:41.603164   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:30:41.603594   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:30:41.603624   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:30:41.603796   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHPort
	I0501 03:30:41.603962   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:30:41.604125   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:30:41.604272   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHUsername
	I0501 03:30:41.604419   65502 main.go:141] libmachine: Using SSH client type: native
	I0501 03:30:41.604639   65502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.104 22 <nil> <nil>}
	I0501 03:30:41.604658   65502 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-503971 && echo "old-k8s-version-503971" | sudo tee /etc/hostname
	I0501 03:30:41.725848   65502 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-503971
	
	I0501 03:30:41.725882   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHHostname
	I0501 03:30:41.728561   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:30:41.728959   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:30:41.729003   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:30:41.729180   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHPort
	I0501 03:30:41.729382   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:30:41.729513   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:30:41.729604   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHUsername
	I0501 03:30:41.729736   65502 main.go:141] libmachine: Using SSH client type: native
	I0501 03:30:41.729907   65502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.104 22 <nil> <nil>}
	I0501 03:30:41.729924   65502 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-503971' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-503971/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-503971' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0501 03:30:41.841082   65502 main.go:141] libmachine: SSH cmd err, output: <nil>: 
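
Hostname provisioning is two shell snippets run over SSH: one sets the hostname and writes /etc/hostname, the other makes sure /etc/hosts has a 127.0.1.1 entry for it (the script shown above). A sketch of how such command strings can be assembled before being sent to the runner; hostnameCommands is an illustrative helper, not minikube's:

    package main

    import "fmt"

    // hostnameCommands returns the two shell snippets used above: one to set the
    // hostname, one to make 127.0.1.1 in /etc/hosts point at it.
    func hostnameCommands(name string) (setHostname, patchHosts string) {
        setHostname = fmt.Sprintf("sudo hostname %[1]s && echo %[1]q | sudo tee /etc/hostname", name)
        patchHosts = fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
      if grep -xq '127.0.1.1\s.*' /etc/hosts; then
        sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
      else
        echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
      fi
    fi`, name)
        return setHostname, patchHosts
    }

    func main() {
        a, b := hostnameCommands("old-k8s-version-503971")
        fmt.Println(a)
        fmt.Println(b)
    }
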
	I0501 03:30:41.841114   65502 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18779-13391/.minikube CaCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18779-13391/.minikube}
	I0501 03:30:41.841137   65502 buildroot.go:174] setting up certificates
	I0501 03:30:41.841150   65502 provision.go:84] configureAuth start
	I0501 03:30:41.841163   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetMachineName
	I0501 03:30:41.841471   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetIP
	I0501 03:30:41.844393   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:30:41.844723   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:30:41.844757   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:30:41.844942   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHHostname
	I0501 03:30:41.847201   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:30:41.847511   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:30:41.847533   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:30:41.847698   65502 provision.go:143] copyHostCerts
	I0501 03:30:41.847749   65502 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem, removing ...
	I0501 03:30:41.847760   65502 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem
	I0501 03:30:41.847815   65502 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem (1078 bytes)
	I0501 03:30:41.847906   65502 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem, removing ...
	I0501 03:30:41.847918   65502 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem
	I0501 03:30:41.847946   65502 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem (1123 bytes)
	I0501 03:30:41.848007   65502 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem, removing ...
	I0501 03:30:41.848014   65502 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem
	I0501 03:30:41.848040   65502 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem (1675 bytes)
	I0501 03:30:41.848101   65502 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-503971 san=[127.0.0.1 192.168.61.104 localhost minikube old-k8s-version-503971]
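
The server certificate above gets SANs for 127.0.0.1, the machine IP, localhost, minikube and the node name, and is signed by the profile's CA (ca.pem/ca-key.pem). A condensed, self-signed sketch of how those SANs map onto an x509 template; this is illustration only and self-signs for brevity instead of signing with the CA key:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-503971"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile config
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // The SANs listed in the log line above:
            DNSNames:    []string{"localhost", "minikube", "old-k8s-version-503971"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.104")},
        }
        // Self-signed here; the real flow signs with the profile's CA instead.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            log.Fatal(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
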
	I0501 03:30:42.129743   65502 provision.go:177] copyRemoteCerts
	I0501 03:30:42.129807   65502 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0501 03:30:42.129834   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHHostname
	I0501 03:30:42.132552   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:30:42.132883   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:30:42.132912   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:30:42.133134   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHPort
	I0501 03:30:42.133384   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:30:42.133585   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHUsername
	I0501 03:30:42.133723   65502 sshutil.go:53] new ssh client: &{IP:192.168.61.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/old-k8s-version-503971/id_rsa Username:docker}
	I0501 03:30:42.219090   65502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0501 03:30:42.247347   65502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0501 03:30:42.275755   65502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0501 03:30:42.305243   65502 provision.go:87] duration metric: took 464.078319ms to configureAuth
	I0501 03:30:42.305275   65502 buildroot.go:189] setting minikube options for container-runtime
	I0501 03:30:42.305461   65502 config.go:182] Loaded profile config "old-k8s-version-503971": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0501 03:30:42.305560   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHHostname
	I0501 03:30:42.308502   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:30:42.308899   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:30:42.308926   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:30:42.309137   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHPort
	I0501 03:30:42.309338   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:30:42.309522   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:30:42.309669   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHUsername
	I0501 03:30:42.309839   65502 main.go:141] libmachine: Using SSH client type: native
	I0501 03:30:42.309998   65502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.104 22 <nil> <nil>}
	I0501 03:30:42.310016   65502 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0501 03:30:42.617007   65502 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0501 03:30:42.617043   65502 main.go:141] libmachine: Checking connection to Docker...
	I0501 03:30:42.617070   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetURL
	I0501 03:30:42.618412   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | Using libvirt version 6000000
	I0501 03:30:42.620667   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:30:42.621024   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:30:42.621047   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:30:42.621237   65502 main.go:141] libmachine: Docker is up and running!
	I0501 03:30:42.621253   65502 main.go:141] libmachine: Reticulating splines...
	I0501 03:30:42.621265   65502 client.go:171] duration metric: took 27.78581521s to LocalClient.Create
	I0501 03:30:42.621301   65502 start.go:167] duration metric: took 27.785892327s to libmachine.API.Create "old-k8s-version-503971"
	I0501 03:30:42.621316   65502 start.go:293] postStartSetup for "old-k8s-version-503971" (driver="kvm2")
	I0501 03:30:42.621335   65502 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0501 03:30:42.621360   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .DriverName
	I0501 03:30:42.621643   65502 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0501 03:30:42.621672   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHHostname
	I0501 03:30:42.624286   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:30:42.624696   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:30:42.624726   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:30:42.624958   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHPort
	I0501 03:30:42.625164   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:30:42.625376   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHUsername
	I0501 03:30:42.625547   65502 sshutil.go:53] new ssh client: &{IP:192.168.61.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/old-k8s-version-503971/id_rsa Username:docker}
	I0501 03:30:42.713761   65502 ssh_runner.go:195] Run: cat /etc/os-release
	I0501 03:30:42.719682   65502 info.go:137] Remote host: Buildroot 2023.02.9
	I0501 03:30:42.719708   65502 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13391/.minikube/addons for local assets ...
	I0501 03:30:42.719769   65502 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13391/.minikube/files for local assets ...
	I0501 03:30:42.719857   65502 filesync.go:149] local asset: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem -> 207242.pem in /etc/ssl/certs
	I0501 03:30:42.719975   65502 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0501 03:30:42.732076   65502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem --> /etc/ssl/certs/207242.pem (1708 bytes)
	I0501 03:30:42.763447   65502 start.go:296] duration metric: took 142.112552ms for postStartSetup
	I0501 03:30:42.763511   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetConfigRaw
	I0501 03:30:42.764263   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetIP
	I0501 03:30:42.767182   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:30:42.767626   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:30:42.767657   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:30:42.767988   65502 profile.go:143] Saving config to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/config.json ...
	I0501 03:30:42.768204   65502 start.go:128] duration metric: took 27.959102304s to createHost
	I0501 03:30:42.768232   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHHostname
	I0501 03:30:42.770545   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:30:42.770891   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:30:42.770916   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:30:42.771041   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHPort
	I0501 03:30:42.771236   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:30:42.771386   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:30:42.771545   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHUsername
	I0501 03:30:42.771697   65502 main.go:141] libmachine: Using SSH client type: native
	I0501 03:30:42.771926   65502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.104 22 <nil> <nil>}
	I0501 03:30:42.771941   65502 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0501 03:30:42.871965   65502 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714534242.856037398
	
	I0501 03:30:42.871993   65502 fix.go:216] guest clock: 1714534242.856037398
	I0501 03:30:42.872005   65502 fix.go:229] Guest: 2024-05-01 03:30:42.856037398 +0000 UTC Remote: 2024-05-01 03:30:42.768218477 +0000 UTC m=+30.278133484 (delta=87.818921ms)
	I0501 03:30:42.872062   65502 fix.go:200] guest clock delta is within tolerance: 87.818921ms
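
The clock check runs "date +%s.%N" on the guest and compares it with the host's timestamp, accepting the machine when the delta stays within tolerance (87.8ms here). A small sketch of that comparison using the values from the log; clockDelta is a made-up helper and the 2s tolerance is an assumption, not taken from the source:

    package main

    import (
        "fmt"
        "math"
        "strconv"
        "time"
    )

    // clockDelta parses the guest's `date +%s.%N` output and returns how far it
    // is from the given host timestamp.
    func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
        secs, err := strconv.ParseFloat(guestOut, 64)
        if err != nil {
            return 0, err
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        return guest.Sub(host), nil
    }

    func main() {
        host := time.Unix(0, int64(1714534242.768218477*float64(time.Second)))
        d, _ := clockDelta("1714534242.856037398", host)
        tolerance := 2 * time.Second // assumed tolerance, for illustration only
        fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", d, math.Abs(d.Seconds()) < tolerance.Seconds())
    }
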
	I0501 03:30:42.872074   65502 start.go:83] releasing machines lock for "old-k8s-version-503971", held for 28.063153275s
	I0501 03:30:42.872110   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .DriverName
	I0501 03:30:42.872435   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetIP
	I0501 03:30:42.875419   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:30:42.875757   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:30:42.875782   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:30:42.875975   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .DriverName
	I0501 03:30:42.876502   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .DriverName
	I0501 03:30:42.876671   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .DriverName
	I0501 03:30:42.876773   65502 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0501 03:30:42.876819   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHHostname
	I0501 03:30:42.876933   65502 ssh_runner.go:195] Run: cat /version.json
	I0501 03:30:42.876956   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHHostname
	I0501 03:30:42.879479   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:30:42.879734   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:30:42.879882   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:30:42.879905   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:30:42.880002   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHPort
	I0501 03:30:42.880120   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:30:42.880146   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:30:42.880156   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:30:42.880333   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHUsername
	I0501 03:30:42.880346   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHPort
	I0501 03:30:42.880504   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:30:42.880513   65502 sshutil.go:53] new ssh client: &{IP:192.168.61.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/old-k8s-version-503971/id_rsa Username:docker}
	I0501 03:30:42.880638   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHUsername
	I0501 03:30:42.880771   65502 sshutil.go:53] new ssh client: &{IP:192.168.61.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/old-k8s-version-503971/id_rsa Username:docker}
	I0501 03:30:42.961011   65502 ssh_runner.go:195] Run: systemctl --version
	I0501 03:30:42.988631   65502 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0501 03:30:43.163111   65502 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0501 03:30:43.170959   65502 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0501 03:30:43.171037   65502 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0501 03:30:43.198233   65502 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0501 03:30:43.198261   65502 start.go:494] detecting cgroup driver to use...
	I0501 03:30:43.198333   65502 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0501 03:30:43.217035   65502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 03:30:43.232313   65502 docker.go:217] disabling cri-docker service (if available) ...
	I0501 03:30:43.232400   65502 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0501 03:30:43.251010   65502 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0501 03:30:43.267584   65502 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0501 03:30:43.408987   65502 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0501 03:30:43.581459   65502 docker.go:233] disabling docker service ...
	I0501 03:30:43.581529   65502 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0501 03:30:43.599496   65502 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0501 03:30:43.614428   65502 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0501 03:30:43.765839   65502 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0501 03:30:43.903703   65502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0501 03:30:43.922440   65502 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 03:30:43.948364   65502 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0501 03:30:43.948428   65502 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:30:43.960971   65502 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0501 03:30:43.961039   65502 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:30:43.972949   65502 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:30:43.985013   65502 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:30:43.996978   65502 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
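
The CRI-O prep above is a short list of sed edits against /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroupfs as cgroup_manager, conmon_cgroup = "pod") plus removing /etc/cni/net.mk, each executed over SSH. A sketch that assembles those same shell commands; crioConfigCommands is an illustrative name, and printing stands in for the remote runner:

    package main

    import "fmt"

    func crioConfigCommands(pauseImage, cgroupDriver string) []string {
        conf := "/etc/crio/crio.conf.d/02-crio.conf"
        return []string{
            fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
            fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupDriver, conf),
            fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, conf),
            fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
            "sudo rm -rf /etc/cni/net.mk",
        }
    }

    func main() {
        for _, cmd := range crioConfigCommands("registry.k8s.io/pause:3.2", "cgroupfs") {
            fmt.Println(cmd) // in the real flow each of these runs via the SSH runner
        }
    }
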
	I0501 03:30:44.009121   65502 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0501 03:30:44.019692   65502 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0501 03:30:44.019749   65502 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0501 03:30:44.035122   65502 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0501 03:30:44.049560   65502 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:30:44.178215   65502 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0501 03:30:44.369046   65502 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0501 03:30:44.369133   65502 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0501 03:30:44.375374   65502 start.go:562] Will wait 60s for crictl version
	I0501 03:30:44.375449   65502 ssh_runner.go:195] Run: which crictl
	I0501 03:30:44.380830   65502 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0501 03:30:44.422063   65502 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0501 03:30:44.422157   65502 ssh_runner.go:195] Run: crio --version
	I0501 03:30:44.460265   65502 ssh_runner.go:195] Run: crio --version
	I0501 03:30:44.501809   65502 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0501 03:30:44.502920   65502 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetIP
	I0501 03:30:44.506073   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:30:44.506516   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:30:44.506545   65502 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:30:44.506772   65502 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0501 03:30:44.511654   65502 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0501 03:30:44.527001   65502 kubeadm.go:877] updating cluster {Name:old-k8s-version-503971 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-503971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.104 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0501 03:30:44.527145   65502 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0501 03:30:44.527210   65502 ssh_runner.go:195] Run: sudo crictl images --output json
	I0501 03:30:44.576772   65502 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0501 03:30:44.576863   65502 ssh_runner.go:195] Run: which lz4
	I0501 03:30:44.582006   65502 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0501 03:30:44.587493   65502 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0501 03:30:44.587530   65502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0501 03:30:46.840075   65502 crio.go:462] duration metric: took 2.258108991s to copy over tarball
	I0501 03:30:46.840154   65502 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0501 03:30:49.842324   65502 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.002139829s)
	I0501 03:30:49.842356   65502 crio.go:469] duration metric: took 3.002253578s to extract the tarball
	I0501 03:30:49.842366   65502 ssh_runner.go:146] rm: /preloaded.tar.lz4
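
The preload path copies the lz4 tarball to the VM, unpacks it into /var with tar while preserving xattrs, and then deletes it. A local sketch of that extraction command, assuming tar and lz4 are installed; this is illustration only, not the ssh_runner call:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func extractPreload(tarball, dest string) error {
        // Mirrors the command in the log: keep security xattrs and stream through lz4.
        cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", dest, "-xf", tarball)
        out, err := cmd.CombinedOutput()
        if err != nil {
            return fmt.Errorf("extract %s: %v: %s", tarball, err, out)
        }
        return nil
    }

    func main() {
        if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
            fmt.Println(err)
        }
    }
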
	I0501 03:30:49.890494   65502 ssh_runner.go:195] Run: sudo crictl images --output json
	I0501 03:30:49.952833   65502 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0501 03:30:49.952865   65502 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0501 03:30:49.952939   65502 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 03:30:49.952973   65502 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0501 03:30:49.952998   65502 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0501 03:30:49.953002   65502 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0501 03:30:49.952978   65502 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0501 03:30:49.952971   65502 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0501 03:30:49.953044   65502 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0501 03:30:49.953095   65502 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0501 03:30:49.954439   65502 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0501 03:30:49.954542   65502 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0501 03:30:49.954562   65502 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0501 03:30:49.954604   65502 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0501 03:30:49.954623   65502 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0501 03:30:49.954709   65502 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0501 03:30:49.954722   65502 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0501 03:30:49.954782   65502 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
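
Each required image is first looked up in the local Docker daemon (which fails here) and then checked inside the VM with "sudo podman image inspect --format {{.Id}}"; images that are missing or have the wrong hash are marked "needs transfer" and loaded from the on-disk cache. A sketch of that per-image decision; imageExists is a hypothetical helper, not minikube's cache_images API:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // imageExists asks podman for the image ID inside the guest; a non-empty ID
    // means the runtime already has the image and no transfer is needed.
    func imageExists(image string) bool {
        out, err := exec.Command("sudo", "podman", "image", "inspect",
            "--format", "{{.Id}}", image).Output()
        return err == nil && strings.TrimSpace(string(out)) != ""
    }

    func main() {
        images := []string{
            "registry.k8s.io/kube-apiserver:v1.20.0",
            "registry.k8s.io/etcd:3.4.13-0",
            "registry.k8s.io/pause:3.2",
        }
        for _, img := range images {
            if imageExists(img) {
                fmt.Println(img, "already present")
            } else {
                fmt.Println(img, "needs transfer: will load from the local cache")
            }
        }
    }
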
	I0501 03:30:50.072986   65502 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0501 03:30:50.118614   65502 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0501 03:30:50.125554   65502 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0501 03:30:50.125751   65502 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0501 03:30:50.125797   65502 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0501 03:30:50.125838   65502 ssh_runner.go:195] Run: which crictl
	I0501 03:30:50.176894   65502 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0501 03:30:50.176949   65502 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0501 03:30:50.177007   65502 ssh_runner.go:195] Run: which crictl
	I0501 03:30:50.182111   65502 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0501 03:30:50.193301   65502 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0501 03:30:50.193341   65502 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0501 03:30:50.193349   65502 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0501 03:30:50.193377   65502 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0501 03:30:50.193415   65502 ssh_runner.go:195] Run: which crictl
	I0501 03:30:50.244120   65502 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0501 03:30:50.244175   65502 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0501 03:30:50.244229   65502 ssh_runner.go:195] Run: which crictl
	I0501 03:30:50.272477   65502 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0501 03:30:50.272531   65502 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0501 03:30:50.272597   65502 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0501 03:30:50.272597   65502 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0501 03:30:50.321368   65502 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0501 03:30:50.328355   65502 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0501 03:30:50.328444   65502 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0501 03:30:50.331393   65502 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0501 03:30:50.377703   65502 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0501 03:30:50.377748   65502 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0501 03:30:50.377799   65502 ssh_runner.go:195] Run: which crictl
	I0501 03:30:50.384874   65502 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0501 03:30:50.384928   65502 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0501 03:30:50.384999   65502 ssh_runner.go:195] Run: which crictl
	I0501 03:30:50.385027   65502 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0501 03:30:50.400233   65502 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0501 03:30:50.431071   65502 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0501 03:30:50.431140   65502 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0501 03:30:50.468327   65502 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0501 03:30:50.468374   65502 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0501 03:30:50.468423   65502 ssh_runner.go:195] Run: which crictl
	I0501 03:30:50.483635   65502 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0501 03:30:50.483884   65502 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0501 03:30:50.523093   65502 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0501 03:30:50.883241   65502 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 03:30:51.038440   65502 cache_images.go:92] duration metric: took 1.085549933s to LoadCachedImages
	W0501 03:30:51.038528   65502 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I0501 03:30:51.038551   65502 kubeadm.go:928] updating node { 192.168.61.104 8443 v1.20.0 crio true true} ...
	I0501 03:30:51.038735   65502 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-503971 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.104
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-503971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0501 03:30:51.038844   65502 ssh_runner.go:195] Run: crio config
	I0501 03:30:51.094689   65502 cni.go:84] Creating CNI manager for ""
	I0501 03:30:51.094721   65502 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0501 03:30:51.094740   65502 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0501 03:30:51.094766   65502 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.104 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-503971 NodeName:old-k8s-version-503971 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.104"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.104 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0501 03:30:51.094961   65502 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.104
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-503971"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.104
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.104"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
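
For reference, the kubeadm.yaml rendered above can be exercised by hand before an init attempt; a minimal sketch, assuming the file is already on the node at /var/tmp/minikube/kubeadm.yaml (the path used later in this log) and that the v1.20.0 kubeadm binary is on PATH:

    # Show and pre-pull the control-plane images this config implies,
    # rather than relying on the cached-image load that failed earlier in this run.
    sudo kubeadm config images list --config /var/tmp/minikube/kubeadm.yaml
    sudo kubeadm config images pull --config /var/tmp/minikube/kubeadm.yaml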
	
	I0501 03:30:51.095038   65502 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0501 03:30:51.107613   65502 binaries.go:44] Found k8s binaries, skipping transfer
	I0501 03:30:51.107689   65502 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0501 03:30:51.119057   65502 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0501 03:30:51.139902   65502 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0501 03:30:51.160697   65502 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0501 03:30:51.181308   65502 ssh_runner.go:195] Run: grep 192.168.61.104	control-plane.minikube.internal$ /etc/hosts
	I0501 03:30:51.186095   65502 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.104	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0501 03:30:51.200407   65502 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:30:51.341718   65502 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 03:30:51.361824   65502 certs.go:68] Setting up /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971 for IP: 192.168.61.104
	I0501 03:30:51.361849   65502 certs.go:194] generating shared ca certs ...
	I0501 03:30:51.361886   65502 certs.go:226] acquiring lock for ca certs: {Name:mk85a75d14c00780d4bf09cd9bdaf0f96f1b8fce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:30:51.362071   65502 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key
	I0501 03:30:51.362139   65502 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key
	I0501 03:30:51.362154   65502 certs.go:256] generating profile certs ...
	I0501 03:30:51.362224   65502 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/client.key
	I0501 03:30:51.362241   65502 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/client.crt with IP's: []
	I0501 03:30:51.545067   65502 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/client.crt ...
	I0501 03:30:51.545100   65502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/client.crt: {Name:mkb291995c78a70d2aa99b3de57a89e0b204a34e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:30:51.545321   65502 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/client.key ...
	I0501 03:30:51.545341   65502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/client.key: {Name:mkbd7ea061c299f0c055a413768768a5fe4e6594 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:30:51.545470   65502 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/apiserver.key.760b883a
	I0501 03:30:51.545493   65502 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/apiserver.crt.760b883a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.104]
	I0501 03:30:51.858137   65502 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/apiserver.crt.760b883a ...
	I0501 03:30:51.858174   65502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/apiserver.crt.760b883a: {Name:mk43b28d265a30fadff81730d277d5e9a53ed81b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:30:51.858338   65502 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/apiserver.key.760b883a ...
	I0501 03:30:51.858354   65502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/apiserver.key.760b883a: {Name:mk6abbd75de4d0204a5ddb349b7dd731c6dad335 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:30:51.858453   65502 certs.go:381] copying /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/apiserver.crt.760b883a -> /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/apiserver.crt
	I0501 03:30:51.858556   65502 certs.go:385] copying /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/apiserver.key.760b883a -> /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/apiserver.key
	I0501 03:30:51.858613   65502 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/proxy-client.key
	I0501 03:30:51.858629   65502 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/proxy-client.crt with IP's: []
	I0501 03:30:51.926667   65502 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/proxy-client.crt ...
	I0501 03:30:51.926698   65502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/proxy-client.crt: {Name:mk06c320401a2419a3c417ef2b2bfd213f5e04ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:30:51.951290   65502 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/proxy-client.key ...
	I0501 03:30:51.951325   65502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/proxy-client.key: {Name:mk8232ae44275fffebff8fcc51b89dbe91275d3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:30:51.951553   65502 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem (1338 bytes)
	W0501 03:30:51.951624   65502 certs.go:480] ignoring /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724_empty.pem, impossibly tiny 0 bytes
	I0501 03:30:51.951636   65502 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem (1679 bytes)
	I0501 03:30:51.951668   65502 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem (1078 bytes)
	I0501 03:30:51.951700   65502 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem (1123 bytes)
	I0501 03:30:51.951735   65502 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem (1675 bytes)
	I0501 03:30:51.951809   65502 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem (1708 bytes)
	I0501 03:30:51.952467   65502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0501 03:30:51.986170   65502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0501 03:30:52.013909   65502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0501 03:30:52.046333   65502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0501 03:30:52.076235   65502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0501 03:30:52.109294   65502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0501 03:30:52.140100   65502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0501 03:30:52.172091   65502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0501 03:30:52.206615   65502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0501 03:30:52.247633   65502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem --> /usr/share/ca-certificates/20724.pem (1338 bytes)
	I0501 03:30:52.293157   65502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem --> /usr/share/ca-certificates/207242.pem (1708 bytes)
	I0501 03:30:52.325589   65502 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0501 03:30:52.344575   65502 ssh_runner.go:195] Run: openssl version
	I0501 03:30:52.351318   65502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/207242.pem && ln -fs /usr/share/ca-certificates/207242.pem /etc/ssl/certs/207242.pem"
	I0501 03:30:52.363721   65502 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/207242.pem
	I0501 03:30:52.369508   65502 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  1 02:20 /usr/share/ca-certificates/207242.pem
	I0501 03:30:52.369580   65502 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/207242.pem
	I0501 03:30:52.376486   65502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/207242.pem /etc/ssl/certs/3ec20f2e.0"
	I0501 03:30:52.389675   65502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0501 03:30:52.402595   65502 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:30:52.408132   65502 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  1 02:08 /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:30:52.408192   65502 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:30:52.415072   65502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0501 03:30:52.427777   65502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20724.pem && ln -fs /usr/share/ca-certificates/20724.pem /etc/ssl/certs/20724.pem"
	I0501 03:30:52.440074   65502 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20724.pem
	I0501 03:30:52.445322   65502 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  1 02:20 /usr/share/ca-certificates/20724.pem
	I0501 03:30:52.445383   65502 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20724.pem
	I0501 03:30:52.451972   65502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20724.pem /etc/ssl/certs/51391683.0"
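The certificate steps just above follow OpenSSL's hashed-directory convention: each CA PEM is linked into /etc/ssl/certs under its own name and again under <subject-hash>.0, where the hash comes from openssl x509 -hash. A condensed sketch of the same sequence for one of the certificates handled here (minikubeCA.pem, whose hash in this run is b5213941):

    pem=/usr/share/ca-certificates/minikubeCA.pem
    sudo ln -fs "$pem" /etc/ssl/certs/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$pem")   # prints b5213941 for this CA
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"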
	I0501 03:30:52.464182   65502 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0501 03:30:52.468944   65502 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0501 03:30:52.469006   65502 kubeadm.go:391] StartCluster: {Name:old-k8s-version-503971 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-503971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.104 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 03:30:52.469144   65502 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0501 03:30:52.469185   65502 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0501 03:30:52.507958   65502 cri.go:89] found id: ""
	I0501 03:30:52.508028   65502 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0501 03:30:52.519327   65502 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0501 03:30:52.530245   65502 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0501 03:30:52.541053   65502 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0501 03:30:52.541073   65502 kubeadm.go:156] found existing configuration files:
	
	I0501 03:30:52.541122   65502 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0501 03:30:52.551167   65502 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0501 03:30:52.551227   65502 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0501 03:30:52.561184   65502 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0501 03:30:52.571255   65502 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0501 03:30:52.571308   65502 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0501 03:30:52.581638   65502 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0501 03:30:52.591465   65502 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0501 03:30:52.591560   65502 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0501 03:30:52.601628   65502 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0501 03:30:52.612311   65502 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0501 03:30:52.612381   65502 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0501 03:30:52.624432   65502 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0501 03:30:52.751406   65502 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0501 03:30:52.751534   65502 kubeadm.go:309] [preflight] Running pre-flight checks
	I0501 03:30:52.936474   65502 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0501 03:30:52.936623   65502 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0501 03:30:52.936772   65502 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0501 03:30:53.175705   65502 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0501 03:30:53.177702   65502 out.go:204]   - Generating certificates and keys ...
	I0501 03:30:53.177814   65502 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0501 03:30:53.177917   65502 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0501 03:30:53.284246   65502 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0501 03:30:53.744671   65502 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0501 03:30:53.912918   65502 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0501 03:30:54.082836   65502 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0501 03:30:54.217925   65502 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0501 03:30:54.218391   65502 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-503971] and IPs [192.168.61.104 127.0.0.1 ::1]
	I0501 03:30:54.462226   65502 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0501 03:30:54.462608   65502 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-503971] and IPs [192.168.61.104 127.0.0.1 ::1]
	I0501 03:30:54.776205   65502 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0501 03:30:54.908978   65502 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0501 03:30:55.044122   65502 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0501 03:30:55.044449   65502 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0501 03:30:55.210328   65502 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0501 03:30:55.452313   65502 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0501 03:30:55.640378   65502 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0501 03:30:55.759466   65502 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0501 03:30:55.785297   65502 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0501 03:30:55.787319   65502 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0501 03:30:55.787397   65502 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0501 03:30:55.936196   65502 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0501 03:30:55.939274   65502 out.go:204]   - Booting up control plane ...
	I0501 03:30:55.939411   65502 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0501 03:30:55.944024   65502 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0501 03:30:55.945218   65502 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0501 03:30:55.946149   65502 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0501 03:30:55.951042   65502 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0501 03:31:35.949222   65502 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0501 03:31:35.949620   65502 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0501 03:31:35.949913   65502 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0501 03:31:40.950624   65502 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0501 03:31:40.950904   65502 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0501 03:31:50.951688   65502 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0501 03:31:50.951865   65502 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0501 03:32:10.953613   65502 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0501 03:32:10.953839   65502 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0501 03:32:50.953758   65502 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0501 03:32:50.954046   65502 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0501 03:32:50.954075   65502 kubeadm.go:309] 
	I0501 03:32:50.954123   65502 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0501 03:32:50.954178   65502 kubeadm.go:309] 		timed out waiting for the condition
	I0501 03:32:50.954188   65502 kubeadm.go:309] 
	I0501 03:32:50.954272   65502 kubeadm.go:309] 	This error is likely caused by:
	I0501 03:32:50.954345   65502 kubeadm.go:309] 		- The kubelet is not running
	I0501 03:32:50.954507   65502 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0501 03:32:50.954524   65502 kubeadm.go:309] 
	I0501 03:32:50.954672   65502 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0501 03:32:50.954720   65502 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0501 03:32:50.954757   65502 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0501 03:32:50.954781   65502 kubeadm.go:309] 
	I0501 03:32:50.954917   65502 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0501 03:32:50.955035   65502 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0501 03:32:50.955057   65502 kubeadm.go:309] 
	I0501 03:32:50.955182   65502 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0501 03:32:50.955301   65502 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0501 03:32:50.955414   65502 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0501 03:32:50.955548   65502 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0501 03:32:50.955570   65502 kubeadm.go:309] 
	I0501 03:32:50.956358   65502 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0501 03:32:50.956483   65502 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0501 03:32:50.956579   65502 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0501 03:32:50.956845   65502 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-503971] and IPs [192.168.61.104 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-503971] and IPs [192.168.61.104 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-503971] and IPs [192.168.61.104 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-503971] and IPs [192.168.61.104 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
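
The kubeadm output above already lists the useful probes; gathered into a single diagnostic pass on the node (a sketch, using the crio socket path that this log reports):

    sudo systemctl status kubelet --no-pager
    sudo journalctl -xeu kubelet --no-pager | tail -n 100
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    # then, for any failing container ID found above:
    # sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID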
	
	I0501 03:32:50.956908   65502 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0501 03:32:53.526270   65502 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.569327036s)
	I0501 03:32:53.526348   65502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 03:32:53.547866   65502 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0501 03:32:53.561519   65502 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0501 03:32:53.561543   65502 kubeadm.go:156] found existing configuration files:
	
	I0501 03:32:53.561596   65502 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0501 03:32:53.575559   65502 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0501 03:32:53.575636   65502 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0501 03:32:53.589689   65502 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0501 03:32:53.600864   65502 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0501 03:32:53.600931   65502 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0501 03:32:53.612113   65502 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0501 03:32:53.622529   65502 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0501 03:32:53.622585   65502 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0501 03:32:53.632969   65502 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0501 03:32:53.643278   65502 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0501 03:32:53.643345   65502 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0501 03:32:53.653978   65502 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0501 03:32:53.912331   65502 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0501 03:34:49.970818   65502 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0501 03:34:49.970928   65502 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0501 03:34:49.972856   65502 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0501 03:34:49.972927   65502 kubeadm.go:309] [preflight] Running pre-flight checks
	I0501 03:34:49.973023   65502 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0501 03:34:49.973188   65502 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0501 03:34:49.973300   65502 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0501 03:34:49.973396   65502 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0501 03:34:49.975105   65502 out.go:204]   - Generating certificates and keys ...
	I0501 03:34:49.975201   65502 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0501 03:34:49.975257   65502 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0501 03:34:49.975337   65502 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0501 03:34:49.975396   65502 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0501 03:34:49.975458   65502 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0501 03:34:49.975503   65502 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0501 03:34:49.975577   65502 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0501 03:34:49.975651   65502 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0501 03:34:49.975734   65502 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0501 03:34:49.975850   65502 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0501 03:34:49.975908   65502 kubeadm.go:309] [certs] Using the existing "sa" key
	I0501 03:34:49.975997   65502 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0501 03:34:49.976064   65502 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0501 03:34:49.976143   65502 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0501 03:34:49.976208   65502 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0501 03:34:49.976254   65502 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0501 03:34:49.976338   65502 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0501 03:34:49.976408   65502 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0501 03:34:49.976442   65502 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0501 03:34:49.976554   65502 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0501 03:34:49.977969   65502 out.go:204]   - Booting up control plane ...
	I0501 03:34:49.978062   65502 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0501 03:34:49.978166   65502 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0501 03:34:49.978253   65502 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0501 03:34:49.978370   65502 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0501 03:34:49.978614   65502 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0501 03:34:49.978691   65502 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0501 03:34:49.978760   65502 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0501 03:34:49.978925   65502 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0501 03:34:49.979001   65502 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0501 03:34:49.979246   65502 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0501 03:34:49.979341   65502 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0501 03:34:49.979518   65502 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0501 03:34:49.979593   65502 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0501 03:34:49.979777   65502 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0501 03:34:49.979876   65502 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0501 03:34:49.980068   65502 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0501 03:34:49.980076   65502 kubeadm.go:309] 
	I0501 03:34:49.980123   65502 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0501 03:34:49.980175   65502 kubeadm.go:309] 		timed out waiting for the condition
	I0501 03:34:49.980185   65502 kubeadm.go:309] 
	I0501 03:34:49.980233   65502 kubeadm.go:309] 	This error is likely caused by:
	I0501 03:34:49.980274   65502 kubeadm.go:309] 		- The kubelet is not running
	I0501 03:34:49.980425   65502 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0501 03:34:49.980441   65502 kubeadm.go:309] 
	I0501 03:34:49.980568   65502 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0501 03:34:49.980598   65502 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0501 03:34:49.980630   65502 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0501 03:34:49.980640   65502 kubeadm.go:309] 
	I0501 03:34:49.980739   65502 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0501 03:34:49.980807   65502 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0501 03:34:49.980814   65502 kubeadm.go:309] 
	I0501 03:34:49.980901   65502 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0501 03:34:49.980977   65502 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0501 03:34:49.981079   65502 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0501 03:34:49.981183   65502 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0501 03:34:49.981250   65502 kubeadm.go:393] duration metric: took 3m57.512247503s to StartCluster
	I0501 03:34:49.981262   65502 kubeadm.go:309] 
	I0501 03:34:49.981294   65502 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:34:49.981358   65502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:34:50.029233   65502 cri.go:89] found id: ""
	I0501 03:34:50.029258   65502 logs.go:276] 0 containers: []
	W0501 03:34:50.029266   65502 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:34:50.029271   65502 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:34:50.029318   65502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:34:50.070933   65502 cri.go:89] found id: ""
	I0501 03:34:50.070959   65502 logs.go:276] 0 containers: []
	W0501 03:34:50.070966   65502 logs.go:278] No container was found matching "etcd"
	I0501 03:34:50.070972   65502 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:34:50.071037   65502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:34:50.119064   65502 cri.go:89] found id: ""
	I0501 03:34:50.119096   65502 logs.go:276] 0 containers: []
	W0501 03:34:50.119108   65502 logs.go:278] No container was found matching "coredns"
	I0501 03:34:50.119114   65502 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:34:50.119175   65502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:34:50.167167   65502 cri.go:89] found id: ""
	I0501 03:34:50.167195   65502 logs.go:276] 0 containers: []
	W0501 03:34:50.167203   65502 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:34:50.167218   65502 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:34:50.167287   65502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:34:50.216757   65502 cri.go:89] found id: ""
	I0501 03:34:50.216788   65502 logs.go:276] 0 containers: []
	W0501 03:34:50.216800   65502 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:34:50.216807   65502 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:34:50.216874   65502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:34:50.261048   65502 cri.go:89] found id: ""
	I0501 03:34:50.261080   65502 logs.go:276] 0 containers: []
	W0501 03:34:50.261092   65502 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:34:50.261101   65502 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:34:50.261164   65502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:34:50.304428   65502 cri.go:89] found id: ""
	I0501 03:34:50.304457   65502 logs.go:276] 0 containers: []
	W0501 03:34:50.304477   65502 logs.go:278] No container was found matching "kindnet"
	I0501 03:34:50.304490   65502 logs.go:123] Gathering logs for kubelet ...
	I0501 03:34:50.304505   65502 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:34:50.361129   65502 logs.go:123] Gathering logs for dmesg ...
	I0501 03:34:50.361185   65502 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:34:50.390982   65502 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:34:50.391021   65502 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:34:50.577939   65502 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:34:50.577973   65502 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:34:50.577990   65502 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:34:50.678066   65502 logs.go:123] Gathering logs for container status ...
	I0501 03:34:50.678101   65502 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0501 03:34:50.724754   65502 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0501 03:34:50.724800   65502 out.go:239] * 
	* 
	W0501 03:34:50.724854   65502 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0501 03:34:50.724886   65502 out.go:239] * 
	* 
	W0501 03:34:50.725733   65502 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0501 03:34:50.730217   65502 out.go:177] 
	W0501 03:34:50.731435   65502 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0501 03:34:50.731499   65502 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0501 03:34:50.731527   65502 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0501 03:34:50.732915   65502 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-503971 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-503971 -n old-k8s-version-503971
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-503971 -n old-k8s-version-503971: exit status 6 (264.096311ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0501 03:34:51.045520   68719 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-503971" does not appear in /home/jenkins/minikube-integration/18779-13391/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-503971" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (278.58s)
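The failure above is the generic kubeadm "wait-control-plane" timeout: the kubelet on the old-k8s-version node never answered its localhost:10248/healthz probe, so no control-plane containers were created and the apiserver on localhost:8443 stayed unreachable. A minimal manual triage sketch, following only the suggestions already printed in the log output (profile name taken from the failing command, start flags abbreviated from it; the exact invocations are illustrative and were not part of this test run):

	# inspect the kubelet on the failing node
	out/minikube-linux-amd64 -p old-k8s-version-503971 ssh "sudo systemctl status kubelet"
	out/minikube-linux-amd64 -p old-k8s-version-503971 ssh "sudo journalctl -xeu kubelet | tail -n 100"
	# list any crashed control-plane containers under CRI-O
	out/minikube-linux-amd64 -p old-k8s-version-503971 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a"
	# retry the start with the cgroup-driver hint from the Suggestion line
	out/minikube-linux-amd64 start -p old-k8s-version-503971 --memory=2200 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd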

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (139.16s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-892672 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-892672 --alsologtostderr -v=3: exit status 82 (2m0.566769627s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-892672"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0501 03:32:16.401230   67645 out.go:291] Setting OutFile to fd 1 ...
	I0501 03:32:16.401362   67645 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 03:32:16.401375   67645 out.go:304] Setting ErrFile to fd 2...
	I0501 03:32:16.401381   67645 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 03:32:16.401607   67645 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18779-13391/.minikube/bin
	I0501 03:32:16.401887   67645 out.go:298] Setting JSON to false
	I0501 03:32:16.401964   67645 mustload.go:65] Loading cluster: no-preload-892672
	I0501 03:32:16.402293   67645 config.go:182] Loaded profile config "no-preload-892672": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 03:32:16.402352   67645 profile.go:143] Saving config to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/no-preload-892672/config.json ...
	I0501 03:32:16.402531   67645 mustload.go:65] Loading cluster: no-preload-892672
	I0501 03:32:16.402677   67645 config.go:182] Loaded profile config "no-preload-892672": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 03:32:16.402716   67645 stop.go:39] StopHost: no-preload-892672
	I0501 03:32:16.403162   67645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:32:16.403215   67645 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:32:16.418379   67645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36335
	I0501 03:32:16.418919   67645 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:32:16.419605   67645 main.go:141] libmachine: Using API Version  1
	I0501 03:32:16.419633   67645 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:32:16.420014   67645 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:32:16.423331   67645 out.go:177] * Stopping node "no-preload-892672"  ...
	I0501 03:32:16.424913   67645 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0501 03:32:16.424949   67645 main.go:141] libmachine: (no-preload-892672) Calling .DriverName
	I0501 03:32:16.425197   67645 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0501 03:32:16.425227   67645 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:32:16.428120   67645 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:32:16.428567   67645 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:31:00 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:32:16.428603   67645 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:32:16.428739   67645 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHPort
	I0501 03:32:16.428917   67645 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:32:16.429081   67645 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHUsername
	I0501 03:32:16.429218   67645 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/no-preload-892672/id_rsa Username:docker}
	I0501 03:32:16.572516   67645 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0501 03:32:16.621460   67645 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0501 03:32:16.700172   67645 main.go:141] libmachine: Stopping "no-preload-892672"...
	I0501 03:32:16.700227   67645 main.go:141] libmachine: (no-preload-892672) Calling .GetState
	I0501 03:32:16.701807   67645 main.go:141] libmachine: (no-preload-892672) Calling .Stop
	I0501 03:32:16.706093   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 0/120
	I0501 03:32:17.708289   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 1/120
	I0501 03:32:18.709811   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 2/120
	I0501 03:32:19.711399   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 3/120
	I0501 03:32:20.713211   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 4/120
	I0501 03:32:21.715371   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 5/120
	I0501 03:32:22.716942   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 6/120
	I0501 03:32:23.718343   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 7/120
	I0501 03:32:24.719686   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 8/120
	I0501 03:32:25.721011   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 9/120
	I0501 03:32:26.722310   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 10/120
	I0501 03:32:27.723768   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 11/120
	I0501 03:32:28.725104   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 12/120
	I0501 03:32:29.726310   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 13/120
	I0501 03:32:30.727495   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 14/120
	I0501 03:32:31.729390   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 15/120
	I0501 03:32:32.730661   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 16/120
	I0501 03:32:33.731873   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 17/120
	I0501 03:32:34.733158   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 18/120
	I0501 03:32:35.734458   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 19/120
	I0501 03:32:36.736637   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 20/120
	I0501 03:32:37.738218   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 21/120
	I0501 03:32:38.740088   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 22/120
	I0501 03:32:39.741504   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 23/120
	I0501 03:32:40.743105   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 24/120
	I0501 03:32:41.745074   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 25/120
	I0501 03:32:42.746454   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 26/120
	I0501 03:32:43.748033   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 27/120
	I0501 03:32:44.749591   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 28/120
	I0501 03:32:45.751045   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 29/120
	I0501 03:32:46.753310   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 30/120
	I0501 03:32:47.754839   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 31/120
	I0501 03:32:48.756404   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 32/120
	I0501 03:32:49.757831   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 33/120
	I0501 03:32:50.759303   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 34/120
	I0501 03:32:51.761355   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 35/120
	I0501 03:32:52.762993   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 36/120
	I0501 03:32:53.764426   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 37/120
	I0501 03:32:54.766287   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 38/120
	I0501 03:32:55.767827   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 39/120
	I0501 03:32:56.769944   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 40/120
	I0501 03:32:57.771338   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 41/120
	I0501 03:32:58.772673   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 42/120
	I0501 03:32:59.774053   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 43/120
	I0501 03:33:00.776439   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 44/120
	I0501 03:33:01.778365   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 45/120
	I0501 03:33:02.779858   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 46/120
	I0501 03:33:03.781542   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 47/120
	I0501 03:33:04.782939   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 48/120
	I0501 03:33:05.785085   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 49/120
	I0501 03:33:06.787453   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 50/120
	I0501 03:33:07.789022   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 51/120
	I0501 03:33:08.790337   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 52/120
	I0501 03:33:09.791856   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 53/120
	I0501 03:33:10.793428   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 54/120
	I0501 03:33:11.795542   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 55/120
	I0501 03:33:12.797534   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 56/120
	I0501 03:33:13.798983   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 57/120
	I0501 03:33:14.800228   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 58/120
	I0501 03:33:15.801735   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 59/120
	I0501 03:33:16.803398   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 60/120
	I0501 03:33:17.804811   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 61/120
	I0501 03:33:18.806725   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 62/120
	I0501 03:33:19.808214   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 63/120
	I0501 03:33:20.809694   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 64/120
	I0501 03:33:21.811699   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 65/120
	I0501 03:33:22.812966   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 66/120
	I0501 03:33:23.814356   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 67/120
	I0501 03:33:24.815643   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 68/120
	I0501 03:33:25.817019   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 69/120
	I0501 03:33:26.819499   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 70/120
	I0501 03:33:27.820720   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 71/120
	I0501 03:33:28.822159   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 72/120
	I0501 03:33:29.823452   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 73/120
	I0501 03:33:30.824877   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 74/120
	I0501 03:33:31.826958   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 75/120
	I0501 03:33:32.828398   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 76/120
	I0501 03:33:33.829719   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 77/120
	I0501 03:33:34.831079   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 78/120
	I0501 03:33:35.832487   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 79/120
	I0501 03:33:36.834849   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 80/120
	I0501 03:33:37.836238   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 81/120
	I0501 03:33:38.837561   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 82/120
	I0501 03:33:39.838997   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 83/120
	I0501 03:33:40.840800   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 84/120
	I0501 03:33:41.842700   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 85/120
	I0501 03:33:42.844130   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 86/120
	I0501 03:33:43.845674   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 87/120
	I0501 03:33:44.846970   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 88/120
	I0501 03:33:45.848363   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 89/120
	I0501 03:33:46.850637   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 90/120
	I0501 03:33:47.851972   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 91/120
	I0501 03:33:48.853323   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 92/120
	I0501 03:33:49.854957   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 93/120
	I0501 03:33:50.856521   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 94/120
	I0501 03:33:51.858610   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 95/120
	I0501 03:33:52.860998   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 96/120
	I0501 03:33:53.862496   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 97/120
	I0501 03:33:54.863925   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 98/120
	I0501 03:33:55.865327   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 99/120
	I0501 03:33:56.867569   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 100/120
	I0501 03:33:57.868942   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 101/120
	I0501 03:33:58.870497   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 102/120
	I0501 03:33:59.871860   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 103/120
	I0501 03:34:00.873198   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 104/120
	I0501 03:34:01.875160   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 105/120
	I0501 03:34:02.876331   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 106/120
	I0501 03:34:03.877847   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 107/120
	I0501 03:34:04.879280   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 108/120
	I0501 03:34:05.880789   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 109/120
	I0501 03:34:06.883014   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 110/120
	I0501 03:34:07.884417   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 111/120
	I0501 03:34:08.885711   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 112/120
	I0501 03:34:09.887146   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 113/120
	I0501 03:34:10.888535   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 114/120
	I0501 03:34:11.890694   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 115/120
	I0501 03:34:12.892139   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 116/120
	I0501 03:34:13.893583   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 117/120
	I0501 03:34:14.894861   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 118/120
	I0501 03:34:15.896155   67645 main.go:141] libmachine: (no-preload-892672) Waiting for machine to stop 119/120
	I0501 03:34:16.897098   67645 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0501 03:34:16.897165   67645 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0501 03:34:16.899322   67645 out.go:177] 
	W0501 03:34:16.900749   67645 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0501 03:34:16.900762   67645 out.go:239] * 
	W0501 03:34:16.903216   67645 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0501 03:34:16.904597   67645 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-892672 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-892672 -n no-preload-892672
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-892672 -n no-preload-892672: exit status 3 (18.589133329s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0501 03:34:35.494737   68357 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.144:22: connect: no route to host
	E0501 03:34:35.494757   68357 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.144:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-892672" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.16s)
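All of the stop failures in this group follow the same pattern: libmachine asks the kvm2 driver to stop the domain, then polls its state once per second for up to 120 attempts (the "Waiting for machine to stop N/120" lines) before giving up, and minikube surfaces the leftover error as GUEST_STOP_TIMEOUT. A minimal sketch of that wait loop, using hypothetical names (waitForStop, getState) rather than minikube's actual API:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// waitForStop polls the guest state once per second, mirroring the
	// "Waiting for machine to stop N/120" lines in the logs above.
	func waitForStop(getState func() string, attempts int) error {
		for i := 0; i < attempts; i++ {
			if getState() != "Running" {
				return nil
			}
			fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
			time.Sleep(time.Second)
		}
		return errors.New(`unable to stop vm, current state "Running"`)
	}

	func main() {
		// Simulate a guest that never powers off; the real driver allows 120 attempts.
		err := waitForStop(func() string { return "Running" }, 3)
		fmt.Println("stop err:", err) // minikube reports this as GUEST_STOP_TIMEOUT
	}

In this run the guest stayed in "Running" for all 120 attempts in every profile, which is why each Stop subtest took roughly 139 seconds: about 120 seconds of polling plus the backup and post-mortem status steps.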

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (139.15s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-277128 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-277128 --alsologtostderr -v=3: exit status 82 (2m0.53183237s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-277128"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0501 03:32:22.292717   67726 out.go:291] Setting OutFile to fd 1 ...
	I0501 03:32:22.292831   67726 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 03:32:22.292841   67726 out.go:304] Setting ErrFile to fd 2...
	I0501 03:32:22.292845   67726 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 03:32:22.293078   67726 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18779-13391/.minikube/bin
	I0501 03:32:22.293313   67726 out.go:298] Setting JSON to false
	I0501 03:32:22.293395   67726 mustload.go:65] Loading cluster: embed-certs-277128
	I0501 03:32:22.293748   67726 config.go:182] Loaded profile config "embed-certs-277128": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 03:32:22.293808   67726 profile.go:143] Saving config to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/embed-certs-277128/config.json ...
	I0501 03:32:22.293972   67726 mustload.go:65] Loading cluster: embed-certs-277128
	I0501 03:32:22.294078   67726 config.go:182] Loaded profile config "embed-certs-277128": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 03:32:22.294101   67726 stop.go:39] StopHost: embed-certs-277128
	I0501 03:32:22.294465   67726 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:32:22.294514   67726 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:32:22.309548   67726 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43615
	I0501 03:32:22.310063   67726 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:32:22.310691   67726 main.go:141] libmachine: Using API Version  1
	I0501 03:32:22.310715   67726 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:32:22.311090   67726 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:32:22.313734   67726 out.go:177] * Stopping node "embed-certs-277128"  ...
	I0501 03:32:22.315110   67726 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0501 03:32:22.315142   67726 main.go:141] libmachine: (embed-certs-277128) Calling .DriverName
	I0501 03:32:22.315406   67726 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0501 03:32:22.315436   67726 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHHostname
	I0501 03:32:22.318416   67726 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:32:22.318867   67726 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:31:26 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:32:22.318899   67726 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:32:22.319035   67726 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHPort
	I0501 03:32:22.319232   67726 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:32:22.319398   67726 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHUsername
	I0501 03:32:22.319584   67726 sshutil.go:53] new ssh client: &{IP:192.168.50.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/embed-certs-277128/id_rsa Username:docker}
	I0501 03:32:22.431754   67726 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0501 03:32:22.498666   67726 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0501 03:32:22.560304   67726 main.go:141] libmachine: Stopping "embed-certs-277128"...
	I0501 03:32:22.560340   67726 main.go:141] libmachine: (embed-certs-277128) Calling .GetState
	I0501 03:32:22.562208   67726 main.go:141] libmachine: (embed-certs-277128) Calling .Stop
	I0501 03:32:22.566092   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 0/120
	I0501 03:32:23.567779   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 1/120
	I0501 03:32:24.569184   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 2/120
	I0501 03:32:25.570655   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 3/120
	I0501 03:32:26.572158   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 4/120
	I0501 03:32:27.574080   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 5/120
	I0501 03:32:28.575513   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 6/120
	I0501 03:32:29.576854   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 7/120
	I0501 03:32:30.578252   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 8/120
	I0501 03:32:31.579524   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 9/120
	I0501 03:32:32.581652   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 10/120
	I0501 03:32:33.583112   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 11/120
	I0501 03:32:34.584824   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 12/120
	I0501 03:32:35.586124   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 13/120
	I0501 03:32:36.587442   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 14/120
	I0501 03:32:37.589021   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 15/120
	I0501 03:32:38.590601   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 16/120
	I0501 03:32:39.592006   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 17/120
	I0501 03:32:40.593583   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 18/120
	I0501 03:32:41.594853   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 19/120
	I0501 03:32:42.597161   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 20/120
	I0501 03:32:43.598821   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 21/120
	I0501 03:32:44.600338   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 22/120
	I0501 03:32:45.601928   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 23/120
	I0501 03:32:46.603395   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 24/120
	I0501 03:32:47.605328   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 25/120
	I0501 03:32:48.606786   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 26/120
	I0501 03:32:49.607998   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 27/120
	I0501 03:32:50.609376   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 28/120
	I0501 03:32:51.610763   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 29/120
	I0501 03:32:52.612877   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 30/120
	I0501 03:32:53.614248   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 31/120
	I0501 03:32:54.615869   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 32/120
	I0501 03:32:55.617342   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 33/120
	I0501 03:32:56.618735   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 34/120
	I0501 03:32:57.620779   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 35/120
	I0501 03:32:58.621943   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 36/120
	I0501 03:32:59.623209   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 37/120
	I0501 03:33:00.624682   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 38/120
	I0501 03:33:01.626065   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 39/120
	I0501 03:33:02.627998   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 40/120
	I0501 03:33:03.629530   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 41/120
	I0501 03:33:04.631167   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 42/120
	I0501 03:33:05.633024   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 43/120
	I0501 03:33:06.634807   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 44/120
	I0501 03:33:07.636863   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 45/120
	I0501 03:33:08.638128   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 46/120
	I0501 03:33:09.639552   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 47/120
	I0501 03:33:10.640995   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 48/120
	I0501 03:33:11.642297   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 49/120
	I0501 03:33:12.644845   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 50/120
	I0501 03:33:13.646101   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 51/120
	I0501 03:33:14.647437   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 52/120
	I0501 03:33:15.648842   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 53/120
	I0501 03:33:16.650439   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 54/120
	I0501 03:33:17.652745   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 55/120
	I0501 03:33:18.654704   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 56/120
	I0501 03:33:19.656999   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 57/120
	I0501 03:33:20.658346   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 58/120
	I0501 03:33:21.659685   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 59/120
	I0501 03:33:22.660992   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 60/120
	I0501 03:33:23.662300   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 61/120
	I0501 03:33:24.663895   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 62/120
	I0501 03:33:25.665221   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 63/120
	I0501 03:33:26.666626   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 64/120
	I0501 03:33:27.668631   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 65/120
	I0501 03:33:28.669851   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 66/120
	I0501 03:33:29.671133   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 67/120
	I0501 03:33:30.672421   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 68/120
	I0501 03:33:31.673779   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 69/120
	I0501 03:33:32.675591   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 70/120
	I0501 03:33:33.676820   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 71/120
	I0501 03:33:34.678320   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 72/120
	I0501 03:33:35.679528   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 73/120
	I0501 03:33:36.680901   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 74/120
	I0501 03:33:37.683081   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 75/120
	I0501 03:33:38.684338   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 76/120
	I0501 03:33:39.685703   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 77/120
	I0501 03:33:40.687049   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 78/120
	I0501 03:33:41.688402   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 79/120
	I0501 03:33:42.690790   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 80/120
	I0501 03:33:43.692016   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 81/120
	I0501 03:33:44.693584   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 82/120
	I0501 03:33:45.694953   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 83/120
	I0501 03:33:46.696945   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 84/120
	I0501 03:33:47.699182   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 85/120
	I0501 03:33:48.700477   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 86/120
	I0501 03:33:49.701678   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 87/120
	I0501 03:33:50.703200   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 88/120
	I0501 03:33:51.704493   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 89/120
	I0501 03:33:52.706734   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 90/120
	I0501 03:33:53.708029   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 91/120
	I0501 03:33:54.709241   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 92/120
	I0501 03:33:55.710551   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 93/120
	I0501 03:33:56.711773   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 94/120
	I0501 03:33:57.713642   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 95/120
	I0501 03:33:58.715319   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 96/120
	I0501 03:33:59.716793   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 97/120
	I0501 03:34:00.718096   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 98/120
	I0501 03:34:01.719398   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 99/120
	I0501 03:34:02.721651   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 100/120
	I0501 03:34:03.723186   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 101/120
	I0501 03:34:04.724478   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 102/120
	I0501 03:34:05.725902   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 103/120
	I0501 03:34:06.727229   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 104/120
	I0501 03:34:07.729442   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 105/120
	I0501 03:34:08.730830   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 106/120
	I0501 03:34:09.732662   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 107/120
	I0501 03:34:10.733879   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 108/120
	I0501 03:34:11.735284   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 109/120
	I0501 03:34:12.737746   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 110/120
	I0501 03:34:13.739977   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 111/120
	I0501 03:34:14.741425   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 112/120
	I0501 03:34:15.742890   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 113/120
	I0501 03:34:16.744305   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 114/120
	I0501 03:34:17.745980   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 115/120
	I0501 03:34:18.747337   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 116/120
	I0501 03:34:19.748660   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 117/120
	I0501 03:34:20.750076   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 118/120
	I0501 03:34:21.751278   67726 main.go:141] libmachine: (embed-certs-277128) Waiting for machine to stop 119/120
	I0501 03:34:22.752713   67726 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0501 03:34:22.752772   67726 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0501 03:34:22.754648   67726 out.go:177] 
	W0501 03:34:22.755958   67726 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0501 03:34:22.755990   67726 out.go:239] * 
	W0501 03:34:22.758442   67726 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0501 03:34:22.759888   67726 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-277128 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-277128 -n embed-certs-277128
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-277128 -n embed-certs-277128: exit status 3 (18.621396652s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0501 03:34:41.382802   68403 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.218:22: connect: no route to host
	E0501 03:34:41.382827   68403 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.218:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-277128" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.15s)
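The post-mortem status checks report "Error" rather than "Stopped" because the status probe needs an SSH session to the guest (192.168.50.218:22 in this block) before it can inspect /var, and the half-stopped VM no longer answers, so every dial fails with "no route to host". A reduced sketch of that probe, assuming a plain TCP dial in place of minikube's SSH client:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// probeHost stands in for the status path that fails above with
	// "dial tcp 192.168.50.218:22: connect: no route to host".
	func probeHost(addr string) string {
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err != nil {
			fmt.Println("status error:", err)
			return "Error" // what the test sees instead of "Stopped"
		}
		conn.Close()
		return "Running"
	}

	func main() {
		fmt.Println(probeHost("192.168.50.218:22")) // address taken from the log above
	}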

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (138.99s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-715118 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-715118 --alsologtostderr -v=3: exit status 82 (2m0.506733725s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-715118"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0501 03:33:19.277002   68122 out.go:291] Setting OutFile to fd 1 ...
	I0501 03:33:19.277251   68122 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 03:33:19.277261   68122 out.go:304] Setting ErrFile to fd 2...
	I0501 03:33:19.277267   68122 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 03:33:19.277466   68122 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18779-13391/.minikube/bin
	I0501 03:33:19.277719   68122 out.go:298] Setting JSON to false
	I0501 03:33:19.277806   68122 mustload.go:65] Loading cluster: default-k8s-diff-port-715118
	I0501 03:33:19.278120   68122 config.go:182] Loaded profile config "default-k8s-diff-port-715118": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 03:33:19.278198   68122 profile.go:143] Saving config to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/default-k8s-diff-port-715118/config.json ...
	I0501 03:33:19.278375   68122 mustload.go:65] Loading cluster: default-k8s-diff-port-715118
	I0501 03:33:19.278523   68122 config.go:182] Loaded profile config "default-k8s-diff-port-715118": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 03:33:19.278556   68122 stop.go:39] StopHost: default-k8s-diff-port-715118
	I0501 03:33:19.278945   68122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:33:19.278988   68122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:33:19.293167   68122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44331
	I0501 03:33:19.293653   68122 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:33:19.294136   68122 main.go:141] libmachine: Using API Version  1
	I0501 03:33:19.294159   68122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:33:19.294476   68122 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:33:19.296834   68122 out.go:177] * Stopping node "default-k8s-diff-port-715118"  ...
	I0501 03:33:19.298117   68122 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0501 03:33:19.298145   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .DriverName
	I0501 03:33:19.298371   68122 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0501 03:33:19.298418   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHHostname
	I0501 03:33:19.301230   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:33:19.301645   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:32:26 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:33:19.301677   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:33:19.301811   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHPort
	I0501 03:33:19.301954   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:33:19.302076   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHUsername
	I0501 03:33:19.302220   68122 sshutil.go:53] new ssh client: &{IP:192.168.72.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/default-k8s-diff-port-715118/id_rsa Username:docker}
	I0501 03:33:19.406981   68122 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0501 03:33:19.468688   68122 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0501 03:33:19.528778   68122 main.go:141] libmachine: Stopping "default-k8s-diff-port-715118"...
	I0501 03:33:19.528804   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetState
	I0501 03:33:19.530759   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .Stop
	I0501 03:33:19.534540   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 0/120
	I0501 03:33:20.535894   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 1/120
	I0501 03:33:21.537110   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 2/120
	I0501 03:33:22.538351   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 3/120
	I0501 03:33:23.539674   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 4/120
	I0501 03:33:24.541628   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 5/120
	I0501 03:33:25.542870   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 6/120
	I0501 03:33:26.544874   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 7/120
	I0501 03:33:27.546224   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 8/120
	I0501 03:33:28.547686   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 9/120
	I0501 03:33:29.549811   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 10/120
	I0501 03:33:30.551279   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 11/120
	I0501 03:33:31.552505   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 12/120
	I0501 03:33:32.553827   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 13/120
	I0501 03:33:33.555249   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 14/120
	I0501 03:33:34.557330   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 15/120
	I0501 03:33:35.558689   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 16/120
	I0501 03:33:36.560053   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 17/120
	I0501 03:33:37.561621   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 18/120
	I0501 03:33:38.563034   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 19/120
	I0501 03:33:39.565195   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 20/120
	I0501 03:33:40.566563   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 21/120
	I0501 03:33:41.568817   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 22/120
	I0501 03:33:42.570216   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 23/120
	I0501 03:33:43.571667   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 24/120
	I0501 03:33:44.573546   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 25/120
	I0501 03:33:45.575083   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 26/120
	I0501 03:33:46.576571   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 27/120
	I0501 03:33:47.578031   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 28/120
	I0501 03:33:48.579422   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 29/120
	I0501 03:33:49.581357   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 30/120
	I0501 03:33:50.582791   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 31/120
	I0501 03:33:51.584020   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 32/120
	I0501 03:33:52.585498   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 33/120
	I0501 03:33:53.586984   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 34/120
	I0501 03:33:54.588740   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 35/120
	I0501 03:33:55.590168   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 36/120
	I0501 03:33:56.591566   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 37/120
	I0501 03:33:57.593098   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 38/120
	I0501 03:33:58.594535   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 39/120
	I0501 03:33:59.596748   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 40/120
	I0501 03:34:00.598076   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 41/120
	I0501 03:34:01.599461   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 42/120
	I0501 03:34:02.600735   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 43/120
	I0501 03:34:03.602154   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 44/120
	I0501 03:34:04.604143   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 45/120
	I0501 03:34:05.605490   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 46/120
	I0501 03:34:06.606905   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 47/120
	I0501 03:34:07.608344   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 48/120
	I0501 03:34:08.609558   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 49/120
	I0501 03:34:09.611722   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 50/120
	I0501 03:34:10.613205   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 51/120
	I0501 03:34:11.614444   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 52/120
	I0501 03:34:12.615859   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 53/120
	I0501 03:34:13.617170   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 54/120
	I0501 03:34:14.619271   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 55/120
	I0501 03:34:15.620866   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 56/120
	I0501 03:34:16.622115   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 57/120
	I0501 03:34:17.623629   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 58/120
	I0501 03:34:18.624880   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 59/120
	I0501 03:34:19.627225   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 60/120
	I0501 03:34:20.628706   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 61/120
	I0501 03:34:21.630190   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 62/120
	I0501 03:34:22.631465   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 63/120
	I0501 03:34:23.632824   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 64/120
	I0501 03:34:24.634286   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 65/120
	I0501 03:34:25.635710   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 66/120
	I0501 03:34:26.636964   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 67/120
	I0501 03:34:27.638282   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 68/120
	I0501 03:34:28.639619   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 69/120
	I0501 03:34:29.641702   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 70/120
	I0501 03:34:30.643027   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 71/120
	I0501 03:34:31.644266   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 72/120
	I0501 03:34:32.645598   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 73/120
	I0501 03:34:33.647038   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 74/120
	I0501 03:34:34.648992   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 75/120
	I0501 03:34:35.650291   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 76/120
	I0501 03:34:36.651922   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 77/120
	I0501 03:34:37.653400   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 78/120
	I0501 03:34:38.655013   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 79/120
	I0501 03:34:39.657425   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 80/120
	I0501 03:34:40.658867   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 81/120
	I0501 03:34:41.660450   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 82/120
	I0501 03:34:42.661838   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 83/120
	I0501 03:34:43.663312   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 84/120
	I0501 03:34:44.665435   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 85/120
	I0501 03:34:45.667059   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 86/120
	I0501 03:34:46.668376   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 87/120
	I0501 03:34:47.669847   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 88/120
	I0501 03:34:48.671366   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 89/120
	I0501 03:34:49.673377   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 90/120
	I0501 03:34:50.674892   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 91/120
	I0501 03:34:51.676985   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 92/120
	I0501 03:34:52.678348   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 93/120
	I0501 03:34:53.679673   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 94/120
	I0501 03:34:54.681587   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 95/120
	I0501 03:34:55.682927   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 96/120
	I0501 03:34:56.684298   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 97/120
	I0501 03:34:57.685594   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 98/120
	I0501 03:34:58.687004   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 99/120
	I0501 03:34:59.689226   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 100/120
	I0501 03:35:00.691121   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 101/120
	I0501 03:35:01.692506   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 102/120
	I0501 03:35:02.694083   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 103/120
	I0501 03:35:03.695620   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 104/120
	I0501 03:35:04.697787   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 105/120
	I0501 03:35:05.699329   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 106/120
	I0501 03:35:06.700949   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 107/120
	I0501 03:35:07.702292   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 108/120
	I0501 03:35:08.703842   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 109/120
	I0501 03:35:09.705957   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 110/120
	I0501 03:35:10.707326   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 111/120
	I0501 03:35:11.709117   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 112/120
	I0501 03:35:12.710578   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 113/120
	I0501 03:35:13.712855   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 114/120
	I0501 03:35:14.714872   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 115/120
	I0501 03:35:15.716325   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 116/120
	I0501 03:35:16.717522   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 117/120
	I0501 03:35:17.718959   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 118/120
	I0501 03:35:18.721809   68122 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for machine to stop 119/120
	I0501 03:35:19.722297   68122 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0501 03:35:19.722366   68122 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0501 03:35:19.724219   68122 out.go:177] 
	W0501 03:35:19.725470   68122 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0501 03:35:19.725481   68122 out.go:239] * 
	W0501 03:35:19.727909   68122 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0501 03:35:19.729250   68122 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-715118 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-715118 -n default-k8s-diff-port-715118
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-715118 -n default-k8s-diff-port-715118: exit status 3 (18.484155408s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0501 03:35:38.214755   68999 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.158:22: connect: no route to host
	E0501 03:35:38.214798   68999 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.158:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-715118" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (138.99s)
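Before each stop attempt, the logs show the guest's /etc/cni and /etc/kubernetes being copied into /var/lib/minikube/backup with sudo mkdir -p and sudo rsync --archive --relative, so a later restart can restore them. A sketch of that backup step; the commands are run here via os/exec purely for illustration, whereas minikube runs them on the guest through its SSH runner:

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		// The same three commands the stop path runs on the guest before
		// powering it off (executed locally here just to show the shape).
		cmds := [][]string{
			{"sudo", "mkdir", "-p", "/var/lib/minikube/backup"},
			{"sudo", "rsync", "--archive", "--relative", "/etc/cni", "/var/lib/minikube/backup"},
			{"sudo", "rsync", "--archive", "--relative", "/etc/kubernetes", "/var/lib/minikube/backup"},
		}
		for _, c := range cmds {
			if out, err := exec.Command(c[0], c[1:]...).CombinedOutput(); err != nil {
				log.Fatalf("%v failed: %v\n%s", c, err, out)
			}
		}
	}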

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.39s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-892672 -n no-preload-892672
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-892672 -n no-preload-892672: exit status 3 (3.171495854s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0501 03:34:38.666752   68481 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.144:22: connect: no route to host
	E0501 03:34:38.666774   68481 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.144:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-892672 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-892672 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.149194059s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.144:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-892672 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-892672 -n no-preload-892672
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-892672 -n no-preload-892672: exit status 3 (3.066696066s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0501 03:34:47.882736   68594 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.144:22: connect: no route to host
	E0501 03:34:47.882757   68594 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.144:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-892672" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.39s)
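The EnableAddonAfterStop failures are a knock-on effect of the stop timeouts: the test first asserts that the post-stop host status is exactly "Stopped" (start_stop_delete_test.go:241), and only then tries to enable the dashboard addon, which itself needs SSH for the paused-container check and so fails with MK_ADDON_ENABLE_PAUSED. A stripped-down sketch of that first assertion, as a hypothetical helper rather than the real test code:

	package main

	import "fmt"

	// checkPostStopState mirrors the expectation quoted above: after
	// `minikube stop`, the reported host status must read exactly "Stopped".
	func checkPostStopState(got string) error {
		if got != "Stopped" {
			return fmt.Errorf("expected post-stop host status to be %q but got %q", "Stopped", got)
		}
		return nil
	}

	func main() {
		fmt.Println(checkPostStopState("Error")) // the state this run actually reported
	}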

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-277128 -n embed-certs-277128
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-277128 -n embed-certs-277128: exit status 3 (3.167611092s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0501 03:34:44.550783   68546 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.218:22: connect: no route to host
	E0501 03:34:44.550807   68546 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.218:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-277128 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-277128 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.157621015s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.218:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-277128 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-277128 -n embed-certs-277128
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-277128 -n embed-certs-277128: exit status 3 (3.059153806s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0501 03:34:53.766847   68701 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.218:22: connect: no route to host
	E0501 03:34:53.766887   68701 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.218:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-277128" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (0.51s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-503971 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-503971 create -f testdata/busybox.yaml: exit status 1 (46.292191ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-503971" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-503971 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-503971 -n old-k8s-version-503971
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-503971 -n old-k8s-version-503971: exit status 6 (229.656541ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0501 03:34:51.325446   68773 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-503971" does not appear in /home/jenkins/minikube-integration/18779-13391/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-503971" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-503971 -n old-k8s-version-503971
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-503971 -n old-k8s-version-503971: exit status 6 (234.144463ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0501 03:34:51.559273   68803 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-503971" does not appear in /home/jenkins/minikube-integration/18779-13391/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-503971" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.51s)
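Both status calls above exit 6 for a different reason than the stopped-VM cases: the profile's entry is missing from the kubeconfig, so every `kubectl --context old-k8s-version-503971 ...` call fails with "does not exist". The WARNING itself names the fix (`minikube update-context`). A small Go sketch of that repair sequence, using the binary path and profile name from the log; it is illustrative only, and in this run it would not have rescued the test because the cluster never came back up (see SecondStart below):

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and echoes its combined output, mirroring the
// "(dbg) Run:" lines in this report. Paths and names are taken from the log.
func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %v\n%s", name, args, out)
	return err
}

func main() {
	// The warning's suggested fix: rewrite the kubeconfig entry for the profile.
	if err := run("out/minikube-linux-amd64", "update-context", "-p", "old-k8s-version-503971"); err != nil {
		fmt.Println("update-context failed:", err)
		return
	}
	// Retry the create that failed above with `context ... does not exist`.
	_ = run("kubectl", "--context", "old-k8s-version-503971", "create", "-f", "testdata/busybox.yaml")
}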

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (105.04s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-503971 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-503971 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m44.771050973s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-503971 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-503971 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-503971 describe deploy/metrics-server -n kube-system: exit status 1 (41.559254ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-503971" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-503971 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-503971 -n old-k8s-version-503971
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-503971 -n old-k8s-version-503971: exit status 6 (229.630718ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0501 03:36:36.602230   69447 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-503971" does not appear in /home/jenkins/minikube-integration/18779-13391/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-503971" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (105.04s)
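The image assertion in this section is mechanical: the expected string "fake.domain/registry.k8s.io/echoserver:1.4" is the `--registries=MetricsServer=fake.domain` prefix joined onto the `--images=MetricsServer=registry.k8s.io/echoserver:1.4` reference, which matches the "Using image fake.domain/registry.k8s.io/echoserver:1.4" line in stdout above. A tiny sketch of that composition (illustrative, not minikube's implementation):

package main

import "fmt"

func main() {
	// Values copied from the flags in the command above.
	registry := "fake.domain"                 // --registries=MetricsServer=fake.domain
	image := "registry.k8s.io/echoserver:1.4" // --images=MetricsServer=registry.k8s.io/echoserver:1.4

	// The custom registry is prepended to the image reference; this is the
	// substring the test then looks for in `kubectl describe deploy/metrics-server`.
	expected := registry + "/" + image
	fmt.Println(expected) // fake.domain/registry.k8s.io/echoserver:1.4
}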

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-715118 -n default-k8s-diff-port-715118
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-715118 -n default-k8s-diff-port-715118: exit status 3 (3.16780193s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0501 03:35:41.382769   69110 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.158:22: connect: no route to host
	E0501 03:35:41.382791   69110 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.158:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-715118 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-715118 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152838107s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.158:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-715118 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-715118 -n default-k8s-diff-port-715118
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-715118 -n default-k8s-diff-port-715118: exit status 3 (3.063300837s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0501 03:35:50.598875   69190 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.158:22: connect: no route to host
	E0501 03:35:50.598910   69190 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.158:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-715118" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (726.86s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-503971 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0501 03:39:56.198754   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/functional-960026/client.crt: no such file or directory
E0501 03:41:19.250060   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/functional-960026/client.crt: no such file or directory
E0501 03:41:24.419442   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/client.crt: no such file or directory
E0501 03:44:27.470958   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-503971 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (12m3.257529803s)

                                                
                                                
-- stdout --
	* [old-k8s-version-503971] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18779
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18779-13391/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18779-13391/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-503971" primary control-plane node in "old-k8s-version-503971" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-503971" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0501 03:36:41.470152   69580 out.go:291] Setting OutFile to fd 1 ...
	I0501 03:36:41.470256   69580 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 03:36:41.470264   69580 out.go:304] Setting ErrFile to fd 2...
	I0501 03:36:41.470268   69580 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 03:36:41.470484   69580 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18779-13391/.minikube/bin
	I0501 03:36:41.470989   69580 out.go:298] Setting JSON to false
	I0501 03:36:41.471856   69580 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8345,"bootTime":1714526257,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0501 03:36:41.471911   69580 start.go:139] virtualization: kvm guest
	I0501 03:36:41.473901   69580 out.go:177] * [old-k8s-version-503971] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0501 03:36:41.474994   69580 out.go:177]   - MINIKUBE_LOCATION=18779
	I0501 03:36:41.475003   69580 notify.go:220] Checking for updates...
	I0501 03:36:41.477150   69580 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0501 03:36:41.478394   69580 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18779-13391/kubeconfig
	I0501 03:36:41.479462   69580 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18779-13391/.minikube
	I0501 03:36:41.480507   69580 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0501 03:36:41.481543   69580 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0501 03:36:41.482907   69580 config.go:182] Loaded profile config "old-k8s-version-503971": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0501 03:36:41.483279   69580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:36:41.483311   69580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:36:41.497758   69580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40255
	I0501 03:36:41.498090   69580 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:36:41.498591   69580 main.go:141] libmachine: Using API Version  1
	I0501 03:36:41.498616   69580 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:36:41.498891   69580 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:36:41.499055   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .DriverName
	I0501 03:36:41.500675   69580 out.go:177] * Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	I0501 03:36:41.501716   69580 driver.go:392] Setting default libvirt URI to qemu:///system
	I0501 03:36:41.501974   69580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:36:41.502024   69580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:36:41.515991   69580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40591
	I0501 03:36:41.516392   69580 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:36:41.516826   69580 main.go:141] libmachine: Using API Version  1
	I0501 03:36:41.516846   69580 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:36:41.517120   69580 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:36:41.517281   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .DriverName
	I0501 03:36:41.551130   69580 out.go:177] * Using the kvm2 driver based on existing profile
	I0501 03:36:41.552244   69580 start.go:297] selected driver: kvm2
	I0501 03:36:41.552253   69580 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-503971 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-503971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.104 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 03:36:41.552369   69580 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0501 03:36:41.553004   69580 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0501 03:36:41.553071   69580 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18779-13391/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0501 03:36:41.567362   69580 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0501 03:36:41.567736   69580 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0501 03:36:41.567815   69580 cni.go:84] Creating CNI manager for ""
	I0501 03:36:41.567832   69580 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0501 03:36:41.567881   69580 start.go:340] cluster config:
	{Name:old-k8s-version-503971 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-503971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.104 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 03:36:41.568012   69580 iso.go:125] acquiring lock: {Name:mkcd2496aadb29931e179193d707c731bfda98b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0501 03:36:41.569791   69580 out.go:177] * Starting "old-k8s-version-503971" primary control-plane node in "old-k8s-version-503971" cluster
	I0501 03:36:41.571352   69580 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0501 03:36:41.571389   69580 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0501 03:36:41.571408   69580 cache.go:56] Caching tarball of preloaded images
	I0501 03:36:41.571478   69580 preload.go:173] Found /home/jenkins/minikube-integration/18779-13391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0501 03:36:41.571490   69580 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0501 03:36:41.571588   69580 profile.go:143] Saving config to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/config.json ...
	I0501 03:36:41.571775   69580 start.go:360] acquireMachinesLock for old-k8s-version-503971: {Name:mkc4548e5afe1d4b0a833f6c522103562b0cefff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0501 03:40:13.516002   69580 start.go:364] duration metric: took 3m31.9441828s to acquireMachinesLock for "old-k8s-version-503971"
	I0501 03:40:13.516087   69580 start.go:96] Skipping create...Using existing machine configuration
	I0501 03:40:13.516100   69580 fix.go:54] fixHost starting: 
	I0501 03:40:13.516559   69580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:40:13.516601   69580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:40:13.537158   69580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38535
	I0501 03:40:13.537631   69580 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:40:13.538169   69580 main.go:141] libmachine: Using API Version  1
	I0501 03:40:13.538197   69580 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:40:13.538570   69580 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:40:13.538769   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .DriverName
	I0501 03:40:13.538958   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetState
	I0501 03:40:13.540454   69580 fix.go:112] recreateIfNeeded on old-k8s-version-503971: state=Stopped err=<nil>
	I0501 03:40:13.540486   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .DriverName
	W0501 03:40:13.540787   69580 fix.go:138] unexpected machine state, will restart: <nil>
	I0501 03:40:13.542670   69580 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-503971" ...
	I0501 03:40:13.544100   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .Start
	I0501 03:40:13.544328   69580 main.go:141] libmachine: (old-k8s-version-503971) Ensuring networks are active...
	I0501 03:40:13.545238   69580 main.go:141] libmachine: (old-k8s-version-503971) Ensuring network default is active
	I0501 03:40:13.545621   69580 main.go:141] libmachine: (old-k8s-version-503971) Ensuring network mk-old-k8s-version-503971 is active
	I0501 03:40:13.546072   69580 main.go:141] libmachine: (old-k8s-version-503971) Getting domain xml...
	I0501 03:40:13.546928   69580 main.go:141] libmachine: (old-k8s-version-503971) Creating domain...
	I0501 03:40:14.858558   69580 main.go:141] libmachine: (old-k8s-version-503971) Waiting to get IP...
	I0501 03:40:14.859690   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:14.860108   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:40:14.860215   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:40:14.860103   70499 retry.go:31] will retry after 294.057322ms: waiting for machine to come up
	I0501 03:40:15.155490   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:15.155922   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:40:15.155954   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:40:15.155870   70499 retry.go:31] will retry after 281.238966ms: waiting for machine to come up
	I0501 03:40:15.439196   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:15.439735   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:40:15.439783   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:40:15.439697   70499 retry.go:31] will retry after 429.353689ms: waiting for machine to come up
	I0501 03:40:15.871266   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:15.871947   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:40:15.871970   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:40:15.871895   70499 retry.go:31] will retry after 478.685219ms: waiting for machine to come up
	I0501 03:40:16.352661   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:16.353125   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:40:16.353161   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:40:16.353087   70499 retry.go:31] will retry after 642.905156ms: waiting for machine to come up
	I0501 03:40:16.997533   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:16.998034   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:40:16.998076   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:40:16.997984   70499 retry.go:31] will retry after 596.56948ms: waiting for machine to come up
	I0501 03:40:17.596671   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:17.597182   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:40:17.597207   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:40:17.597132   70499 retry.go:31] will retry after 770.742109ms: waiting for machine to come up
	I0501 03:40:18.369337   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:18.369833   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:40:18.369864   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:40:18.369780   70499 retry.go:31] will retry after 1.382502808s: waiting for machine to come up
	I0501 03:40:19.753936   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:19.754419   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:40:19.754458   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:40:19.754363   70499 retry.go:31] will retry after 1.344792989s: waiting for machine to come up
	I0501 03:40:21.101047   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:21.101474   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:40:21.101514   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:40:21.101442   70499 retry.go:31] will retry after 1.636964906s: waiting for machine to come up
	I0501 03:40:22.740241   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:22.740692   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:40:22.740722   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:40:22.740656   70499 retry.go:31] will retry after 1.899831455s: waiting for machine to come up
	I0501 03:40:24.642609   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:24.643075   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:40:24.643104   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:40:24.643019   70499 retry.go:31] will retry after 3.503333894s: waiting for machine to come up
	I0501 03:40:28.148102   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:28.148506   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:40:28.148547   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:40:28.148463   70499 retry.go:31] will retry after 4.150508159s: waiting for machine to come up
	I0501 03:40:32.303427   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.303804   69580 main.go:141] libmachine: (old-k8s-version-503971) Found IP for machine: 192.168.61.104
	I0501 03:40:32.303837   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has current primary IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.303851   69580 main.go:141] libmachine: (old-k8s-version-503971) Reserving static IP address...
	I0501 03:40:32.304254   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "old-k8s-version-503971", mac: "52:54:00:7d:68:83", ip: "192.168.61.104"} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:32.304286   69580 main.go:141] libmachine: (old-k8s-version-503971) Reserved static IP address: 192.168.61.104
	I0501 03:40:32.304305   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | skip adding static IP to network mk-old-k8s-version-503971 - found existing host DHCP lease matching {name: "old-k8s-version-503971", mac: "52:54:00:7d:68:83", ip: "192.168.61.104"}
	I0501 03:40:32.304323   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | Getting to WaitForSSH function...
	I0501 03:40:32.304337   69580 main.go:141] libmachine: (old-k8s-version-503971) Waiting for SSH to be available...
	I0501 03:40:32.306619   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.306972   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:32.307011   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.307114   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | Using SSH client type: external
	I0501 03:40:32.307138   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | Using SSH private key: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/old-k8s-version-503971/id_rsa (-rw-------)
	I0501 03:40:32.307174   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.104 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18779-13391/.minikube/machines/old-k8s-version-503971/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0501 03:40:32.307188   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | About to run SSH command:
	I0501 03:40:32.307224   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | exit 0
	I0501 03:40:32.438508   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | SSH cmd err, output: <nil>: 
	I0501 03:40:32.438882   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetConfigRaw
	I0501 03:40:32.439452   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetIP
	I0501 03:40:32.441984   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.442342   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:32.442369   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.442668   69580 profile.go:143] Saving config to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/config.json ...
	I0501 03:40:32.442875   69580 machine.go:94] provisionDockerMachine start ...
	I0501 03:40:32.442897   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .DriverName
	I0501 03:40:32.443077   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHHostname
	I0501 03:40:32.445129   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.445442   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:32.445480   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.445628   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHPort
	I0501 03:40:32.445806   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:32.445974   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:32.446122   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHUsername
	I0501 03:40:32.446314   69580 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:32.446548   69580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.104 22 <nil> <nil>}
	I0501 03:40:32.446564   69580 main.go:141] libmachine: About to run SSH command:
	hostname
	I0501 03:40:32.559346   69580 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0501 03:40:32.559379   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetMachineName
	I0501 03:40:32.559630   69580 buildroot.go:166] provisioning hostname "old-k8s-version-503971"
	I0501 03:40:32.559654   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetMachineName
	I0501 03:40:32.559832   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHHostname
	I0501 03:40:32.562176   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.562547   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:32.562582   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.562716   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHPort
	I0501 03:40:32.562892   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:32.563019   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:32.563161   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHUsername
	I0501 03:40:32.563332   69580 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:32.563545   69580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.104 22 <nil> <nil>}
	I0501 03:40:32.563564   69580 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-503971 && echo "old-k8s-version-503971" | sudo tee /etc/hostname
	I0501 03:40:32.699918   69580 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-503971
	
	I0501 03:40:32.699961   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHHostname
	I0501 03:40:32.702721   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.703134   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:32.703158   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.703361   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHPort
	I0501 03:40:32.703547   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:32.703744   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:32.703881   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHUsername
	I0501 03:40:32.704037   69580 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:32.704199   69580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.104 22 <nil> <nil>}
	I0501 03:40:32.704215   69580 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-503971' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-503971/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-503971' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0501 03:40:32.830277   69580 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0501 03:40:32.830307   69580 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18779-13391/.minikube CaCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18779-13391/.minikube}
	I0501 03:40:32.830323   69580 buildroot.go:174] setting up certificates
	I0501 03:40:32.830331   69580 provision.go:84] configureAuth start
	I0501 03:40:32.830340   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetMachineName
	I0501 03:40:32.830629   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetIP
	I0501 03:40:32.833575   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.833887   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:32.833932   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.834070   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHHostname
	I0501 03:40:32.836309   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.836664   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:32.836691   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.836824   69580 provision.go:143] copyHostCerts
	I0501 03:40:32.836885   69580 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem, removing ...
	I0501 03:40:32.836895   69580 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem
	I0501 03:40:32.836945   69580 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem (1078 bytes)
	I0501 03:40:32.837046   69580 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem, removing ...
	I0501 03:40:32.837054   69580 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem
	I0501 03:40:32.837072   69580 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem (1123 bytes)
	I0501 03:40:32.837129   69580 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem, removing ...
	I0501 03:40:32.837136   69580 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem
	I0501 03:40:32.837152   69580 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem (1675 bytes)
	I0501 03:40:32.837202   69580 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-503971 san=[127.0.0.1 192.168.61.104 localhost minikube old-k8s-version-503971]
	I0501 03:40:33.047948   69580 provision.go:177] copyRemoteCerts
	I0501 03:40:33.048004   69580 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0501 03:40:33.048030   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHHostname
	I0501 03:40:33.050591   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.050975   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:33.051012   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.051142   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHPort
	I0501 03:40:33.051310   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:33.051465   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHUsername
	I0501 03:40:33.051574   69580 sshutil.go:53] new ssh client: &{IP:192.168.61.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/old-k8s-version-503971/id_rsa Username:docker}
	I0501 03:40:33.143991   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0501 03:40:33.175494   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0501 03:40:33.204770   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0501 03:40:33.232728   69580 provision.go:87] duration metric: took 402.386279ms to configureAuth
	I0501 03:40:33.232756   69580 buildroot.go:189] setting minikube options for container-runtime
	I0501 03:40:33.232962   69580 config.go:182] Loaded profile config "old-k8s-version-503971": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0501 03:40:33.233051   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHHostname
	I0501 03:40:33.235656   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.236006   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:33.236038   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.236162   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHPort
	I0501 03:40:33.236339   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:33.236484   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:33.236633   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHUsername
	I0501 03:40:33.236817   69580 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:33.236980   69580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.104 22 <nil> <nil>}
	I0501 03:40:33.236997   69580 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0501 03:40:33.526370   69580 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0501 03:40:33.526419   69580 machine.go:97] duration metric: took 1.083510254s to provisionDockerMachine
	I0501 03:40:33.526432   69580 start.go:293] postStartSetup for "old-k8s-version-503971" (driver="kvm2")
	I0501 03:40:33.526443   69580 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0501 03:40:33.526470   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .DriverName
	I0501 03:40:33.526788   69580 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0501 03:40:33.526831   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHHostname
	I0501 03:40:33.529815   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.530209   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:33.530268   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.530364   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHPort
	I0501 03:40:33.530559   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:33.530741   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHUsername
	I0501 03:40:33.530909   69580 sshutil.go:53] new ssh client: &{IP:192.168.61.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/old-k8s-version-503971/id_rsa Username:docker}
	I0501 03:40:33.620224   69580 ssh_runner.go:195] Run: cat /etc/os-release
	I0501 03:40:33.625417   69580 info.go:137] Remote host: Buildroot 2023.02.9
	I0501 03:40:33.625447   69580 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13391/.minikube/addons for local assets ...
	I0501 03:40:33.625511   69580 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13391/.minikube/files for local assets ...
	I0501 03:40:33.625594   69580 filesync.go:149] local asset: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem -> 207242.pem in /etc/ssl/certs
	I0501 03:40:33.625691   69580 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0501 03:40:33.637311   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem --> /etc/ssl/certs/207242.pem (1708 bytes)
	I0501 03:40:33.666707   69580 start.go:296] duration metric: took 140.263297ms for postStartSetup
	I0501 03:40:33.666740   69580 fix.go:56] duration metric: took 20.150640355s for fixHost
	I0501 03:40:33.666758   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHHostname
	I0501 03:40:33.669394   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.669822   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:33.669852   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.669963   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHPort
	I0501 03:40:33.670213   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:33.670388   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:33.670589   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHUsername
	I0501 03:40:33.670794   69580 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:33.670972   69580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.104 22 <nil> <nil>}
	I0501 03:40:33.670984   69580 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0501 03:40:33.783810   69580 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714534833.728910946
	
	I0501 03:40:33.783839   69580 fix.go:216] guest clock: 1714534833.728910946
	I0501 03:40:33.783850   69580 fix.go:229] Guest: 2024-05-01 03:40:33.728910946 +0000 UTC Remote: 2024-05-01 03:40:33.666743363 +0000 UTC m=+232.246108464 (delta=62.167583ms)
	I0501 03:40:33.783893   69580 fix.go:200] guest clock delta is within tolerance: 62.167583ms
	I0501 03:40:33.783903   69580 start.go:83] releasing machines lock for "old-k8s-version-503971", held for 20.267840723s
	I0501 03:40:33.783933   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .DriverName
	I0501 03:40:33.784203   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetIP
	I0501 03:40:33.786846   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.787202   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:33.787230   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.787385   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .DriverName
	I0501 03:40:33.787837   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .DriverName
	I0501 03:40:33.787997   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .DriverName
	I0501 03:40:33.788085   69580 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0501 03:40:33.788126   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHHostname
	I0501 03:40:33.788252   69580 ssh_runner.go:195] Run: cat /version.json
	I0501 03:40:33.788279   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHHostname
	I0501 03:40:33.790748   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.791086   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.791118   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:33.791142   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.791435   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHPort
	I0501 03:40:33.791491   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:33.791532   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.791618   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:33.791740   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHPort
	I0501 03:40:33.791815   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHUsername
	I0501 03:40:33.791937   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:33.792014   69580 sshutil.go:53] new ssh client: &{IP:192.168.61.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/old-k8s-version-503971/id_rsa Username:docker}
	I0501 03:40:33.792069   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHUsername
	I0501 03:40:33.792206   69580 sshutil.go:53] new ssh client: &{IP:192.168.61.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/old-k8s-version-503971/id_rsa Username:docker}
	I0501 03:40:33.876242   69580 ssh_runner.go:195] Run: systemctl --version
	I0501 03:40:33.901692   69580 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0501 03:40:34.056758   69580 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0501 03:40:34.065070   69580 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0501 03:40:34.065156   69580 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0501 03:40:34.085337   69580 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0501 03:40:34.085364   69580 start.go:494] detecting cgroup driver to use...
	I0501 03:40:34.085432   69580 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0501 03:40:34.102723   69580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 03:40:34.118792   69580 docker.go:217] disabling cri-docker service (if available) ...
	I0501 03:40:34.118847   69580 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0501 03:40:34.133978   69580 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0501 03:40:34.153890   69580 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0501 03:40:34.283815   69580 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0501 03:40:34.475851   69580 docker.go:233] disabling docker service ...
	I0501 03:40:34.475926   69580 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0501 03:40:34.500769   69580 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0501 03:40:34.517315   69580 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0501 03:40:34.674322   69580 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0501 03:40:34.833281   69580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0501 03:40:34.852610   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 03:40:34.879434   69580 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0501 03:40:34.879517   69580 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:34.892197   69580 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0501 03:40:34.892269   69580 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:34.904437   69580 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:34.919950   69580 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:34.933772   69580 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0501 03:40:34.947563   69580 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0501 03:40:34.965724   69580 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0501 03:40:34.965795   69580 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0501 03:40:34.984251   69580 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0501 03:40:34.997050   69580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:40:35.155852   69580 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0501 03:40:35.362090   69580 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0501 03:40:35.362164   69580 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0501 03:40:35.368621   69580 start.go:562] Will wait 60s for crictl version
	I0501 03:40:35.368701   69580 ssh_runner.go:195] Run: which crictl
	I0501 03:40:35.373792   69580 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0501 03:40:35.436905   69580 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0501 03:40:35.437018   69580 ssh_runner.go:195] Run: crio --version
	I0501 03:40:35.485130   69580 ssh_runner.go:195] Run: crio --version
	I0501 03:40:35.528700   69580 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0501 03:40:35.530015   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetIP
	I0501 03:40:35.533706   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:35.534178   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:35.534254   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:35.534515   69580 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0501 03:40:35.541542   69580 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0501 03:40:35.563291   69580 kubeadm.go:877] updating cluster {Name:old-k8s-version-503971 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-503971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.104 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0501 03:40:35.563434   69580 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0501 03:40:35.563512   69580 ssh_runner.go:195] Run: sudo crictl images --output json
	I0501 03:40:35.646548   69580 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0501 03:40:35.646635   69580 ssh_runner.go:195] Run: which lz4
	I0501 03:40:35.652824   69580 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0501 03:40:35.660056   69580 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0501 03:40:35.660099   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0501 03:40:37.870306   69580 crio.go:462] duration metric: took 2.217531377s to copy over tarball
	I0501 03:40:37.870393   69580 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0501 03:40:41.534681   69580 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.664236925s)
	I0501 03:40:41.599216   69580 crio.go:469] duration metric: took 3.72886857s to extract the tarball
	I0501 03:40:41.599238   69580 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0501 03:40:41.649221   69580 ssh_runner.go:195] Run: sudo crictl images --output json
	I0501 03:40:41.697169   69580 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0501 03:40:41.697198   69580 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0501 03:40:41.697302   69580 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0501 03:40:41.697346   69580 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0501 03:40:41.697367   69580 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0501 03:40:41.697352   69580 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0501 03:40:41.697375   69580 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0501 03:40:41.697329   69580 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0501 03:40:41.697275   69580 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 03:40:41.697329   69580 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0501 03:40:41.698950   69580 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0501 03:40:41.699010   69580 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0501 03:40:41.699114   69580 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0501 03:40:41.699251   69580 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 03:40:41.699292   69580 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0501 03:40:41.699020   69580 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0501 03:40:41.699550   69580 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0501 03:40:41.699715   69580 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0501 03:40:41.830042   69580 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0501 03:40:41.881770   69580 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0501 03:40:41.881834   69580 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0501 03:40:41.881896   69580 ssh_runner.go:195] Run: which crictl
	I0501 03:40:41.887083   69580 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0501 03:40:41.894597   69580 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0501 03:40:41.935993   69580 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0501 03:40:41.937339   69580 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0501 03:40:41.961728   69580 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0501 03:40:41.961778   69580 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0501 03:40:41.961827   69580 ssh_runner.go:195] Run: which crictl
	I0501 03:40:42.004327   69580 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0501 03:40:42.004395   69580 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0501 03:40:42.004435   69580 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0501 03:40:42.004493   69580 ssh_runner.go:195] Run: which crictl
	I0501 03:40:42.053743   69580 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0501 03:40:42.055914   69580 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0501 03:40:42.056267   69580 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0501 03:40:42.056610   69580 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0501 03:40:42.060229   69580 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0501 03:40:42.070489   69580 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0501 03:40:42.127829   69580 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0501 03:40:42.127880   69580 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0501 03:40:42.127927   69580 ssh_runner.go:195] Run: which crictl
	I0501 03:40:42.201731   69580 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0501 03:40:42.201783   69580 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0501 03:40:42.201814   69580 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0501 03:40:42.201842   69580 ssh_runner.go:195] Run: which crictl
	I0501 03:40:42.211112   69580 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0501 03:40:42.211163   69580 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0501 03:40:42.211227   69580 ssh_runner.go:195] Run: which crictl
	I0501 03:40:42.217794   69580 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0501 03:40:42.217835   69580 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0501 03:40:42.217873   69580 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0501 03:40:42.217917   69580 ssh_runner.go:195] Run: which crictl
	I0501 03:40:42.217873   69580 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0501 03:40:42.220250   69580 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0501 03:40:42.274880   69580 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0501 03:40:42.294354   69580 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0501 03:40:42.294436   69580 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0501 03:40:42.305191   69580 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0501 03:40:42.342502   69580 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0501 03:40:42.560474   69580 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 03:40:42.712970   69580 cache_images.go:92] duration metric: took 1.015752585s to LoadCachedImages
	W0501 03:40:42.713057   69580 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	I0501 03:40:42.713074   69580 kubeadm.go:928] updating node { 192.168.61.104 8443 v1.20.0 crio true true} ...
	I0501 03:40:42.713227   69580 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-503971 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.104
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-503971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0501 03:40:42.713323   69580 ssh_runner.go:195] Run: crio config
	I0501 03:40:42.771354   69580 cni.go:84] Creating CNI manager for ""
	I0501 03:40:42.771384   69580 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0501 03:40:42.771403   69580 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0501 03:40:42.771428   69580 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.104 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-503971 NodeName:old-k8s-version-503971 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.104"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.104 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0501 03:40:42.771644   69580 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.104
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-503971"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.104
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.104"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0501 03:40:42.771722   69580 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0501 03:40:42.784978   69580 binaries.go:44] Found k8s binaries, skipping transfer
	I0501 03:40:42.785057   69580 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0501 03:40:42.800945   69580 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0501 03:40:42.824293   69580 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0501 03:40:42.845949   69580 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0501 03:40:42.867390   69580 ssh_runner.go:195] Run: grep 192.168.61.104	control-plane.minikube.internal$ /etc/hosts
	I0501 03:40:42.872038   69580 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.104	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0501 03:40:42.890213   69580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:40:43.041533   69580 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 03:40:43.070048   69580 certs.go:68] Setting up /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971 for IP: 192.168.61.104
	I0501 03:40:43.070075   69580 certs.go:194] generating shared ca certs ...
	I0501 03:40:43.070097   69580 certs.go:226] acquiring lock for ca certs: {Name:mk85a75d14c00780d4bf09cd9bdaf0f96f1b8fce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:40:43.070315   69580 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key
	I0501 03:40:43.070388   69580 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key
	I0501 03:40:43.070419   69580 certs.go:256] generating profile certs ...
	I0501 03:40:43.070558   69580 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/client.key
	I0501 03:40:43.070631   69580 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/apiserver.key.760b883a
	I0501 03:40:43.070670   69580 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/proxy-client.key
	I0501 03:40:43.070804   69580 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem (1338 bytes)
	W0501 03:40:43.070852   69580 certs.go:480] ignoring /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724_empty.pem, impossibly tiny 0 bytes
	I0501 03:40:43.070865   69580 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem (1679 bytes)
	I0501 03:40:43.070914   69580 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem (1078 bytes)
	I0501 03:40:43.070955   69580 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem (1123 bytes)
	I0501 03:40:43.070985   69580 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem (1675 bytes)
	I0501 03:40:43.071044   69580 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem (1708 bytes)
	I0501 03:40:43.071869   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0501 03:40:43.110078   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0501 03:40:43.164382   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0501 03:40:43.197775   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0501 03:40:43.230575   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0501 03:40:43.260059   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0501 03:40:43.288704   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0501 03:40:43.315417   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0501 03:40:43.363440   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0501 03:40:43.396043   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem --> /usr/share/ca-certificates/20724.pem (1338 bytes)
	I0501 03:40:43.425997   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem --> /usr/share/ca-certificates/207242.pem (1708 bytes)
	I0501 03:40:43.456927   69580 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0501 03:40:43.478177   69580 ssh_runner.go:195] Run: openssl version
	I0501 03:40:43.484513   69580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0501 03:40:43.497230   69580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:40:43.504025   69580 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  1 02:08 /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:40:43.504112   69580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:40:43.513309   69580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0501 03:40:43.528592   69580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20724.pem && ln -fs /usr/share/ca-certificates/20724.pem /etc/ssl/certs/20724.pem"
	I0501 03:40:43.544560   69580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20724.pem
	I0501 03:40:43.550975   69580 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  1 02:20 /usr/share/ca-certificates/20724.pem
	I0501 03:40:43.551047   69580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20724.pem
	I0501 03:40:43.559214   69580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20724.pem /etc/ssl/certs/51391683.0"
	I0501 03:40:43.575362   69580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/207242.pem && ln -fs /usr/share/ca-certificates/207242.pem /etc/ssl/certs/207242.pem"
	I0501 03:40:43.587848   69580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/207242.pem
	I0501 03:40:43.593131   69580 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  1 02:20 /usr/share/ca-certificates/207242.pem
	I0501 03:40:43.593183   69580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/207242.pem
	I0501 03:40:43.600365   69580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/207242.pem /etc/ssl/certs/3ec20f2e.0"
	I0501 03:40:43.613912   69580 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0501 03:40:43.619576   69580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0501 03:40:43.628551   69580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0501 03:40:43.637418   69580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0501 03:40:43.645060   69580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0501 03:40:43.654105   69580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0501 03:40:43.663501   69580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0501 03:40:43.670855   69580 kubeadm.go:391] StartCluster: {Name:old-k8s-version-503971 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-503971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.104 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 03:40:43.670937   69580 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0501 03:40:43.670982   69580 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0501 03:40:43.720350   69580 cri.go:89] found id: ""
	I0501 03:40:43.720419   69580 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0501 03:40:43.732518   69580 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0501 03:40:43.732544   69580 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0501 03:40:43.732552   69580 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0501 03:40:43.732612   69580 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0501 03:40:43.743804   69580 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0501 03:40:43.745071   69580 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-503971" does not appear in /home/jenkins/minikube-integration/18779-13391/kubeconfig
	I0501 03:40:43.745785   69580 kubeconfig.go:62] /home/jenkins/minikube-integration/18779-13391/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-503971" cluster setting kubeconfig missing "old-k8s-version-503971" context setting]
	I0501 03:40:43.747054   69580 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/kubeconfig: {Name:mk5d1131557f885800877622151c0915337cb23a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:40:43.748989   69580 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0501 03:40:43.760349   69580 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.104
	I0501 03:40:43.760389   69580 kubeadm.go:1154] stopping kube-system containers ...
	I0501 03:40:43.760403   69580 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0501 03:40:43.760473   69580 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0501 03:40:43.804745   69580 cri.go:89] found id: ""
	I0501 03:40:43.804841   69580 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0501 03:40:43.825960   69580 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0501 03:40:43.838038   69580 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0501 03:40:43.838062   69580 kubeadm.go:156] found existing configuration files:
	
	I0501 03:40:43.838115   69580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0501 03:40:43.849075   69580 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0501 03:40:43.849164   69580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0501 03:40:43.860634   69580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0501 03:40:43.871244   69580 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0501 03:40:43.871313   69580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0501 03:40:43.882184   69580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0501 03:40:43.893193   69580 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0501 03:40:43.893254   69580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0501 03:40:43.904257   69580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0501 03:40:43.915414   69580 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0501 03:40:43.915492   69580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
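The grep/rm sequence above is the stale-kubeconfig cleanup: each /etc/kubernetes/*.conf fragment is kept only if it already references https://control-plane.minikube.internal:8443; since none of the files exist here, grep exits non-zero and each path is removed so kubeadm can regenerate it. A rough sketch of that logic, with the endpoint and paths copied from the log and root privileges assumed:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, c := range confs {
		// grep exits non-zero when the endpoint (or the file itself) is missing,
		// which is the "may not be in ... - will remove" case in the log.
		if err := exec.Command("sudo", "grep", endpoint, c).Run(); err != nil {
			fmt.Printf("%q may not be in %s - removing\n", endpoint, c)
			_ = os.Remove(c) // the log uses `sudo rm -f`; this needs matching privileges
		}
	}
}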
	I0501 03:40:43.927372   69580 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0501 03:40:43.939117   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:44.098502   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:45.150125   69580 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.051581029s)
	I0501 03:40:45.150161   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:45.443307   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:45.563369   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
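The five commands above rebuild the control plane phase by phase with the pinned v1.20.0 kubeadm binary: certs, kubeconfig, kubelet-start, control-plane, etcd. A sketch of that sequence as a standalone program, assuming the same binary path and config file as in the log and that it runs as root on the node:

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	kubeadm := "/var/lib/minikube/binaries/v1.20.0/kubeadm"
	cfg := "/var/tmp/minikube/kubeadm.yaml"
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		// Each phase gets the same --config, matching the log's invocations.
		args := append(p, "--config", cfg)
		cmd := exec.Command(kubeadm, args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatalf("kubeadm %v failed: %v", p, err)
		}
	}
}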
	I0501 03:40:45.678620   69580 api_server.go:52] waiting for apiserver process to appear ...
	I0501 03:40:45.678731   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:46.179675   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:46.679449   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:47.179179   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:47.678890   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:48.179190   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:48.679276   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:49.179698   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:49.679121   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:50.179723   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:50.679603   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:51.179094   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:51.679850   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:52.179568   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:52.679100   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:53.179470   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:53.679115   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:54.178815   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:54.679769   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:55.179576   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:55.678864   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:56.179617   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:56.679034   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:57.179062   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:57.679579   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:58.179221   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:58.679728   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:59.178851   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:59.679647   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:00.179397   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:00.678839   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:01.179679   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:01.679527   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:02.179435   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:02.679626   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:03.179351   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:03.679618   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:04.179426   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:04.678853   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:05.179143   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:05.679065   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:06.179513   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:06.679246   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:07.179629   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:07.679601   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:08.179634   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:08.679603   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:09.179675   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:09.678837   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:10.178860   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:10.679638   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:11.179802   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:11.679355   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:12.178847   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:12.679660   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:13.179641   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:13.678808   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:14.178955   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:14.679651   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:15.179623   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:15.678862   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:16.179775   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:16.679614   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:17.179604   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:17.679100   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:18.179166   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:18.679202   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:19.179631   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:19.679583   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:20.179584   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:20.679493   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:21.178945   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:21.678785   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:22.179435   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:22.679615   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:23.179610   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:23.679473   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:24.179613   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:24.679672   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:25.179400   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:25.679793   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:26.179809   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:26.679430   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:27.179043   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:27.678801   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:28.179629   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:28.679111   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:29.179599   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:29.679624   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:30.179585   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:30.679442   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:31.179530   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:31.679423   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:32.179628   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:32.679456   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:33.179336   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:33.679221   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:34.178900   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:34.679236   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:35.179595   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:35.679520   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:36.179639   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:36.678883   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:37.179198   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:37.679101   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:38.179088   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:38.679354   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:39.179163   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:39.678809   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:40.179768   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:40.679046   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:41.179618   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:41.679751   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:42.178848   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:42.679525   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:43.179706   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:43.679665   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:44.179053   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:44.679615   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:45.178830   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
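The block above is the apiserver wait loop: the same pgrep check every ~500ms, and in this failing run the kube-apiserver process never appears, so after roughly a minute minikube gives up polling and starts collecting diagnostics. A minimal sketch of that loop, with waitForAPIServer as a hypothetical helper name and the timeout only inferred from the log timestamps:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// Same check as in the log; pgrep exits 0 only when a matching process exists.
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServer(60 * time.Second); err != nil {
		fmt.Println(err)
	}
}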
	I0501 03:41:45.679547   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:41:45.679620   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:41:45.718568   69580 cri.go:89] found id: ""
	I0501 03:41:45.718597   69580 logs.go:276] 0 containers: []
	W0501 03:41:45.718611   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:41:45.718619   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:41:45.718678   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:41:45.755572   69580 cri.go:89] found id: ""
	I0501 03:41:45.755596   69580 logs.go:276] 0 containers: []
	W0501 03:41:45.755604   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:41:45.755609   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:41:45.755654   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:41:45.793411   69580 cri.go:89] found id: ""
	I0501 03:41:45.793440   69580 logs.go:276] 0 containers: []
	W0501 03:41:45.793450   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:41:45.793458   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:41:45.793526   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:41:45.834547   69580 cri.go:89] found id: ""
	I0501 03:41:45.834572   69580 logs.go:276] 0 containers: []
	W0501 03:41:45.834579   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:41:45.834585   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:41:45.834668   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:41:45.873293   69580 cri.go:89] found id: ""
	I0501 03:41:45.873321   69580 logs.go:276] 0 containers: []
	W0501 03:41:45.873332   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:41:45.873348   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:41:45.873411   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:41:45.911703   69580 cri.go:89] found id: ""
	I0501 03:41:45.911734   69580 logs.go:276] 0 containers: []
	W0501 03:41:45.911745   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:41:45.911766   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:41:45.911826   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:41:45.949577   69580 cri.go:89] found id: ""
	I0501 03:41:45.949602   69580 logs.go:276] 0 containers: []
	W0501 03:41:45.949610   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:41:45.949616   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:41:45.949666   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:41:45.986174   69580 cri.go:89] found id: ""
	I0501 03:41:45.986199   69580 logs.go:276] 0 containers: []
	W0501 03:41:45.986207   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:41:45.986216   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:41:45.986228   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:41:46.041028   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:41:46.041064   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:41:46.057097   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:41:46.057126   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:41:46.195021   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:41:46.195042   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:41:46.195055   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:41:46.261153   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:41:46.261197   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
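What follows this point in the log is the same diagnostics pass repeated every few seconds: with no control-plane containers found, minikube gathers the kubelet, dmesg, and CRI-O journals, tries `kubectl describe nodes` (which fails because localhost:8443 refuses connections), and dumps container status. A sketch of one such pass, with gather as a hypothetical helper and the shell commands copied verbatim from the log lines above:

package main

import (
	"fmt"
	"os/exec"
)

func gather(name, script string) {
	fmt.Println("== " + name + " ==")
	// Run the diagnostic through bash, as the log does, and print whatever comes back.
	out, _ := exec.Command("/bin/bash", "-c", script).CombinedOutput()
	fmt.Print(string(out))
}

func main() {
	gather("kubelet", "sudo journalctl -u kubelet -n 400")
	gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
	gather("describe nodes", "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig")
	gather("CRI-O", "sudo journalctl -u crio -n 400")
	gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
}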
	I0501 03:41:48.809274   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:48.824295   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:41:48.824369   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:41:48.869945   69580 cri.go:89] found id: ""
	I0501 03:41:48.869975   69580 logs.go:276] 0 containers: []
	W0501 03:41:48.869985   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:41:48.869993   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:41:48.870053   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:41:48.918088   69580 cri.go:89] found id: ""
	I0501 03:41:48.918113   69580 logs.go:276] 0 containers: []
	W0501 03:41:48.918122   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:41:48.918131   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:41:48.918190   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:41:48.958102   69580 cri.go:89] found id: ""
	I0501 03:41:48.958132   69580 logs.go:276] 0 containers: []
	W0501 03:41:48.958143   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:41:48.958149   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:41:48.958207   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:41:48.997163   69580 cri.go:89] found id: ""
	I0501 03:41:48.997194   69580 logs.go:276] 0 containers: []
	W0501 03:41:48.997211   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:41:48.997218   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:41:48.997284   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:41:49.040132   69580 cri.go:89] found id: ""
	I0501 03:41:49.040156   69580 logs.go:276] 0 containers: []
	W0501 03:41:49.040164   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:41:49.040170   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:41:49.040228   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:41:49.079680   69580 cri.go:89] found id: ""
	I0501 03:41:49.079712   69580 logs.go:276] 0 containers: []
	W0501 03:41:49.079724   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:41:49.079732   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:41:49.079790   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:41:49.120577   69580 cri.go:89] found id: ""
	I0501 03:41:49.120610   69580 logs.go:276] 0 containers: []
	W0501 03:41:49.120623   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:41:49.120630   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:41:49.120700   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:41:49.167098   69580 cri.go:89] found id: ""
	I0501 03:41:49.167123   69580 logs.go:276] 0 containers: []
	W0501 03:41:49.167133   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:41:49.167141   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:41:49.167152   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:41:49.242834   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:41:49.242868   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:41:49.264011   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:41:49.264033   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:41:49.367711   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:41:49.367739   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:41:49.367764   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:41:49.441925   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:41:49.441964   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:41:51.986536   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:52.001651   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:41:52.001734   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:41:52.039550   69580 cri.go:89] found id: ""
	I0501 03:41:52.039571   69580 logs.go:276] 0 containers: []
	W0501 03:41:52.039579   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:41:52.039584   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:41:52.039636   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:41:52.082870   69580 cri.go:89] found id: ""
	I0501 03:41:52.082892   69580 logs.go:276] 0 containers: []
	W0501 03:41:52.082900   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:41:52.082905   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:41:52.082949   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:41:52.126970   69580 cri.go:89] found id: ""
	I0501 03:41:52.126996   69580 logs.go:276] 0 containers: []
	W0501 03:41:52.127009   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:41:52.127014   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:41:52.127076   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:41:52.169735   69580 cri.go:89] found id: ""
	I0501 03:41:52.169761   69580 logs.go:276] 0 containers: []
	W0501 03:41:52.169769   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:41:52.169774   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:41:52.169826   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:41:52.207356   69580 cri.go:89] found id: ""
	I0501 03:41:52.207392   69580 logs.go:276] 0 containers: []
	W0501 03:41:52.207404   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:41:52.207412   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:41:52.207472   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:41:52.250074   69580 cri.go:89] found id: ""
	I0501 03:41:52.250102   69580 logs.go:276] 0 containers: []
	W0501 03:41:52.250113   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:41:52.250121   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:41:52.250180   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:41:52.290525   69580 cri.go:89] found id: ""
	I0501 03:41:52.290550   69580 logs.go:276] 0 containers: []
	W0501 03:41:52.290558   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:41:52.290564   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:41:52.290610   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:41:52.336058   69580 cri.go:89] found id: ""
	I0501 03:41:52.336084   69580 logs.go:276] 0 containers: []
	W0501 03:41:52.336092   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:41:52.336103   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:41:52.336118   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:41:52.392738   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:41:52.392773   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:41:52.408475   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:41:52.408503   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:41:52.493567   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:41:52.493594   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:41:52.493608   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:41:52.566550   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:41:52.566583   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:41:55.117129   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:55.134840   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:41:55.134918   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:41:55.193990   69580 cri.go:89] found id: ""
	I0501 03:41:55.194019   69580 logs.go:276] 0 containers: []
	W0501 03:41:55.194029   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:41:55.194038   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:41:55.194100   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:41:55.261710   69580 cri.go:89] found id: ""
	I0501 03:41:55.261743   69580 logs.go:276] 0 containers: []
	W0501 03:41:55.261754   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:41:55.261761   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:41:55.261823   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:41:55.302432   69580 cri.go:89] found id: ""
	I0501 03:41:55.302468   69580 logs.go:276] 0 containers: []
	W0501 03:41:55.302480   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:41:55.302488   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:41:55.302550   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:41:55.346029   69580 cri.go:89] found id: ""
	I0501 03:41:55.346058   69580 logs.go:276] 0 containers: []
	W0501 03:41:55.346067   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:41:55.346073   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:41:55.346117   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:41:55.393206   69580 cri.go:89] found id: ""
	I0501 03:41:55.393229   69580 logs.go:276] 0 containers: []
	W0501 03:41:55.393236   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:41:55.393242   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:41:55.393295   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:41:55.437908   69580 cri.go:89] found id: ""
	I0501 03:41:55.437940   69580 logs.go:276] 0 containers: []
	W0501 03:41:55.437952   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:41:55.437960   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:41:55.438020   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:41:55.480439   69580 cri.go:89] found id: ""
	I0501 03:41:55.480468   69580 logs.go:276] 0 containers: []
	W0501 03:41:55.480480   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:41:55.480488   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:41:55.480589   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:41:55.524782   69580 cri.go:89] found id: ""
	I0501 03:41:55.524811   69580 logs.go:276] 0 containers: []
	W0501 03:41:55.524819   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:41:55.524828   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:41:55.524840   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:41:55.604337   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:41:55.604373   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:41:55.649427   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:41:55.649455   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:41:55.707928   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:41:55.707976   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:41:55.723289   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:41:55.723316   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:41:55.805146   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:41:58.306145   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:58.322207   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:41:58.322280   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:41:58.370291   69580 cri.go:89] found id: ""
	I0501 03:41:58.370319   69580 logs.go:276] 0 containers: []
	W0501 03:41:58.370331   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:41:58.370338   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:41:58.370417   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:41:58.421230   69580 cri.go:89] found id: ""
	I0501 03:41:58.421256   69580 logs.go:276] 0 containers: []
	W0501 03:41:58.421264   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:41:58.421270   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:41:58.421317   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:41:58.463694   69580 cri.go:89] found id: ""
	I0501 03:41:58.463724   69580 logs.go:276] 0 containers: []
	W0501 03:41:58.463735   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:41:58.463743   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:41:58.463797   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:41:58.507756   69580 cri.go:89] found id: ""
	I0501 03:41:58.507785   69580 logs.go:276] 0 containers: []
	W0501 03:41:58.507791   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:41:58.507797   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:41:58.507870   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:41:58.554852   69580 cri.go:89] found id: ""
	I0501 03:41:58.554884   69580 logs.go:276] 0 containers: []
	W0501 03:41:58.554895   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:41:58.554903   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:41:58.554969   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:41:58.602467   69580 cri.go:89] found id: ""
	I0501 03:41:58.602495   69580 logs.go:276] 0 containers: []
	W0501 03:41:58.602505   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:41:58.602511   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:41:58.602561   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:41:58.652718   69580 cri.go:89] found id: ""
	I0501 03:41:58.652749   69580 logs.go:276] 0 containers: []
	W0501 03:41:58.652759   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:41:58.652766   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:41:58.652837   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:41:58.694351   69580 cri.go:89] found id: ""
	I0501 03:41:58.694377   69580 logs.go:276] 0 containers: []
	W0501 03:41:58.694385   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:41:58.694393   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:41:58.694434   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:41:58.779878   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:41:58.779911   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:41:58.826733   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:41:58.826768   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:41:58.883808   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:41:58.883842   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:41:58.900463   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:41:58.900495   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:41:58.991346   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:01.492396   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:01.508620   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:01.508756   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:01.555669   69580 cri.go:89] found id: ""
	I0501 03:42:01.555696   69580 logs.go:276] 0 containers: []
	W0501 03:42:01.555712   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:01.555720   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:01.555782   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:01.597591   69580 cri.go:89] found id: ""
	I0501 03:42:01.597615   69580 logs.go:276] 0 containers: []
	W0501 03:42:01.597626   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:01.597635   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:01.597693   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:01.636259   69580 cri.go:89] found id: ""
	I0501 03:42:01.636286   69580 logs.go:276] 0 containers: []
	W0501 03:42:01.636297   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:01.636305   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:01.636361   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:01.684531   69580 cri.go:89] found id: ""
	I0501 03:42:01.684562   69580 logs.go:276] 0 containers: []
	W0501 03:42:01.684572   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:01.684579   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:01.684647   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:01.725591   69580 cri.go:89] found id: ""
	I0501 03:42:01.725621   69580 logs.go:276] 0 containers: []
	W0501 03:42:01.725628   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:01.725652   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:01.725718   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:01.767868   69580 cri.go:89] found id: ""
	I0501 03:42:01.767901   69580 logs.go:276] 0 containers: []
	W0501 03:42:01.767910   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:01.767917   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:01.767977   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:01.817590   69580 cri.go:89] found id: ""
	I0501 03:42:01.817618   69580 logs.go:276] 0 containers: []
	W0501 03:42:01.817629   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:01.817637   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:01.817697   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:01.863549   69580 cri.go:89] found id: ""
	I0501 03:42:01.863576   69580 logs.go:276] 0 containers: []
	W0501 03:42:01.863586   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:01.863595   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:01.863607   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:01.879134   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:01.879162   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:01.967015   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:01.967043   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:01.967059   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:02.051576   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:02.051614   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:02.095614   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:02.095644   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:04.652974   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:04.671018   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:04.671103   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:04.712392   69580 cri.go:89] found id: ""
	I0501 03:42:04.712425   69580 logs.go:276] 0 containers: []
	W0501 03:42:04.712435   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:04.712442   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:04.712503   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:04.756854   69580 cri.go:89] found id: ""
	I0501 03:42:04.756881   69580 logs.go:276] 0 containers: []
	W0501 03:42:04.756893   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:04.756900   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:04.756962   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:04.797665   69580 cri.go:89] found id: ""
	I0501 03:42:04.797694   69580 logs.go:276] 0 containers: []
	W0501 03:42:04.797703   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:04.797709   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:04.797756   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:04.838441   69580 cri.go:89] found id: ""
	I0501 03:42:04.838472   69580 logs.go:276] 0 containers: []
	W0501 03:42:04.838483   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:04.838491   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:04.838556   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:04.879905   69580 cri.go:89] found id: ""
	I0501 03:42:04.879935   69580 logs.go:276] 0 containers: []
	W0501 03:42:04.879945   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:04.879952   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:04.880012   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:04.924759   69580 cri.go:89] found id: ""
	I0501 03:42:04.924792   69580 logs.go:276] 0 containers: []
	W0501 03:42:04.924804   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:04.924813   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:04.924879   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:04.965638   69580 cri.go:89] found id: ""
	I0501 03:42:04.965663   69580 logs.go:276] 0 containers: []
	W0501 03:42:04.965670   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:04.965676   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:04.965721   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:05.013127   69580 cri.go:89] found id: ""
	I0501 03:42:05.013153   69580 logs.go:276] 0 containers: []
	W0501 03:42:05.013163   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:05.013173   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:05.013185   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:05.108388   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:05.108409   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:05.108422   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:05.198239   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:05.198281   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:05.241042   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:05.241076   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:05.299017   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:05.299069   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:07.815458   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:07.832047   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:07.832125   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:07.882950   69580 cri.go:89] found id: ""
	I0501 03:42:07.882985   69580 logs.go:276] 0 containers: []
	W0501 03:42:07.882996   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:07.883002   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:07.883051   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:07.928086   69580 cri.go:89] found id: ""
	I0501 03:42:07.928111   69580 logs.go:276] 0 containers: []
	W0501 03:42:07.928119   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:07.928124   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:07.928177   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:07.976216   69580 cri.go:89] found id: ""
	I0501 03:42:07.976250   69580 logs.go:276] 0 containers: []
	W0501 03:42:07.976268   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:07.976274   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:07.976331   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:08.019903   69580 cri.go:89] found id: ""
	I0501 03:42:08.019932   69580 logs.go:276] 0 containers: []
	W0501 03:42:08.019943   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:08.019951   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:08.020009   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:08.075980   69580 cri.go:89] found id: ""
	I0501 03:42:08.076004   69580 logs.go:276] 0 containers: []
	W0501 03:42:08.076012   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:08.076018   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:08.076065   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:08.114849   69580 cri.go:89] found id: ""
	I0501 03:42:08.114881   69580 logs.go:276] 0 containers: []
	W0501 03:42:08.114891   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:08.114897   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:08.114955   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:08.159427   69580 cri.go:89] found id: ""
	I0501 03:42:08.159457   69580 logs.go:276] 0 containers: []
	W0501 03:42:08.159468   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:08.159476   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:08.159543   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:08.200117   69580 cri.go:89] found id: ""
	I0501 03:42:08.200151   69580 logs.go:276] 0 containers: []
	W0501 03:42:08.200163   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:08.200182   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:08.200197   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:08.281926   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:08.281972   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:08.331393   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:08.331429   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:08.386758   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:08.386793   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:08.402551   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:08.402581   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:08.489678   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:10.990653   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:11.007879   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:11.007958   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:11.049842   69580 cri.go:89] found id: ""
	I0501 03:42:11.049867   69580 logs.go:276] 0 containers: []
	W0501 03:42:11.049879   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:11.049885   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:11.049933   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:11.091946   69580 cri.go:89] found id: ""
	I0501 03:42:11.091980   69580 logs.go:276] 0 containers: []
	W0501 03:42:11.091992   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:11.092000   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:11.092079   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:11.140100   69580 cri.go:89] found id: ""
	I0501 03:42:11.140129   69580 logs.go:276] 0 containers: []
	W0501 03:42:11.140138   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:11.140144   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:11.140207   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:11.182796   69580 cri.go:89] found id: ""
	I0501 03:42:11.182821   69580 logs.go:276] 0 containers: []
	W0501 03:42:11.182832   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:11.182838   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:11.182896   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:11.222985   69580 cri.go:89] found id: ""
	I0501 03:42:11.223016   69580 logs.go:276] 0 containers: []
	W0501 03:42:11.223027   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:11.223033   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:11.223114   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:11.265793   69580 cri.go:89] found id: ""
	I0501 03:42:11.265818   69580 logs.go:276] 0 containers: []
	W0501 03:42:11.265830   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:11.265838   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:11.265913   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:11.309886   69580 cri.go:89] found id: ""
	I0501 03:42:11.309912   69580 logs.go:276] 0 containers: []
	W0501 03:42:11.309924   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:11.309931   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:11.309989   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:11.357757   69580 cri.go:89] found id: ""
	I0501 03:42:11.357791   69580 logs.go:276] 0 containers: []
	W0501 03:42:11.357803   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:11.357823   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:11.357839   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:11.412668   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:11.412704   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:11.428380   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:11.428422   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:11.521898   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:11.521924   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:11.521940   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:11.607081   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:11.607116   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:14.153054   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:14.173046   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:14.173150   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:14.219583   69580 cri.go:89] found id: ""
	I0501 03:42:14.219605   69580 logs.go:276] 0 containers: []
	W0501 03:42:14.219613   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:14.219619   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:14.219664   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:14.260316   69580 cri.go:89] found id: ""
	I0501 03:42:14.260349   69580 logs.go:276] 0 containers: []
	W0501 03:42:14.260357   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:14.260366   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:14.260420   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:14.305049   69580 cri.go:89] found id: ""
	I0501 03:42:14.305085   69580 logs.go:276] 0 containers: []
	W0501 03:42:14.305109   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:14.305117   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:14.305198   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:14.359589   69580 cri.go:89] found id: ""
	I0501 03:42:14.359614   69580 logs.go:276] 0 containers: []
	W0501 03:42:14.359622   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:14.359628   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:14.359672   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:14.403867   69580 cri.go:89] found id: ""
	I0501 03:42:14.403895   69580 logs.go:276] 0 containers: []
	W0501 03:42:14.403904   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:14.403910   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:14.403987   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:14.446626   69580 cri.go:89] found id: ""
	I0501 03:42:14.446655   69580 logs.go:276] 0 containers: []
	W0501 03:42:14.446675   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:14.446683   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:14.446754   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:14.490983   69580 cri.go:89] found id: ""
	I0501 03:42:14.491016   69580 logs.go:276] 0 containers: []
	W0501 03:42:14.491028   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:14.491036   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:14.491117   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:14.534180   69580 cri.go:89] found id: ""
	I0501 03:42:14.534205   69580 logs.go:276] 0 containers: []
	W0501 03:42:14.534213   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:14.534221   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:14.534236   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:14.621433   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:14.621491   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:14.680265   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:14.680310   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:14.738943   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:14.738983   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:14.754145   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:14.754176   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:14.839974   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:17.340948   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:17.360007   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:17.360068   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:17.403201   69580 cri.go:89] found id: ""
	I0501 03:42:17.403231   69580 logs.go:276] 0 containers: []
	W0501 03:42:17.403239   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:17.403245   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:17.403301   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:17.442940   69580 cri.go:89] found id: ""
	I0501 03:42:17.442966   69580 logs.go:276] 0 containers: []
	W0501 03:42:17.442975   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:17.442981   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:17.443038   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:17.487219   69580 cri.go:89] found id: ""
	I0501 03:42:17.487248   69580 logs.go:276] 0 containers: []
	W0501 03:42:17.487259   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:17.487267   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:17.487324   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:17.528551   69580 cri.go:89] found id: ""
	I0501 03:42:17.528583   69580 logs.go:276] 0 containers: []
	W0501 03:42:17.528593   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:17.528601   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:17.528668   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:17.577005   69580 cri.go:89] found id: ""
	I0501 03:42:17.577041   69580 logs.go:276] 0 containers: []
	W0501 03:42:17.577052   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:17.577061   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:17.577132   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:17.618924   69580 cri.go:89] found id: ""
	I0501 03:42:17.618949   69580 logs.go:276] 0 containers: []
	W0501 03:42:17.618957   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:17.618963   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:17.619022   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:17.660487   69580 cri.go:89] found id: ""
	I0501 03:42:17.660514   69580 logs.go:276] 0 containers: []
	W0501 03:42:17.660525   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:17.660532   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:17.660592   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:17.701342   69580 cri.go:89] found id: ""
	I0501 03:42:17.701370   69580 logs.go:276] 0 containers: []
	W0501 03:42:17.701378   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:17.701387   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:17.701400   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:17.757034   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:17.757069   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:17.772955   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:17.772984   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:17.888062   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:17.888088   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:17.888101   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:17.969274   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:17.969312   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:20.521053   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:20.536065   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:20.536141   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:20.577937   69580 cri.go:89] found id: ""
	I0501 03:42:20.577967   69580 logs.go:276] 0 containers: []
	W0501 03:42:20.577977   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:20.577986   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:20.578055   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:20.626690   69580 cri.go:89] found id: ""
	I0501 03:42:20.626714   69580 logs.go:276] 0 containers: []
	W0501 03:42:20.626722   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:20.626728   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:20.626809   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:20.670849   69580 cri.go:89] found id: ""
	I0501 03:42:20.670872   69580 logs.go:276] 0 containers: []
	W0501 03:42:20.670881   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:20.670886   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:20.670946   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:20.711481   69580 cri.go:89] found id: ""
	I0501 03:42:20.711511   69580 logs.go:276] 0 containers: []
	W0501 03:42:20.711522   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:20.711531   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:20.711596   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:20.753413   69580 cri.go:89] found id: ""
	I0501 03:42:20.753443   69580 logs.go:276] 0 containers: []
	W0501 03:42:20.753452   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:20.753459   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:20.753536   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:20.791424   69580 cri.go:89] found id: ""
	I0501 03:42:20.791452   69580 logs.go:276] 0 containers: []
	W0501 03:42:20.791461   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:20.791466   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:20.791526   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:20.833718   69580 cri.go:89] found id: ""
	I0501 03:42:20.833740   69580 logs.go:276] 0 containers: []
	W0501 03:42:20.833748   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:20.833752   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:20.833799   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:20.879788   69580 cri.go:89] found id: ""
	I0501 03:42:20.879818   69580 logs.go:276] 0 containers: []
	W0501 03:42:20.879828   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:20.879839   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:20.879855   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:20.895266   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:20.895304   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:20.976429   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:20.976452   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:20.976465   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:21.063573   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:21.063611   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:21.113510   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:21.113543   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:23.672203   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:23.687849   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:23.687946   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:23.731428   69580 cri.go:89] found id: ""
	I0501 03:42:23.731455   69580 logs.go:276] 0 containers: []
	W0501 03:42:23.731467   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:23.731473   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:23.731534   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:23.772219   69580 cri.go:89] found id: ""
	I0501 03:42:23.772248   69580 logs.go:276] 0 containers: []
	W0501 03:42:23.772259   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:23.772266   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:23.772369   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:23.837203   69580 cri.go:89] found id: ""
	I0501 03:42:23.837235   69580 logs.go:276] 0 containers: []
	W0501 03:42:23.837247   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:23.837255   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:23.837317   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:23.884681   69580 cri.go:89] found id: ""
	I0501 03:42:23.884709   69580 logs.go:276] 0 containers: []
	W0501 03:42:23.884716   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:23.884722   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:23.884783   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:23.927544   69580 cri.go:89] found id: ""
	I0501 03:42:23.927576   69580 logs.go:276] 0 containers: []
	W0501 03:42:23.927584   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:23.927590   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:23.927652   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:23.970428   69580 cri.go:89] found id: ""
	I0501 03:42:23.970457   69580 logs.go:276] 0 containers: []
	W0501 03:42:23.970467   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:23.970476   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:23.970541   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:24.010545   69580 cri.go:89] found id: ""
	I0501 03:42:24.010573   69580 logs.go:276] 0 containers: []
	W0501 03:42:24.010583   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:24.010593   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:24.010653   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:24.053547   69580 cri.go:89] found id: ""
	I0501 03:42:24.053574   69580 logs.go:276] 0 containers: []
	W0501 03:42:24.053582   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:24.053591   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:24.053602   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:24.108416   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:24.108452   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:24.124052   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:24.124083   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:24.209024   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:24.209048   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:24.209063   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:24.291644   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:24.291693   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:26.840623   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:26.856231   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:26.856320   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:26.897988   69580 cri.go:89] found id: ""
	I0501 03:42:26.898022   69580 logs.go:276] 0 containers: []
	W0501 03:42:26.898033   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:26.898041   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:26.898109   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:26.937608   69580 cri.go:89] found id: ""
	I0501 03:42:26.937638   69580 logs.go:276] 0 containers: []
	W0501 03:42:26.937660   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:26.937668   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:26.937731   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:26.979799   69580 cri.go:89] found id: ""
	I0501 03:42:26.979836   69580 logs.go:276] 0 containers: []
	W0501 03:42:26.979847   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:26.979854   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:26.979922   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:27.018863   69580 cri.go:89] found id: ""
	I0501 03:42:27.018896   69580 logs.go:276] 0 containers: []
	W0501 03:42:27.018903   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:27.018909   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:27.018959   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:27.057864   69580 cri.go:89] found id: ""
	I0501 03:42:27.057893   69580 logs.go:276] 0 containers: []
	W0501 03:42:27.057904   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:27.057912   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:27.057982   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:27.102909   69580 cri.go:89] found id: ""
	I0501 03:42:27.102939   69580 logs.go:276] 0 containers: []
	W0501 03:42:27.102950   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:27.102958   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:27.103019   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:27.148292   69580 cri.go:89] found id: ""
	I0501 03:42:27.148326   69580 logs.go:276] 0 containers: []
	W0501 03:42:27.148336   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:27.148344   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:27.148407   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:27.197557   69580 cri.go:89] found id: ""
	I0501 03:42:27.197581   69580 logs.go:276] 0 containers: []
	W0501 03:42:27.197588   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:27.197596   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:27.197609   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:27.281768   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:27.281793   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:27.281806   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:27.361496   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:27.361528   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:27.407640   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:27.407675   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:27.472533   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:27.472576   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:29.987773   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:30.003511   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:30.003619   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:30.049330   69580 cri.go:89] found id: ""
	I0501 03:42:30.049363   69580 logs.go:276] 0 containers: []
	W0501 03:42:30.049377   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:30.049384   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:30.049439   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:30.088521   69580 cri.go:89] found id: ""
	I0501 03:42:30.088549   69580 logs.go:276] 0 containers: []
	W0501 03:42:30.088560   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:30.088568   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:30.088624   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:30.132731   69580 cri.go:89] found id: ""
	I0501 03:42:30.132765   69580 logs.go:276] 0 containers: []
	W0501 03:42:30.132777   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:30.132784   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:30.132847   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:30.178601   69580 cri.go:89] found id: ""
	I0501 03:42:30.178639   69580 logs.go:276] 0 containers: []
	W0501 03:42:30.178648   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:30.178656   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:30.178714   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:30.230523   69580 cri.go:89] found id: ""
	I0501 03:42:30.230549   69580 logs.go:276] 0 containers: []
	W0501 03:42:30.230561   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:30.230569   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:30.230632   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:30.289234   69580 cri.go:89] found id: ""
	I0501 03:42:30.289262   69580 logs.go:276] 0 containers: []
	W0501 03:42:30.289270   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:30.289277   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:30.289342   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:30.332596   69580 cri.go:89] found id: ""
	I0501 03:42:30.332627   69580 logs.go:276] 0 containers: []
	W0501 03:42:30.332637   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:30.332644   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:30.332710   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:30.383871   69580 cri.go:89] found id: ""
	I0501 03:42:30.383901   69580 logs.go:276] 0 containers: []
	W0501 03:42:30.383908   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:30.383917   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:30.383929   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:30.464382   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:30.464404   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:30.464417   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:30.550604   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:30.550637   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:30.594927   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:30.594959   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:30.648392   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:30.648426   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:33.167591   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:33.183804   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:33.183874   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:33.223501   69580 cri.go:89] found id: ""
	I0501 03:42:33.223525   69580 logs.go:276] 0 containers: []
	W0501 03:42:33.223532   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:33.223539   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:33.223600   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:33.268674   69580 cri.go:89] found id: ""
	I0501 03:42:33.268705   69580 logs.go:276] 0 containers: []
	W0501 03:42:33.268741   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:33.268749   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:33.268807   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:33.310613   69580 cri.go:89] found id: ""
	I0501 03:42:33.310655   69580 logs.go:276] 0 containers: []
	W0501 03:42:33.310666   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:33.310674   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:33.310737   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:33.353156   69580 cri.go:89] found id: ""
	I0501 03:42:33.353177   69580 logs.go:276] 0 containers: []
	W0501 03:42:33.353184   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:33.353189   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:33.353237   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:33.389702   69580 cri.go:89] found id: ""
	I0501 03:42:33.389730   69580 logs.go:276] 0 containers: []
	W0501 03:42:33.389743   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:33.389751   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:33.389817   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:33.431244   69580 cri.go:89] found id: ""
	I0501 03:42:33.431275   69580 logs.go:276] 0 containers: []
	W0501 03:42:33.431290   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:33.431298   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:33.431384   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:33.472382   69580 cri.go:89] found id: ""
	I0501 03:42:33.472412   69580 logs.go:276] 0 containers: []
	W0501 03:42:33.472423   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:33.472431   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:33.472519   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:33.517042   69580 cri.go:89] found id: ""
	I0501 03:42:33.517064   69580 logs.go:276] 0 containers: []
	W0501 03:42:33.517071   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:33.517079   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:33.517091   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:33.573343   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:33.573372   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:33.588932   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:33.588963   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:33.674060   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:33.674090   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:33.674106   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:33.756635   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:33.756684   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:36.300909   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:36.320407   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:36.320474   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:36.367236   69580 cri.go:89] found id: ""
	I0501 03:42:36.367261   69580 logs.go:276] 0 containers: []
	W0501 03:42:36.367269   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:36.367274   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:36.367335   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:36.406440   69580 cri.go:89] found id: ""
	I0501 03:42:36.406471   69580 logs.go:276] 0 containers: []
	W0501 03:42:36.406482   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:36.406489   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:36.406552   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:36.443931   69580 cri.go:89] found id: ""
	I0501 03:42:36.443957   69580 logs.go:276] 0 containers: []
	W0501 03:42:36.443964   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:36.443969   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:36.444024   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:36.486169   69580 cri.go:89] found id: ""
	I0501 03:42:36.486200   69580 logs.go:276] 0 containers: []
	W0501 03:42:36.486213   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:36.486220   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:36.486276   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:36.532211   69580 cri.go:89] found id: ""
	I0501 03:42:36.532237   69580 logs.go:276] 0 containers: []
	W0501 03:42:36.532246   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:36.532251   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:36.532311   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:36.571889   69580 cri.go:89] found id: ""
	I0501 03:42:36.571921   69580 logs.go:276] 0 containers: []
	W0501 03:42:36.571933   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:36.571940   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:36.572000   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:36.612126   69580 cri.go:89] found id: ""
	I0501 03:42:36.612159   69580 logs.go:276] 0 containers: []
	W0501 03:42:36.612170   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:36.612177   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:36.612238   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:36.654067   69580 cri.go:89] found id: ""
	I0501 03:42:36.654096   69580 logs.go:276] 0 containers: []
	W0501 03:42:36.654106   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:36.654117   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:36.654129   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:36.740205   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:36.740226   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:36.740237   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:36.821403   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:36.821437   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:36.874829   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:36.874867   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:36.928312   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:36.928342   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:39.444598   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:39.460086   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:39.460151   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:39.500833   69580 cri.go:89] found id: ""
	I0501 03:42:39.500859   69580 logs.go:276] 0 containers: []
	W0501 03:42:39.500870   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:39.500879   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:39.500936   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:39.544212   69580 cri.go:89] found id: ""
	I0501 03:42:39.544238   69580 logs.go:276] 0 containers: []
	W0501 03:42:39.544248   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:39.544260   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:39.544326   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:39.582167   69580 cri.go:89] found id: ""
	I0501 03:42:39.582200   69580 logs.go:276] 0 containers: []
	W0501 03:42:39.582218   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:39.582231   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:39.582296   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:39.624811   69580 cri.go:89] found id: ""
	I0501 03:42:39.624837   69580 logs.go:276] 0 containers: []
	W0501 03:42:39.624848   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:39.624855   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:39.624913   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:39.666001   69580 cri.go:89] found id: ""
	I0501 03:42:39.666030   69580 logs.go:276] 0 containers: []
	W0501 03:42:39.666041   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:39.666048   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:39.666111   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:39.708790   69580 cri.go:89] found id: ""
	I0501 03:42:39.708820   69580 logs.go:276] 0 containers: []
	W0501 03:42:39.708831   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:39.708839   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:39.708896   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:39.750585   69580 cri.go:89] found id: ""
	I0501 03:42:39.750609   69580 logs.go:276] 0 containers: []
	W0501 03:42:39.750617   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:39.750622   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:39.750670   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:39.798576   69580 cri.go:89] found id: ""
	I0501 03:42:39.798612   69580 logs.go:276] 0 containers: []
	W0501 03:42:39.798624   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:39.798636   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:39.798651   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:39.891759   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:39.891782   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:39.891797   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:39.974419   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:39.974462   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:40.020700   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:40.020728   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:40.073946   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:40.073980   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:42.590933   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:42.606044   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:42.606120   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:42.653074   69580 cri.go:89] found id: ""
	I0501 03:42:42.653104   69580 logs.go:276] 0 containers: []
	W0501 03:42:42.653115   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:42.653123   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:42.653195   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:42.693770   69580 cri.go:89] found id: ""
	I0501 03:42:42.693809   69580 logs.go:276] 0 containers: []
	W0501 03:42:42.693821   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:42.693829   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:42.693885   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:42.739087   69580 cri.go:89] found id: ""
	I0501 03:42:42.739115   69580 logs.go:276] 0 containers: []
	W0501 03:42:42.739125   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:42.739133   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:42.739196   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:42.779831   69580 cri.go:89] found id: ""
	I0501 03:42:42.779863   69580 logs.go:276] 0 containers: []
	W0501 03:42:42.779876   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:42.779885   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:42.779950   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:42.826759   69580 cri.go:89] found id: ""
	I0501 03:42:42.826791   69580 logs.go:276] 0 containers: []
	W0501 03:42:42.826799   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:42.826804   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:42.826854   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:42.872602   69580 cri.go:89] found id: ""
	I0501 03:42:42.872629   69580 logs.go:276] 0 containers: []
	W0501 03:42:42.872640   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:42.872648   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:42.872707   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:42.913833   69580 cri.go:89] found id: ""
	I0501 03:42:42.913862   69580 logs.go:276] 0 containers: []
	W0501 03:42:42.913872   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:42.913879   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:42.913936   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:42.953629   69580 cri.go:89] found id: ""
	I0501 03:42:42.953657   69580 logs.go:276] 0 containers: []
	W0501 03:42:42.953667   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:42.953679   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:42.953695   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:42.968420   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:42.968447   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:43.046840   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:43.046874   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:43.046898   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:43.135453   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:43.135492   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:43.184103   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:43.184141   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:45.738246   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:45.753193   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:45.753258   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:45.791191   69580 cri.go:89] found id: ""
	I0501 03:42:45.791216   69580 logs.go:276] 0 containers: []
	W0501 03:42:45.791224   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:45.791236   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:45.791285   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:45.831935   69580 cri.go:89] found id: ""
	I0501 03:42:45.831967   69580 logs.go:276] 0 containers: []
	W0501 03:42:45.831978   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:45.831986   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:45.832041   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:45.869492   69580 cri.go:89] found id: ""
	I0501 03:42:45.869517   69580 logs.go:276] 0 containers: []
	W0501 03:42:45.869529   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:45.869536   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:45.869593   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:45.910642   69580 cri.go:89] found id: ""
	I0501 03:42:45.910672   69580 logs.go:276] 0 containers: []
	W0501 03:42:45.910682   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:45.910691   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:45.910754   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:45.951489   69580 cri.go:89] found id: ""
	I0501 03:42:45.951518   69580 logs.go:276] 0 containers: []
	W0501 03:42:45.951528   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:45.951535   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:45.951582   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:45.991388   69580 cri.go:89] found id: ""
	I0501 03:42:45.991410   69580 logs.go:276] 0 containers: []
	W0501 03:42:45.991418   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:45.991423   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:45.991467   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:46.036524   69580 cri.go:89] found id: ""
	I0501 03:42:46.036546   69580 logs.go:276] 0 containers: []
	W0501 03:42:46.036553   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:46.036560   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:46.036622   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:46.087472   69580 cri.go:89] found id: ""
	I0501 03:42:46.087495   69580 logs.go:276] 0 containers: []
	W0501 03:42:46.087504   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:46.087513   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:46.087526   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:46.101283   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:46.101314   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:46.176459   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:46.176491   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:46.176506   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:46.261921   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:46.261956   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:46.309879   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:46.309910   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:48.867064   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:48.884082   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:48.884192   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:48.929681   69580 cri.go:89] found id: ""
	I0501 03:42:48.929708   69580 logs.go:276] 0 containers: []
	W0501 03:42:48.929716   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:48.929722   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:48.929789   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:48.977850   69580 cri.go:89] found id: ""
	I0501 03:42:48.977882   69580 logs.go:276] 0 containers: []
	W0501 03:42:48.977894   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:48.977901   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:48.977962   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:49.022590   69580 cri.go:89] found id: ""
	I0501 03:42:49.022619   69580 logs.go:276] 0 containers: []
	W0501 03:42:49.022629   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:49.022637   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:49.022706   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:49.064092   69580 cri.go:89] found id: ""
	I0501 03:42:49.064122   69580 logs.go:276] 0 containers: []
	W0501 03:42:49.064143   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:49.064152   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:49.064220   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:49.103962   69580 cri.go:89] found id: ""
	I0501 03:42:49.103990   69580 logs.go:276] 0 containers: []
	W0501 03:42:49.104002   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:49.104009   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:49.104070   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:49.144566   69580 cri.go:89] found id: ""
	I0501 03:42:49.144596   69580 logs.go:276] 0 containers: []
	W0501 03:42:49.144604   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:49.144610   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:49.144669   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:49.183110   69580 cri.go:89] found id: ""
	I0501 03:42:49.183141   69580 logs.go:276] 0 containers: []
	W0501 03:42:49.183161   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:49.183166   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:49.183239   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:49.225865   69580 cri.go:89] found id: ""
	I0501 03:42:49.225890   69580 logs.go:276] 0 containers: []
	W0501 03:42:49.225902   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:49.225912   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:49.225926   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:49.312967   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:49.313005   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:49.361171   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:49.361206   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:49.418731   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:49.418780   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:49.436976   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:49.437007   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:49.517994   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:52.018675   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:52.033946   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:52.034022   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:52.081433   69580 cri.go:89] found id: ""
	I0501 03:42:52.081465   69580 logs.go:276] 0 containers: []
	W0501 03:42:52.081477   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:52.081485   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:52.081544   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:52.123914   69580 cri.go:89] found id: ""
	I0501 03:42:52.123947   69580 logs.go:276] 0 containers: []
	W0501 03:42:52.123958   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:52.123966   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:52.124023   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:52.164000   69580 cri.go:89] found id: ""
	I0501 03:42:52.164020   69580 logs.go:276] 0 containers: []
	W0501 03:42:52.164027   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:52.164033   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:52.164086   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:52.205984   69580 cri.go:89] found id: ""
	I0501 03:42:52.206011   69580 logs.go:276] 0 containers: []
	W0501 03:42:52.206023   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:52.206031   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:52.206096   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:52.252743   69580 cri.go:89] found id: ""
	I0501 03:42:52.252766   69580 logs.go:276] 0 containers: []
	W0501 03:42:52.252774   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:52.252779   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:52.252839   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:52.296814   69580 cri.go:89] found id: ""
	I0501 03:42:52.296838   69580 logs.go:276] 0 containers: []
	W0501 03:42:52.296856   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:52.296864   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:52.296928   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:52.335996   69580 cri.go:89] found id: ""
	I0501 03:42:52.336023   69580 logs.go:276] 0 containers: []
	W0501 03:42:52.336034   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:52.336042   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:52.336105   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:52.377470   69580 cri.go:89] found id: ""
	I0501 03:42:52.377498   69580 logs.go:276] 0 containers: []
	W0501 03:42:52.377513   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:52.377524   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:52.377540   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:52.432644   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:52.432680   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:52.447518   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:52.447552   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:52.530967   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:52.530992   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:52.531005   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:52.612280   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:52.612327   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:55.170134   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:55.185252   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:55.185328   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:55.227741   69580 cri.go:89] found id: ""
	I0501 03:42:55.227764   69580 logs.go:276] 0 containers: []
	W0501 03:42:55.227771   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:55.227777   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:55.227820   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:55.270796   69580 cri.go:89] found id: ""
	I0501 03:42:55.270823   69580 logs.go:276] 0 containers: []
	W0501 03:42:55.270834   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:55.270840   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:55.270898   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:55.312146   69580 cri.go:89] found id: ""
	I0501 03:42:55.312171   69580 logs.go:276] 0 containers: []
	W0501 03:42:55.312180   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:55.312190   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:55.312236   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:55.354410   69580 cri.go:89] found id: ""
	I0501 03:42:55.354436   69580 logs.go:276] 0 containers: []
	W0501 03:42:55.354445   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:55.354450   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:55.354509   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:55.393550   69580 cri.go:89] found id: ""
	I0501 03:42:55.393580   69580 logs.go:276] 0 containers: []
	W0501 03:42:55.393589   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:55.393594   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:55.393651   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:55.431468   69580 cri.go:89] found id: ""
	I0501 03:42:55.431497   69580 logs.go:276] 0 containers: []
	W0501 03:42:55.431507   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:55.431514   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:55.431566   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:55.470491   69580 cri.go:89] found id: ""
	I0501 03:42:55.470513   69580 logs.go:276] 0 containers: []
	W0501 03:42:55.470520   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:55.470526   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:55.470571   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:55.509849   69580 cri.go:89] found id: ""
	I0501 03:42:55.509875   69580 logs.go:276] 0 containers: []
	W0501 03:42:55.509885   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:55.509894   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:55.509909   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:55.566680   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:55.566762   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:55.584392   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:55.584423   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:55.663090   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:55.663116   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:55.663131   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:55.741459   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:55.741494   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:58.294435   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:58.310204   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:58.310267   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:58.350292   69580 cri.go:89] found id: ""
	I0501 03:42:58.350322   69580 logs.go:276] 0 containers: []
	W0501 03:42:58.350334   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:58.350343   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:58.350431   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:58.395998   69580 cri.go:89] found id: ""
	I0501 03:42:58.396029   69580 logs.go:276] 0 containers: []
	W0501 03:42:58.396041   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:58.396049   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:58.396131   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:58.434371   69580 cri.go:89] found id: ""
	I0501 03:42:58.434414   69580 logs.go:276] 0 containers: []
	W0501 03:42:58.434427   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:58.434434   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:58.434493   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:58.473457   69580 cri.go:89] found id: ""
	I0501 03:42:58.473489   69580 logs.go:276] 0 containers: []
	W0501 03:42:58.473499   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:58.473507   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:58.473572   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:58.515172   69580 cri.go:89] found id: ""
	I0501 03:42:58.515201   69580 logs.go:276] 0 containers: []
	W0501 03:42:58.515212   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:58.515221   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:58.515291   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:58.560305   69580 cri.go:89] found id: ""
	I0501 03:42:58.560333   69580 logs.go:276] 0 containers: []
	W0501 03:42:58.560341   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:58.560348   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:58.560407   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:58.617980   69580 cri.go:89] found id: ""
	I0501 03:42:58.618005   69580 logs.go:276] 0 containers: []
	W0501 03:42:58.618013   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:58.618019   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:58.618080   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:58.659800   69580 cri.go:89] found id: ""
	I0501 03:42:58.659827   69580 logs.go:276] 0 containers: []
	W0501 03:42:58.659838   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:58.659848   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:58.659862   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:58.718134   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:58.718169   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:58.733972   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:58.734001   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:58.813055   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:58.813082   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:58.813099   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:58.897293   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:58.897331   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:01.442980   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:01.459602   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:01.459687   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:01.502817   69580 cri.go:89] found id: ""
	I0501 03:43:01.502848   69580 logs.go:276] 0 containers: []
	W0501 03:43:01.502857   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:01.502863   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:01.502924   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:01.547251   69580 cri.go:89] found id: ""
	I0501 03:43:01.547289   69580 logs.go:276] 0 containers: []
	W0501 03:43:01.547301   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:01.547308   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:01.547376   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:01.590179   69580 cri.go:89] found id: ""
	I0501 03:43:01.590211   69580 logs.go:276] 0 containers: []
	W0501 03:43:01.590221   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:01.590228   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:01.590296   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:01.628772   69580 cri.go:89] found id: ""
	I0501 03:43:01.628814   69580 logs.go:276] 0 containers: []
	W0501 03:43:01.628826   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:01.628834   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:01.628893   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:01.677414   69580 cri.go:89] found id: ""
	I0501 03:43:01.677440   69580 logs.go:276] 0 containers: []
	W0501 03:43:01.677448   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:01.677453   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:01.677500   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:01.723107   69580 cri.go:89] found id: ""
	I0501 03:43:01.723139   69580 logs.go:276] 0 containers: []
	W0501 03:43:01.723152   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:01.723160   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:01.723225   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:01.771846   69580 cri.go:89] found id: ""
	I0501 03:43:01.771873   69580 logs.go:276] 0 containers: []
	W0501 03:43:01.771883   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:01.771890   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:01.771952   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:01.818145   69580 cri.go:89] found id: ""
	I0501 03:43:01.818179   69580 logs.go:276] 0 containers: []
	W0501 03:43:01.818191   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:01.818202   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:01.818218   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:01.881502   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:01.881546   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:01.897580   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:01.897614   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:01.981959   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:01.981980   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:01.981996   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:02.066228   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:02.066269   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:04.609855   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:04.626885   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:04.626962   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:04.668248   69580 cri.go:89] found id: ""
	I0501 03:43:04.668277   69580 logs.go:276] 0 containers: []
	W0501 03:43:04.668290   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:04.668298   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:04.668364   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:04.711032   69580 cri.go:89] found id: ""
	I0501 03:43:04.711057   69580 logs.go:276] 0 containers: []
	W0501 03:43:04.711068   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:04.711076   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:04.711136   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:04.754197   69580 cri.go:89] found id: ""
	I0501 03:43:04.754232   69580 logs.go:276] 0 containers: []
	W0501 03:43:04.754241   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:04.754248   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:04.754317   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:04.801062   69580 cri.go:89] found id: ""
	I0501 03:43:04.801089   69580 logs.go:276] 0 containers: []
	W0501 03:43:04.801097   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:04.801103   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:04.801163   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:04.849425   69580 cri.go:89] found id: ""
	I0501 03:43:04.849454   69580 logs.go:276] 0 containers: []
	W0501 03:43:04.849465   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:04.849473   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:04.849536   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:04.892555   69580 cri.go:89] found id: ""
	I0501 03:43:04.892589   69580 logs.go:276] 0 containers: []
	W0501 03:43:04.892597   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:04.892603   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:04.892661   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:04.934101   69580 cri.go:89] found id: ""
	I0501 03:43:04.934129   69580 logs.go:276] 0 containers: []
	W0501 03:43:04.934137   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:04.934142   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:04.934191   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:04.985720   69580 cri.go:89] found id: ""
	I0501 03:43:04.985747   69580 logs.go:276] 0 containers: []
	W0501 03:43:04.985760   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:04.985773   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:04.985789   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:05.060634   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:05.060692   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:05.082007   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:05.082036   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:05.164613   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:05.164636   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:05.164652   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:05.244064   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:05.244103   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:07.793867   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:07.811161   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:07.811236   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:07.850738   69580 cri.go:89] found id: ""
	I0501 03:43:07.850765   69580 logs.go:276] 0 containers: []
	W0501 03:43:07.850775   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:07.850782   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:07.850841   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:07.892434   69580 cri.go:89] found id: ""
	I0501 03:43:07.892466   69580 logs.go:276] 0 containers: []
	W0501 03:43:07.892476   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:07.892483   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:07.892543   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:07.934093   69580 cri.go:89] found id: ""
	I0501 03:43:07.934122   69580 logs.go:276] 0 containers: []
	W0501 03:43:07.934133   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:07.934141   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:07.934200   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:07.976165   69580 cri.go:89] found id: ""
	I0501 03:43:07.976196   69580 logs.go:276] 0 containers: []
	W0501 03:43:07.976205   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:07.976216   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:07.976278   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:08.016925   69580 cri.go:89] found id: ""
	I0501 03:43:08.016956   69580 logs.go:276] 0 containers: []
	W0501 03:43:08.016968   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:08.016975   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:08.017038   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:08.063385   69580 cri.go:89] found id: ""
	I0501 03:43:08.063438   69580 logs.go:276] 0 containers: []
	W0501 03:43:08.063454   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:08.063465   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:08.063551   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:08.103586   69580 cri.go:89] found id: ""
	I0501 03:43:08.103610   69580 logs.go:276] 0 containers: []
	W0501 03:43:08.103618   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:08.103628   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:08.103672   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:08.142564   69580 cri.go:89] found id: ""
	I0501 03:43:08.142594   69580 logs.go:276] 0 containers: []
	W0501 03:43:08.142605   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:08.142617   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:08.142635   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:08.231532   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:08.231556   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:08.231571   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:08.311009   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:08.311053   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:08.357841   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:08.357877   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:08.409577   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:08.409610   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:10.924898   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:10.941525   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:10.941591   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:11.009214   69580 cri.go:89] found id: ""
	I0501 03:43:11.009238   69580 logs.go:276] 0 containers: []
	W0501 03:43:11.009247   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:11.009255   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:11.009316   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:11.072233   69580 cri.go:89] found id: ""
	I0501 03:43:11.072259   69580 logs.go:276] 0 containers: []
	W0501 03:43:11.072267   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:11.072273   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:11.072327   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:11.111662   69580 cri.go:89] found id: ""
	I0501 03:43:11.111691   69580 logs.go:276] 0 containers: []
	W0501 03:43:11.111701   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:11.111708   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:11.111765   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:11.151540   69580 cri.go:89] found id: ""
	I0501 03:43:11.151570   69580 logs.go:276] 0 containers: []
	W0501 03:43:11.151580   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:11.151594   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:11.151656   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:11.194030   69580 cri.go:89] found id: ""
	I0501 03:43:11.194064   69580 logs.go:276] 0 containers: []
	W0501 03:43:11.194076   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:11.194083   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:11.194146   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:11.233010   69580 cri.go:89] found id: ""
	I0501 03:43:11.233045   69580 logs.go:276] 0 containers: []
	W0501 03:43:11.233056   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:11.233063   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:11.233117   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:11.270979   69580 cri.go:89] found id: ""
	I0501 03:43:11.271009   69580 logs.go:276] 0 containers: []
	W0501 03:43:11.271019   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:11.271026   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:11.271088   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:11.312338   69580 cri.go:89] found id: ""
	I0501 03:43:11.312369   69580 logs.go:276] 0 containers: []
	W0501 03:43:11.312381   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:11.312393   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:11.312408   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:11.364273   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:11.364307   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:11.418603   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:11.418634   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:11.433409   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:11.433438   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:11.511243   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:11.511265   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:11.511280   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:14.089834   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:14.104337   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:14.104419   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:14.148799   69580 cri.go:89] found id: ""
	I0501 03:43:14.148826   69580 logs.go:276] 0 containers: []
	W0501 03:43:14.148833   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:14.148839   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:14.148904   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:14.191330   69580 cri.go:89] found id: ""
	I0501 03:43:14.191366   69580 logs.go:276] 0 containers: []
	W0501 03:43:14.191378   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:14.191386   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:14.191448   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:14.245978   69580 cri.go:89] found id: ""
	I0501 03:43:14.246010   69580 logs.go:276] 0 containers: []
	W0501 03:43:14.246018   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:14.246024   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:14.246093   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:14.287188   69580 cri.go:89] found id: ""
	I0501 03:43:14.287215   69580 logs.go:276] 0 containers: []
	W0501 03:43:14.287223   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:14.287228   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:14.287276   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:14.328060   69580 cri.go:89] found id: ""
	I0501 03:43:14.328093   69580 logs.go:276] 0 containers: []
	W0501 03:43:14.328104   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:14.328113   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:14.328179   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:14.370734   69580 cri.go:89] found id: ""
	I0501 03:43:14.370765   69580 logs.go:276] 0 containers: []
	W0501 03:43:14.370776   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:14.370783   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:14.370837   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:14.414690   69580 cri.go:89] found id: ""
	I0501 03:43:14.414713   69580 logs.go:276] 0 containers: []
	W0501 03:43:14.414721   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:14.414726   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:14.414790   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:14.459030   69580 cri.go:89] found id: ""
	I0501 03:43:14.459060   69580 logs.go:276] 0 containers: []
	W0501 03:43:14.459072   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:14.459083   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:14.459098   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:14.519728   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:14.519761   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:14.535841   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:14.535871   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:14.615203   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:14.615231   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:14.615249   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:14.707677   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:14.707725   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:17.254918   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:17.270643   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:17.270698   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:17.310692   69580 cri.go:89] found id: ""
	I0501 03:43:17.310724   69580 logs.go:276] 0 containers: []
	W0501 03:43:17.310732   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:17.310739   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:17.310806   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:17.349932   69580 cri.go:89] found id: ""
	I0501 03:43:17.349959   69580 logs.go:276] 0 containers: []
	W0501 03:43:17.349969   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:17.349976   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:17.350040   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:17.393073   69580 cri.go:89] found id: ""
	I0501 03:43:17.393099   69580 logs.go:276] 0 containers: []
	W0501 03:43:17.393109   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:17.393116   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:17.393176   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:17.429736   69580 cri.go:89] found id: ""
	I0501 03:43:17.429763   69580 logs.go:276] 0 containers: []
	W0501 03:43:17.429773   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:17.429787   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:17.429858   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:17.473052   69580 cri.go:89] found id: ""
	I0501 03:43:17.473085   69580 logs.go:276] 0 containers: []
	W0501 03:43:17.473097   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:17.473105   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:17.473168   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:17.514035   69580 cri.go:89] found id: ""
	I0501 03:43:17.514062   69580 logs.go:276] 0 containers: []
	W0501 03:43:17.514071   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:17.514078   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:17.514126   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:17.553197   69580 cri.go:89] found id: ""
	I0501 03:43:17.553225   69580 logs.go:276] 0 containers: []
	W0501 03:43:17.553234   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:17.553240   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:17.553300   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:17.592170   69580 cri.go:89] found id: ""
	I0501 03:43:17.592192   69580 logs.go:276] 0 containers: []
	W0501 03:43:17.592199   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:17.592208   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:17.592220   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:17.647549   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:17.647584   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:17.663084   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:17.663114   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:17.748357   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:17.748385   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:17.748401   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:17.832453   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:17.832491   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:20.375927   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:20.391840   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:20.391918   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:20.434158   69580 cri.go:89] found id: ""
	I0501 03:43:20.434185   69580 logs.go:276] 0 containers: []
	W0501 03:43:20.434193   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:20.434198   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:20.434254   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:20.477209   69580 cri.go:89] found id: ""
	I0501 03:43:20.477237   69580 logs.go:276] 0 containers: []
	W0501 03:43:20.477253   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:20.477259   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:20.477309   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:20.517227   69580 cri.go:89] found id: ""
	I0501 03:43:20.517260   69580 logs.go:276] 0 containers: []
	W0501 03:43:20.517270   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:20.517282   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:20.517340   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:20.555771   69580 cri.go:89] found id: ""
	I0501 03:43:20.555802   69580 logs.go:276] 0 containers: []
	W0501 03:43:20.555812   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:20.555820   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:20.555866   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:20.598177   69580 cri.go:89] found id: ""
	I0501 03:43:20.598200   69580 logs.go:276] 0 containers: []
	W0501 03:43:20.598213   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:20.598218   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:20.598326   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:20.637336   69580 cri.go:89] found id: ""
	I0501 03:43:20.637364   69580 logs.go:276] 0 containers: []
	W0501 03:43:20.637373   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:20.637378   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:20.637435   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:20.687736   69580 cri.go:89] found id: ""
	I0501 03:43:20.687761   69580 logs.go:276] 0 containers: []
	W0501 03:43:20.687768   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:20.687782   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:20.687840   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:20.726102   69580 cri.go:89] found id: ""
	I0501 03:43:20.726135   69580 logs.go:276] 0 containers: []
	W0501 03:43:20.726143   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:20.726154   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:20.726169   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:20.780874   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:20.780905   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:20.795798   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:20.795836   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:20.882337   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:20.882367   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:20.882381   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:20.962138   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:20.962188   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:23.512174   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:23.528344   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:23.528417   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:23.567182   69580 cri.go:89] found id: ""
	I0501 03:43:23.567212   69580 logs.go:276] 0 containers: []
	W0501 03:43:23.567222   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:23.567230   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:23.567291   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:23.607522   69580 cri.go:89] found id: ""
	I0501 03:43:23.607556   69580 logs.go:276] 0 containers: []
	W0501 03:43:23.607567   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:23.607574   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:23.607637   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:23.650932   69580 cri.go:89] found id: ""
	I0501 03:43:23.650959   69580 logs.go:276] 0 containers: []
	W0501 03:43:23.650970   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:23.650976   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:23.651035   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:23.695392   69580 cri.go:89] found id: ""
	I0501 03:43:23.695419   69580 logs.go:276] 0 containers: []
	W0501 03:43:23.695428   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:23.695436   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:23.695514   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:23.736577   69580 cri.go:89] found id: ""
	I0501 03:43:23.736607   69580 logs.go:276] 0 containers: []
	W0501 03:43:23.736619   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:23.736627   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:23.736685   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:23.776047   69580 cri.go:89] found id: ""
	I0501 03:43:23.776070   69580 logs.go:276] 0 containers: []
	W0501 03:43:23.776077   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:23.776082   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:23.776134   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:23.813896   69580 cri.go:89] found id: ""
	I0501 03:43:23.813934   69580 logs.go:276] 0 containers: []
	W0501 03:43:23.813943   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:23.813949   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:23.813997   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:23.858898   69580 cri.go:89] found id: ""
	I0501 03:43:23.858925   69580 logs.go:276] 0 containers: []
	W0501 03:43:23.858936   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:23.858947   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:23.858964   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:23.901796   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:23.901850   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:23.957009   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:23.957040   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:23.972811   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:23.972839   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:24.055535   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:24.055557   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:24.055576   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:26.640114   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:26.657217   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:26.657285   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:26.701191   69580 cri.go:89] found id: ""
	I0501 03:43:26.701218   69580 logs.go:276] 0 containers: []
	W0501 03:43:26.701227   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:26.701232   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:26.701287   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:26.740710   69580 cri.go:89] found id: ""
	I0501 03:43:26.740737   69580 logs.go:276] 0 containers: []
	W0501 03:43:26.740745   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:26.740750   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:26.740808   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:26.778682   69580 cri.go:89] found id: ""
	I0501 03:43:26.778710   69580 logs.go:276] 0 containers: []
	W0501 03:43:26.778724   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:26.778730   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:26.778789   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:26.822143   69580 cri.go:89] found id: ""
	I0501 03:43:26.822190   69580 logs.go:276] 0 containers: []
	W0501 03:43:26.822201   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:26.822209   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:26.822270   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:26.865938   69580 cri.go:89] found id: ""
	I0501 03:43:26.865976   69580 logs.go:276] 0 containers: []
	W0501 03:43:26.865988   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:26.865996   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:26.866058   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:26.914939   69580 cri.go:89] found id: ""
	I0501 03:43:26.914969   69580 logs.go:276] 0 containers: []
	W0501 03:43:26.914979   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:26.914986   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:26.915043   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:26.961822   69580 cri.go:89] found id: ""
	I0501 03:43:26.961850   69580 logs.go:276] 0 containers: []
	W0501 03:43:26.961860   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:26.961867   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:26.961920   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:27.005985   69580 cri.go:89] found id: ""
	I0501 03:43:27.006012   69580 logs.go:276] 0 containers: []
	W0501 03:43:27.006021   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:27.006032   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:27.006046   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:27.058265   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:27.058303   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:27.076270   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:27.076308   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:27.152627   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:27.152706   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:27.152728   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:27.229638   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:27.229678   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:29.775960   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:29.792849   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:29.792925   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:29.832508   69580 cri.go:89] found id: ""
	I0501 03:43:29.832537   69580 logs.go:276] 0 containers: []
	W0501 03:43:29.832551   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:29.832559   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:29.832617   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:29.873160   69580 cri.go:89] found id: ""
	I0501 03:43:29.873188   69580 logs.go:276] 0 containers: []
	W0501 03:43:29.873199   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:29.873207   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:29.873271   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:29.919431   69580 cri.go:89] found id: ""
	I0501 03:43:29.919459   69580 logs.go:276] 0 containers: []
	W0501 03:43:29.919468   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:29.919474   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:29.919533   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:29.967944   69580 cri.go:89] found id: ""
	I0501 03:43:29.967976   69580 logs.go:276] 0 containers: []
	W0501 03:43:29.967987   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:29.967995   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:29.968060   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:30.011626   69580 cri.go:89] found id: ""
	I0501 03:43:30.011657   69580 logs.go:276] 0 containers: []
	W0501 03:43:30.011669   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:30.011678   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:30.011743   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:30.051998   69580 cri.go:89] found id: ""
	I0501 03:43:30.052020   69580 logs.go:276] 0 containers: []
	W0501 03:43:30.052028   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:30.052034   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:30.052095   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:30.094140   69580 cri.go:89] found id: ""
	I0501 03:43:30.094164   69580 logs.go:276] 0 containers: []
	W0501 03:43:30.094172   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:30.094179   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:30.094253   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:30.132363   69580 cri.go:89] found id: ""
	I0501 03:43:30.132391   69580 logs.go:276] 0 containers: []
	W0501 03:43:30.132399   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:30.132411   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:30.132422   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:30.221368   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:30.221410   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:30.271279   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:30.271317   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:30.325549   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:30.325586   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:30.345337   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:30.345376   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:30.427552   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:32.928667   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:32.945489   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:32.945557   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:32.989604   69580 cri.go:89] found id: ""
	I0501 03:43:32.989628   69580 logs.go:276] 0 containers: []
	W0501 03:43:32.989636   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:32.989642   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:32.989701   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:33.030862   69580 cri.go:89] found id: ""
	I0501 03:43:33.030892   69580 logs.go:276] 0 containers: []
	W0501 03:43:33.030903   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:33.030912   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:33.030977   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:33.079795   69580 cri.go:89] found id: ""
	I0501 03:43:33.079827   69580 logs.go:276] 0 containers: []
	W0501 03:43:33.079835   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:33.079841   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:33.079898   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:33.120612   69580 cri.go:89] found id: ""
	I0501 03:43:33.120636   69580 logs.go:276] 0 containers: []
	W0501 03:43:33.120644   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:33.120649   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:33.120694   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:33.161824   69580 cri.go:89] found id: ""
	I0501 03:43:33.161851   69580 logs.go:276] 0 containers: []
	W0501 03:43:33.161861   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:33.161868   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:33.161924   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:33.200068   69580 cri.go:89] found id: ""
	I0501 03:43:33.200098   69580 logs.go:276] 0 containers: []
	W0501 03:43:33.200107   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:33.200113   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:33.200175   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:33.239314   69580 cri.go:89] found id: ""
	I0501 03:43:33.239341   69580 logs.go:276] 0 containers: []
	W0501 03:43:33.239351   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:33.239359   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:33.239427   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:33.281381   69580 cri.go:89] found id: ""
	I0501 03:43:33.281408   69580 logs.go:276] 0 containers: []
	W0501 03:43:33.281419   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:33.281431   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:33.281447   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:33.297992   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:33.298047   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:33.383273   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:33.383292   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:33.383303   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:33.465256   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:33.465289   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:33.509593   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:33.509621   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:36.065074   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:36.081361   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:36.081429   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:36.130394   69580 cri.go:89] found id: ""
	I0501 03:43:36.130436   69580 logs.go:276] 0 containers: []
	W0501 03:43:36.130448   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:36.130456   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:36.130524   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:36.171013   69580 cri.go:89] found id: ""
	I0501 03:43:36.171038   69580 logs.go:276] 0 containers: []
	W0501 03:43:36.171046   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:36.171052   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:36.171099   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:36.215372   69580 cri.go:89] found id: ""
	I0501 03:43:36.215411   69580 logs.go:276] 0 containers: []
	W0501 03:43:36.215424   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:36.215431   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:36.215493   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:36.257177   69580 cri.go:89] found id: ""
	I0501 03:43:36.257204   69580 logs.go:276] 0 containers: []
	W0501 03:43:36.257216   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:36.257223   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:36.257293   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:36.299035   69580 cri.go:89] found id: ""
	I0501 03:43:36.299066   69580 logs.go:276] 0 containers: []
	W0501 03:43:36.299085   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:36.299094   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:36.299166   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:36.339060   69580 cri.go:89] found id: ""
	I0501 03:43:36.339087   69580 logs.go:276] 0 containers: []
	W0501 03:43:36.339097   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:36.339105   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:36.339163   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:36.379982   69580 cri.go:89] found id: ""
	I0501 03:43:36.380016   69580 logs.go:276] 0 containers: []
	W0501 03:43:36.380028   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:36.380037   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:36.380100   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:36.419702   69580 cri.go:89] found id: ""
	I0501 03:43:36.419734   69580 logs.go:276] 0 containers: []
	W0501 03:43:36.419746   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:36.419758   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:36.419780   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:36.472553   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:36.472774   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:36.488402   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:36.488439   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:36.566390   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:36.566433   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:36.566446   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:36.643493   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:36.643527   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:39.199060   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:39.216612   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:39.216695   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:39.262557   69580 cri.go:89] found id: ""
	I0501 03:43:39.262581   69580 logs.go:276] 0 containers: []
	W0501 03:43:39.262589   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:39.262595   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:39.262642   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:39.331051   69580 cri.go:89] found id: ""
	I0501 03:43:39.331076   69580 logs.go:276] 0 containers: []
	W0501 03:43:39.331093   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:39.331098   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:39.331162   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:39.382033   69580 cri.go:89] found id: ""
	I0501 03:43:39.382058   69580 logs.go:276] 0 containers: []
	W0501 03:43:39.382066   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:39.382071   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:39.382122   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:39.424019   69580 cri.go:89] found id: ""
	I0501 03:43:39.424049   69580 logs.go:276] 0 containers: []
	W0501 03:43:39.424058   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:39.424064   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:39.424120   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:39.465787   69580 cri.go:89] found id: ""
	I0501 03:43:39.465833   69580 logs.go:276] 0 containers: []
	W0501 03:43:39.465846   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:39.465855   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:39.465916   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:39.507746   69580 cri.go:89] found id: ""
	I0501 03:43:39.507781   69580 logs.go:276] 0 containers: []
	W0501 03:43:39.507791   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:39.507798   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:39.507861   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:39.550737   69580 cri.go:89] found id: ""
	I0501 03:43:39.550768   69580 logs.go:276] 0 containers: []
	W0501 03:43:39.550775   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:39.550781   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:39.550831   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:39.592279   69580 cri.go:89] found id: ""
	I0501 03:43:39.592329   69580 logs.go:276] 0 containers: []
	W0501 03:43:39.592343   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:39.592356   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:39.592373   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:39.648858   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:39.648896   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:39.665316   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:39.665343   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:39.743611   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:39.743632   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:39.743646   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:39.829285   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:39.829322   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:42.374457   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:42.389944   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:42.390002   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:42.431270   69580 cri.go:89] found id: ""
	I0501 03:43:42.431294   69580 logs.go:276] 0 containers: []
	W0501 03:43:42.431302   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:42.431308   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:42.431366   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:42.470515   69580 cri.go:89] found id: ""
	I0501 03:43:42.470546   69580 logs.go:276] 0 containers: []
	W0501 03:43:42.470558   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:42.470566   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:42.470619   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:42.518472   69580 cri.go:89] found id: ""
	I0501 03:43:42.518494   69580 logs.go:276] 0 containers: []
	W0501 03:43:42.518501   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:42.518506   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:42.518555   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:42.562192   69580 cri.go:89] found id: ""
	I0501 03:43:42.562220   69580 logs.go:276] 0 containers: []
	W0501 03:43:42.562231   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:42.562239   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:42.562300   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:42.599372   69580 cri.go:89] found id: ""
	I0501 03:43:42.599403   69580 logs.go:276] 0 containers: []
	W0501 03:43:42.599414   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:42.599422   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:42.599483   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:42.636738   69580 cri.go:89] found id: ""
	I0501 03:43:42.636766   69580 logs.go:276] 0 containers: []
	W0501 03:43:42.636777   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:42.636786   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:42.636845   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:42.682087   69580 cri.go:89] found id: ""
	I0501 03:43:42.682115   69580 logs.go:276] 0 containers: []
	W0501 03:43:42.682125   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:42.682133   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:42.682198   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:42.724280   69580 cri.go:89] found id: ""
	I0501 03:43:42.724316   69580 logs.go:276] 0 containers: []
	W0501 03:43:42.724328   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:42.724340   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:42.724354   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:42.771667   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:42.771702   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:42.827390   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:42.827428   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:42.843452   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:42.843480   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:42.925544   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:42.925563   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:42.925577   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:45.515104   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:45.529545   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:45.529619   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:45.573451   69580 cri.go:89] found id: ""
	I0501 03:43:45.573475   69580 logs.go:276] 0 containers: []
	W0501 03:43:45.573483   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:45.573489   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:45.573536   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:45.613873   69580 cri.go:89] found id: ""
	I0501 03:43:45.613897   69580 logs.go:276] 0 containers: []
	W0501 03:43:45.613905   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:45.613910   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:45.613954   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:45.660195   69580 cri.go:89] found id: ""
	I0501 03:43:45.660215   69580 logs.go:276] 0 containers: []
	W0501 03:43:45.660221   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:45.660226   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:45.660284   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:45.703539   69580 cri.go:89] found id: ""
	I0501 03:43:45.703566   69580 logs.go:276] 0 containers: []
	W0501 03:43:45.703574   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:45.703580   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:45.703637   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:45.754635   69580 cri.go:89] found id: ""
	I0501 03:43:45.754659   69580 logs.go:276] 0 containers: []
	W0501 03:43:45.754668   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:45.754675   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:45.754738   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:45.800836   69580 cri.go:89] found id: ""
	I0501 03:43:45.800866   69580 logs.go:276] 0 containers: []
	W0501 03:43:45.800884   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:45.800892   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:45.800955   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:45.859057   69580 cri.go:89] found id: ""
	I0501 03:43:45.859084   69580 logs.go:276] 0 containers: []
	W0501 03:43:45.859092   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:45.859098   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:45.859145   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:45.913173   69580 cri.go:89] found id: ""
	I0501 03:43:45.913204   69580 logs.go:276] 0 containers: []
	W0501 03:43:45.913216   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:45.913227   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:45.913243   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:45.930050   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:45.930087   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:46.006047   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:46.006081   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:46.006097   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:46.086630   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:46.086666   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:46.134635   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:46.134660   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:48.690330   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:48.705024   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:48.705093   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:48.750244   69580 cri.go:89] found id: ""
	I0501 03:43:48.750278   69580 logs.go:276] 0 containers: []
	W0501 03:43:48.750299   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:48.750307   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:48.750377   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:48.791231   69580 cri.go:89] found id: ""
	I0501 03:43:48.791264   69580 logs.go:276] 0 containers: []
	W0501 03:43:48.791276   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:48.791283   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:48.791348   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:48.834692   69580 cri.go:89] found id: ""
	I0501 03:43:48.834720   69580 logs.go:276] 0 containers: []
	W0501 03:43:48.834731   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:48.834739   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:48.834809   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:48.877383   69580 cri.go:89] found id: ""
	I0501 03:43:48.877415   69580 logs.go:276] 0 containers: []
	W0501 03:43:48.877424   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:48.877430   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:48.877479   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:48.919728   69580 cri.go:89] found id: ""
	I0501 03:43:48.919756   69580 logs.go:276] 0 containers: []
	W0501 03:43:48.919767   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:48.919775   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:48.919836   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:48.962090   69580 cri.go:89] found id: ""
	I0501 03:43:48.962122   69580 logs.go:276] 0 containers: []
	W0501 03:43:48.962137   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:48.962144   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:48.962205   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:48.998456   69580 cri.go:89] found id: ""
	I0501 03:43:48.998487   69580 logs.go:276] 0 containers: []
	W0501 03:43:48.998498   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:48.998506   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:48.998566   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:49.042591   69580 cri.go:89] found id: ""
	I0501 03:43:49.042623   69580 logs.go:276] 0 containers: []
	W0501 03:43:49.042633   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:49.042645   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:49.042661   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:49.088533   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:49.088571   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:49.145252   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:49.145288   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:49.163093   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:49.163120   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:49.240805   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:49.240831   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:49.240844   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:51.825530   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:51.839596   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:51.839669   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:51.879493   69580 cri.go:89] found id: ""
	I0501 03:43:51.879516   69580 logs.go:276] 0 containers: []
	W0501 03:43:51.879524   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:51.879530   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:51.879585   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:51.921577   69580 cri.go:89] found id: ""
	I0501 03:43:51.921608   69580 logs.go:276] 0 containers: []
	W0501 03:43:51.921620   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:51.921627   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:51.921693   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:51.961000   69580 cri.go:89] found id: ""
	I0501 03:43:51.961028   69580 logs.go:276] 0 containers: []
	W0501 03:43:51.961037   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:51.961043   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:51.961103   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:52.006087   69580 cri.go:89] found id: ""
	I0501 03:43:52.006118   69580 logs.go:276] 0 containers: []
	W0501 03:43:52.006129   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:52.006137   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:52.006201   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:52.047196   69580 cri.go:89] found id: ""
	I0501 03:43:52.047228   69580 logs.go:276] 0 containers: []
	W0501 03:43:52.047239   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:52.047250   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:52.047319   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:52.086380   69580 cri.go:89] found id: ""
	I0501 03:43:52.086423   69580 logs.go:276] 0 containers: []
	W0501 03:43:52.086434   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:52.086442   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:52.086499   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:52.128824   69580 cri.go:89] found id: ""
	I0501 03:43:52.128851   69580 logs.go:276] 0 containers: []
	W0501 03:43:52.128861   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:52.128868   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:52.128933   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:52.168743   69580 cri.go:89] found id: ""
	I0501 03:43:52.168769   69580 logs.go:276] 0 containers: []
	W0501 03:43:52.168776   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:52.168788   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:52.168802   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:52.184391   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:52.184419   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:52.268330   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:52.268368   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:52.268386   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:52.350556   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:52.350586   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:52.395930   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:52.395967   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:54.952879   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:54.968440   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:54.968517   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:55.008027   69580 cri.go:89] found id: ""
	I0501 03:43:55.008056   69580 logs.go:276] 0 containers: []
	W0501 03:43:55.008067   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:55.008074   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:55.008137   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:55.048848   69580 cri.go:89] found id: ""
	I0501 03:43:55.048869   69580 logs.go:276] 0 containers: []
	W0501 03:43:55.048877   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:55.048882   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:55.048931   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:55.085886   69580 cri.go:89] found id: ""
	I0501 03:43:55.085910   69580 logs.go:276] 0 containers: []
	W0501 03:43:55.085919   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:55.085924   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:55.085971   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:55.119542   69580 cri.go:89] found id: ""
	I0501 03:43:55.119567   69580 logs.go:276] 0 containers: []
	W0501 03:43:55.119574   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:55.119580   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:55.119636   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:55.158327   69580 cri.go:89] found id: ""
	I0501 03:43:55.158357   69580 logs.go:276] 0 containers: []
	W0501 03:43:55.158367   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:55.158374   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:55.158449   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:55.200061   69580 cri.go:89] found id: ""
	I0501 03:43:55.200085   69580 logs.go:276] 0 containers: []
	W0501 03:43:55.200093   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:55.200100   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:55.200146   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:55.239446   69580 cri.go:89] found id: ""
	I0501 03:43:55.239476   69580 logs.go:276] 0 containers: []
	W0501 03:43:55.239487   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:55.239493   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:55.239557   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:55.275593   69580 cri.go:89] found id: ""
	I0501 03:43:55.275623   69580 logs.go:276] 0 containers: []
	W0501 03:43:55.275635   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:55.275646   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:55.275662   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:55.356701   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:55.356724   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:55.356740   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:55.437445   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:55.437483   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:55.489024   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:55.489051   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:55.548083   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:55.548114   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
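
	[Illustrative sketch, not minikube source] The cycle above repeats one pattern: for each control-plane component minikube shells out to "sudo crictl ps -a --quiet --name=<component>" and records that no container was found. A minimal standalone Go sketch of that same check, assuming only that crictl is installed on the node and using the component names and flags visible in the log:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainerIDs runs the same crictl invocation seen in the log above and
	// returns the container IDs (one per line of output) for a name filter.
	func listContainerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		// Component names taken from the log lines above.
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		for _, c := range components {
			ids, err := listContainerIDs(c)
			if err != nil {
				fmt.Printf("crictl failed for %q: %v\n", c, err)
				continue
			}
			if len(ids) == 0 {
				fmt.Printf("No container was found matching %q\n", c)
			} else {
				fmt.Printf("%q: %d container(s): %v\n", c, len(ids), ids)
			}
		}
	}
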
	I0501 03:43:58.067063   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:58.080485   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:58.080539   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:58.121459   69580 cri.go:89] found id: ""
	I0501 03:43:58.121488   69580 logs.go:276] 0 containers: []
	W0501 03:43:58.121498   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:58.121505   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:58.121562   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:58.161445   69580 cri.go:89] found id: ""
	I0501 03:43:58.161479   69580 logs.go:276] 0 containers: []
	W0501 03:43:58.161489   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:58.161499   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:58.161560   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:58.203216   69580 cri.go:89] found id: ""
	I0501 03:43:58.203238   69580 logs.go:276] 0 containers: []
	W0501 03:43:58.203246   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:58.203251   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:58.203297   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:58.239496   69580 cri.go:89] found id: ""
	I0501 03:43:58.239526   69580 logs.go:276] 0 containers: []
	W0501 03:43:58.239538   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:58.239546   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:58.239605   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:58.280331   69580 cri.go:89] found id: ""
	I0501 03:43:58.280359   69580 logs.go:276] 0 containers: []
	W0501 03:43:58.280370   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:58.280378   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:58.280438   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:58.318604   69580 cri.go:89] found id: ""
	I0501 03:43:58.318634   69580 logs.go:276] 0 containers: []
	W0501 03:43:58.318646   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:58.318653   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:58.318712   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:58.359360   69580 cri.go:89] found id: ""
	I0501 03:43:58.359383   69580 logs.go:276] 0 containers: []
	W0501 03:43:58.359392   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:58.359398   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:58.359446   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:58.401172   69580 cri.go:89] found id: ""
	I0501 03:43:58.401202   69580 logs.go:276] 0 containers: []
	W0501 03:43:58.401211   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:58.401220   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:58.401232   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:58.416877   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:58.416907   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:58.489812   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:58.489835   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:58.489849   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:58.574971   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:58.575004   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:58.619526   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:58.619557   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:01.173759   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:01.187838   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:01.187922   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:01.227322   69580 cri.go:89] found id: ""
	I0501 03:44:01.227355   69580 logs.go:276] 0 containers: []
	W0501 03:44:01.227366   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:01.227372   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:01.227432   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:01.268418   69580 cri.go:89] found id: ""
	I0501 03:44:01.268453   69580 logs.go:276] 0 containers: []
	W0501 03:44:01.268465   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:01.268472   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:01.268530   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:01.314641   69580 cri.go:89] found id: ""
	I0501 03:44:01.314667   69580 logs.go:276] 0 containers: []
	W0501 03:44:01.314675   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:01.314681   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:01.314739   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:01.361237   69580 cri.go:89] found id: ""
	I0501 03:44:01.361272   69580 logs.go:276] 0 containers: []
	W0501 03:44:01.361288   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:01.361294   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:01.361348   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:01.400650   69580 cri.go:89] found id: ""
	I0501 03:44:01.400676   69580 logs.go:276] 0 containers: []
	W0501 03:44:01.400684   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:01.400690   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:01.400739   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:01.447998   69580 cri.go:89] found id: ""
	I0501 03:44:01.448023   69580 logs.go:276] 0 containers: []
	W0501 03:44:01.448032   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:01.448040   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:01.448101   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:01.492172   69580 cri.go:89] found id: ""
	I0501 03:44:01.492199   69580 logs.go:276] 0 containers: []
	W0501 03:44:01.492207   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:01.492213   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:01.492265   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:01.538589   69580 cri.go:89] found id: ""
	I0501 03:44:01.538617   69580 logs.go:276] 0 containers: []
	W0501 03:44:01.538628   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:01.538638   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:01.538653   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:01.592914   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:01.592952   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:01.611706   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:01.611754   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:01.693469   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:01.693488   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:01.693501   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:01.774433   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:01.774470   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
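
	[Illustrative sketch, not minikube source] Between cycles the timestamps advance by roughly three seconds: minikube re-runs "sudo pgrep -xnf kube-apiserver.*minikube.*" and, while that finds no process, gathers logs and tries again. A rough Go sketch of such a wait loop, assuming a hypothetical five-minute deadline (the real timeout is not visible in this excerpt):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// apiserverRunning mirrors the pgrep check in the log above: it looks for a
	// process whose full command line matches kube-apiserver.*minikube.*.
	func apiserverRunning() bool {
		err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
		return err == nil // pgrep exits 0 only when a matching process exists
	}

	func main() {
		deadline := time.Now().Add(5 * time.Minute) // hypothetical overall timeout
		for time.Now().Before(deadline) {
			if apiserverRunning() {
				fmt.Println("kube-apiserver process found")
				return
			}
			fmt.Println("kube-apiserver not running yet; gathering logs and retrying")
			time.Sleep(3 * time.Second) // roughly the interval visible in the timestamps above
		}
		fmt.Println("gave up waiting for kube-apiserver")
	}
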
	I0501 03:44:04.321593   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:04.335428   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:04.335497   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:04.378479   69580 cri.go:89] found id: ""
	I0501 03:44:04.378505   69580 logs.go:276] 0 containers: []
	W0501 03:44:04.378516   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:04.378525   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:04.378585   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:04.420025   69580 cri.go:89] found id: ""
	I0501 03:44:04.420050   69580 logs.go:276] 0 containers: []
	W0501 03:44:04.420059   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:04.420065   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:04.420113   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:04.464009   69580 cri.go:89] found id: ""
	I0501 03:44:04.464039   69580 logs.go:276] 0 containers: []
	W0501 03:44:04.464047   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:04.464052   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:04.464113   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:04.502039   69580 cri.go:89] found id: ""
	I0501 03:44:04.502069   69580 logs.go:276] 0 containers: []
	W0501 03:44:04.502081   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:04.502088   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:04.502150   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:04.544566   69580 cri.go:89] found id: ""
	I0501 03:44:04.544593   69580 logs.go:276] 0 containers: []
	W0501 03:44:04.544605   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:04.544614   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:04.544672   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:04.584067   69580 cri.go:89] found id: ""
	I0501 03:44:04.584095   69580 logs.go:276] 0 containers: []
	W0501 03:44:04.584104   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:04.584112   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:04.584174   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:04.625165   69580 cri.go:89] found id: ""
	I0501 03:44:04.625197   69580 logs.go:276] 0 containers: []
	W0501 03:44:04.625210   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:04.625219   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:04.625292   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:04.667796   69580 cri.go:89] found id: ""
	I0501 03:44:04.667830   69580 logs.go:276] 0 containers: []
	W0501 03:44:04.667839   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:04.667850   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:04.667868   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:04.722269   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:04.722303   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:04.738232   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:04.738265   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:04.821551   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:04.821578   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:04.821595   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:04.902575   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:04.902618   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:07.449793   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:07.466348   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:07.466450   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:07.510325   69580 cri.go:89] found id: ""
	I0501 03:44:07.510352   69580 logs.go:276] 0 containers: []
	W0501 03:44:07.510363   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:07.510371   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:07.510450   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:07.550722   69580 cri.go:89] found id: ""
	I0501 03:44:07.550748   69580 logs.go:276] 0 containers: []
	W0501 03:44:07.550756   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:07.550762   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:07.550810   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:07.589592   69580 cri.go:89] found id: ""
	I0501 03:44:07.589617   69580 logs.go:276] 0 containers: []
	W0501 03:44:07.589625   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:07.589630   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:07.589678   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:07.631628   69580 cri.go:89] found id: ""
	I0501 03:44:07.631655   69580 logs.go:276] 0 containers: []
	W0501 03:44:07.631662   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:07.631668   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:07.631726   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:07.674709   69580 cri.go:89] found id: ""
	I0501 03:44:07.674743   69580 logs.go:276] 0 containers: []
	W0501 03:44:07.674753   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:07.674760   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:07.674811   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:07.714700   69580 cri.go:89] found id: ""
	I0501 03:44:07.714767   69580 logs.go:276] 0 containers: []
	W0501 03:44:07.714788   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:07.714797   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:07.714856   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:07.753440   69580 cri.go:89] found id: ""
	I0501 03:44:07.753467   69580 logs.go:276] 0 containers: []
	W0501 03:44:07.753478   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:07.753485   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:07.753549   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:07.791579   69580 cri.go:89] found id: ""
	I0501 03:44:07.791606   69580 logs.go:276] 0 containers: []
	W0501 03:44:07.791617   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:07.791628   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:07.791644   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:07.845568   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:07.845606   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:07.861861   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:07.861885   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:07.941719   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:07.941743   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:07.941757   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:08.022684   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:08.022720   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:10.575417   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:10.593408   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:10.593468   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:10.641322   69580 cri.go:89] found id: ""
	I0501 03:44:10.641357   69580 logs.go:276] 0 containers: []
	W0501 03:44:10.641370   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:10.641378   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:10.641442   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:10.686330   69580 cri.go:89] found id: ""
	I0501 03:44:10.686358   69580 logs.go:276] 0 containers: []
	W0501 03:44:10.686368   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:10.686377   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:10.686458   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:10.734414   69580 cri.go:89] found id: ""
	I0501 03:44:10.734444   69580 logs.go:276] 0 containers: []
	W0501 03:44:10.734456   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:10.734463   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:10.734527   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:10.776063   69580 cri.go:89] found id: ""
	I0501 03:44:10.776095   69580 logs.go:276] 0 containers: []
	W0501 03:44:10.776106   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:10.776113   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:10.776176   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:10.819035   69580 cri.go:89] found id: ""
	I0501 03:44:10.819065   69580 logs.go:276] 0 containers: []
	W0501 03:44:10.819076   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:10.819084   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:10.819150   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:10.868912   69580 cri.go:89] found id: ""
	I0501 03:44:10.868938   69580 logs.go:276] 0 containers: []
	W0501 03:44:10.868946   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:10.868952   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:10.869000   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:10.910517   69580 cri.go:89] found id: ""
	I0501 03:44:10.910549   69580 logs.go:276] 0 containers: []
	W0501 03:44:10.910572   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:10.910581   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:10.910678   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:10.949267   69580 cri.go:89] found id: ""
	I0501 03:44:10.949297   69580 logs.go:276] 0 containers: []
	W0501 03:44:10.949306   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:10.949314   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:10.949327   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:11.004731   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:11.004779   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:11.022146   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:11.022174   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:11.108992   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:11.109020   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:11.109035   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:11.192571   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:11.192605   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:13.739336   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:13.758622   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:13.758721   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:13.805395   69580 cri.go:89] found id: ""
	I0501 03:44:13.805423   69580 logs.go:276] 0 containers: []
	W0501 03:44:13.805434   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:13.805442   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:13.805523   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:13.847372   69580 cri.go:89] found id: ""
	I0501 03:44:13.847400   69580 logs.go:276] 0 containers: []
	W0501 03:44:13.847409   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:13.847417   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:13.847474   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:13.891842   69580 cri.go:89] found id: ""
	I0501 03:44:13.891867   69580 logs.go:276] 0 containers: []
	W0501 03:44:13.891874   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:13.891880   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:13.891935   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:13.933382   69580 cri.go:89] found id: ""
	I0501 03:44:13.933411   69580 logs.go:276] 0 containers: []
	W0501 03:44:13.933422   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:13.933430   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:13.933490   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:13.973955   69580 cri.go:89] found id: ""
	I0501 03:44:13.973980   69580 logs.go:276] 0 containers: []
	W0501 03:44:13.973991   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:13.974000   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:13.974053   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:14.015202   69580 cri.go:89] found id: ""
	I0501 03:44:14.015226   69580 logs.go:276] 0 containers: []
	W0501 03:44:14.015234   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:14.015240   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:14.015287   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:14.057441   69580 cri.go:89] found id: ""
	I0501 03:44:14.057471   69580 logs.go:276] 0 containers: []
	W0501 03:44:14.057483   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:14.057491   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:14.057551   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:14.099932   69580 cri.go:89] found id: ""
	I0501 03:44:14.099961   69580 logs.go:276] 0 containers: []
	W0501 03:44:14.099972   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:14.099983   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:14.099996   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:14.160386   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:14.160418   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:14.176880   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:14.176908   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:14.272137   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:14.272155   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:14.272168   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:14.366523   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:14.366571   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:16.914394   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:16.930976   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:16.931038   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:16.977265   69580 cri.go:89] found id: ""
	I0501 03:44:16.977294   69580 logs.go:276] 0 containers: []
	W0501 03:44:16.977303   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:16.977309   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:16.977363   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:17.015656   69580 cri.go:89] found id: ""
	I0501 03:44:17.015686   69580 logs.go:276] 0 containers: []
	W0501 03:44:17.015694   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:17.015700   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:17.015768   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:17.056079   69580 cri.go:89] found id: ""
	I0501 03:44:17.056111   69580 logs.go:276] 0 containers: []
	W0501 03:44:17.056121   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:17.056129   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:17.056188   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:17.099504   69580 cri.go:89] found id: ""
	I0501 03:44:17.099528   69580 logs.go:276] 0 containers: []
	W0501 03:44:17.099536   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:17.099542   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:17.099606   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:17.141371   69580 cri.go:89] found id: ""
	I0501 03:44:17.141401   69580 logs.go:276] 0 containers: []
	W0501 03:44:17.141410   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:17.141417   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:17.141484   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:17.184143   69580 cri.go:89] found id: ""
	I0501 03:44:17.184167   69580 logs.go:276] 0 containers: []
	W0501 03:44:17.184179   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:17.184193   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:17.184246   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:17.224012   69580 cri.go:89] found id: ""
	I0501 03:44:17.224049   69580 logs.go:276] 0 containers: []
	W0501 03:44:17.224061   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:17.224069   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:17.224136   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:17.268185   69580 cri.go:89] found id: ""
	I0501 03:44:17.268216   69580 logs.go:276] 0 containers: []
	W0501 03:44:17.268224   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:17.268233   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:17.268248   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:17.351342   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:17.351392   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:17.398658   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:17.398689   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:17.452476   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:17.452517   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:17.468734   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:17.468771   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:17.558971   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
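
	[Illustrative sketch, not minikube source] Each gathering pass runs the same fixed set of shell commands seen above (kubelet and CRI-O via journalctl, dmesg, "describe nodes" via the bundled kubectl, container status via crictl with a docker fallback), and the "describe nodes" step keeps failing with connection refused because nothing is listening on localhost:8443. A small Go sketch that replays those commands and tolerates individual failures; the command strings are copied from the log, the surrounding program is illustrative only:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Command strings copied verbatim from the log lines above.
		gatherers := []struct{ name, cmd string }{
			{"kubelet", "sudo journalctl -u kubelet -n 400"},
			{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
			{"describe nodes", "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"},
			{"CRI-O", "sudo journalctl -u crio -n 400"},
			{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
		}
		for _, g := range gatherers {
			out, err := exec.Command("/bin/bash", "-c", g.cmd).CombinedOutput()
			if err != nil {
				// With no apiserver on localhost:8443, "describe nodes" exits 1,
				// matching the connection-refused failure shown above.
				fmt.Printf("gathering %s failed: %v\n", g.name, err)
				continue
			}
			fmt.Printf("=== %s: %d bytes of output ===\n", g.name, len(out))
		}
	}
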
	I0501 03:44:20.059342   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:20.075707   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:20.075791   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:20.114436   69580 cri.go:89] found id: ""
	I0501 03:44:20.114472   69580 logs.go:276] 0 containers: []
	W0501 03:44:20.114486   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:20.114495   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:20.114562   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:20.155607   69580 cri.go:89] found id: ""
	I0501 03:44:20.155638   69580 logs.go:276] 0 containers: []
	W0501 03:44:20.155649   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:20.155657   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:20.155715   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:20.198188   69580 cri.go:89] found id: ""
	I0501 03:44:20.198218   69580 logs.go:276] 0 containers: []
	W0501 03:44:20.198227   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:20.198234   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:20.198291   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:20.237183   69580 cri.go:89] found id: ""
	I0501 03:44:20.237213   69580 logs.go:276] 0 containers: []
	W0501 03:44:20.237223   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:20.237232   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:20.237286   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:20.279289   69580 cri.go:89] found id: ""
	I0501 03:44:20.279320   69580 logs.go:276] 0 containers: []
	W0501 03:44:20.279332   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:20.279341   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:20.279409   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:20.334066   69580 cri.go:89] found id: ""
	I0501 03:44:20.334091   69580 logs.go:276] 0 containers: []
	W0501 03:44:20.334112   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:20.334121   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:20.334181   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:20.385740   69580 cri.go:89] found id: ""
	I0501 03:44:20.385775   69580 logs.go:276] 0 containers: []
	W0501 03:44:20.385785   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:20.385796   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:20.385860   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:20.425151   69580 cri.go:89] found id: ""
	I0501 03:44:20.425176   69580 logs.go:276] 0 containers: []
	W0501 03:44:20.425183   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:20.425193   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:20.425214   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:20.472563   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:20.472605   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:20.526589   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:20.526626   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:20.541978   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:20.542013   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:20.619513   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:20.619540   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:20.619555   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:23.203031   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:23.219964   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:23.220043   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:23.264287   69580 cri.go:89] found id: ""
	I0501 03:44:23.264315   69580 logs.go:276] 0 containers: []
	W0501 03:44:23.264323   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:23.264328   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:23.264395   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:23.310337   69580 cri.go:89] found id: ""
	I0501 03:44:23.310366   69580 logs.go:276] 0 containers: []
	W0501 03:44:23.310375   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:23.310383   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:23.310461   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:23.364550   69580 cri.go:89] found id: ""
	I0501 03:44:23.364577   69580 logs.go:276] 0 containers: []
	W0501 03:44:23.364588   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:23.364596   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:23.364676   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:23.412620   69580 cri.go:89] found id: ""
	I0501 03:44:23.412647   69580 logs.go:276] 0 containers: []
	W0501 03:44:23.412657   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:23.412665   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:23.412726   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:23.461447   69580 cri.go:89] found id: ""
	I0501 03:44:23.461477   69580 logs.go:276] 0 containers: []
	W0501 03:44:23.461488   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:23.461496   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:23.461558   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:23.514868   69580 cri.go:89] found id: ""
	I0501 03:44:23.514896   69580 logs.go:276] 0 containers: []
	W0501 03:44:23.514915   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:23.514924   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:23.514984   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:23.559171   69580 cri.go:89] found id: ""
	I0501 03:44:23.559200   69580 logs.go:276] 0 containers: []
	W0501 03:44:23.559210   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:23.559218   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:23.559284   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:23.601713   69580 cri.go:89] found id: ""
	I0501 03:44:23.601740   69580 logs.go:276] 0 containers: []
	W0501 03:44:23.601749   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:23.601760   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:23.601772   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:23.656147   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:23.656187   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:23.673507   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:23.673545   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:23.771824   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:23.771846   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:23.771861   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:23.861128   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:23.861161   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:26.406507   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:26.421836   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:26.421894   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:26.462758   69580 cri.go:89] found id: ""
	I0501 03:44:26.462785   69580 logs.go:276] 0 containers: []
	W0501 03:44:26.462796   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:26.462804   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:26.462860   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:26.505067   69580 cri.go:89] found id: ""
	I0501 03:44:26.505098   69580 logs.go:276] 0 containers: []
	W0501 03:44:26.505110   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:26.505121   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:26.505182   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:26.544672   69580 cri.go:89] found id: ""
	I0501 03:44:26.544699   69580 logs.go:276] 0 containers: []
	W0501 03:44:26.544711   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:26.544717   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:26.544764   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:26.590579   69580 cri.go:89] found id: ""
	I0501 03:44:26.590605   69580 logs.go:276] 0 containers: []
	W0501 03:44:26.590614   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:26.590620   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:26.590670   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:26.637887   69580 cri.go:89] found id: ""
	I0501 03:44:26.637920   69580 logs.go:276] 0 containers: []
	W0501 03:44:26.637930   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:26.637939   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:26.637998   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:26.686778   69580 cri.go:89] found id: ""
	I0501 03:44:26.686807   69580 logs.go:276] 0 containers: []
	W0501 03:44:26.686815   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:26.686821   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:26.686882   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:26.729020   69580 cri.go:89] found id: ""
	I0501 03:44:26.729045   69580 logs.go:276] 0 containers: []
	W0501 03:44:26.729054   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:26.729060   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:26.729124   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:26.769022   69580 cri.go:89] found id: ""
	I0501 03:44:26.769043   69580 logs.go:276] 0 containers: []
	W0501 03:44:26.769051   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:26.769059   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:26.769073   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:26.854985   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:26.855011   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:26.855024   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:26.937031   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:26.937063   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:27.006267   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:27.006301   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:27.080503   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:27.080545   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:29.598176   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:29.614465   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:29.614523   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:29.662384   69580 cri.go:89] found id: ""
	I0501 03:44:29.662421   69580 logs.go:276] 0 containers: []
	W0501 03:44:29.662433   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:29.662439   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:29.662483   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:29.705262   69580 cri.go:89] found id: ""
	I0501 03:44:29.705286   69580 logs.go:276] 0 containers: []
	W0501 03:44:29.705295   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:29.705300   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:29.705345   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:29.752308   69580 cri.go:89] found id: ""
	I0501 03:44:29.752335   69580 logs.go:276] 0 containers: []
	W0501 03:44:29.752343   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:29.752349   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:29.752403   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:29.802702   69580 cri.go:89] found id: ""
	I0501 03:44:29.802729   69580 logs.go:276] 0 containers: []
	W0501 03:44:29.802741   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:29.802749   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:29.802814   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:29.854112   69580 cri.go:89] found id: ""
	I0501 03:44:29.854138   69580 logs.go:276] 0 containers: []
	W0501 03:44:29.854149   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:29.854157   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:29.854217   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:29.898447   69580 cri.go:89] found id: ""
	I0501 03:44:29.898470   69580 logs.go:276] 0 containers: []
	W0501 03:44:29.898480   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:29.898486   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:29.898545   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:29.938832   69580 cri.go:89] found id: ""
	I0501 03:44:29.938862   69580 logs.go:276] 0 containers: []
	W0501 03:44:29.938873   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:29.938881   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:29.938948   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:29.987697   69580 cri.go:89] found id: ""
	I0501 03:44:29.987721   69580 logs.go:276] 0 containers: []
	W0501 03:44:29.987730   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:29.987738   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:29.987753   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:30.042446   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:30.042473   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:30.095358   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:30.095389   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:30.110745   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:30.110782   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:30.190923   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:30.190951   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:30.190965   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:32.772208   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:32.791063   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:32.791145   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:32.856883   69580 cri.go:89] found id: ""
	I0501 03:44:32.856909   69580 logs.go:276] 0 containers: []
	W0501 03:44:32.856920   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:32.856927   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:32.856988   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:32.928590   69580 cri.go:89] found id: ""
	I0501 03:44:32.928625   69580 logs.go:276] 0 containers: []
	W0501 03:44:32.928637   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:32.928644   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:32.928707   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:32.978068   69580 cri.go:89] found id: ""
	I0501 03:44:32.978100   69580 logs.go:276] 0 containers: []
	W0501 03:44:32.978113   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:32.978120   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:32.978184   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:33.018873   69580 cri.go:89] found id: ""
	I0501 03:44:33.018897   69580 logs.go:276] 0 containers: []
	W0501 03:44:33.018905   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:33.018911   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:33.018970   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:33.060633   69580 cri.go:89] found id: ""
	I0501 03:44:33.060661   69580 logs.go:276] 0 containers: []
	W0501 03:44:33.060673   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:33.060681   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:33.060735   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:33.099862   69580 cri.go:89] found id: ""
	I0501 03:44:33.099891   69580 logs.go:276] 0 containers: []
	W0501 03:44:33.099900   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:33.099906   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:33.099953   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:33.139137   69580 cri.go:89] found id: ""
	I0501 03:44:33.139163   69580 logs.go:276] 0 containers: []
	W0501 03:44:33.139171   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:33.139177   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:33.139224   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:33.178800   69580 cri.go:89] found id: ""
	I0501 03:44:33.178826   69580 logs.go:276] 0 containers: []
	W0501 03:44:33.178834   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:33.178842   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:33.178856   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:33.233811   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:33.233842   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:33.248931   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:33.248958   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:33.325530   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:33.325551   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:33.325563   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:33.412071   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:33.412103   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:35.954706   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:35.970256   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:35.970333   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:36.010417   69580 cri.go:89] found id: ""
	I0501 03:44:36.010443   69580 logs.go:276] 0 containers: []
	W0501 03:44:36.010452   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:36.010459   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:36.010524   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:36.051571   69580 cri.go:89] found id: ""
	I0501 03:44:36.051600   69580 logs.go:276] 0 containers: []
	W0501 03:44:36.051611   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:36.051619   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:36.051683   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:36.092148   69580 cri.go:89] found id: ""
	I0501 03:44:36.092176   69580 logs.go:276] 0 containers: []
	W0501 03:44:36.092185   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:36.092190   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:36.092247   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:36.136243   69580 cri.go:89] found id: ""
	I0501 03:44:36.136282   69580 logs.go:276] 0 containers: []
	W0501 03:44:36.136290   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:36.136296   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:36.136342   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:36.178154   69580 cri.go:89] found id: ""
	I0501 03:44:36.178183   69580 logs.go:276] 0 containers: []
	W0501 03:44:36.178193   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:36.178200   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:36.178264   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:36.217050   69580 cri.go:89] found id: ""
	I0501 03:44:36.217077   69580 logs.go:276] 0 containers: []
	W0501 03:44:36.217089   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:36.217096   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:36.217172   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:36.260438   69580 cri.go:89] found id: ""
	I0501 03:44:36.260470   69580 logs.go:276] 0 containers: []
	W0501 03:44:36.260481   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:36.260488   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:36.260546   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:36.303410   69580 cri.go:89] found id: ""
	I0501 03:44:36.303436   69580 logs.go:276] 0 containers: []
	W0501 03:44:36.303448   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:36.303459   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:36.303475   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:36.390427   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:36.390468   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:36.433631   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:36.433663   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:36.486334   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:36.486365   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:36.502145   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:36.502175   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:36.586733   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:39.087607   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:39.102475   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:39.102552   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:39.141916   69580 cri.go:89] found id: ""
	I0501 03:44:39.141947   69580 logs.go:276] 0 containers: []
	W0501 03:44:39.141958   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:39.141964   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:39.142012   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:39.188472   69580 cri.go:89] found id: ""
	I0501 03:44:39.188501   69580 logs.go:276] 0 containers: []
	W0501 03:44:39.188512   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:39.188520   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:39.188582   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:39.243282   69580 cri.go:89] found id: ""
	I0501 03:44:39.243306   69580 logs.go:276] 0 containers: []
	W0501 03:44:39.243313   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:39.243318   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:39.243377   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:39.288254   69580 cri.go:89] found id: ""
	I0501 03:44:39.288284   69580 logs.go:276] 0 containers: []
	W0501 03:44:39.288296   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:39.288304   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:39.288379   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:39.330846   69580 cri.go:89] found id: ""
	I0501 03:44:39.330879   69580 logs.go:276] 0 containers: []
	W0501 03:44:39.330892   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:39.330901   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:39.330969   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:39.377603   69580 cri.go:89] found id: ""
	I0501 03:44:39.377632   69580 logs.go:276] 0 containers: []
	W0501 03:44:39.377642   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:39.377649   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:39.377710   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:39.421545   69580 cri.go:89] found id: ""
	I0501 03:44:39.421574   69580 logs.go:276] 0 containers: []
	W0501 03:44:39.421585   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:39.421594   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:39.421653   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:39.463394   69580 cri.go:89] found id: ""
	I0501 03:44:39.463424   69580 logs.go:276] 0 containers: []
	W0501 03:44:39.463435   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:39.463447   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:39.463464   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:39.552196   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:39.552218   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:39.552229   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:39.648509   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:39.648549   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:39.702829   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:39.702866   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:39.757712   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:39.757746   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:42.273443   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:42.289788   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:42.289856   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:42.336802   69580 cri.go:89] found id: ""
	I0501 03:44:42.336833   69580 logs.go:276] 0 containers: []
	W0501 03:44:42.336846   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:42.336854   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:42.336919   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:42.387973   69580 cri.go:89] found id: ""
	I0501 03:44:42.388017   69580 logs.go:276] 0 containers: []
	W0501 03:44:42.388028   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:42.388036   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:42.388103   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:42.444866   69580 cri.go:89] found id: ""
	I0501 03:44:42.444895   69580 logs.go:276] 0 containers: []
	W0501 03:44:42.444906   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:42.444914   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:42.444987   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:42.493647   69580 cri.go:89] found id: ""
	I0501 03:44:42.493676   69580 logs.go:276] 0 containers: []
	W0501 03:44:42.493686   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:42.493692   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:42.493748   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:42.535046   69580 cri.go:89] found id: ""
	I0501 03:44:42.535075   69580 logs.go:276] 0 containers: []
	W0501 03:44:42.535086   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:42.535093   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:42.535161   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:42.579453   69580 cri.go:89] found id: ""
	I0501 03:44:42.579486   69580 logs.go:276] 0 containers: []
	W0501 03:44:42.579499   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:42.579507   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:42.579568   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:42.621903   69580 cri.go:89] found id: ""
	I0501 03:44:42.621931   69580 logs.go:276] 0 containers: []
	W0501 03:44:42.621942   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:42.621950   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:42.622009   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:42.666202   69580 cri.go:89] found id: ""
	I0501 03:44:42.666232   69580 logs.go:276] 0 containers: []
	W0501 03:44:42.666243   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:42.666257   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:42.666272   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:42.736032   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:42.736078   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:42.750773   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:42.750799   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:42.836942   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:42.836975   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:42.836997   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:42.930660   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:42.930695   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:45.479619   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:45.495112   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:45.495174   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:45.536693   69580 cri.go:89] found id: ""
	I0501 03:44:45.536722   69580 logs.go:276] 0 containers: []
	W0501 03:44:45.536730   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:45.536737   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:45.536785   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:45.577838   69580 cri.go:89] found id: ""
	I0501 03:44:45.577866   69580 logs.go:276] 0 containers: []
	W0501 03:44:45.577876   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:45.577894   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:45.577958   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:45.615842   69580 cri.go:89] found id: ""
	I0501 03:44:45.615868   69580 logs.go:276] 0 containers: []
	W0501 03:44:45.615879   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:45.615892   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:45.615953   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:45.654948   69580 cri.go:89] found id: ""
	I0501 03:44:45.654972   69580 logs.go:276] 0 containers: []
	W0501 03:44:45.654980   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:45.654986   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:45.655042   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:45.695104   69580 cri.go:89] found id: ""
	I0501 03:44:45.695129   69580 logs.go:276] 0 containers: []
	W0501 03:44:45.695138   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:45.695145   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:45.695212   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:45.737609   69580 cri.go:89] found id: ""
	I0501 03:44:45.737633   69580 logs.go:276] 0 containers: []
	W0501 03:44:45.737641   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:45.737647   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:45.737693   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:45.778655   69580 cri.go:89] found id: ""
	I0501 03:44:45.778685   69580 logs.go:276] 0 containers: []
	W0501 03:44:45.778696   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:45.778702   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:45.778781   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:45.819430   69580 cri.go:89] found id: ""
	I0501 03:44:45.819452   69580 logs.go:276] 0 containers: []
	W0501 03:44:45.819460   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:45.819469   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:45.819485   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:45.875879   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:45.875911   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:45.892035   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:45.892062   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:45.975803   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:45.975836   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:45.975853   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:46.058183   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:46.058222   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:48.604991   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:48.621226   69580 kubeadm.go:591] duration metric: took 4m4.888665162s to restartPrimaryControlPlane
	W0501 03:44:48.621351   69580 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0501 03:44:48.621407   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0501 03:44:49.654748   69580 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.033320548s)
	I0501 03:44:49.654838   69580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 03:44:49.671511   69580 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0501 03:44:49.684266   69580 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0501 03:44:49.697079   69580 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0501 03:44:49.697101   69580 kubeadm.go:156] found existing configuration files:
	
	I0501 03:44:49.697159   69580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0501 03:44:49.710609   69580 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0501 03:44:49.710692   69580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0501 03:44:49.723647   69580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0501 03:44:49.736855   69580 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0501 03:44:49.737023   69580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0501 03:44:49.748842   69580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0501 03:44:49.760856   69580 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0501 03:44:49.760923   69580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0501 03:44:49.772685   69580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0501 03:44:49.784035   69580 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0501 03:44:49.784114   69580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0501 03:44:49.795699   69580 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0501 03:44:49.869387   69580 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0501 03:44:49.869481   69580 kubeadm.go:309] [preflight] Running pre-flight checks
	I0501 03:44:50.028858   69580 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0501 03:44:50.028999   69580 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0501 03:44:50.029182   69580 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0501 03:44:50.242773   69580 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0501 03:44:50.244816   69580 out.go:204]   - Generating certificates and keys ...
	I0501 03:44:50.244918   69580 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0501 03:44:50.245008   69580 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0501 03:44:50.245111   69580 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0501 03:44:50.245216   69580 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0501 03:44:50.245331   69580 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0501 03:44:50.245424   69580 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0501 03:44:50.245490   69580 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0501 03:44:50.245556   69580 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0501 03:44:50.245629   69580 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0501 03:44:50.245724   69580 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0501 03:44:50.245784   69580 kubeadm.go:309] [certs] Using the existing "sa" key
	I0501 03:44:50.245877   69580 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0501 03:44:50.501955   69580 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0501 03:44:50.683749   69580 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0501 03:44:50.905745   69580 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0501 03:44:51.005912   69580 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0501 03:44:51.025470   69580 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0501 03:44:51.029411   69580 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0501 03:44:51.029859   69580 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0501 03:44:51.181498   69580 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0501 03:44:51.183222   69580 out.go:204]   - Booting up control plane ...
	I0501 03:44:51.183334   69580 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0501 03:44:51.200394   69580 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0501 03:44:51.201612   69580 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0501 03:44:51.202445   69580 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0501 03:44:51.204681   69580 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0501 03:45:31.207553   69580 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0501 03:45:31.208328   69580 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0501 03:45:31.208516   69580 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0501 03:45:36.209029   69580 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0501 03:45:36.209300   69580 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0501 03:45:46.209837   69580 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0501 03:45:46.210120   69580 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0501 03:46:06.211471   69580 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0501 03:46:06.211673   69580 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0501 03:46:46.214470   69580 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0501 03:46:46.214695   69580 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0501 03:46:46.214721   69580 kubeadm.go:309] 
	I0501 03:46:46.214770   69580 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0501 03:46:46.214837   69580 kubeadm.go:309] 		timed out waiting for the condition
	I0501 03:46:46.214875   69580 kubeadm.go:309] 
	I0501 03:46:46.214936   69580 kubeadm.go:309] 	This error is likely caused by:
	I0501 03:46:46.214983   69580 kubeadm.go:309] 		- The kubelet is not running
	I0501 03:46:46.215076   69580 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0501 03:46:46.215084   69580 kubeadm.go:309] 
	I0501 03:46:46.215169   69580 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0501 03:46:46.215201   69580 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0501 03:46:46.215233   69580 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0501 03:46:46.215239   69580 kubeadm.go:309] 
	I0501 03:46:46.215380   69580 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0501 03:46:46.215489   69580 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0501 03:46:46.215505   69580 kubeadm.go:309] 
	I0501 03:46:46.215657   69580 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0501 03:46:46.215782   69580 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0501 03:46:46.215882   69580 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0501 03:46:46.215972   69580 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0501 03:46:46.215984   69580 kubeadm.go:309] 
	I0501 03:46:46.217243   69580 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0501 03:46:46.217352   69580 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0501 03:46:46.217426   69580 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0501 03:46:46.217550   69580 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0501 03:46:46.217611   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0501 03:46:47.375634   69580 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.157990231s)
	I0501 03:46:47.375723   69580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 03:46:47.392333   69580 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0501 03:46:47.404983   69580 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0501 03:46:47.405007   69580 kubeadm.go:156] found existing configuration files:
	
	I0501 03:46:47.405054   69580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0501 03:46:47.417437   69580 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0501 03:46:47.417501   69580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0501 03:46:47.429929   69580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0501 03:46:47.441141   69580 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0501 03:46:47.441215   69580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0501 03:46:47.453012   69580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0501 03:46:47.463702   69580 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0501 03:46:47.463759   69580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0501 03:46:47.474783   69580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0501 03:46:47.485793   69580 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0501 03:46:47.485853   69580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0501 03:46:47.497706   69580 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0501 03:46:47.588221   69580 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0501 03:46:47.588340   69580 kubeadm.go:309] [preflight] Running pre-flight checks
	I0501 03:46:47.759631   69580 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0501 03:46:47.759801   69580 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0501 03:46:47.759949   69580 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0501 03:46:47.978077   69580 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0501 03:46:47.980130   69580 out.go:204]   - Generating certificates and keys ...
	I0501 03:46:47.980240   69580 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0501 03:46:47.980323   69580 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0501 03:46:47.980455   69580 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0501 03:46:47.980579   69580 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0501 03:46:47.980679   69580 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0501 03:46:47.980771   69580 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0501 03:46:47.980864   69580 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0501 03:46:47.981256   69580 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0501 03:46:47.981616   69580 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0501 03:46:47.981858   69580 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0501 03:46:47.981907   69580 kubeadm.go:309] [certs] Using the existing "sa" key
	I0501 03:46:47.981991   69580 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0501 03:46:48.100377   69580 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0501 03:46:48.463892   69580 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0501 03:46:48.521991   69580 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0501 03:46:48.735222   69580 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0501 03:46:48.753098   69580 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0501 03:46:48.756950   69580 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0501 03:46:48.757379   69580 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0501 03:46:48.937039   69580 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0501 03:46:48.939065   69580 out.go:204]   - Booting up control plane ...
	I0501 03:46:48.939183   69580 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0501 03:46:48.961380   69580 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0501 03:46:48.962890   69580 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0501 03:46:48.963978   69580 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0501 03:46:48.971754   69580 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0501 03:47:28.974873   69580 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0501 03:47:28.975296   69580 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0501 03:47:28.975545   69580 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0501 03:47:33.976469   69580 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0501 03:47:33.976699   69580 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0501 03:47:43.977443   69580 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0501 03:47:43.977663   69580 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0501 03:48:03.979113   69580 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0501 03:48:03.979409   69580 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0501 03:48:43.982479   69580 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0501 03:48:43.982781   69580 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0501 03:48:43.983363   69580 kubeadm.go:309] 
	I0501 03:48:43.983427   69580 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0501 03:48:43.983484   69580 kubeadm.go:309] 		timed out waiting for the condition
	I0501 03:48:43.983490   69580 kubeadm.go:309] 
	I0501 03:48:43.983520   69580 kubeadm.go:309] 	This error is likely caused by:
	I0501 03:48:43.983547   69580 kubeadm.go:309] 		- The kubelet is not running
	I0501 03:48:43.983633   69580 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0501 03:48:43.983637   69580 kubeadm.go:309] 
	I0501 03:48:43.983721   69580 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0501 03:48:43.983748   69580 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0501 03:48:43.983774   69580 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0501 03:48:43.983778   69580 kubeadm.go:309] 
	I0501 03:48:43.983861   69580 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0501 03:48:43.983928   69580 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0501 03:48:43.983932   69580 kubeadm.go:309] 
	I0501 03:48:43.984023   69580 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0501 03:48:43.984094   69580 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0501 03:48:43.984155   69580 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0501 03:48:43.984212   69580 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0501 03:48:43.984216   69580 kubeadm.go:309] 
	I0501 03:48:43.985577   69580 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0501 03:48:43.985777   69580 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0501 03:48:43.985875   69580 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0501 03:48:43.985971   69580 kubeadm.go:393] duration metric: took 8m0.315126498s to StartCluster
	I0501 03:48:43.986025   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:48:43.986092   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:48:44.038296   69580 cri.go:89] found id: ""
	I0501 03:48:44.038328   69580 logs.go:276] 0 containers: []
	W0501 03:48:44.038339   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:48:44.038346   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:48:44.038426   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:48:44.081855   69580 cri.go:89] found id: ""
	I0501 03:48:44.081891   69580 logs.go:276] 0 containers: []
	W0501 03:48:44.081904   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:48:44.081913   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:48:44.081996   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:48:44.131400   69580 cri.go:89] found id: ""
	I0501 03:48:44.131435   69580 logs.go:276] 0 containers: []
	W0501 03:48:44.131445   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:48:44.131451   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:48:44.131519   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:48:44.178274   69580 cri.go:89] found id: ""
	I0501 03:48:44.178302   69580 logs.go:276] 0 containers: []
	W0501 03:48:44.178310   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:48:44.178316   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:48:44.178376   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:48:44.223087   69580 cri.go:89] found id: ""
	I0501 03:48:44.223115   69580 logs.go:276] 0 containers: []
	W0501 03:48:44.223125   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:48:44.223133   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:48:44.223196   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:48:44.266093   69580 cri.go:89] found id: ""
	I0501 03:48:44.266122   69580 logs.go:276] 0 containers: []
	W0501 03:48:44.266135   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:48:44.266143   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:48:44.266204   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:48:44.307766   69580 cri.go:89] found id: ""
	I0501 03:48:44.307795   69580 logs.go:276] 0 containers: []
	W0501 03:48:44.307806   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:48:44.307813   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:48:44.307876   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:48:44.348548   69580 cri.go:89] found id: ""
	I0501 03:48:44.348576   69580 logs.go:276] 0 containers: []
	W0501 03:48:44.348585   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:48:44.348594   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:48:44.348614   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:48:44.394160   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:48:44.394209   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:48:44.449845   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:48:44.449879   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:48:44.467663   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:48:44.467694   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:48:44.556150   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:48:44.556183   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:48:44.556199   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0501 03:48:44.661110   69580 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0501 03:48:44.661169   69580 out.go:239] * 
	W0501 03:48:44.661226   69580 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0501 03:48:44.661246   69580 out.go:239] * 
	W0501 03:48:44.662064   69580 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0501 03:48:44.665608   69580 out.go:177] 
	W0501 03:48:44.666799   69580 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0501 03:48:44.666851   69580 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0501 03:48:44.666870   69580 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0501 03:48:44.668487   69580 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-503971 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
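The exit code 109 (K8S_KUBELET_NOT_RUNNING) corresponds to the kubelet never becoming healthy during kubeadm init. A minimal troubleshooting sketch for this profile, assuming the VM is still running; the commands below simply mirror the guidance already printed in the log above and are illustrative, not part of the test itself:

	# Inspect the kubelet and control-plane containers on the node
	# (same commands the kubeadm output recommends, run through minikube ssh)
	out/minikube-linux-amd64 -p old-k8s-version-503971 ssh "sudo systemctl status kubelet"
	out/minikube-linux-amd64 -p old-k8s-version-503971 ssh "sudo journalctl -xeu kubelet | tail -n 100"
	out/minikube-linux-amd64 -p old-k8s-version-503971 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"

	# Retry the start with the cgroup driver pinned to systemd, per the suggestion in the log;
	# the other flags repeat the failing test invocation above
	out/minikube-linux-amd64 start -p old-k8s-version-503971 --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd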
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-503971 -n old-k8s-version-503971
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-503971 -n old-k8s-version-503971: exit status 2 (252.298513ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-503971 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-503971 logs -n 25: (1.698147466s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p cert-options-582976                                 | cert-options-582976          | jenkins | v1.33.0 | 01 May 24 03:30 UTC | 01 May 24 03:30 UTC |
	| delete  | -p pause-542495                                        | pause-542495                 | jenkins | v1.33.0 | 01 May 24 03:30 UTC | 01 May 24 03:30 UTC |
	| start   | -p no-preload-892672                                   | no-preload-892672            | jenkins | v1.33.0 | 01 May 24 03:30 UTC | 01 May 24 03:32 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-277128                                  | embed-certs-277128           | jenkins | v1.33.0 | 01 May 24 03:30 UTC | 01 May 24 03:32 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-046243                           | kubernetes-upgrade-046243    | jenkins | v1.33.0 | 01 May 24 03:30 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-046243                           | kubernetes-upgrade-046243    | jenkins | v1.33.0 | 01 May 24 03:30 UTC | 01 May 24 03:32 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-046243                           | kubernetes-upgrade-046243    | jenkins | v1.33.0 | 01 May 24 03:32 UTC | 01 May 24 03:32 UTC |
	| delete  | -p                                                     | disable-driver-mounts-483221 | jenkins | v1.33.0 | 01 May 24 03:32 UTC | 01 May 24 03:32 UTC |
	|         | disable-driver-mounts-483221                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-715118 | jenkins | v1.33.0 | 01 May 24 03:32 UTC | 01 May 24 03:33 UTC |
	|         | default-k8s-diff-port-715118                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-892672             | no-preload-892672            | jenkins | v1.33.0 | 01 May 24 03:32 UTC | 01 May 24 03:32 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-892672                                   | no-preload-892672            | jenkins | v1.33.0 | 01 May 24 03:32 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-277128            | embed-certs-277128           | jenkins | v1.33.0 | 01 May 24 03:32 UTC | 01 May 24 03:32 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-277128                                  | embed-certs-277128           | jenkins | v1.33.0 | 01 May 24 03:32 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-715118  | default-k8s-diff-port-715118 | jenkins | v1.33.0 | 01 May 24 03:33 UTC | 01 May 24 03:33 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-715118 | jenkins | v1.33.0 | 01 May 24 03:33 UTC |                     |
	|         | default-k8s-diff-port-715118                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-892672                  | no-preload-892672            | jenkins | v1.33.0 | 01 May 24 03:34 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-277128                 | embed-certs-277128           | jenkins | v1.33.0 | 01 May 24 03:34 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-892672                                   | no-preload-892672            | jenkins | v1.33.0 | 01 May 24 03:34 UTC | 01 May 24 03:46 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-503971        | old-k8s-version-503971       | jenkins | v1.33.0 | 01 May 24 03:34 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-277128                                  | embed-certs-277128           | jenkins | v1.33.0 | 01 May 24 03:34 UTC | 01 May 24 03:44 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-715118       | default-k8s-diff-port-715118 | jenkins | v1.33.0 | 01 May 24 03:35 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-715118 | jenkins | v1.33.0 | 01 May 24 03:35 UTC | 01 May 24 03:45 UTC |
	|         | default-k8s-diff-port-715118                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-503971                              | old-k8s-version-503971       | jenkins | v1.33.0 | 01 May 24 03:36 UTC | 01 May 24 03:36 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-503971             | old-k8s-version-503971       | jenkins | v1.33.0 | 01 May 24 03:36 UTC | 01 May 24 03:36 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-503971                              | old-k8s-version-503971       | jenkins | v1.33.0 | 01 May 24 03:36 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
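	For reference, the failed last-start invocation summarized in the final table rows corresponds to a command of roughly the following shape. The profile name and flags are taken verbatim from the table; the single-line layout and the out/minikube-linux-amd64 binary path are assumptions about how the harness issued it, not something shown in the table itself:
	
	  out/minikube-linux-amd64 start -p old-k8s-version-503971 \
	    --memory=2200 --alsologtostderr --wait=true \
	    --kvm-network=default --kvm-qemu-uri=qemu:///system \
	    --disable-driver-mounts --keep-context=false \
	    --driver=kvm2 --container-runtime=crio \
	    --kubernetes-version=v1.20.0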
	
	
	==> Last Start <==
	Log file created at: 2024/05/01 03:36:41
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0501 03:36:41.470152   69580 out.go:291] Setting OutFile to fd 1 ...
	I0501 03:36:41.470256   69580 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 03:36:41.470264   69580 out.go:304] Setting ErrFile to fd 2...
	I0501 03:36:41.470268   69580 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 03:36:41.470484   69580 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18779-13391/.minikube/bin
	I0501 03:36:41.470989   69580 out.go:298] Setting JSON to false
	I0501 03:36:41.471856   69580 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8345,"bootTime":1714526257,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0501 03:36:41.471911   69580 start.go:139] virtualization: kvm guest
	I0501 03:36:41.473901   69580 out.go:177] * [old-k8s-version-503971] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0501 03:36:41.474994   69580 out.go:177]   - MINIKUBE_LOCATION=18779
	I0501 03:36:41.475003   69580 notify.go:220] Checking for updates...
	I0501 03:36:41.477150   69580 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0501 03:36:41.478394   69580 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18779-13391/kubeconfig
	I0501 03:36:41.479462   69580 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18779-13391/.minikube
	I0501 03:36:41.480507   69580 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0501 03:36:41.481543   69580 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0501 03:36:41.482907   69580 config.go:182] Loaded profile config "old-k8s-version-503971": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0501 03:36:41.483279   69580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:36:41.483311   69580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:36:41.497758   69580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40255
	I0501 03:36:41.498090   69580 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:36:41.498591   69580 main.go:141] libmachine: Using API Version  1
	I0501 03:36:41.498616   69580 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:36:41.498891   69580 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:36:41.499055   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .DriverName
	I0501 03:36:41.500675   69580 out.go:177] * Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	I0501 03:36:41.501716   69580 driver.go:392] Setting default libvirt URI to qemu:///system
	I0501 03:36:41.501974   69580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:36:41.502024   69580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:36:41.515991   69580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40591
	I0501 03:36:41.516392   69580 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:36:41.516826   69580 main.go:141] libmachine: Using API Version  1
	I0501 03:36:41.516846   69580 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:36:41.517120   69580 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:36:41.517281   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .DriverName
	I0501 03:36:41.551130   69580 out.go:177] * Using the kvm2 driver based on existing profile
	I0501 03:36:41.552244   69580 start.go:297] selected driver: kvm2
	I0501 03:36:41.552253   69580 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-503971 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-503971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.104 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 03:36:41.552369   69580 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0501 03:36:41.553004   69580 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0501 03:36:41.553071   69580 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18779-13391/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0501 03:36:41.567362   69580 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0501 03:36:41.567736   69580 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0501 03:36:41.567815   69580 cni.go:84] Creating CNI manager for ""
	I0501 03:36:41.567832   69580 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0501 03:36:41.567881   69580 start.go:340] cluster config:
	{Name:old-k8s-version-503971 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-503971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.104 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 03:36:41.568012   69580 iso.go:125] acquiring lock: {Name:mkcd2496aadb29931e179193d707c731bfda98b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0501 03:36:41.569791   69580 out.go:177] * Starting "old-k8s-version-503971" primary control-plane node in "old-k8s-version-503971" cluster
	I0501 03:36:38.886755   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:36:41.571352   69580 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0501 03:36:41.571389   69580 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0501 03:36:41.571408   69580 cache.go:56] Caching tarball of preloaded images
	I0501 03:36:41.571478   69580 preload.go:173] Found /home/jenkins/minikube-integration/18779-13391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0501 03:36:41.571490   69580 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0501 03:36:41.571588   69580 profile.go:143] Saving config to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/config.json ...
	I0501 03:36:41.571775   69580 start.go:360] acquireMachinesLock for old-k8s-version-503971: {Name:mkc4548e5afe1d4b0a833f6c522103562b0cefff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0501 03:36:44.966689   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:36:48.038769   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:36:54.118675   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:36:57.190716   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:37:03.270653   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:37:06.342693   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:37:12.422726   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:37:15.494702   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:37:21.574646   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:37:24.646711   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:37:30.726724   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:37:33.798628   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:37:39.878657   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:37:42.950647   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:37:49.030699   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:37:52.102665   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:37:58.182647   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:38:01.254620   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:38:07.334707   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:38:10.406670   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:38:16.486684   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:38:19.558714   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:38:25.638642   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:38:28.710687   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:38:34.790659   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:38:37.862651   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:38:43.942639   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:38:47.014729   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:38:53.094674   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:38:56.166684   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:39:02.246662   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:39:05.318633   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:39:11.398705   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:39:14.470640   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:39:20.550642   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:39:23.622701   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
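	The run of "no route to host" errors above is the no-preload-892672 process (pid 68640) repeatedly failing to reach its VM's SSH port at 192.168.39.144:22 for nearly three minutes before the provisioning attempt is abandoned and retried further down. A minimal diagnostic sketch for this situation, assuming virsh is available on the Jenkins host and that the profile's libvirt network follows the usual mk-<profile> naming (both assumptions, not shown in this log):
	
	  # Is the domain actually running, and did it obtain the expected DHCP lease?
	  virsh domstate no-preload-892672
	  virsh net-dhcp-leases mk-no-preload-892672
	  # Probe the guest's SSH port directly from the host
	  nc -vz -w 5 192.168.39.144 22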
	I0501 03:39:32.707273   68864 start.go:364] duration metric: took 4m38.787656406s to acquireMachinesLock for "embed-certs-277128"
	I0501 03:39:32.707327   68864 start.go:96] Skipping create...Using existing machine configuration
	I0501 03:39:32.707336   68864 fix.go:54] fixHost starting: 
	I0501 03:39:32.707655   68864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:39:32.707697   68864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:39:32.722689   68864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35015
	I0501 03:39:32.723061   68864 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:39:32.723536   68864 main.go:141] libmachine: Using API Version  1
	I0501 03:39:32.723557   68864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:39:32.723848   68864 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:39:32.724041   68864 main.go:141] libmachine: (embed-certs-277128) Calling .DriverName
	I0501 03:39:32.724164   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetState
	I0501 03:39:32.725542   68864 fix.go:112] recreateIfNeeded on embed-certs-277128: state=Stopped err=<nil>
	I0501 03:39:32.725569   68864 main.go:141] libmachine: (embed-certs-277128) Calling .DriverName
	W0501 03:39:32.725714   68864 fix.go:138] unexpected machine state, will restart: <nil>
	I0501 03:39:32.727403   68864 out.go:177] * Restarting existing kvm2 VM for "embed-certs-277128" ...
	I0501 03:39:29.702654   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:39:32.704906   68640 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0501 03:39:32.704940   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetMachineName
	I0501 03:39:32.705254   68640 buildroot.go:166] provisioning hostname "no-preload-892672"
	I0501 03:39:32.705278   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetMachineName
	I0501 03:39:32.705499   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:39:32.707128   68640 machine.go:97] duration metric: took 4m44.649178925s to provisionDockerMachine
	I0501 03:39:32.707171   68640 fix.go:56] duration metric: took 4m44.67002247s for fixHost
	I0501 03:39:32.707178   68640 start.go:83] releasing machines lock for "no-preload-892672", held for 4m44.670048235s
	W0501 03:39:32.707201   68640 start.go:713] error starting host: provision: host is not running
	W0501 03:39:32.707293   68640 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0501 03:39:32.707305   68640 start.go:728] Will try again in 5 seconds ...
	I0501 03:39:32.728616   68864 main.go:141] libmachine: (embed-certs-277128) Calling .Start
	I0501 03:39:32.728768   68864 main.go:141] libmachine: (embed-certs-277128) Ensuring networks are active...
	I0501 03:39:32.729434   68864 main.go:141] libmachine: (embed-certs-277128) Ensuring network default is active
	I0501 03:39:32.729789   68864 main.go:141] libmachine: (embed-certs-277128) Ensuring network mk-embed-certs-277128 is active
	I0501 03:39:32.730218   68864 main.go:141] libmachine: (embed-certs-277128) Getting domain xml...
	I0501 03:39:32.730972   68864 main.go:141] libmachine: (embed-certs-277128) Creating domain...
	I0501 03:39:37.711605   68640 start.go:360] acquireMachinesLock for no-preload-892672: {Name:mkc4548e5afe1d4b0a833f6c522103562b0cefff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0501 03:39:33.914124   68864 main.go:141] libmachine: (embed-certs-277128) Waiting to get IP...
	I0501 03:39:33.915022   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:33.915411   68864 main.go:141] libmachine: (embed-certs-277128) DBG | unable to find current IP address of domain embed-certs-277128 in network mk-embed-certs-277128
	I0501 03:39:33.915473   68864 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:39:33.915391   70171 retry.go:31] will retry after 278.418743ms: waiting for machine to come up
	I0501 03:39:34.195933   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:34.196470   68864 main.go:141] libmachine: (embed-certs-277128) DBG | unable to find current IP address of domain embed-certs-277128 in network mk-embed-certs-277128
	I0501 03:39:34.196497   68864 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:39:34.196417   70171 retry.go:31] will retry after 375.593174ms: waiting for machine to come up
	I0501 03:39:34.574178   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:34.574666   68864 main.go:141] libmachine: (embed-certs-277128) DBG | unable to find current IP address of domain embed-certs-277128 in network mk-embed-certs-277128
	I0501 03:39:34.574689   68864 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:39:34.574617   70171 retry.go:31] will retry after 377.853045ms: waiting for machine to come up
	I0501 03:39:34.954022   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:34.954436   68864 main.go:141] libmachine: (embed-certs-277128) DBG | unable to find current IP address of domain embed-certs-277128 in network mk-embed-certs-277128
	I0501 03:39:34.954465   68864 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:39:34.954375   70171 retry.go:31] will retry after 374.024178ms: waiting for machine to come up
	I0501 03:39:35.330087   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:35.330514   68864 main.go:141] libmachine: (embed-certs-277128) DBG | unable to find current IP address of domain embed-certs-277128 in network mk-embed-certs-277128
	I0501 03:39:35.330545   68864 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:39:35.330478   70171 retry.go:31] will retry after 488.296666ms: waiting for machine to come up
	I0501 03:39:35.820177   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:35.820664   68864 main.go:141] libmachine: (embed-certs-277128) DBG | unable to find current IP address of domain embed-certs-277128 in network mk-embed-certs-277128
	I0501 03:39:35.820692   68864 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:39:35.820629   70171 retry.go:31] will retry after 665.825717ms: waiting for machine to come up
	I0501 03:39:36.488492   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:36.488910   68864 main.go:141] libmachine: (embed-certs-277128) DBG | unable to find current IP address of domain embed-certs-277128 in network mk-embed-certs-277128
	I0501 03:39:36.488941   68864 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:39:36.488860   70171 retry.go:31] will retry after 1.04269192s: waiting for machine to come up
	I0501 03:39:37.532622   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:37.533006   68864 main.go:141] libmachine: (embed-certs-277128) DBG | unable to find current IP address of domain embed-certs-277128 in network mk-embed-certs-277128
	I0501 03:39:37.533032   68864 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:39:37.532968   70171 retry.go:31] will retry after 1.348239565s: waiting for machine to come up
	I0501 03:39:38.882895   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:38.883364   68864 main.go:141] libmachine: (embed-certs-277128) DBG | unable to find current IP address of domain embed-certs-277128 in network mk-embed-certs-277128
	I0501 03:39:38.883396   68864 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:39:38.883301   70171 retry.go:31] will retry after 1.718495999s: waiting for machine to come up
	I0501 03:39:40.604329   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:40.604760   68864 main.go:141] libmachine: (embed-certs-277128) DBG | unable to find current IP address of domain embed-certs-277128 in network mk-embed-certs-277128
	I0501 03:39:40.604791   68864 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:39:40.604703   70171 retry.go:31] will retry after 2.237478005s: waiting for machine to come up
	I0501 03:39:42.843398   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:42.843920   68864 main.go:141] libmachine: (embed-certs-277128) DBG | unable to find current IP address of domain embed-certs-277128 in network mk-embed-certs-277128
	I0501 03:39:42.843949   68864 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:39:42.843869   70171 retry.go:31] will retry after 2.618059388s: waiting for machine to come up
	I0501 03:39:45.465576   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:45.465968   68864 main.go:141] libmachine: (embed-certs-277128) DBG | unable to find current IP address of domain embed-certs-277128 in network mk-embed-certs-277128
	I0501 03:39:45.465992   68864 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:39:45.465928   70171 retry.go:31] will retry after 2.895120972s: waiting for machine to come up
	I0501 03:39:48.362239   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:48.362651   68864 main.go:141] libmachine: (embed-certs-277128) DBG | unable to find current IP address of domain embed-certs-277128 in network mk-embed-certs-277128
	I0501 03:39:48.362683   68864 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:39:48.362617   70171 retry.go:31] will retry after 2.857441112s: waiting for machine to come up
	I0501 03:39:52.791989   69237 start.go:364] duration metric: took 4m2.036138912s to acquireMachinesLock for "default-k8s-diff-port-715118"
	I0501 03:39:52.792059   69237 start.go:96] Skipping create...Using existing machine configuration
	I0501 03:39:52.792071   69237 fix.go:54] fixHost starting: 
	I0501 03:39:52.792454   69237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:39:52.792492   69237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:39:52.809707   69237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46419
	I0501 03:39:52.810075   69237 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:39:52.810544   69237 main.go:141] libmachine: Using API Version  1
	I0501 03:39:52.810564   69237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:39:52.810881   69237 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:39:52.811067   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .DriverName
	I0501 03:39:52.811217   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetState
	I0501 03:39:52.812787   69237 fix.go:112] recreateIfNeeded on default-k8s-diff-port-715118: state=Stopped err=<nil>
	I0501 03:39:52.812820   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .DriverName
	W0501 03:39:52.812969   69237 fix.go:138] unexpected machine state, will restart: <nil>
	I0501 03:39:52.815136   69237 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-715118" ...
	I0501 03:39:51.223450   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.223938   68864 main.go:141] libmachine: (embed-certs-277128) Found IP for machine: 192.168.50.218
	I0501 03:39:51.223965   68864 main.go:141] libmachine: (embed-certs-277128) Reserving static IP address...
	I0501 03:39:51.223982   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has current primary IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.224375   68864 main.go:141] libmachine: (embed-certs-277128) Reserved static IP address: 192.168.50.218
	I0501 03:39:51.224440   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "embed-certs-277128", mac: "52:54:00:96:11:7d", ip: "192.168.50.218"} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:51.224454   68864 main.go:141] libmachine: (embed-certs-277128) Waiting for SSH to be available...
	I0501 03:39:51.224491   68864 main.go:141] libmachine: (embed-certs-277128) DBG | skip adding static IP to network mk-embed-certs-277128 - found existing host DHCP lease matching {name: "embed-certs-277128", mac: "52:54:00:96:11:7d", ip: "192.168.50.218"}
	I0501 03:39:51.224507   68864 main.go:141] libmachine: (embed-certs-277128) DBG | Getting to WaitForSSH function...
	I0501 03:39:51.226437   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.226733   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:51.226764   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.226863   68864 main.go:141] libmachine: (embed-certs-277128) DBG | Using SSH client type: external
	I0501 03:39:51.226886   68864 main.go:141] libmachine: (embed-certs-277128) DBG | Using SSH private key: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/embed-certs-277128/id_rsa (-rw-------)
	I0501 03:39:51.226917   68864 main.go:141] libmachine: (embed-certs-277128) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.218 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18779-13391/.minikube/machines/embed-certs-277128/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0501 03:39:51.226930   68864 main.go:141] libmachine: (embed-certs-277128) DBG | About to run SSH command:
	I0501 03:39:51.226941   68864 main.go:141] libmachine: (embed-certs-277128) DBG | exit 0
	I0501 03:39:51.354225   68864 main.go:141] libmachine: (embed-certs-277128) DBG | SSH cmd err, output: <nil>: 
	I0501 03:39:51.354641   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetConfigRaw
	I0501 03:39:51.355337   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetIP
	I0501 03:39:51.357934   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.358265   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:51.358302   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.358584   68864 profile.go:143] Saving config to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/embed-certs-277128/config.json ...
	I0501 03:39:51.358753   68864 machine.go:94] provisionDockerMachine start ...
	I0501 03:39:51.358771   68864 main.go:141] libmachine: (embed-certs-277128) Calling .DriverName
	I0501 03:39:51.358940   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHHostname
	I0501 03:39:51.361202   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.361564   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:51.361600   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.361711   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHPort
	I0501 03:39:51.361884   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:39:51.362054   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:39:51.362170   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHUsername
	I0501 03:39:51.362344   68864 main.go:141] libmachine: Using SSH client type: native
	I0501 03:39:51.362572   68864 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.218 22 <nil> <nil>}
	I0501 03:39:51.362586   68864 main.go:141] libmachine: About to run SSH command:
	hostname
	I0501 03:39:51.467448   68864 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0501 03:39:51.467480   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetMachineName
	I0501 03:39:51.467740   68864 buildroot.go:166] provisioning hostname "embed-certs-277128"
	I0501 03:39:51.467772   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetMachineName
	I0501 03:39:51.467953   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHHostname
	I0501 03:39:51.470653   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.471022   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:51.471044   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.471159   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHPort
	I0501 03:39:51.471341   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:39:51.471482   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:39:51.471590   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHUsername
	I0501 03:39:51.471729   68864 main.go:141] libmachine: Using SSH client type: native
	I0501 03:39:51.471913   68864 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.218 22 <nil> <nil>}
	I0501 03:39:51.471934   68864 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-277128 && echo "embed-certs-277128" | sudo tee /etc/hostname
	I0501 03:39:51.594372   68864 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-277128
	
	I0501 03:39:51.594422   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHHostname
	I0501 03:39:51.596978   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.597307   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:51.597334   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.597495   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHPort
	I0501 03:39:51.597710   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:39:51.597865   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:39:51.597971   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHUsername
	I0501 03:39:51.598097   68864 main.go:141] libmachine: Using SSH client type: native
	I0501 03:39:51.598250   68864 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.218 22 <nil> <nil>}
	I0501 03:39:51.598271   68864 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-277128' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-277128/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-277128' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0501 03:39:51.712791   68864 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0501 03:39:51.712825   68864 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18779-13391/.minikube CaCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18779-13391/.minikube}
	I0501 03:39:51.712850   68864 buildroot.go:174] setting up certificates
	I0501 03:39:51.712860   68864 provision.go:84] configureAuth start
	I0501 03:39:51.712869   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetMachineName
	I0501 03:39:51.713158   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetIP
	I0501 03:39:51.715577   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.715885   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:51.715918   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.716040   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHHostname
	I0501 03:39:51.718057   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.718342   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:51.718367   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.718550   68864 provision.go:143] copyHostCerts
	I0501 03:39:51.718612   68864 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem, removing ...
	I0501 03:39:51.718622   68864 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem
	I0501 03:39:51.718685   68864 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem (1078 bytes)
	I0501 03:39:51.718790   68864 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem, removing ...
	I0501 03:39:51.718798   68864 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem
	I0501 03:39:51.718823   68864 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem (1123 bytes)
	I0501 03:39:51.718881   68864 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem, removing ...
	I0501 03:39:51.718888   68864 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem
	I0501 03:39:51.718907   68864 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem (1675 bytes)
	I0501 03:39:51.718957   68864 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem org=jenkins.embed-certs-277128 san=[127.0.0.1 192.168.50.218 embed-certs-277128 localhost minikube]
	I0501 03:39:52.100402   68864 provision.go:177] copyRemoteCerts
	I0501 03:39:52.100459   68864 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0501 03:39:52.100494   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHHostname
	I0501 03:39:52.103133   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:52.103363   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:52.103391   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:52.103522   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHPort
	I0501 03:39:52.103694   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:39:52.103790   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHUsername
	I0501 03:39:52.103874   68864 sshutil.go:53] new ssh client: &{IP:192.168.50.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/embed-certs-277128/id_rsa Username:docker}
	I0501 03:39:52.186017   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0501 03:39:52.211959   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0501 03:39:52.237362   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0501 03:39:52.264036   68864 provision.go:87] duration metric: took 551.163591ms to configureAuth
	I0501 03:39:52.264060   68864 buildroot.go:189] setting minikube options for container-runtime
	I0501 03:39:52.264220   68864 config.go:182] Loaded profile config "embed-certs-277128": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 03:39:52.264290   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHHostname
	I0501 03:39:52.266809   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:52.267117   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:52.267140   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:52.267336   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHPort
	I0501 03:39:52.267529   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:39:52.267713   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:39:52.267863   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHUsername
	I0501 03:39:52.268096   68864 main.go:141] libmachine: Using SSH client type: native
	I0501 03:39:52.268273   68864 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.218 22 <nil> <nil>}
	I0501 03:39:52.268290   68864 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0501 03:39:52.543539   68864 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0501 03:39:52.543569   68864 machine.go:97] duration metric: took 1.184800934s to provisionDockerMachine
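	The %!s(MISSING) token in the crio sysconfig command above is an artifact of the log line being passed through a Go format function, not part of what was executed; judging from the file contents echoed back in the SSH output, the command actually sent was most likely:
	
	  sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio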
	I0501 03:39:52.543585   68864 start.go:293] postStartSetup for "embed-certs-277128" (driver="kvm2")
	I0501 03:39:52.543600   68864 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0501 03:39:52.543621   68864 main.go:141] libmachine: (embed-certs-277128) Calling .DriverName
	I0501 03:39:52.543974   68864 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0501 03:39:52.544007   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHHostname
	I0501 03:39:52.546566   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:52.546918   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:52.546955   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:52.547108   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHPort
	I0501 03:39:52.547310   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:39:52.547480   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHUsername
	I0501 03:39:52.547622   68864 sshutil.go:53] new ssh client: &{IP:192.168.50.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/embed-certs-277128/id_rsa Username:docker}
	I0501 03:39:52.636313   68864 ssh_runner.go:195] Run: cat /etc/os-release
	I0501 03:39:52.641408   68864 info.go:137] Remote host: Buildroot 2023.02.9
	I0501 03:39:52.641435   68864 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13391/.minikube/addons for local assets ...
	I0501 03:39:52.641514   68864 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13391/.minikube/files for local assets ...
	I0501 03:39:52.641598   68864 filesync.go:149] local asset: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem -> 207242.pem in /etc/ssl/certs
	I0501 03:39:52.641708   68864 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0501 03:39:52.653421   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem --> /etc/ssl/certs/207242.pem (1708 bytes)
	I0501 03:39:52.681796   68864 start.go:296] duration metric: took 138.197388ms for postStartSetup
	I0501 03:39:52.681840   68864 fix.go:56] duration metric: took 19.974504059s for fixHost
	I0501 03:39:52.681866   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHHostname
	I0501 03:39:52.684189   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:52.684447   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:52.684475   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:52.684691   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHPort
	I0501 03:39:52.684901   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:39:52.685077   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:39:52.685226   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHUsername
	I0501 03:39:52.685393   68864 main.go:141] libmachine: Using SSH client type: native
	I0501 03:39:52.685556   68864 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.218 22 <nil> <nil>}
	I0501 03:39:52.685568   68864 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0501 03:39:52.791802   68864 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714534792.758254619
	
	I0501 03:39:52.791830   68864 fix.go:216] guest clock: 1714534792.758254619
	I0501 03:39:52.791841   68864 fix.go:229] Guest: 2024-05-01 03:39:52.758254619 +0000 UTC Remote: 2024-05-01 03:39:52.681844878 +0000 UTC m=+298.906990848 (delta=76.409741ms)
	I0501 03:39:52.791886   68864 fix.go:200] guest clock delta is within tolerance: 76.409741ms
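The fix.go lines above compare the guest's "date +%s.%N" output against the host clock and only proceed when the drift is small. A minimal Go sketch of that tolerance check; the 2-second tolerance and the hard-coded timestamp are illustrative assumptions, not values taken from minikube's source:

package main

import (
    "fmt"
    "time"
)

// clockDeltaWithinTolerance reports whether the guest clock differs from the
// host clock by no more than tol. This mirrors the kind of check logged by
// fix.go above; the tolerance used here is an assumption for illustration.
func clockDeltaWithinTolerance(guest, host time.Time, tol time.Duration) (time.Duration, bool) {
    delta := guest.Sub(host)
    if delta < 0 {
        delta = -delta
    }
    return delta, delta <= tol
}

func main() {
    guest := time.Unix(1714534792, 758254619) // parsed from `date +%s.%N` on the guest
    host := time.Now()
    delta, ok := clockDeltaWithinTolerance(guest, host, 2*time.Second)
    fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, ok)
}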
	I0501 03:39:52.791892   68864 start.go:83] releasing machines lock for "embed-certs-277128", held for 20.08458366s
	I0501 03:39:52.791918   68864 main.go:141] libmachine: (embed-certs-277128) Calling .DriverName
	I0501 03:39:52.792188   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetIP
	I0501 03:39:52.794820   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:52.795217   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:52.795256   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:52.795427   68864 main.go:141] libmachine: (embed-certs-277128) Calling .DriverName
	I0501 03:39:52.795971   68864 main.go:141] libmachine: (embed-certs-277128) Calling .DriverName
	I0501 03:39:52.796142   68864 main.go:141] libmachine: (embed-certs-277128) Calling .DriverName
	I0501 03:39:52.796235   68864 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0501 03:39:52.796285   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHHostname
	I0501 03:39:52.796324   68864 ssh_runner.go:195] Run: cat /version.json
	I0501 03:39:52.796346   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHHostname
	I0501 03:39:52.799128   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:52.799153   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:52.799536   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:52.799570   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:52.799617   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:52.799647   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:52.799779   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHPort
	I0501 03:39:52.799878   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHPort
	I0501 03:39:52.799961   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:39:52.800048   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:39:52.800117   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHUsername
	I0501 03:39:52.800189   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHUsername
	I0501 03:39:52.800243   68864 sshutil.go:53] new ssh client: &{IP:192.168.50.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/embed-certs-277128/id_rsa Username:docker}
	I0501 03:39:52.800299   68864 sshutil.go:53] new ssh client: &{IP:192.168.50.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/embed-certs-277128/id_rsa Username:docker}
	I0501 03:39:52.901147   68864 ssh_runner.go:195] Run: systemctl --version
	I0501 03:39:52.908399   68864 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0501 03:39:53.065012   68864 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0501 03:39:53.073635   68864 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0501 03:39:53.073724   68864 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0501 03:39:53.096146   68864 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0501 03:39:53.096179   68864 start.go:494] detecting cgroup driver to use...
	I0501 03:39:53.096253   68864 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0501 03:39:53.118525   68864 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 03:39:53.136238   68864 docker.go:217] disabling cri-docker service (if available) ...
	I0501 03:39:53.136301   68864 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0501 03:39:53.152535   68864 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0501 03:39:53.171415   68864 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0501 03:39:53.297831   68864 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0501 03:39:53.479469   68864 docker.go:233] disabling docker service ...
	I0501 03:39:53.479552   68864 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0501 03:39:53.497271   68864 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0501 03:39:53.512645   68864 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0501 03:39:53.658448   68864 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0501 03:39:53.787528   68864 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0501 03:39:53.804078   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 03:39:53.836146   68864 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0501 03:39:53.836206   68864 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:39:53.853846   68864 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0501 03:39:53.853915   68864 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:39:53.866319   68864 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:39:53.878410   68864 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:39:53.890304   68864 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0501 03:39:53.903821   68864 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:39:53.916750   68864 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:39:53.938933   68864 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:39:53.952103   68864 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0501 03:39:53.964833   68864 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0501 03:39:53.964893   68864 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0501 03:39:53.983039   68864 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0501 03:39:53.995830   68864 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:39:54.156748   68864 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0501 03:39:54.306973   68864 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0501 03:39:54.307051   68864 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0501 03:39:54.313515   68864 start.go:562] Will wait 60s for crictl version
	I0501 03:39:54.313569   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:39:54.317943   68864 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0501 03:39:54.356360   68864 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0501 03:39:54.356437   68864 ssh_runner.go:195] Run: crio --version
	I0501 03:39:54.391717   68864 ssh_runner.go:195] Run: crio --version
	I0501 03:39:54.428403   68864 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
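The sed commands above (pause_image, cgroup_manager, conmon_cgroup, default_sysctls) prepare cri-o by rewriting /etc/crio/crio.conf.d/02-crio.conf in place. As a rough illustration, the cgroup-manager rewrite could equally be done from Go; this is only a sketch under the assumption that the key already exists in the drop-in, not minikube's actual implementation:

package main

import (
    "fmt"
    "os"
    "regexp"
)

// setCgroupManager rewrites the cgroup_manager key in a cri-o drop-in file,
// mirroring the sed one-liner logged above. It assumes the key is present.
func setCgroupManager(path, manager string) error {
    raw, err := os.ReadFile(path)
    if err != nil {
        return err
    }
    re := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
    out := re.ReplaceAll(raw, []byte(fmt.Sprintf("cgroup_manager = %q", manager)))
    return os.WriteFile(path, out, 0o644)
}

func main() {
    if err := setCgroupManager("/etc/crio/crio.conf.d/02-crio.conf", "cgroupfs"); err != nil {
        fmt.Fprintln(os.Stderr, err)
    }
}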
	I0501 03:39:52.816428   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .Start
	I0501 03:39:52.816592   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Ensuring networks are active...
	I0501 03:39:52.817317   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Ensuring network default is active
	I0501 03:39:52.817668   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Ensuring network mk-default-k8s-diff-port-715118 is active
	I0501 03:39:52.818040   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Getting domain xml...
	I0501 03:39:52.818777   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Creating domain...
	I0501 03:39:54.069624   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting to get IP...
	I0501 03:39:54.070436   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:39:54.070855   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | unable to find current IP address of domain default-k8s-diff-port-715118 in network mk-default-k8s-diff-port-715118
	I0501 03:39:54.070891   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | I0501 03:39:54.070820   70304 retry.go:31] will retry after 260.072623ms: waiting for machine to come up
	I0501 03:39:54.332646   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:39:54.333077   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | unable to find current IP address of domain default-k8s-diff-port-715118 in network mk-default-k8s-diff-port-715118
	I0501 03:39:54.333115   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | I0501 03:39:54.333047   70304 retry.go:31] will retry after 270.897102ms: waiting for machine to come up
	I0501 03:39:54.605705   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:39:54.606102   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | unable to find current IP address of domain default-k8s-diff-port-715118 in network mk-default-k8s-diff-port-715118
	I0501 03:39:54.606155   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | I0501 03:39:54.606070   70304 retry.go:31] will retry after 417.613249ms: waiting for machine to come up
	I0501 03:39:55.025827   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:39:55.026340   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | unable to find current IP address of domain default-k8s-diff-port-715118 in network mk-default-k8s-diff-port-715118
	I0501 03:39:55.026371   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | I0501 03:39:55.026291   70304 retry.go:31] will retry after 428.515161ms: waiting for machine to come up
	I0501 03:39:55.456828   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:39:55.457443   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | unable to find current IP address of domain default-k8s-diff-port-715118 in network mk-default-k8s-diff-port-715118
	I0501 03:39:55.457480   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | I0501 03:39:55.457405   70304 retry.go:31] will retry after 701.294363ms: waiting for machine to come up
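The retry.go lines above poll libvirt for a DHCP lease and sleep a growing, jittered interval between attempts. A minimal Go sketch of that shape of retry loop; the base delay, growth factor, attempt count and the placeholder IP are assumptions for illustration, since the log only records the chosen wait times:

package main

import (
    "errors"
    "fmt"
    "math/rand"
    "time"
)

// waitForIP polls lookup until it returns an address or attempts run out,
// sleeping a jittered, growing delay between tries, in the spirit of the
// "will retry after ..." lines above. The backoff parameters are assumptions.
func waitForIP(lookup func() (string, error), attempts int) (string, error) {
    delay := 250 * time.Millisecond
    for i := 0; i < attempts; i++ {
        if ip, err := lookup(); err == nil {
            return ip, nil
        }
        jitter := time.Duration(rand.Int63n(int64(delay) / 2))
        fmt.Printf("will retry after %v: waiting for machine to come up\n", delay+jitter)
        time.Sleep(delay + jitter)
        delay = delay * 3 / 2
    }
    return "", errors.New("machine did not report an IP in time")
}

func main() {
    calls := 0
    ip, err := waitForIP(func() (string, error) {
        calls++
        if calls < 4 {
            return "", errors.New("no lease yet")
        }
        return "192.168.39.100", nil // placeholder address, not taken from this log
    }, 10)
    fmt.Println(ip, err)
}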
	I0501 03:39:54.429689   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetIP
	I0501 03:39:54.432488   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:54.432817   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:54.432858   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:54.433039   68864 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0501 03:39:54.437866   68864 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0501 03:39:54.451509   68864 kubeadm.go:877] updating cluster {Name:embed-certs-277128 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.0 ClusterName:embed-certs-277128 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.218 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0501 03:39:54.451615   68864 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0501 03:39:54.451665   68864 ssh_runner.go:195] Run: sudo crictl images --output json
	I0501 03:39:54.494304   68864 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0501 03:39:54.494379   68864 ssh_runner.go:195] Run: which lz4
	I0501 03:39:54.499090   68864 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0501 03:39:54.503970   68864 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0501 03:39:54.503992   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0501 03:39:56.216407   68864 crio.go:462] duration metric: took 1.717351739s to copy over tarball
	I0501 03:39:56.216488   68864 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0501 03:39:58.703133   68864 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.48661051s)
	I0501 03:39:58.703161   68864 crio.go:469] duration metric: took 2.486721448s to extract the tarball
	I0501 03:39:58.703171   68864 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0501 03:39:58.751431   68864 ssh_runner.go:195] Run: sudo crictl images --output json
	I0501 03:39:58.800353   68864 crio.go:514] all images are preloaded for cri-o runtime.
	I0501 03:39:58.800379   68864 cache_images.go:84] Images are preloaded, skipping loading
	I0501 03:39:58.800389   68864 kubeadm.go:928] updating node { 192.168.50.218 8443 v1.30.0 crio true true} ...
	I0501 03:39:58.800516   68864 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-277128 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.218
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:embed-certs-277128 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0501 03:39:58.800598   68864 ssh_runner.go:195] Run: crio config
	I0501 03:39:56.159966   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:39:56.160373   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | unable to find current IP address of domain default-k8s-diff-port-715118 in network mk-default-k8s-diff-port-715118
	I0501 03:39:56.160404   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | I0501 03:39:56.160334   70304 retry.go:31] will retry after 774.079459ms: waiting for machine to come up
	I0501 03:39:56.936654   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:39:56.937201   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | unable to find current IP address of domain default-k8s-diff-port-715118 in network mk-default-k8s-diff-port-715118
	I0501 03:39:56.937232   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | I0501 03:39:56.937161   70304 retry.go:31] will retry after 877.420181ms: waiting for machine to come up
	I0501 03:39:57.816002   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:39:57.816467   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | unable to find current IP address of domain default-k8s-diff-port-715118 in network mk-default-k8s-diff-port-715118
	I0501 03:39:57.816501   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | I0501 03:39:57.816425   70304 retry.go:31] will retry after 1.477997343s: waiting for machine to come up
	I0501 03:39:59.296533   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:39:59.296970   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | unable to find current IP address of domain default-k8s-diff-port-715118 in network mk-default-k8s-diff-port-715118
	I0501 03:39:59.296995   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | I0501 03:39:59.296922   70304 retry.go:31] will retry after 1.199617253s: waiting for machine to come up
	I0501 03:40:00.498388   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:00.498817   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | unable to find current IP address of domain default-k8s-diff-port-715118 in network mk-default-k8s-diff-port-715118
	I0501 03:40:00.498845   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | I0501 03:40:00.498770   70304 retry.go:31] will retry after 2.227608697s: waiting for machine to come up
	I0501 03:39:58.855600   68864 cni.go:84] Creating CNI manager for ""
	I0501 03:39:58.855630   68864 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0501 03:39:58.855650   68864 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0501 03:39:58.855686   68864 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.218 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-277128 NodeName:embed-certs-277128 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.218"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.218 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0501 03:39:58.855826   68864 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.218
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-277128"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.218
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.218"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0501 03:39:58.855890   68864 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0501 03:39:58.868074   68864 binaries.go:44] Found k8s binaries, skipping transfer
	I0501 03:39:58.868145   68864 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0501 03:39:58.879324   68864 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0501 03:39:58.897572   68864 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0501 03:39:58.918416   68864 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
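The kubeadm.yaml written above is a multi-document file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A minimal Go sketch of splitting it and reading back a few ClusterConfiguration fields as a sanity check; the struct is a small local stand-in, not kubeadm's real API type:

package main

import (
    "fmt"
    "os"
    "strings"

    "gopkg.in/yaml.v3"
)

// clusterConfig models only the handful of ClusterConfiguration fields that
// appear in the generated kubeadm.yaml above; it is purely illustrative.
type clusterConfig struct {
    Kind              string `yaml:"kind"`
    KubernetesVersion string `yaml:"kubernetesVersion"`
    ClusterName       string `yaml:"clusterName"`
    Networking        struct {
        PodSubnet     string `yaml:"podSubnet"`
        ServiceSubnet string `yaml:"serviceSubnet"`
    } `yaml:"networking"`
}

func main() {
    raw, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml") // path taken from the log above
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    // The file holds several YAML documents separated by "---"; pick out the
    // ClusterConfiguration one and print the fields of interest.
    for _, doc := range strings.Split(string(raw), "\n---\n") {
        var cc clusterConfig
        if err := yaml.Unmarshal([]byte(doc), &cc); err != nil {
            continue
        }
        if cc.Kind == "ClusterConfiguration" {
            fmt.Printf("cluster %q, kubernetes %s, pod subnet %s\n",
                cc.ClusterName, cc.KubernetesVersion, cc.Networking.PodSubnet)
        }
    }
}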
	I0501 03:39:58.940317   68864 ssh_runner.go:195] Run: grep 192.168.50.218	control-plane.minikube.internal$ /etc/hosts
	I0501 03:39:58.944398   68864 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.218	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0501 03:39:58.959372   68864 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:39:59.094172   68864 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 03:39:59.113612   68864 certs.go:68] Setting up /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/embed-certs-277128 for IP: 192.168.50.218
	I0501 03:39:59.113653   68864 certs.go:194] generating shared ca certs ...
	I0501 03:39:59.113669   68864 certs.go:226] acquiring lock for ca certs: {Name:mk85a75d14c00780d4bf09cd9bdaf0f96f1b8fce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:39:59.113863   68864 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key
	I0501 03:39:59.113919   68864 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key
	I0501 03:39:59.113931   68864 certs.go:256] generating profile certs ...
	I0501 03:39:59.114044   68864 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/embed-certs-277128/client.key
	I0501 03:39:59.114117   68864 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/embed-certs-277128/apiserver.key.65584253
	I0501 03:39:59.114166   68864 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/embed-certs-277128/proxy-client.key
	I0501 03:39:59.114325   68864 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem (1338 bytes)
	W0501 03:39:59.114369   68864 certs.go:480] ignoring /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724_empty.pem, impossibly tiny 0 bytes
	I0501 03:39:59.114383   68864 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem (1679 bytes)
	I0501 03:39:59.114430   68864 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem (1078 bytes)
	I0501 03:39:59.114466   68864 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem (1123 bytes)
	I0501 03:39:59.114497   68864 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem (1675 bytes)
	I0501 03:39:59.114550   68864 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem (1708 bytes)
	I0501 03:39:59.115448   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0501 03:39:59.155890   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0501 03:39:59.209160   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0501 03:39:59.251552   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0501 03:39:59.288100   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/embed-certs-277128/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0501 03:39:59.325437   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/embed-certs-277128/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0501 03:39:59.352593   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/embed-certs-277128/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0501 03:39:59.378992   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/embed-certs-277128/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0501 03:39:59.405517   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem --> /usr/share/ca-certificates/20724.pem (1338 bytes)
	I0501 03:39:59.431253   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem --> /usr/share/ca-certificates/207242.pem (1708 bytes)
	I0501 03:39:59.457155   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0501 03:39:59.483696   68864 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0501 03:39:59.502758   68864 ssh_runner.go:195] Run: openssl version
	I0501 03:39:59.509307   68864 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20724.pem && ln -fs /usr/share/ca-certificates/20724.pem /etc/ssl/certs/20724.pem"
	I0501 03:39:59.521438   68864 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20724.pem
	I0501 03:39:59.526658   68864 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  1 02:20 /usr/share/ca-certificates/20724.pem
	I0501 03:39:59.526706   68864 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20724.pem
	I0501 03:39:59.533201   68864 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20724.pem /etc/ssl/certs/51391683.0"
	I0501 03:39:59.546837   68864 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/207242.pem && ln -fs /usr/share/ca-certificates/207242.pem /etc/ssl/certs/207242.pem"
	I0501 03:39:59.560612   68864 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/207242.pem
	I0501 03:39:59.565545   68864 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  1 02:20 /usr/share/ca-certificates/207242.pem
	I0501 03:39:59.565589   68864 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/207242.pem
	I0501 03:39:59.571737   68864 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/207242.pem /etc/ssl/certs/3ec20f2e.0"
	I0501 03:39:59.584602   68864 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0501 03:39:59.599088   68864 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:39:59.604230   68864 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  1 02:08 /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:39:59.604296   68864 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:39:59.610536   68864 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0501 03:39:59.624810   68864 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0501 03:39:59.629692   68864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0501 03:39:59.636209   68864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0501 03:39:59.642907   68864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0501 03:39:59.649491   68864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0501 03:39:59.655702   68864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0501 03:39:59.661884   68864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
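The openssl x509 -checkend 86400 runs above confirm that each control-plane certificate remains valid for at least another 24 hours before the existing configuration is reused. The same check expressed in Go, as a minimal sketch (the path in main is one of the files checked above, used here only as an example):

package main

import (
    "crypto/x509"
    "encoding/pem"
    "fmt"
    "os"
    "time"
)

// validFor reports whether the PEM certificate at path is still valid for at
// least d from now, i.e. the same question `openssl x509 -checkend` answers.
func validFor(path string, d time.Duration) (bool, error) {
    raw, err := os.ReadFile(path)
    if err != nil {
        return false, err
    }
    block, _ := pem.Decode(raw)
    if block == nil {
        return false, fmt.Errorf("no PEM data in %s", path)
    }
    cert, err := x509.ParseCertificate(block.Bytes)
    if err != nil {
        return false, err
    }
    return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
    ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    fmt.Println(ok, err)
}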
	I0501 03:39:59.668075   68864 kubeadm.go:391] StartCluster: {Name:embed-certs-277128 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.0 ClusterName:embed-certs-277128 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.218 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 03:39:59.668209   68864 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0501 03:39:59.668255   68864 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0501 03:39:59.712172   68864 cri.go:89] found id: ""
	I0501 03:39:59.712262   68864 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0501 03:39:59.723825   68864 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0501 03:39:59.723848   68864 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0501 03:39:59.723854   68864 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0501 03:39:59.723890   68864 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0501 03:39:59.735188   68864 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0501 03:39:59.736670   68864 kubeconfig.go:125] found "embed-certs-277128" server: "https://192.168.50.218:8443"
	I0501 03:39:59.739665   68864 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0501 03:39:59.750292   68864 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.218
	I0501 03:39:59.750329   68864 kubeadm.go:1154] stopping kube-system containers ...
	I0501 03:39:59.750339   68864 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0501 03:39:59.750388   68864 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0501 03:39:59.791334   68864 cri.go:89] found id: ""
	I0501 03:39:59.791436   68864 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0501 03:39:59.809162   68864 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0501 03:39:59.820979   68864 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0501 03:39:59.821013   68864 kubeadm.go:156] found existing configuration files:
	
	I0501 03:39:59.821072   68864 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0501 03:39:59.832368   68864 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0501 03:39:59.832443   68864 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0501 03:39:59.843920   68864 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0501 03:39:59.855489   68864 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0501 03:39:59.855562   68864 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0501 03:39:59.867337   68864 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0501 03:39:59.878582   68864 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0501 03:39:59.878659   68864 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0501 03:39:59.890049   68864 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0501 03:39:59.901054   68864 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0501 03:39:59.901110   68864 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0501 03:39:59.912900   68864 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0501 03:39:59.925358   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:00.065105   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:00.861756   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:01.089790   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:01.158944   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:01.249842   68864 api_server.go:52] waiting for apiserver process to appear ...
	I0501 03:40:01.250063   68864 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:01.750273   68864 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:02.250155   68864 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:02.291774   68864 api_server.go:72] duration metric: took 1.041932793s to wait for apiserver process to appear ...
	I0501 03:40:02.291807   68864 api_server.go:88] waiting for apiserver healthz status ...
	I0501 03:40:02.291831   68864 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I0501 03:40:02.292377   68864 api_server.go:269] stopped: https://192.168.50.218:8443/healthz: Get "https://192.168.50.218:8443/healthz": dial tcp 192.168.50.218:8443: connect: connection refused
	I0501 03:40:02.792584   68864 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I0501 03:40:02.727799   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:02.728314   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | unable to find current IP address of domain default-k8s-diff-port-715118 in network mk-default-k8s-diff-port-715118
	I0501 03:40:02.728347   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | I0501 03:40:02.728270   70304 retry.go:31] will retry after 1.844071576s: waiting for machine to come up
	I0501 03:40:04.574870   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:04.575326   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | unable to find current IP address of domain default-k8s-diff-port-715118 in network mk-default-k8s-diff-port-715118
	I0501 03:40:04.575349   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | I0501 03:40:04.575278   70304 retry.go:31] will retry after 2.989286916s: waiting for machine to come up
	I0501 03:40:04.843311   68864 api_server.go:279] https://192.168.50.218:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0501 03:40:04.843360   68864 api_server.go:103] status: https://192.168.50.218:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0501 03:40:04.843377   68864 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I0501 03:40:04.899616   68864 api_server.go:279] https://192.168.50.218:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0501 03:40:04.899655   68864 api_server.go:103] status: https://192.168.50.218:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0501 03:40:05.292097   68864 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I0501 03:40:05.300803   68864 api_server.go:279] https://192.168.50.218:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:40:05.300843   68864 api_server.go:103] status: https://192.168.50.218:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:40:05.792151   68864 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I0501 03:40:05.797124   68864 api_server.go:279] https://192.168.50.218:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:40:05.797158   68864 api_server.go:103] status: https://192.168.50.218:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:40:06.292821   68864 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I0501 03:40:06.297912   68864 api_server.go:279] https://192.168.50.218:8443/healthz returned 200:
	ok
	I0501 03:40:06.305165   68864 api_server.go:141] control plane version: v1.30.0
	I0501 03:40:06.305199   68864 api_server.go:131] duration metric: took 4.013383351s to wait for apiserver health ...
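The healthz wait above polls https://192.168.50.218:8443/healthz, treating the early 403 (RBAC not yet bootstrapped) and 500 (post-start hooks still running) responses as "not ready" and finishing once the endpoint returns 200 ok. A minimal Go sketch of such a polling loop; the timeout, interval and the skipped TLS verification are assumptions of the sketch, not minikube's actual settings:

package main

import (
    "crypto/tls"
    "fmt"
    "io"
    "net/http"
    "time"
)

// waitForHealthz polls url until it returns HTTP 200 or the deadline passes.
// Non-200 answers are treated as "keep waiting", matching the behaviour
// logged above.
func waitForHealthz(url string, timeout, interval time.Duration) error {
    client := &http.Client{
        Timeout: 5 * time.Second,
        Transport: &http.Transport{
            // Sketch only: skipping verification stands in for trusting the cluster CA.
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        },
    }
    deadline := time.Now().Add(timeout)
    for time.Now().Before(deadline) {
        resp, err := client.Get(url)
        if err == nil {
            body, _ := io.ReadAll(resp.Body)
            resp.Body.Close()
            if resp.StatusCode == http.StatusOK {
                fmt.Printf("healthz: %s\n", body)
                return nil
            }
            fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
        }
        time.Sleep(interval)
    }
    return fmt.Errorf("apiserver did not become healthy within %v", timeout)
}

func main() {
    if err := waitForHealthz("https://192.168.50.218:8443/healthz", 2*time.Minute, 500*time.Millisecond); err != nil {
        fmt.Println(err)
    }
}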
	I0501 03:40:06.305211   68864 cni.go:84] Creating CNI manager for ""
	I0501 03:40:06.305220   68864 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0501 03:40:06.306925   68864 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0501 03:40:06.308450   68864 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0501 03:40:06.325186   68864 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0501 03:40:06.380997   68864 system_pods.go:43] waiting for kube-system pods to appear ...
	I0501 03:40:06.394134   68864 system_pods.go:59] 8 kube-system pods found
	I0501 03:40:06.394178   68864 system_pods.go:61] "coredns-7db6d8ff4d-sjplt" [6701ee8e-0630-4332-b01c-26741ed3a7b7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0501 03:40:06.394191   68864 system_pods.go:61] "etcd-embed-certs-277128" [744d7481-dd80-4435-90da-685fc16a76a4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0501 03:40:06.394206   68864 system_pods.go:61] "kube-apiserver-embed-certs-277128" [2f1d0f09-b270-4cdc-af3c-e17e3a244bbf] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0501 03:40:06.394215   68864 system_pods.go:61] "kube-controller-manager-embed-certs-277128" [729aa138-dc0d-4616-97d4-1592453971b8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0501 03:40:06.394222   68864 system_pods.go:61] "kube-proxy-phx7x" [56c0381e-c140-4f69-bbe4-09d393db8b23] Running
	I0501 03:40:06.394232   68864 system_pods.go:61] "kube-scheduler-embed-certs-277128" [cc56e9bc-7f21-4f57-a65a-73a10ffd7145] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0501 03:40:06.394253   68864 system_pods.go:61] "metrics-server-569cc877fc-p8j59" [f8ad6c24-dd5d-4515-9052-c9aca7412b55] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0501 03:40:06.394258   68864 system_pods.go:61] "storage-provisioner" [785be666-58d5-4b9d-92fd-bcacdbdebeb2] Running
	I0501 03:40:06.394273   68864 system_pods.go:74] duration metric: took 13.25246ms to wait for pod list to return data ...
	I0501 03:40:06.394293   68864 node_conditions.go:102] verifying NodePressure condition ...
	I0501 03:40:06.399912   68864 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 03:40:06.399950   68864 node_conditions.go:123] node cpu capacity is 2
	I0501 03:40:06.399974   68864 node_conditions.go:105] duration metric: took 5.664461ms to run NodePressure ...
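The system_pods and node_conditions checks above amount to one pod list in kube-system and one read of the node's capacity. A stand-alone client-go sketch of those two calls follows; it assumes an out-of-cluster kubeconfig at the path the log mentions rather than minikube's internal client plumbing.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path reused from the log; any valid kubeconfig works here.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18779-13391/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Equivalent of "8 kube-system pods found".
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))

	// Node ephemeral-storage and CPU capacity, as read by the NodePressure check.
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("node %s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
	}
}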
	I0501 03:40:06.399996   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:06.675573   68864 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0501 03:40:06.680567   68864 kubeadm.go:733] kubelet initialised
	I0501 03:40:06.680591   68864 kubeadm.go:734] duration metric: took 4.987942ms waiting for restarted kubelet to initialise ...
	I0501 03:40:06.680598   68864 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 03:40:06.687295   68864 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-sjplt" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:06.692224   68864 pod_ready.go:97] node "embed-certs-277128" hosting pod "coredns-7db6d8ff4d-sjplt" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:06.692248   68864 pod_ready.go:81] duration metric: took 4.930388ms for pod "coredns-7db6d8ff4d-sjplt" in "kube-system" namespace to be "Ready" ...
	E0501 03:40:06.692258   68864 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-277128" hosting pod "coredns-7db6d8ff4d-sjplt" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:06.692266   68864 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:06.699559   68864 pod_ready.go:97] node "embed-certs-277128" hosting pod "etcd-embed-certs-277128" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:06.699591   68864 pod_ready.go:81] duration metric: took 7.309622ms for pod "etcd-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	E0501 03:40:06.699602   68864 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-277128" hosting pod "etcd-embed-certs-277128" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:06.699613   68864 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:06.705459   68864 pod_ready.go:97] node "embed-certs-277128" hosting pod "kube-apiserver-embed-certs-277128" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:06.705485   68864 pod_ready.go:81] duration metric: took 5.86335ms for pod "kube-apiserver-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	E0501 03:40:06.705497   68864 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-277128" hosting pod "kube-apiserver-embed-certs-277128" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:06.705504   68864 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:06.786157   68864 pod_ready.go:97] node "embed-certs-277128" hosting pod "kube-controller-manager-embed-certs-277128" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:06.786186   68864 pod_ready.go:81] duration metric: took 80.673223ms for pod "kube-controller-manager-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	E0501 03:40:06.786198   68864 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-277128" hosting pod "kube-controller-manager-embed-certs-277128" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:06.786205   68864 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-phx7x" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:07.184262   68864 pod_ready.go:97] node "embed-certs-277128" hosting pod "kube-proxy-phx7x" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:07.184297   68864 pod_ready.go:81] duration metric: took 398.081204ms for pod "kube-proxy-phx7x" in "kube-system" namespace to be "Ready" ...
	E0501 03:40:07.184309   68864 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-277128" hosting pod "kube-proxy-phx7x" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:07.184319   68864 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:07.584569   68864 pod_ready.go:97] node "embed-certs-277128" hosting pod "kube-scheduler-embed-certs-277128" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:07.584607   68864 pod_ready.go:81] duration metric: took 400.279023ms for pod "kube-scheduler-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	E0501 03:40:07.584620   68864 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-277128" hosting pod "kube-scheduler-embed-certs-277128" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:07.584630   68864 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:07.984376   68864 pod_ready.go:97] node "embed-certs-277128" hosting pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:07.984408   68864 pod_ready.go:81] duration metric: took 399.766342ms for pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace to be "Ready" ...
	E0501 03:40:07.984419   68864 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-277128" hosting pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:07.984428   68864 pod_ready.go:38] duration metric: took 1.303821777s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
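Every wait above is cut short for the same reason: the node itself still reports Ready=False, so the per-pod checks are skipped. Reduced to its core, the node-side test is a scan of the node's status conditions; the sketch below shows that check on its own (it is not pod_ready.go, and the kubeconfig path is reused from the log as an assumption).

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeIsReady reports whether the named node has a Ready condition with
// status True, which is what the `has status "Ready":"False"` lines test.
func nodeIsReady(cs kubernetes.Interface, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18779-13391/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ready, err := nodeIsReady(cs, "embed-certs-277128")
	if err != nil {
		panic(err)
	}
	fmt.Printf("node embed-certs-277128 Ready=%v\n", ready)
}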
	I0501 03:40:07.984448   68864 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0501 03:40:08.000370   68864 ops.go:34] apiserver oom_adj: -16
	I0501 03:40:08.000391   68864 kubeadm.go:591] duration metric: took 8.276531687s to restartPrimaryControlPlane
	I0501 03:40:08.000401   68864 kubeadm.go:393] duration metric: took 8.332343707s to StartCluster
	I0501 03:40:08.000416   68864 settings.go:142] acquiring lock: {Name:mkcfe08bcadd45d99c00ed1eaf4e9c226a489fe2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:40:08.000482   68864 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18779-13391/kubeconfig
	I0501 03:40:08.002013   68864 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/kubeconfig: {Name:mk5d1131557f885800877622151c0915337cb23a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:40:08.002343   68864 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.218 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0501 03:40:08.004301   68864 out.go:177] * Verifying Kubernetes components...
	I0501 03:40:08.002423   68864 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0501 03:40:08.002582   68864 config.go:182] Loaded profile config "embed-certs-277128": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 03:40:08.005608   68864 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-277128"
	I0501 03:40:08.005624   68864 addons.go:69] Setting metrics-server=true in profile "embed-certs-277128"
	I0501 03:40:08.005658   68864 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-277128"
	W0501 03:40:08.005670   68864 addons.go:243] addon storage-provisioner should already be in state true
	I0501 03:40:08.005609   68864 addons.go:69] Setting default-storageclass=true in profile "embed-certs-277128"
	I0501 03:40:08.005785   68864 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-277128"
	I0501 03:40:08.005659   68864 addons.go:234] Setting addon metrics-server=true in "embed-certs-277128"
	W0501 03:40:08.005819   68864 addons.go:243] addon metrics-server should already be in state true
	I0501 03:40:08.005851   68864 host.go:66] Checking if "embed-certs-277128" exists ...
	I0501 03:40:08.005613   68864 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:40:08.005695   68864 host.go:66] Checking if "embed-certs-277128" exists ...
	I0501 03:40:08.006230   68864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:40:08.006258   68864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:40:08.006291   68864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:40:08.006310   68864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:40:08.006326   68864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:40:08.006378   68864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:40:08.021231   68864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35583
	I0501 03:40:08.021276   68864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44971
	I0501 03:40:08.021621   68864 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:40:08.021673   68864 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:40:08.022126   68864 main.go:141] libmachine: Using API Version  1
	I0501 03:40:08.022146   68864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:40:08.022353   68864 main.go:141] libmachine: Using API Version  1
	I0501 03:40:08.022390   68864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:40:08.022537   68864 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:40:08.022730   68864 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:40:08.022904   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetState
	I0501 03:40:08.023118   68864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:40:08.023165   68864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:40:08.024792   68864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33047
	I0501 03:40:08.025226   68864 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:40:08.025734   68864 main.go:141] libmachine: Using API Version  1
	I0501 03:40:08.025761   68864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:40:08.026090   68864 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:40:08.026569   68864 addons.go:234] Setting addon default-storageclass=true in "embed-certs-277128"
	W0501 03:40:08.026593   68864 addons.go:243] addon default-storageclass should already be in state true
	I0501 03:40:08.026620   68864 host.go:66] Checking if "embed-certs-277128" exists ...
	I0501 03:40:08.026696   68864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:40:08.026730   68864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:40:08.026977   68864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:40:08.027033   68864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:40:08.039119   68864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42975
	I0501 03:40:08.039585   68864 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:40:08.040083   68864 main.go:141] libmachine: Using API Version  1
	I0501 03:40:08.040106   68864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:40:08.040419   68864 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:40:08.040599   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetState
	I0501 03:40:08.042228   68864 main.go:141] libmachine: (embed-certs-277128) Calling .DriverName
	I0501 03:40:08.044289   68864 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 03:40:08.045766   68864 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0501 03:40:08.045787   68864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0501 03:40:08.045804   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHHostname
	I0501 03:40:08.043677   68864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37141
	I0501 03:40:08.045633   68864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41653
	I0501 03:40:08.046247   68864 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:40:08.046326   68864 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:40:08.046989   68864 main.go:141] libmachine: Using API Version  1
	I0501 03:40:08.047012   68864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:40:08.047196   68864 main.go:141] libmachine: Using API Version  1
	I0501 03:40:08.047216   68864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:40:08.047279   68864 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:40:08.047403   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetState
	I0501 03:40:08.047515   68864 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:40:08.048047   68864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:40:08.048081   68864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:40:08.049225   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:40:08.049623   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:40:08.049649   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:40:08.049773   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHPort
	I0501 03:40:08.049915   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:40:08.050096   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHUsername
	I0501 03:40:08.050165   68864 main.go:141] libmachine: (embed-certs-277128) Calling .DriverName
	I0501 03:40:08.050297   68864 sshutil.go:53] new ssh client: &{IP:192.168.50.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/embed-certs-277128/id_rsa Username:docker}
	I0501 03:40:08.052006   68864 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0501 03:40:08.053365   68864 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0501 03:40:08.053380   68864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0501 03:40:08.053394   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHHostname
	I0501 03:40:08.056360   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:40:08.056752   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:40:08.056782   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:40:08.056892   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHPort
	I0501 03:40:08.057074   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:40:08.057215   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHUsername
	I0501 03:40:08.057334   68864 sshutil.go:53] new ssh client: &{IP:192.168.50.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/embed-certs-277128/id_rsa Username:docker}
	I0501 03:40:08.064476   68864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43029
	I0501 03:40:08.064882   68864 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:40:08.065323   68864 main.go:141] libmachine: Using API Version  1
	I0501 03:40:08.065352   68864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:40:08.065696   68864 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:40:08.065895   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetState
	I0501 03:40:08.067420   68864 main.go:141] libmachine: (embed-certs-277128) Calling .DriverName
	I0501 03:40:08.067740   68864 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0501 03:40:08.067762   68864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0501 03:40:08.067774   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHHostname
	I0501 03:40:08.070587   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:40:08.071043   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:40:08.071073   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:40:08.071225   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHPort
	I0501 03:40:08.071401   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:40:08.071554   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHUsername
	I0501 03:40:08.071688   68864 sshutil.go:53] new ssh client: &{IP:192.168.50.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/embed-certs-277128/id_rsa Username:docker}
	I0501 03:40:08.204158   68864 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 03:40:08.229990   68864 node_ready.go:35] waiting up to 6m0s for node "embed-certs-277128" to be "Ready" ...
	I0501 03:40:08.289511   68864 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0501 03:40:08.289535   68864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0501 03:40:08.301855   68864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0501 03:40:08.311966   68864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0501 03:40:08.330943   68864 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0501 03:40:08.330973   68864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0501 03:40:08.384842   68864 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0501 03:40:08.384867   68864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0501 03:40:08.445206   68864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0501 03:40:09.434390   68864 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.122391479s)
	I0501 03:40:09.434458   68864 main.go:141] libmachine: Making call to close driver server
	I0501 03:40:09.434471   68864 main.go:141] libmachine: (embed-certs-277128) Calling .Close
	I0501 03:40:09.434518   68864 main.go:141] libmachine: Making call to close driver server
	I0501 03:40:09.434541   68864 main.go:141] libmachine: (embed-certs-277128) Calling .Close
	I0501 03:40:09.434567   68864 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.132680542s)
	I0501 03:40:09.434595   68864 main.go:141] libmachine: Making call to close driver server
	I0501 03:40:09.434604   68864 main.go:141] libmachine: (embed-certs-277128) Calling .Close
	I0501 03:40:09.434833   68864 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:40:09.434859   68864 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:40:09.434870   68864 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:40:09.434872   68864 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:40:09.434881   68864 main.go:141] libmachine: Making call to close driver server
	I0501 03:40:09.434882   68864 main.go:141] libmachine: Making call to close driver server
	I0501 03:40:09.434889   68864 main.go:141] libmachine: (embed-certs-277128) Calling .Close
	I0501 03:40:09.434890   68864 main.go:141] libmachine: (embed-certs-277128) Calling .Close
	I0501 03:40:09.434936   68864 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:40:09.434949   68864 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:40:09.434967   68864 main.go:141] libmachine: Making call to close driver server
	I0501 03:40:09.434994   68864 main.go:141] libmachine: (embed-certs-277128) Calling .Close
	I0501 03:40:09.434832   68864 main.go:141] libmachine: (embed-certs-277128) DBG | Closing plugin on server side
	I0501 03:40:09.435072   68864 main.go:141] libmachine: (embed-certs-277128) DBG | Closing plugin on server side
	I0501 03:40:09.437116   68864 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:40:09.437138   68864 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:40:09.437146   68864 main.go:141] libmachine: (embed-certs-277128) DBG | Closing plugin on server side
	I0501 03:40:09.437179   68864 main.go:141] libmachine: (embed-certs-277128) DBG | Closing plugin on server side
	I0501 03:40:09.437194   68864 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:40:09.437215   68864 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:40:09.437297   68864 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:40:09.437342   68864 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:40:09.437359   68864 addons.go:470] Verifying addon metrics-server=true in "embed-certs-277128"
	I0501 03:40:09.445787   68864 main.go:141] libmachine: Making call to close driver server
	I0501 03:40:09.445817   68864 main.go:141] libmachine: (embed-certs-277128) Calling .Close
	I0501 03:40:09.446053   68864 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:40:09.446090   68864 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:40:09.446112   68864 main.go:141] libmachine: (embed-certs-277128) DBG | Closing plugin on server side
	I0501 03:40:09.448129   68864 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
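The addon rollout above boils down to a few kubectl apply invocations executed on the node with the bundled kubectl binary and the in-VM kubeconfig. A simplified local equivalent is sketched below; the binary, kubeconfig and manifest paths mirror the log, while the wrapper itself (and running it directly, without sudo or ssh_runner) is an assumption.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyManifests mimics the logged command shape:
//   KUBECONFIG=/var/lib/minikube/kubeconfig \
//     /var/lib/minikube/binaries/v1.30.0/kubectl apply -f <manifest> [-f <manifest> ...]
func applyManifests(kubectl, kubeconfig string, manifests ...string) error {
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command(kubectl, args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	return err
}

func main() {
	err := applyManifests(
		"/var/lib/minikube/binaries/v1.30.0/kubectl",
		"/var/lib/minikube/kubeconfig",
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	)
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}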
	I0501 03:40:07.567551   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:07.567914   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | unable to find current IP address of domain default-k8s-diff-port-715118 in network mk-default-k8s-diff-port-715118
	I0501 03:40:07.567948   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | I0501 03:40:07.567860   70304 retry.go:31] will retry after 4.440791777s: waiting for machine to come up
	I0501 03:40:13.516002   69580 start.go:364] duration metric: took 3m31.9441828s to acquireMachinesLock for "old-k8s-version-503971"
	I0501 03:40:13.516087   69580 start.go:96] Skipping create...Using existing machine configuration
	I0501 03:40:13.516100   69580 fix.go:54] fixHost starting: 
	I0501 03:40:13.516559   69580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:40:13.516601   69580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:40:13.537158   69580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38535
	I0501 03:40:13.537631   69580 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:40:13.538169   69580 main.go:141] libmachine: Using API Version  1
	I0501 03:40:13.538197   69580 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:40:13.538570   69580 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:40:13.538769   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .DriverName
	I0501 03:40:13.538958   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetState
	I0501 03:40:13.540454   69580 fix.go:112] recreateIfNeeded on old-k8s-version-503971: state=Stopped err=<nil>
	I0501 03:40:13.540486   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .DriverName
	W0501 03:40:13.540787   69580 fix.go:138] unexpected machine state, will restart: <nil>
	I0501 03:40:13.542670   69580 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-503971" ...
	I0501 03:40:09.449483   68864 addons.go:505] duration metric: took 1.447068548s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass]
	I0501 03:40:10.233650   68864 node_ready.go:53] node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:12.234270   68864 node_ready.go:53] node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:12.011886   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.012305   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Found IP for machine: 192.168.72.158
	I0501 03:40:12.012335   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has current primary IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.012347   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Reserving static IP address...
	I0501 03:40:12.012759   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-715118", mac: "52:54:00:87:12:31", ip: "192.168.72.158"} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:12.012796   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | skip adding static IP to network mk-default-k8s-diff-port-715118 - found existing host DHCP lease matching {name: "default-k8s-diff-port-715118", mac: "52:54:00:87:12:31", ip: "192.168.72.158"}
	I0501 03:40:12.012809   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Reserved static IP address: 192.168.72.158
	I0501 03:40:12.012828   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for SSH to be available...
	I0501 03:40:12.012835   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | Getting to WaitForSSH function...
	I0501 03:40:12.014719   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.015044   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:12.015080   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.015193   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | Using SSH client type: external
	I0501 03:40:12.015220   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | Using SSH private key: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/default-k8s-diff-port-715118/id_rsa (-rw-------)
	I0501 03:40:12.015269   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.158 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18779-13391/.minikube/machines/default-k8s-diff-port-715118/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0501 03:40:12.015280   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | About to run SSH command:
	I0501 03:40:12.015289   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | exit 0
	I0501 03:40:12.138881   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | SSH cmd err, output: <nil>: 
	I0501 03:40:12.139286   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetConfigRaw
	I0501 03:40:12.140056   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetIP
	I0501 03:40:12.142869   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.143322   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:12.143353   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.143662   69237 profile.go:143] Saving config to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/default-k8s-diff-port-715118/config.json ...
	I0501 03:40:12.143858   69237 machine.go:94] provisionDockerMachine start ...
	I0501 03:40:12.143876   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .DriverName
	I0501 03:40:12.144117   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHHostname
	I0501 03:40:12.146145   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.146535   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:12.146563   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.146712   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHPort
	I0501 03:40:12.146889   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:40:12.147021   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:40:12.147130   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHUsername
	I0501 03:40:12.147310   69237 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:12.147558   69237 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.158 22 <nil> <nil>}
	I0501 03:40:12.147574   69237 main.go:141] libmachine: About to run SSH command:
	hostname
	I0501 03:40:12.251357   69237 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0501 03:40:12.251387   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetMachineName
	I0501 03:40:12.251629   69237 buildroot.go:166] provisioning hostname "default-k8s-diff-port-715118"
	I0501 03:40:12.251658   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetMachineName
	I0501 03:40:12.251862   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHHostname
	I0501 03:40:12.254582   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.254892   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:12.254924   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.255073   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHPort
	I0501 03:40:12.255276   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:40:12.255435   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:40:12.255575   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHUsername
	I0501 03:40:12.255744   69237 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:12.255905   69237 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.158 22 <nil> <nil>}
	I0501 03:40:12.255917   69237 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-715118 && echo "default-k8s-diff-port-715118" | sudo tee /etc/hostname
	I0501 03:40:12.377588   69237 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-715118
	
	I0501 03:40:12.377628   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHHostname
	I0501 03:40:12.380627   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.380927   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:12.380958   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.381155   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHPort
	I0501 03:40:12.381372   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:40:12.381550   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:40:12.381723   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHUsername
	I0501 03:40:12.381907   69237 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:12.382148   69237 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.158 22 <nil> <nil>}
	I0501 03:40:12.382170   69237 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-715118' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-715118/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-715118' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0501 03:40:12.494424   69237 main.go:141] libmachine: SSH cmd err, output: <nil>: 
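Provisioning steps like the hostname change above are single shell commands run over SSH with the machine's private key. The sketch below does the same with golang.org/x/crypto/ssh rather than libmachine's SSH client; the host, username, key path and command are copied from the log, everything else is an assumption. Skipping host-key verification mirrors the StrictHostKeyChecking=no option visible earlier, though a hardened client would pin the host key instead.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runRemote executes one command on host as user, authenticating with the
// given private key file, and returns the combined output.
func runRemote(host, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // sketch only; pin the key in real use
	}
	client, err := ssh.Dial("tcp", host+":22", cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()
	out, err := session.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runRemote(
		"192.168.72.158", "docker",
		"/home/jenkins/minikube-integration/18779-13391/.minikube/machines/default-k8s-diff-port-715118/id_rsa",
		`sudo hostname default-k8s-diff-port-715118 && echo "default-k8s-diff-port-715118" | sudo tee /etc/hostname`,
	)
	fmt.Print(out)
	if err != nil {
		fmt.Println("ssh command failed:", err)
	}
}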
	I0501 03:40:12.494454   69237 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18779-13391/.minikube CaCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18779-13391/.minikube}
	I0501 03:40:12.494484   69237 buildroot.go:174] setting up certificates
	I0501 03:40:12.494493   69237 provision.go:84] configureAuth start
	I0501 03:40:12.494504   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetMachineName
	I0501 03:40:12.494786   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetIP
	I0501 03:40:12.497309   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.497584   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:12.497616   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.497746   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHHostname
	I0501 03:40:12.500010   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.500302   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:12.500322   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.500449   69237 provision.go:143] copyHostCerts
	I0501 03:40:12.500505   69237 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem, removing ...
	I0501 03:40:12.500524   69237 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem
	I0501 03:40:12.500598   69237 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem (1078 bytes)
	I0501 03:40:12.500759   69237 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem, removing ...
	I0501 03:40:12.500772   69237 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem
	I0501 03:40:12.500815   69237 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem (1123 bytes)
	I0501 03:40:12.500891   69237 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem, removing ...
	I0501 03:40:12.500900   69237 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem
	I0501 03:40:12.500925   69237 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem (1675 bytes)
	I0501 03:40:12.500991   69237 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-715118 san=[127.0.0.1 192.168.72.158 default-k8s-diff-port-715118 localhost minikube]
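The server certificate generated above is signed by the profile's CA and carries SANs for 127.0.0.1, the VM IP, the machine name, localhost and minikube. For illustration only, the standard-library sketch below builds a comparable certificate; it generates a throwaway CA in-process (minikube instead reuses ca.pem/ca-key.pem) and elides error handling and private-key output to stay short.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA for the sketch; errors are ignored for brevity.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "sketch-ca"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs listed in the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-715118"}},
		DNSNames:     []string{"default-k8s-diff-port-715118", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.158")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

	// Write server.pem; private-key handling is omitted in this sketch.
	out, _ := os.Create("server.pem")
	defer out.Close()
	pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}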
	I0501 03:40:12.779037   69237 provision.go:177] copyRemoteCerts
	I0501 03:40:12.779104   69237 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0501 03:40:12.779139   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHHostname
	I0501 03:40:12.781800   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.782159   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:12.782195   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.782356   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHPort
	I0501 03:40:12.782655   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:40:12.782812   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHUsername
	I0501 03:40:12.782946   69237 sshutil.go:53] new ssh client: &{IP:192.168.72.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/default-k8s-diff-port-715118/id_rsa Username:docker}
	I0501 03:40:12.867622   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0501 03:40:12.897105   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0501 03:40:12.926675   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0501 03:40:12.955373   69237 provision.go:87] duration metric: took 460.865556ms to configureAuth
	I0501 03:40:12.955405   69237 buildroot.go:189] setting minikube options for container-runtime
	I0501 03:40:12.955606   69237 config.go:182] Loaded profile config "default-k8s-diff-port-715118": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 03:40:12.955700   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHHostname
	I0501 03:40:12.958286   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.958632   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:12.958670   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.958800   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHPort
	I0501 03:40:12.959007   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:40:12.959225   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:40:12.959374   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHUsername
	I0501 03:40:12.959554   69237 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:12.959729   69237 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.158 22 <nil> <nil>}
	I0501 03:40:12.959748   69237 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0501 03:40:13.253328   69237 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0501 03:40:13.253356   69237 machine.go:97] duration metric: took 1.109484866s to provisionDockerMachine
	I0501 03:40:13.253371   69237 start.go:293] postStartSetup for "default-k8s-diff-port-715118" (driver="kvm2")
	I0501 03:40:13.253385   69237 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0501 03:40:13.253405   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .DriverName
	I0501 03:40:13.253753   69237 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0501 03:40:13.253790   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHHostname
	I0501 03:40:13.256734   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:13.257187   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:13.257214   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:13.257345   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHPort
	I0501 03:40:13.257547   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:40:13.257708   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHUsername
	I0501 03:40:13.257856   69237 sshutil.go:53] new ssh client: &{IP:192.168.72.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/default-k8s-diff-port-715118/id_rsa Username:docker}
	I0501 03:40:13.353373   69237 ssh_runner.go:195] Run: cat /etc/os-release
	I0501 03:40:13.359653   69237 info.go:137] Remote host: Buildroot 2023.02.9
	I0501 03:40:13.359679   69237 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13391/.minikube/addons for local assets ...
	I0501 03:40:13.359747   69237 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13391/.minikube/files for local assets ...
	I0501 03:40:13.359854   69237 filesync.go:149] local asset: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem -> 207242.pem in /etc/ssl/certs
	I0501 03:40:13.359964   69237 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0501 03:40:13.370608   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem --> /etc/ssl/certs/207242.pem (1708 bytes)
	I0501 03:40:13.402903   69237 start.go:296] duration metric: took 149.518346ms for postStartSetup
	I0501 03:40:13.402946   69237 fix.go:56] duration metric: took 20.610871873s for fixHost
	I0501 03:40:13.402967   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHHostname
	I0501 03:40:13.406324   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:13.406762   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:13.406792   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:13.407028   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHPort
	I0501 03:40:13.407274   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:40:13.407505   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:40:13.407645   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHUsername
	I0501 03:40:13.407831   69237 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:13.408034   69237 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.158 22 <nil> <nil>}
	I0501 03:40:13.408045   69237 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0501 03:40:13.515775   69237 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714534813.490981768
	
	I0501 03:40:13.515814   69237 fix.go:216] guest clock: 1714534813.490981768
	I0501 03:40:13.515852   69237 fix.go:229] Guest: 2024-05-01 03:40:13.490981768 +0000 UTC Remote: 2024-05-01 03:40:13.402950224 +0000 UTC m=+262.796298359 (delta=88.031544ms)
	I0501 03:40:13.515884   69237 fix.go:200] guest clock delta is within tolerance: 88.031544ms
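
The fix.go lines above compare the guest clock, read over SSH with date +%s.%N, against the local wall clock and accept the skew because it is below tolerance. A minimal Go sketch of that comparison, assuming the remote timestamp has already been captured as a seconds.nanoseconds string (the SSH transport is omitted and the 2-second tolerance is only illustrative, not minikube's exact value):

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns the "seconds.nanoseconds" output of `date +%s.%N`
// into a time.Time.
func parseGuestClock(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1714534813.490981768") // value taken from the log
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	const tolerance = 2 * time.Second // illustrative threshold only
	if math.Abs(delta.Seconds()) <= tolerance.Seconds() {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	}
}
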
	I0501 03:40:13.515891   69237 start.go:83] releasing machines lock for "default-k8s-diff-port-715118", held for 20.723857967s
	I0501 03:40:13.515976   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .DriverName
	I0501 03:40:13.516272   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetIP
	I0501 03:40:13.519627   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:13.520098   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:13.520128   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:13.520304   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .DriverName
	I0501 03:40:13.520922   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .DriverName
	I0501 03:40:13.521122   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .DriverName
	I0501 03:40:13.521212   69237 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0501 03:40:13.521292   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHHostname
	I0501 03:40:13.521355   69237 ssh_runner.go:195] Run: cat /version.json
	I0501 03:40:13.521387   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHHostname
	I0501 03:40:13.524292   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:13.524328   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:13.524612   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:13.524672   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:13.524819   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:13.524948   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:13.524989   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHPort
	I0501 03:40:13.525033   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHPort
	I0501 03:40:13.525171   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:40:13.525196   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:40:13.525306   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHUsername
	I0501 03:40:13.525401   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHUsername
	I0501 03:40:13.525490   69237 sshutil.go:53] new ssh client: &{IP:192.168.72.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/default-k8s-diff-port-715118/id_rsa Username:docker}
	I0501 03:40:13.525553   69237 sshutil.go:53] new ssh client: &{IP:192.168.72.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/default-k8s-diff-port-715118/id_rsa Username:docker}
	I0501 03:40:13.628623   69237 ssh_runner.go:195] Run: systemctl --version
	I0501 03:40:13.636013   69237 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0501 03:40:13.787414   69237 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0501 03:40:13.795777   69237 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0501 03:40:13.795867   69237 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0501 03:40:13.822287   69237 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0501 03:40:13.822326   69237 start.go:494] detecting cgroup driver to use...
	I0501 03:40:13.822507   69237 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0501 03:40:13.841310   69237 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 03:40:13.857574   69237 docker.go:217] disabling cri-docker service (if available) ...
	I0501 03:40:13.857645   69237 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0501 03:40:13.872903   69237 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0501 03:40:13.889032   69237 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0501 03:40:14.020563   69237 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0501 03:40:14.222615   69237 docker.go:233] disabling docker service ...
	I0501 03:40:14.222691   69237 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0501 03:40:14.245841   69237 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0501 03:40:14.261001   69237 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0501 03:40:14.385943   69237 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0501 03:40:14.516899   69237 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0501 03:40:14.545138   69237 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 03:40:14.570308   69237 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0501 03:40:14.570373   69237 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:14.586460   69237 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0501 03:40:14.586535   69237 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:14.598947   69237 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:14.617581   69237 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:14.630097   69237 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0501 03:40:14.642379   69237 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:14.653723   69237 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:14.674508   69237 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
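
The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch the cgroup manager to cgroupfs and reset conmon_cgroup (the default_sysctls edits follow the same pattern). A small Go sketch of the same edits applied to an in-memory sample fragment; the sample text is invented for illustration, and on the guest the file is edited with sed over SSH exactly as logged:

package main

import (
	"fmt"
	"regexp"
)

// A toy fragment standing in for /etc/crio/crio.conf.d/02-crio.conf.
const sample = `pause_image = "registry.k8s.io/pause:3.8"
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`

func main() {
	conf := sample
	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// Equivalent of deleting conmon_cgroup and re-adding it after cgroup_manager.
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")
	fmt.Print(conf)
}
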
	I0501 03:40:14.685890   69237 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0501 03:40:14.696560   69237 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0501 03:40:14.696614   69237 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0501 03:40:14.713050   69237 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
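
The three lines above show the usual recovery when /proc/sys/net/bridge/bridge-nf-call-iptables is missing: load br_netfilter, then enable IPv4 forwarding. A sketch of that sequence using only the standard library; it assumes a Linux host and root privileges, and it mirrors the paths from the log rather than minikube's own code:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		// Same recovery as the log: the sysctl appears once br_netfilter is loaded.
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Printf("modprobe br_netfilter failed: %v: %s\n", err, out)
			return
		}
	}
	// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
		fmt.Printf("enabling ip_forward failed: %v\n", err)
		return
	}
	fmt.Println("bridge netfilter and ip_forward ready")
}
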
	I0501 03:40:14.723466   69237 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:40:14.884910   69237 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0501 03:40:15.030618   69237 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0501 03:40:15.030689   69237 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0501 03:40:15.036403   69237 start.go:562] Will wait 60s for crictl version
	I0501 03:40:15.036470   69237 ssh_runner.go:195] Run: which crictl
	I0501 03:40:15.040924   69237 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0501 03:40:15.082944   69237 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0501 03:40:15.083037   69237 ssh_runner.go:195] Run: crio --version
	I0501 03:40:15.123492   69237 ssh_runner.go:195] Run: crio --version
	I0501 03:40:15.160739   69237 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0501 03:40:15.162026   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetIP
	I0501 03:40:15.164966   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:15.165378   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:15.165417   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:15.165621   69237 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0501 03:40:15.171717   69237 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0501 03:40:15.190203   69237 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-715118 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.0 ClusterName:default-k8s-diff-port-715118 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.158 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0501 03:40:15.190359   69237 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0501 03:40:15.190439   69237 ssh_runner.go:195] Run: sudo crictl images --output json
	I0501 03:40:15.240549   69237 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0501 03:40:15.240606   69237 ssh_runner.go:195] Run: which lz4
	I0501 03:40:15.246523   69237 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0501 03:40:15.253094   69237 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0501 03:40:15.253139   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0501 03:40:13.544100   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .Start
	I0501 03:40:13.544328   69580 main.go:141] libmachine: (old-k8s-version-503971) Ensuring networks are active...
	I0501 03:40:13.545238   69580 main.go:141] libmachine: (old-k8s-version-503971) Ensuring network default is active
	I0501 03:40:13.545621   69580 main.go:141] libmachine: (old-k8s-version-503971) Ensuring network mk-old-k8s-version-503971 is active
	I0501 03:40:13.546072   69580 main.go:141] libmachine: (old-k8s-version-503971) Getting domain xml...
	I0501 03:40:13.546928   69580 main.go:141] libmachine: (old-k8s-version-503971) Creating domain...
	I0501 03:40:14.858558   69580 main.go:141] libmachine: (old-k8s-version-503971) Waiting to get IP...
	I0501 03:40:14.859690   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:14.860108   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:40:14.860215   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:40:14.860103   70499 retry.go:31] will retry after 294.057322ms: waiting for machine to come up
	I0501 03:40:15.155490   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:15.155922   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:40:15.155954   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:40:15.155870   70499 retry.go:31] will retry after 281.238966ms: waiting for machine to come up
	I0501 03:40:15.439196   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:15.439735   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:40:15.439783   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:40:15.439697   70499 retry.go:31] will retry after 429.353689ms: waiting for machine to come up
	I0501 03:40:15.871266   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:15.871947   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:40:15.871970   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:40:15.871895   70499 retry.go:31] will retry after 478.685219ms: waiting for machine to come up
	I0501 03:40:16.352661   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:16.353125   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:40:16.353161   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:40:16.353087   70499 retry.go:31] will retry after 642.905156ms: waiting for machine to come up
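
The retry.go lines for old-k8s-version-503971 poll the libvirt DHCP leases with a growing, jittered delay until the machine reports an IP. A generic Go sketch of that wait loop; the lookup closure and the placeholder address 192.0.2.10 are stand-ins for illustration, not the real lease query:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup() with a jittered, growing delay, much like the
// "will retry after ...: waiting for machine to come up" lines above.
func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		jittered := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", jittered)
		time.Sleep(jittered)
		delay = delay * 3 / 2 // grow the wait; the deadline bounds the total
	}
	return "", errors.New("timed out waiting for an IP address")
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		attempts++
		if attempts < 4 { // pretend the lease shows up on the 4th poll
			return "", errors.New("no lease yet")
		}
		return "192.0.2.10", nil // placeholder address
	}, 30*time.Second)
	fmt.Println(ip, err)
}
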
	I0501 03:40:14.235378   68864 node_ready.go:53] node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:15.735465   68864 node_ready.go:49] node "embed-certs-277128" has status "Ready":"True"
	I0501 03:40:15.735494   68864 node_ready.go:38] duration metric: took 7.50546727s for node "embed-certs-277128" to be "Ready" ...
	I0501 03:40:15.735503   68864 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 03:40:15.743215   68864 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-sjplt" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:17.752821   68864 pod_ready.go:102] pod "coredns-7db6d8ff4d-sjplt" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:17.121023   69237 crio.go:462] duration metric: took 1.874524806s to copy over tarball
	I0501 03:40:17.121097   69237 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0501 03:40:19.792970   69237 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.671840765s)
	I0501 03:40:19.793004   69237 crio.go:469] duration metric: took 2.67194801s to extract the tarball
	I0501 03:40:19.793014   69237 ssh_runner.go:146] rm: /preloaded.tar.lz4
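
The preload step copies preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 to the guest and unpacks it into /var with the tar flags shown, reporting a duration metric for each stage. A sketch that runs the same extraction command and times it; it assumes /preloaded.tar.lz4 exists on the machine it runs on, lz4 is installed, and the process can sudo:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	// Same flags as the logged command: preserve xattrs (security.capability),
	// decompress with lz4, extract under /var.
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Printf("extract failed: %v: %s\n", err, out)
		return
	}
	fmt.Printf("duration metric: took %s to extract the tarball\n", time.Since(start))
}
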
	I0501 03:40:19.834845   69237 ssh_runner.go:195] Run: sudo crictl images --output json
	I0501 03:40:19.896841   69237 crio.go:514] all images are preloaded for cri-o runtime.
	I0501 03:40:19.896881   69237 cache_images.go:84] Images are preloaded, skipping loading
	I0501 03:40:19.896892   69237 kubeadm.go:928] updating node { 192.168.72.158 8444 v1.30.0 crio true true} ...
	I0501 03:40:19.897027   69237 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-715118 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.158
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-715118 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0501 03:40:19.897113   69237 ssh_runner.go:195] Run: crio config
	I0501 03:40:19.953925   69237 cni.go:84] Creating CNI manager for ""
	I0501 03:40:19.953956   69237 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0501 03:40:19.953971   69237 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0501 03:40:19.953991   69237 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.158 APIServerPort:8444 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-715118 NodeName:default-k8s-diff-port-715118 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.158"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.158 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0501 03:40:19.954133   69237 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.158
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-715118"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.158
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.158"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0501 03:40:19.954198   69237 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0501 03:40:19.967632   69237 binaries.go:44] Found k8s binaries, skipping transfer
	I0501 03:40:19.967708   69237 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0501 03:40:19.984161   69237 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0501 03:40:20.006540   69237 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0501 03:40:20.029218   69237 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0501 03:40:20.051612   69237 ssh_runner.go:195] Run: grep 192.168.72.158	control-plane.minikube.internal$ /etc/hosts
	I0501 03:40:20.056502   69237 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.158	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
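
The /etc/hosts update above uses a grep -v / echo pair: drop any existing line for control-plane.minikube.internal, append the fresh IP-to-name mapping, and copy the result back with sudo. A sketch of the same upsert done in memory; it only prints the rewritten file instead of copying it into place:

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost reproduces the grep -v / echo idiom from the log: drop any
// existing line for the name, then append "IP<TAB>name".
func upsertHost(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // equivalent of: grep -v $'\t<name>$'
		}
		kept = append(kept, line)
	}
	return strings.TrimRight(strings.Join(kept, "\n"), "\n") +
		"\n" + ip + "\t" + name + "\n"
}

func main() {
	// Reads the local /etc/hosts and prints the rewritten content; the logged
	// step writes the result to a temp file and copies it back with sudo cp.
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		fmt.Println("read /etc/hosts:", err)
		return
	}
	fmt.Print(upsertHost(string(data), "192.168.72.158", "control-plane.minikube.internal"))
}
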
	I0501 03:40:20.071665   69237 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:40:20.194289   69237 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 03:40:20.215402   69237 certs.go:68] Setting up /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/default-k8s-diff-port-715118 for IP: 192.168.72.158
	I0501 03:40:20.215440   69237 certs.go:194] generating shared ca certs ...
	I0501 03:40:20.215471   69237 certs.go:226] acquiring lock for ca certs: {Name:mk85a75d14c00780d4bf09cd9bdaf0f96f1b8fce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:40:20.215698   69237 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key
	I0501 03:40:20.215769   69237 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key
	I0501 03:40:20.215785   69237 certs.go:256] generating profile certs ...
	I0501 03:40:20.215922   69237 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/default-k8s-diff-port-715118/client.key
	I0501 03:40:20.216023   69237 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/default-k8s-diff-port-715118/apiserver.key.91bc3872
	I0501 03:40:20.216094   69237 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/default-k8s-diff-port-715118/proxy-client.key
	I0501 03:40:20.216275   69237 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem (1338 bytes)
	W0501 03:40:20.216321   69237 certs.go:480] ignoring /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724_empty.pem, impossibly tiny 0 bytes
	I0501 03:40:20.216337   69237 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem (1679 bytes)
	I0501 03:40:20.216375   69237 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem (1078 bytes)
	I0501 03:40:20.216439   69237 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem (1123 bytes)
	I0501 03:40:20.216472   69237 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem (1675 bytes)
	I0501 03:40:20.216560   69237 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem (1708 bytes)
	I0501 03:40:20.217306   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0501 03:40:20.256162   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0501 03:40:20.293643   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0501 03:40:20.329175   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0501 03:40:20.367715   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/default-k8s-diff-port-715118/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0501 03:40:20.400024   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/default-k8s-diff-port-715118/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0501 03:40:20.428636   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/default-k8s-diff-port-715118/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0501 03:40:20.458689   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/default-k8s-diff-port-715118/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0501 03:40:20.487619   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem --> /usr/share/ca-certificates/207242.pem (1708 bytes)
	I0501 03:40:20.518140   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0501 03:40:20.547794   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem --> /usr/share/ca-certificates/20724.pem (1338 bytes)
	I0501 03:40:20.580453   69237 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0501 03:40:20.605211   69237 ssh_runner.go:195] Run: openssl version
	I0501 03:40:20.612269   69237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/207242.pem && ln -fs /usr/share/ca-certificates/207242.pem /etc/ssl/certs/207242.pem"
	I0501 03:40:20.626575   69237 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/207242.pem
	I0501 03:40:20.632370   69237 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  1 02:20 /usr/share/ca-certificates/207242.pem
	I0501 03:40:20.632439   69237 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/207242.pem
	I0501 03:40:20.639563   69237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/207242.pem /etc/ssl/certs/3ec20f2e.0"
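
The openssl -hash / ln -fs pair above creates the subject-hash symlink (here 3ec20f2e.0) that the OpenSSL trust store uses to look up a CA certificate. A sketch of that idiom; the paths are the ones from the log, and it shells out to openssl for the hash rather than reimplementing it:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash asks openssl for the certificate's subject hash, then
// creates the "<hash>.0" symlink the trust-store lookup expects.
func linkBySubjectHash(certPath, linkDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("openssl x509 -hash: %w", err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(linkDir, hash+".0")
	_ = os.Remove(link) // equivalent of ln -fs: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/207242.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println(err)
	}
}
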
	I0501 03:40:16.997533   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:16.998034   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:40:16.998076   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:40:16.997984   70499 retry.go:31] will retry after 596.56948ms: waiting for machine to come up
	I0501 03:40:17.596671   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:17.597182   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:40:17.597207   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:40:17.597132   70499 retry.go:31] will retry after 770.742109ms: waiting for machine to come up
	I0501 03:40:18.369337   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:18.369833   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:40:18.369864   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:40:18.369780   70499 retry.go:31] will retry after 1.382502808s: waiting for machine to come up
	I0501 03:40:19.753936   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:19.754419   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:40:19.754458   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:40:19.754363   70499 retry.go:31] will retry after 1.344792989s: waiting for machine to come up
	I0501 03:40:21.101047   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:21.101474   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:40:21.101514   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:40:21.101442   70499 retry.go:31] will retry after 1.636964906s: waiting for machine to come up
	I0501 03:40:20.252239   68864 pod_ready.go:102] pod "coredns-7db6d8ff4d-sjplt" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:22.751407   68864 pod_ready.go:92] pod "coredns-7db6d8ff4d-sjplt" in "kube-system" namespace has status "Ready":"True"
	I0501 03:40:22.751431   68864 pod_ready.go:81] duration metric: took 7.008190087s for pod "coredns-7db6d8ff4d-sjplt" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:22.751442   68864 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:22.757104   68864 pod_ready.go:92] pod "etcd-embed-certs-277128" in "kube-system" namespace has status "Ready":"True"
	I0501 03:40:22.757124   68864 pod_ready.go:81] duration metric: took 5.677117ms for pod "etcd-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:22.757141   68864 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:22.763083   68864 pod_ready.go:92] pod "kube-apiserver-embed-certs-277128" in "kube-system" namespace has status "Ready":"True"
	I0501 03:40:22.763107   68864 pod_ready.go:81] duration metric: took 5.958961ms for pod "kube-apiserver-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:22.763119   68864 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:22.768163   68864 pod_ready.go:92] pod "kube-controller-manager-embed-certs-277128" in "kube-system" namespace has status "Ready":"True"
	I0501 03:40:22.768182   68864 pod_ready.go:81] duration metric: took 5.055934ms for pod "kube-controller-manager-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:22.768193   68864 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-phx7x" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:22.772478   68864 pod_ready.go:92] pod "kube-proxy-phx7x" in "kube-system" namespace has status "Ready":"True"
	I0501 03:40:22.772497   68864 pod_ready.go:81] duration metric: took 4.297358ms for pod "kube-proxy-phx7x" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:22.772505   68864 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:23.149692   68864 pod_ready.go:92] pod "kube-scheduler-embed-certs-277128" in "kube-system" namespace has status "Ready":"True"
	I0501 03:40:23.149726   68864 pod_ready.go:81] duration metric: took 377.213314ms for pod "kube-scheduler-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:23.149741   68864 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace to be "Ready" ...
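
The pod_ready.go lines wait for each system pod to report the Ready condition. A rough equivalent that polls kubectl's jsonpath output until the condition is True; the context and pod names are taken from the log, and this is only an outline of the check, not minikube's client code:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitPodReady polls kubectl until the pod's Ready condition is "True".
func waitPodReady(kubectx, namespace, pod string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubectx,
			"-n", namespace, "get", "pod", pod,
			"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
		if err == nil && strings.TrimSpace(string(out)) == "True" {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s not Ready within %v", namespace, pod, timeout)
}

func main() {
	// 6m0s matches the wait budget shown in the log above.
	err := waitPodReady("embed-certs-277128", "kube-system", "etcd-embed-certs-277128", 6*time.Minute)
	fmt.Println(err)
}
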
	I0501 03:40:20.653202   69237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0501 03:40:20.878582   69237 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:40:20.884671   69237 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  1 02:08 /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:40:20.884755   69237 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:40:20.891633   69237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0501 03:40:20.906032   69237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20724.pem && ln -fs /usr/share/ca-certificates/20724.pem /etc/ssl/certs/20724.pem"
	I0501 03:40:20.924491   69237 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20724.pem
	I0501 03:40:20.931346   69237 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  1 02:20 /usr/share/ca-certificates/20724.pem
	I0501 03:40:20.931421   69237 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20724.pem
	I0501 03:40:20.937830   69237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20724.pem /etc/ssl/certs/51391683.0"
	I0501 03:40:20.951239   69237 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0501 03:40:20.956883   69237 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0501 03:40:20.964048   69237 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0501 03:40:20.971156   69237 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0501 03:40:20.978243   69237 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0501 03:40:20.985183   69237 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0501 03:40:20.991709   69237 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
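
Each openssl -checkend 86400 run above asks whether a control-plane certificate expires within the next 24 hours. The same question can be answered from Go's crypto/x509 by comparing NotAfter, as in this sketch (the path is one of those in the log; any PEM certificate works):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path expires inside d,
// the question `openssl x509 -noout -checkend 86400` answers above.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(soon, err)
}
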
	I0501 03:40:20.998390   69237 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-715118 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.0 ClusterName:default-k8s-diff-port-715118 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.158 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 03:40:20.998509   69237 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0501 03:40:20.998558   69237 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0501 03:40:21.051469   69237 cri.go:89] found id: ""
	I0501 03:40:21.051575   69237 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0501 03:40:21.063280   69237 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0501 03:40:21.063301   69237 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0501 03:40:21.063307   69237 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0501 03:40:21.063381   69237 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0501 03:40:21.077380   69237 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0501 03:40:21.078445   69237 kubeconfig.go:125] found "default-k8s-diff-port-715118" server: "https://192.168.72.158:8444"
	I0501 03:40:21.080872   69237 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0501 03:40:21.095004   69237 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.158
	I0501 03:40:21.095045   69237 kubeadm.go:1154] stopping kube-system containers ...
	I0501 03:40:21.095059   69237 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0501 03:40:21.095123   69237 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0501 03:40:21.151629   69237 cri.go:89] found id: ""
	I0501 03:40:21.151711   69237 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0501 03:40:21.177077   69237 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0501 03:40:21.192057   69237 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0501 03:40:21.192087   69237 kubeadm.go:156] found existing configuration files:
	
	I0501 03:40:21.192146   69237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0501 03:40:21.206784   69237 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0501 03:40:21.206870   69237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0501 03:40:21.221942   69237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0501 03:40:21.236442   69237 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0501 03:40:21.236516   69237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0501 03:40:21.251285   69237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0501 03:40:21.265997   69237 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0501 03:40:21.266049   69237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0501 03:40:21.281137   69237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0501 03:40:21.297713   69237 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0501 03:40:21.297783   69237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0501 03:40:21.314264   69237 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0501 03:40:21.328605   69237 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:21.478475   69237 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:22.161692   69237 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:22.432136   69237 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:22.514744   69237 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:22.597689   69237 api_server.go:52] waiting for apiserver process to appear ...
	I0501 03:40:22.597770   69237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:23.098146   69237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:23.597831   69237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:23.629375   69237 api_server.go:72] duration metric: took 1.031684055s to wait for apiserver process to appear ...
	I0501 03:40:23.629462   69237 api_server.go:88] waiting for apiserver healthz status ...
	I0501 03:40:23.629500   69237 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8444/healthz ...
	I0501 03:40:23.630045   69237 api_server.go:269] stopped: https://192.168.72.158:8444/healthz: Get "https://192.168.72.158:8444/healthz": dial tcp 192.168.72.158:8444: connect: connection refused
	I0501 03:40:24.129831   69237 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8444/healthz ...
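
The healthz probe goes through the phases visible below in the log: connection refused while the apiserver starts, 403 for the anonymous user, 500 while post-start hooks settle, and finally 200. A sketch of that polling loop against the same URL; certificate verification is skipped only because this is an unauthenticated liveness probe, and the 4-minute budget is an assumption:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// pollHealthz keeps hitting /healthz until it returns 200 or the deadline
// passes, tolerating the connection-refused / 403 / 500 phases.
func pollHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, firstLine(string(body)))
		} else {
			fmt.Printf("healthz not reachable yet: %v\n", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func firstLine(s string) string {
	for i, r := range s {
		if r == '\n' {
			return s[:i]
		}
	}
	return s
}

func main() {
	fmt.Println(pollHealthz("https://192.168.72.158:8444/healthz", 4*time.Minute))
}
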
	I0501 03:40:22.740241   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:22.740692   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:40:22.740722   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:40:22.740656   70499 retry.go:31] will retry after 1.899831455s: waiting for machine to come up
	I0501 03:40:24.642609   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:24.643075   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:40:24.643104   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:40:24.643019   70499 retry.go:31] will retry after 3.503333894s: waiting for machine to come up
	I0501 03:40:25.157335   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:27.160083   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:27.091079   69237 api_server.go:279] https://192.168.72.158:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0501 03:40:27.091134   69237 api_server.go:103] status: https://192.168.72.158:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0501 03:40:27.091152   69237 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8444/healthz ...
	I0501 03:40:27.163481   69237 api_server.go:279] https://192.168.72.158:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:40:27.163509   69237 api_server.go:103] status: https://192.168.72.158:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:40:27.163522   69237 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8444/healthz ...
	I0501 03:40:27.175097   69237 api_server.go:279] https://192.168.72.158:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:40:27.175129   69237 api_server.go:103] status: https://192.168.72.158:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:40:27.629613   69237 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8444/healthz ...
	I0501 03:40:27.637166   69237 api_server.go:279] https://192.168.72.158:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:40:27.637202   69237 api_server.go:103] status: https://192.168.72.158:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:40:28.130467   69237 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8444/healthz ...
	I0501 03:40:28.148799   69237 api_server.go:279] https://192.168.72.158:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:40:28.148823   69237 api_server.go:103] status: https://192.168.72.158:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:40:28.630500   69237 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8444/healthz ...
	I0501 03:40:28.642856   69237 api_server.go:279] https://192.168.72.158:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:40:28.642890   69237 api_server.go:103] status: https://192.168.72.158:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:40:29.130453   69237 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8444/healthz ...
	I0501 03:40:29.137783   69237 api_server.go:279] https://192.168.72.158:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:40:29.137819   69237 api_server.go:103] status: https://192.168.72.158:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:40:29.630448   69237 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8444/healthz ...
	I0501 03:40:29.634736   69237 api_server.go:279] https://192.168.72.158:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:40:29.634764   69237 api_server.go:103] status: https://192.168.72.158:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:40:30.130371   69237 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8444/healthz ...
	I0501 03:40:30.134727   69237 api_server.go:279] https://192.168.72.158:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:40:30.134755   69237 api_server.go:103] status: https://192.168.72.158:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:40:30.630555   69237 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8444/healthz ...
	I0501 03:40:30.637025   69237 api_server.go:279] https://192.168.72.158:8444/healthz returned 200:
	ok
	I0501 03:40:30.644179   69237 api_server.go:141] control plane version: v1.30.0
	I0501 03:40:30.644209   69237 api_server.go:131] duration metric: took 7.014727807s to wait for apiserver health ...
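As a rough sketch of what the apiserver health wait above amounts to, the loop below polls the same /healthz endpoint until it answers 200 (illustration only; minikube performs this check in Go inside api_server.go, and -k is used because the apiserver certificate is issued by minikube's own CA rather than a system-trusted one):

	# poll https://192.168.72.158:8444/healthz until the apiserver reports healthy
	until [ "$(curl -sk -o /dev/null -w '%{http_code}' https://192.168.72.158:8444/healthz)" = "200" ]; do
	  sleep 0.5   # minikube retries on a similar sub-second cadence, as the timestamps above show
	done
	curl -sk https://192.168.72.158:8444/healthz   # prints "ok" once every poststarthook has completed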
	I0501 03:40:30.644217   69237 cni.go:84] Creating CNI manager for ""
	I0501 03:40:30.644223   69237 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0501 03:40:30.646018   69237 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0501 03:40:30.647222   69237 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
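The "Configuring bridge CNI" step writes a conflist into /etc/cni/net.d. A minimal example of what such a bridge configuration can look like is sketched below; the file name, subnet, and plugin list are illustrative assumptions, not values taken from this log:

	# hypothetical bridge CNI config; the real file name and contents are chosen by minikube's cni package
	cat <<'EOF' | sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null
	{
	  "cniVersion": "0.4.0",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    }
	  ]
	}
	EOF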
	I0501 03:40:28.148102   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:28.148506   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:40:28.148547   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:40:28.148463   70499 retry.go:31] will retry after 4.150508159s: waiting for machine to come up
	I0501 03:40:33.783990   68640 start.go:364] duration metric: took 56.072338201s to acquireMachinesLock for "no-preload-892672"
	I0501 03:40:33.784047   68640 start.go:96] Skipping create...Using existing machine configuration
	I0501 03:40:33.784056   68640 fix.go:54] fixHost starting: 
	I0501 03:40:33.784468   68640 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:40:33.784504   68640 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:40:33.801460   68640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37745
	I0501 03:40:33.802023   68640 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:40:33.802634   68640 main.go:141] libmachine: Using API Version  1
	I0501 03:40:33.802669   68640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:40:33.803062   68640 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:40:33.803262   68640 main.go:141] libmachine: (no-preload-892672) Calling .DriverName
	I0501 03:40:33.803379   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetState
	I0501 03:40:33.805241   68640 fix.go:112] recreateIfNeeded on no-preload-892672: state=Stopped err=<nil>
	I0501 03:40:33.805266   68640 main.go:141] libmachine: (no-preload-892672) Calling .DriverName
	W0501 03:40:33.805452   68640 fix.go:138] unexpected machine state, will restart: <nil>
	I0501 03:40:33.807020   68640 out.go:177] * Restarting existing kvm2 VM for "no-preload-892672" ...
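"Restarting existing kvm2 VM" boots the already-defined libvirt domain rather than recreating it. Minikube drives this through its kvm2 driver plugin, but the equivalent manual steps with the libvirt CLI would look roughly like this (a sketch, assuming virsh is available on the host):

	virsh dominfo no-preload-892672   # state should read "shut off" before the restart
	virsh start no-preload-892672     # boot the existing domain; disks and configuration are reused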
	I0501 03:40:29.656911   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:32.158119   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:32.303427   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.303804   69580 main.go:141] libmachine: (old-k8s-version-503971) Found IP for machine: 192.168.61.104
	I0501 03:40:32.303837   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has current primary IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.303851   69580 main.go:141] libmachine: (old-k8s-version-503971) Reserving static IP address...
	I0501 03:40:32.304254   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "old-k8s-version-503971", mac: "52:54:00:7d:68:83", ip: "192.168.61.104"} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:32.304286   69580 main.go:141] libmachine: (old-k8s-version-503971) Reserved static IP address: 192.168.61.104
	I0501 03:40:32.304305   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | skip adding static IP to network mk-old-k8s-version-503971 - found existing host DHCP lease matching {name: "old-k8s-version-503971", mac: "52:54:00:7d:68:83", ip: "192.168.61.104"}
	I0501 03:40:32.304323   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | Getting to WaitForSSH function...
	I0501 03:40:32.304337   69580 main.go:141] libmachine: (old-k8s-version-503971) Waiting for SSH to be available...
	I0501 03:40:32.306619   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.306972   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:32.307011   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.307114   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | Using SSH client type: external
	I0501 03:40:32.307138   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | Using SSH private key: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/old-k8s-version-503971/id_rsa (-rw-------)
	I0501 03:40:32.307174   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.104 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18779-13391/.minikube/machines/old-k8s-version-503971/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0501 03:40:32.307188   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | About to run SSH command:
	I0501 03:40:32.307224   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | exit 0
	I0501 03:40:32.438508   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | SSH cmd err, output: <nil>: 
	I0501 03:40:32.438882   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetConfigRaw
	I0501 03:40:32.439452   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetIP
	I0501 03:40:32.441984   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.442342   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:32.442369   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.442668   69580 profile.go:143] Saving config to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/config.json ...
	I0501 03:40:32.442875   69580 machine.go:94] provisionDockerMachine start ...
	I0501 03:40:32.442897   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .DriverName
	I0501 03:40:32.443077   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHHostname
	I0501 03:40:32.445129   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.445442   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:32.445480   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.445628   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHPort
	I0501 03:40:32.445806   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:32.445974   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:32.446122   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHUsername
	I0501 03:40:32.446314   69580 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:32.446548   69580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.104 22 <nil> <nil>}
	I0501 03:40:32.446564   69580 main.go:141] libmachine: About to run SSH command:
	hostname
	I0501 03:40:32.559346   69580 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0501 03:40:32.559379   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetMachineName
	I0501 03:40:32.559630   69580 buildroot.go:166] provisioning hostname "old-k8s-version-503971"
	I0501 03:40:32.559654   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetMachineName
	I0501 03:40:32.559832   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHHostname
	I0501 03:40:32.562176   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.562547   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:32.562582   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.562716   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHPort
	I0501 03:40:32.562892   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:32.563019   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:32.563161   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHUsername
	I0501 03:40:32.563332   69580 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:32.563545   69580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.104 22 <nil> <nil>}
	I0501 03:40:32.563564   69580 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-503971 && echo "old-k8s-version-503971" | sudo tee /etc/hostname
	I0501 03:40:32.699918   69580 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-503971
	
	I0501 03:40:32.699961   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHHostname
	I0501 03:40:32.702721   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.703134   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:32.703158   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.703361   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHPort
	I0501 03:40:32.703547   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:32.703744   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:32.703881   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHUsername
	I0501 03:40:32.704037   69580 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:32.704199   69580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.104 22 <nil> <nil>}
	I0501 03:40:32.704215   69580 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-503971' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-503971/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-503971' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0501 03:40:32.830277   69580 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0501 03:40:32.830307   69580 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18779-13391/.minikube CaCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18779-13391/.minikube}
	I0501 03:40:32.830323   69580 buildroot.go:174] setting up certificates
	I0501 03:40:32.830331   69580 provision.go:84] configureAuth start
	I0501 03:40:32.830340   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetMachineName
	I0501 03:40:32.830629   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetIP
	I0501 03:40:32.833575   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.833887   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:32.833932   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.834070   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHHostname
	I0501 03:40:32.836309   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.836664   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:32.836691   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.836824   69580 provision.go:143] copyHostCerts
	I0501 03:40:32.836885   69580 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem, removing ...
	I0501 03:40:32.836895   69580 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem
	I0501 03:40:32.836945   69580 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem (1078 bytes)
	I0501 03:40:32.837046   69580 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem, removing ...
	I0501 03:40:32.837054   69580 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem
	I0501 03:40:32.837072   69580 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem (1123 bytes)
	I0501 03:40:32.837129   69580 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem, removing ...
	I0501 03:40:32.837136   69580 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem
	I0501 03:40:32.837152   69580 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem (1675 bytes)
	I0501 03:40:32.837202   69580 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-503971 san=[127.0.0.1 192.168.61.104 localhost minikube old-k8s-version-503971]
	I0501 03:40:33.047948   69580 provision.go:177] copyRemoteCerts
	I0501 03:40:33.048004   69580 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0501 03:40:33.048030   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHHostname
	I0501 03:40:33.050591   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.050975   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:33.051012   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.051142   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHPort
	I0501 03:40:33.051310   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:33.051465   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHUsername
	I0501 03:40:33.051574   69580 sshutil.go:53] new ssh client: &{IP:192.168.61.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/old-k8s-version-503971/id_rsa Username:docker}
	I0501 03:40:33.143991   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0501 03:40:33.175494   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0501 03:40:33.204770   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0501 03:40:33.232728   69580 provision.go:87] duration metric: took 402.386279ms to configureAuth
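configureAuth regenerated the server certificate with the SANs listed above and copied it to /etc/docker on the guest. A quick spot-check that the SANs landed in the provisioned cert (a sketch, run inside the VM) is:

	sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'
	# the five entries from the san=[...] list above should appear here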
	I0501 03:40:33.232756   69580 buildroot.go:189] setting minikube options for container-runtime
	I0501 03:40:33.232962   69580 config.go:182] Loaded profile config "old-k8s-version-503971": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0501 03:40:33.233051   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHHostname
	I0501 03:40:33.235656   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.236006   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:33.236038   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.236162   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHPort
	I0501 03:40:33.236339   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:33.236484   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:33.236633   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHUsername
	I0501 03:40:33.236817   69580 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:33.236980   69580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.104 22 <nil> <nil>}
	I0501 03:40:33.236997   69580 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0501 03:40:33.526370   69580 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0501 03:40:33.526419   69580 machine.go:97] duration metric: took 1.083510254s to provisionDockerMachine
	I0501 03:40:33.526432   69580 start.go:293] postStartSetup for "old-k8s-version-503971" (driver="kvm2")
	I0501 03:40:33.526443   69580 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0501 03:40:33.526470   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .DriverName
	I0501 03:40:33.526788   69580 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0501 03:40:33.526831   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHHostname
	I0501 03:40:33.529815   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.530209   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:33.530268   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.530364   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHPort
	I0501 03:40:33.530559   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:33.530741   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHUsername
	I0501 03:40:33.530909   69580 sshutil.go:53] new ssh client: &{IP:192.168.61.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/old-k8s-version-503971/id_rsa Username:docker}
	I0501 03:40:33.620224   69580 ssh_runner.go:195] Run: cat /etc/os-release
	I0501 03:40:33.625417   69580 info.go:137] Remote host: Buildroot 2023.02.9
	I0501 03:40:33.625447   69580 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13391/.minikube/addons for local assets ...
	I0501 03:40:33.625511   69580 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13391/.minikube/files for local assets ...
	I0501 03:40:33.625594   69580 filesync.go:149] local asset: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem -> 207242.pem in /etc/ssl/certs
	I0501 03:40:33.625691   69580 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0501 03:40:33.637311   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem --> /etc/ssl/certs/207242.pem (1708 bytes)
	I0501 03:40:33.666707   69580 start.go:296] duration metric: took 140.263297ms for postStartSetup
	I0501 03:40:33.666740   69580 fix.go:56] duration metric: took 20.150640355s for fixHost
	I0501 03:40:33.666758   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHHostname
	I0501 03:40:33.669394   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.669822   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:33.669852   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.669963   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHPort
	I0501 03:40:33.670213   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:33.670388   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:33.670589   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHUsername
	I0501 03:40:33.670794   69580 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:33.670972   69580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.104 22 <nil> <nil>}
	I0501 03:40:33.670984   69580 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0501 03:40:33.783810   69580 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714534833.728910946
	
	I0501 03:40:33.783839   69580 fix.go:216] guest clock: 1714534833.728910946
	I0501 03:40:33.783850   69580 fix.go:229] Guest: 2024-05-01 03:40:33.728910946 +0000 UTC Remote: 2024-05-01 03:40:33.666743363 +0000 UTC m=+232.246108464 (delta=62.167583ms)
	I0501 03:40:33.783893   69580 fix.go:200] guest clock delta is within tolerance: 62.167583ms
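The guest-clock check compares the timestamp the VM returns for `date +%s.%N` with the host-side reference taken just before; here the delta was about 62ms, inside tolerance. A manual spot-check over the same SSH identity would look like this (sketch; the key path and address are the ones shown in this log):

	date -u +%s.%N   # host reference
	ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	    -i /home/jenkins/minikube-integration/18779-13391/.minikube/machines/old-k8s-version-503971/id_rsa \
	    docker@192.168.61.104 'date -u +%s.%N'   # guest clock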
	I0501 03:40:33.783903   69580 start.go:83] releasing machines lock for "old-k8s-version-503971", held for 20.267840723s
	I0501 03:40:33.783933   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .DriverName
	I0501 03:40:33.784203   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetIP
	I0501 03:40:33.786846   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.787202   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:33.787230   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.787385   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .DriverName
	I0501 03:40:33.787837   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .DriverName
	I0501 03:40:33.787997   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .DriverName
	I0501 03:40:33.788085   69580 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0501 03:40:33.788126   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHHostname
	I0501 03:40:33.788252   69580 ssh_runner.go:195] Run: cat /version.json
	I0501 03:40:33.788279   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHHostname
	I0501 03:40:33.790748   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.791086   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.791118   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:33.791142   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.791435   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHPort
	I0501 03:40:33.791491   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:33.791532   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.791618   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:33.791740   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHPort
	I0501 03:40:33.791815   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHUsername
	I0501 03:40:33.791937   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:33.792014   69580 sshutil.go:53] new ssh client: &{IP:192.168.61.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/old-k8s-version-503971/id_rsa Username:docker}
	I0501 03:40:33.792069   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHUsername
	I0501 03:40:33.792206   69580 sshutil.go:53] new ssh client: &{IP:192.168.61.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/old-k8s-version-503971/id_rsa Username:docker}
	I0501 03:40:33.876242   69580 ssh_runner.go:195] Run: systemctl --version
	I0501 03:40:33.901692   69580 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0501 03:40:34.056758   69580 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0501 03:40:34.065070   69580 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0501 03:40:34.065156   69580 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0501 03:40:34.085337   69580 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0501 03:40:34.085364   69580 start.go:494] detecting cgroup driver to use...
	I0501 03:40:34.085432   69580 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0501 03:40:34.102723   69580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 03:40:34.118792   69580 docker.go:217] disabling cri-docker service (if available) ...
	I0501 03:40:34.118847   69580 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0501 03:40:34.133978   69580 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0501 03:40:34.153890   69580 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0501 03:40:34.283815   69580 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0501 03:40:34.475851   69580 docker.go:233] disabling docker service ...
	I0501 03:40:34.475926   69580 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0501 03:40:34.500769   69580 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0501 03:40:34.517315   69580 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0501 03:40:34.674322   69580 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0501 03:40:34.833281   69580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0501 03:40:34.852610   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 03:40:34.879434   69580 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0501 03:40:34.879517   69580 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:34.892197   69580 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0501 03:40:34.892269   69580 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:34.904437   69580 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:34.919950   69580 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:34.933772   69580 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0501 03:40:34.947563   69580 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0501 03:40:34.965724   69580 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0501 03:40:34.965795   69580 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0501 03:40:34.984251   69580 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0501 03:40:34.997050   69580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:40:35.155852   69580 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0501 03:40:35.362090   69580 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0501 03:40:35.362164   69580 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0501 03:40:35.368621   69580 start.go:562] Will wait 60s for crictl version
	I0501 03:40:35.368701   69580 ssh_runner.go:195] Run: which crictl
	I0501 03:40:35.373792   69580 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0501 03:40:35.436905   69580 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0501 03:40:35.437018   69580 ssh_runner.go:195] Run: crio --version
	I0501 03:40:35.485130   69580 ssh_runner.go:195] Run: crio --version
	I0501 03:40:35.528700   69580 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
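
	The "Will wait 60s for socket path /var/run/crio/crio.sock" and "Will wait 60s for crictl version" lines above describe a bounded polling loop: after restarting crio, the runner keeps stat-ing the socket until it appears or the budget runs out. A minimal local sketch of that pattern in Go (the waitForSocket helper, the poll interval and the local os.Stat call are illustrative assumptions, not minikube's actual code, which runs stat over SSH):

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls for path until it exists or timeout elapses.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			if _, err := os.Stat(path); err == nil {
				return nil // socket is present, the runtime is up
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}

	func main() {
		if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("crio socket is ready")
	}
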
	I0501 03:40:30.661395   69237 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0501 03:40:30.682810   69237 system_pods.go:43] waiting for kube-system pods to appear ...
	I0501 03:40:30.694277   69237 system_pods.go:59] 8 kube-system pods found
	I0501 03:40:30.694326   69237 system_pods.go:61] "coredns-7db6d8ff4d-9r7dt" [75d43a25-d309-427e-befc-7f1851b90d8c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0501 03:40:30.694343   69237 system_pods.go:61] "etcd-default-k8s-diff-port-715118" [21f6a4cd-f662-4865-9208-83959f0a6782] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0501 03:40:30.694354   69237 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-715118" [4dc3e45e-a5d8-480f-a8e8-763ecab0976b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0501 03:40:30.694369   69237 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-715118" [340580a3-040e-48fc-b89c-36a4f6fccfc2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0501 03:40:30.694376   69237 system_pods.go:61] "kube-proxy-vg7ts" [e55f3363-178c-427a-819d-0dc94c3116f3] Running
	I0501 03:40:30.694388   69237 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-715118" [b850fc4a-da6b-4714-98bb-e36e185880dd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0501 03:40:30.694417   69237 system_pods.go:61] "metrics-server-569cc877fc-2btjj" [9b8ff94d-9e59-46d4-ac6d-7accca8b3552] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0501 03:40:30.694427   69237 system_pods.go:61] "storage-provisioner" [d44a3cf1-c8a5-4a20-8dd6-b854680b33b9] Running
	I0501 03:40:30.694435   69237 system_pods.go:74] duration metric: took 11.599113ms to wait for pod list to return data ...
	I0501 03:40:30.694449   69237 node_conditions.go:102] verifying NodePressure condition ...
	I0501 03:40:30.697795   69237 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 03:40:30.697825   69237 node_conditions.go:123] node cpu capacity is 2
	I0501 03:40:30.697838   69237 node_conditions.go:105] duration metric: took 3.383507ms to run NodePressure ...
	I0501 03:40:30.697858   69237 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:30.978827   69237 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0501 03:40:30.984628   69237 kubeadm.go:733] kubelet initialised
	I0501 03:40:30.984650   69237 kubeadm.go:734] duration metric: took 5.799905ms waiting for restarted kubelet to initialise ...
	I0501 03:40:30.984656   69237 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 03:40:30.992354   69237 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-9r7dt" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:30.999663   69237 pod_ready.go:97] node "default-k8s-diff-port-715118" hosting pod "coredns-7db6d8ff4d-9r7dt" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-715118" has status "Ready":"False"
	I0501 03:40:30.999690   69237 pod_ready.go:81] duration metric: took 7.312969ms for pod "coredns-7db6d8ff4d-9r7dt" in "kube-system" namespace to be "Ready" ...
	E0501 03:40:30.999700   69237 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-715118" hosting pod "coredns-7db6d8ff4d-9r7dt" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-715118" has status "Ready":"False"
	I0501 03:40:30.999706   69237 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:31.006163   69237 pod_ready.go:97] node "default-k8s-diff-port-715118" hosting pod "etcd-default-k8s-diff-port-715118" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-715118" has status "Ready":"False"
	I0501 03:40:31.006187   69237 pod_ready.go:81] duration metric: took 6.471262ms for pod "etcd-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	E0501 03:40:31.006199   69237 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-715118" hosting pod "etcd-default-k8s-diff-port-715118" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-715118" has status "Ready":"False"
	I0501 03:40:31.006208   69237 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:31.011772   69237 pod_ready.go:97] node "default-k8s-diff-port-715118" hosting pod "kube-apiserver-default-k8s-diff-port-715118" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-715118" has status "Ready":"False"
	I0501 03:40:31.011793   69237 pod_ready.go:81] duration metric: took 5.576722ms for pod "kube-apiserver-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	E0501 03:40:31.011803   69237 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-715118" hosting pod "kube-apiserver-default-k8s-diff-port-715118" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-715118" has status "Ready":"False"
	I0501 03:40:31.011810   69237 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:31.086163   69237 pod_ready.go:97] node "default-k8s-diff-port-715118" hosting pod "kube-controller-manager-default-k8s-diff-port-715118" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-715118" has status "Ready":"False"
	I0501 03:40:31.086194   69237 pod_ready.go:81] duration metric: took 74.377197ms for pod "kube-controller-manager-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	E0501 03:40:31.086207   69237 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-715118" hosting pod "kube-controller-manager-default-k8s-diff-port-715118" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-715118" has status "Ready":"False"
	I0501 03:40:31.086214   69237 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-vg7ts" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:31.487056   69237 pod_ready.go:92] pod "kube-proxy-vg7ts" in "kube-system" namespace has status "Ready":"True"
	I0501 03:40:31.487078   69237 pod_ready.go:81] duration metric: took 400.857543ms for pod "kube-proxy-vg7ts" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:31.487088   69237 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:33.502448   69237 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-715118" in "kube-system" namespace has status "Ready":"False"
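
	The pod_ready.go lines above poll each system-critical pod for the Ready condition, skipping the wait while the hosting node itself reports "Ready":"False". A rough client-go sketch of that kind of check (the kubeconfig path is the one the log mentions, the pod name is copied from the kube-proxy line above, and the 4-minute budget mirrors the "waiting up to 4m0s" messages; none of this is minikube's actual implementation):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod has condition Ready=True.
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18779-13391/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(4 * time.Minute) // "waiting up to 4m0s" in the log
		for time.Now().Before(deadline) {
			pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-proxy-vg7ts", metav1.GetOptions{})
			if err == nil && podReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for pod to be Ready")
	}
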
	I0501 03:40:35.530015   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetIP
	I0501 03:40:35.533706   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:35.534178   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:35.534254   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:35.534515   69580 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0501 03:40:35.541542   69580 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0501 03:40:35.563291   69580 kubeadm.go:877] updating cluster {Name:old-k8s-version-503971 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-503971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.104 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0501 03:40:35.563434   69580 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0501 03:40:35.563512   69580 ssh_runner.go:195] Run: sudo crictl images --output json
	I0501 03:40:35.646548   69580 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0501 03:40:35.646635   69580 ssh_runner.go:195] Run: which lz4
	I0501 03:40:35.652824   69580 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0501 03:40:35.660056   69580 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0501 03:40:35.660099   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0501 03:40:33.808828   68640 main.go:141] libmachine: (no-preload-892672) Calling .Start
	I0501 03:40:33.809083   68640 main.go:141] libmachine: (no-preload-892672) Ensuring networks are active...
	I0501 03:40:33.809829   68640 main.go:141] libmachine: (no-preload-892672) Ensuring network default is active
	I0501 03:40:33.810166   68640 main.go:141] libmachine: (no-preload-892672) Ensuring network mk-no-preload-892672 is active
	I0501 03:40:33.810632   68640 main.go:141] libmachine: (no-preload-892672) Getting domain xml...
	I0501 03:40:33.811386   68640 main.go:141] libmachine: (no-preload-892672) Creating domain...
	I0501 03:40:35.133886   68640 main.go:141] libmachine: (no-preload-892672) Waiting to get IP...
	I0501 03:40:35.134756   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:35.135216   68640 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:40:35.135280   68640 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:40:35.135178   70664 retry.go:31] will retry after 275.796908ms: waiting for machine to come up
	I0501 03:40:35.412670   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:35.413206   68640 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:40:35.413232   68640 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:40:35.413162   70664 retry.go:31] will retry after 326.173381ms: waiting for machine to come up
	I0501 03:40:35.740734   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:35.741314   68640 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:40:35.741342   68640 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:40:35.741260   70664 retry.go:31] will retry after 476.50915ms: waiting for machine to come up
	I0501 03:40:36.219908   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:36.220440   68640 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:40:36.220473   68640 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:40:36.220399   70664 retry.go:31] will retry after 377.277784ms: waiting for machine to come up
	I0501 03:40:36.598936   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:36.599391   68640 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:40:36.599417   68640 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:40:36.599348   70664 retry.go:31] will retry after 587.166276ms: waiting for machine to come up
	I0501 03:40:37.188757   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:37.189406   68640 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:40:37.189441   68640 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:40:37.189311   70664 retry.go:31] will retry after 801.958256ms: waiting for machine to come up
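
	The retry.go lines in the no-preload-892672 stream ("will retry after 275.796908ms ... 326.173381ms ... 476.50915ms ...") show the usual wait-for-IP pattern: probe for the DHCP lease, and on failure sleep a growing, jittered delay before trying again. A minimal sketch of such a helper (retryWithBackoff, its growth factor and the fake probe are illustrative, not minikube's retry implementation):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff calls probe until it succeeds or attempts run out,
	// sleeping a jittered, growing delay between tries.
	func retryWithBackoff(attempts int, base time.Duration, probe func() error) error {
		delay := base
		var err error
		for i := 0; i < attempts; i++ {
			if err = probe(); err == nil {
				return nil
			}
			wait := delay + time.Duration(rand.Int63n(int64(delay)/2))
			fmt.Printf("will retry after %s: %v\n", wait, err)
			time.Sleep(wait)
			delay = delay * 3 / 2 // grow the delay, in the spirit of the log's increasing waits
		}
		return err
	}

	func main() {
		i := 0
		err := retryWithBackoff(10, 300*time.Millisecond, func() error {
			i++
			if i < 4 {
				return errors.New("waiting for machine to come up")
			}
			return nil
		})
		fmt.Println("result:", err)
	}
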
	I0501 03:40:34.658104   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:36.660517   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:35.998453   69237 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-715118" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:38.495088   69237 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-715118" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:39.004175   69237 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-715118" in "kube-system" namespace has status "Ready":"True"
	I0501 03:40:39.004198   69237 pod_ready.go:81] duration metric: took 7.517103824s for pod "kube-scheduler-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:39.004209   69237 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:37.870306   69580 crio.go:462] duration metric: took 2.217531377s to copy over tarball
	I0501 03:40:37.870393   69580 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0501 03:40:37.992669   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:37.993052   68640 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:40:37.993080   68640 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:40:37.993016   70664 retry.go:31] will retry after 1.085029482s: waiting for machine to come up
	I0501 03:40:39.079315   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:39.079739   68640 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:40:39.079779   68640 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:40:39.079682   70664 retry.go:31] will retry after 1.140448202s: waiting for machine to come up
	I0501 03:40:40.221645   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:40.222165   68640 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:40:40.222192   68640 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:40:40.222103   70664 retry.go:31] will retry after 1.434247869s: waiting for machine to come up
	I0501 03:40:41.658447   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:41.659034   68640 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:40:41.659072   68640 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:40:41.659003   70664 retry.go:31] will retry after 1.759453732s: waiting for machine to come up
	I0501 03:40:39.157834   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:41.164729   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:43.658248   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:41.014770   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:43.513038   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:45.516821   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:41.534681   69580 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.664236925s)
	I0501 03:40:41.599216   69580 crio.go:469] duration metric: took 3.72886857s to extract the tarball
	I0501 03:40:41.599238   69580 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0501 03:40:41.649221   69580 ssh_runner.go:195] Run: sudo crictl images --output json
	I0501 03:40:41.697169   69580 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0501 03:40:41.697198   69580 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0501 03:40:41.697302   69580 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0501 03:40:41.697346   69580 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0501 03:40:41.697367   69580 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0501 03:40:41.697352   69580 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0501 03:40:41.697375   69580 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0501 03:40:41.697329   69580 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0501 03:40:41.697275   69580 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 03:40:41.697329   69580 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0501 03:40:41.698950   69580 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0501 03:40:41.699010   69580 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0501 03:40:41.699114   69580 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0501 03:40:41.699251   69580 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 03:40:41.699292   69580 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0501 03:40:41.699020   69580 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0501 03:40:41.699550   69580 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0501 03:40:41.699715   69580 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0501 03:40:41.830042   69580 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0501 03:40:41.881770   69580 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0501 03:40:41.881834   69580 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0501 03:40:41.881896   69580 ssh_runner.go:195] Run: which crictl
	I0501 03:40:41.887083   69580 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0501 03:40:41.894597   69580 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0501 03:40:41.935993   69580 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0501 03:40:41.937339   69580 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0501 03:40:41.961728   69580 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0501 03:40:41.961778   69580 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0501 03:40:41.961827   69580 ssh_runner.go:195] Run: which crictl
	I0501 03:40:42.004327   69580 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0501 03:40:42.004395   69580 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0501 03:40:42.004435   69580 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0501 03:40:42.004493   69580 ssh_runner.go:195] Run: which crictl
	I0501 03:40:42.053743   69580 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0501 03:40:42.055914   69580 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0501 03:40:42.056267   69580 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0501 03:40:42.056610   69580 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0501 03:40:42.060229   69580 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0501 03:40:42.070489   69580 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0501 03:40:42.127829   69580 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0501 03:40:42.127880   69580 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0501 03:40:42.127927   69580 ssh_runner.go:195] Run: which crictl
	I0501 03:40:42.201731   69580 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0501 03:40:42.201783   69580 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0501 03:40:42.201814   69580 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0501 03:40:42.201842   69580 ssh_runner.go:195] Run: which crictl
	I0501 03:40:42.211112   69580 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0501 03:40:42.211163   69580 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0501 03:40:42.211227   69580 ssh_runner.go:195] Run: which crictl
	I0501 03:40:42.217794   69580 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0501 03:40:42.217835   69580 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0501 03:40:42.217873   69580 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0501 03:40:42.217917   69580 ssh_runner.go:195] Run: which crictl
	I0501 03:40:42.217873   69580 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0501 03:40:42.220250   69580 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0501 03:40:42.274880   69580 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0501 03:40:42.294354   69580 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0501 03:40:42.294436   69580 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0501 03:40:42.305191   69580 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0501 03:40:42.342502   69580 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0501 03:40:42.560474   69580 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 03:40:42.712970   69580 cache_images.go:92] duration metric: took 1.015752585s to LoadCachedImages
	W0501 03:40:42.713057   69580 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
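
	The cache_images.go lines above make a per-image decision: inspect the image in the container runtime, and if it is missing or present under a different ID than expected, mark it "needs transfer", remove it with crictl, and fall back to the on-disk cache. A rough sketch of that check, shelling out to the same podman command the log records (the expected ID is copied from the kube-scheduler line above and serves only as an example; the real check runs over SSH and then loads the cached tarball):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// needsTransfer returns true when the image is absent from the container
	// runtime or present under a different ID than expected.
	func needsTransfer(image, expectedID string) bool {
		out, err := exec.Command("sudo", "podman", "image", "inspect",
			"--format", "{{.Id}}", image).Output()
		if err != nil {
			return true // not present at all
		}
		return strings.TrimSpace(string(out)) != expectedID
	}

	func main() {
		img := "registry.k8s.io/kube-scheduler:v1.20.0"
		want := "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899"
		if needsTransfer(img, want) {
			fmt.Printf("%s needs transfer: load it from the local image cache\n", img)
		} else {
			fmt.Printf("%s already present in the runtime\n", img)
		}
	}
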
	I0501 03:40:42.713074   69580 kubeadm.go:928] updating node { 192.168.61.104 8443 v1.20.0 crio true true} ...
	I0501 03:40:42.713227   69580 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-503971 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.104
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-503971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0501 03:40:42.713323   69580 ssh_runner.go:195] Run: crio config
	I0501 03:40:42.771354   69580 cni.go:84] Creating CNI manager for ""
	I0501 03:40:42.771384   69580 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0501 03:40:42.771403   69580 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0501 03:40:42.771428   69580 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.104 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-503971 NodeName:old-k8s-version-503971 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.104"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.104 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0501 03:40:42.771644   69580 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.104
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-503971"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.104
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.104"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0501 03:40:42.771722   69580 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0501 03:40:42.784978   69580 binaries.go:44] Found k8s binaries, skipping transfer
	I0501 03:40:42.785057   69580 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0501 03:40:42.800945   69580 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0501 03:40:42.824293   69580 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0501 03:40:42.845949   69580 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0501 03:40:42.867390   69580 ssh_runner.go:195] Run: grep 192.168.61.104	control-plane.minikube.internal$ /etc/hosts
	I0501 03:40:42.872038   69580 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.104	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
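
	The one-liner above updates /etc/hosts in place: strip any existing control-plane.minikube.internal entry, append the fresh IP mapping, and copy the temp file back over the original. A pure-Go approximation of the same idea (the rename-based swap and the tab-suffix match are assumptions for illustration; the real step runs the shell pipeline shown in the log over SSH with sudo):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// upsertHostsEntry removes any line that already maps host and appends a
	// new "ip<TAB>host" entry, writing the result back via a temp file.
	func upsertHostsEntry(path, ip, host string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(string(data), "\n") {
			if strings.HasSuffix(strings.TrimRight(line, " \t"), "\t"+host) {
				continue // drop the stale mapping
			}
			kept = append(kept, line)
		}
		if n := len(kept); n > 0 && kept[n-1] == "" {
			kept = kept[:n-1] // avoid a stray blank line before the new entry
		}
		kept = append(kept, ip+"\t"+host)
		tmp := path + ".tmp"
		if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
			return err
		}
		return os.Rename(tmp, path)
	}

	func main() {
		if err := upsertHostsEntry("/etc/hosts", "192.168.61.104", "control-plane.minikube.internal"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
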
	I0501 03:40:42.890213   69580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:40:43.041533   69580 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 03:40:43.070048   69580 certs.go:68] Setting up /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971 for IP: 192.168.61.104
	I0501 03:40:43.070075   69580 certs.go:194] generating shared ca certs ...
	I0501 03:40:43.070097   69580 certs.go:226] acquiring lock for ca certs: {Name:mk85a75d14c00780d4bf09cd9bdaf0f96f1b8fce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:40:43.070315   69580 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key
	I0501 03:40:43.070388   69580 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key
	I0501 03:40:43.070419   69580 certs.go:256] generating profile certs ...
	I0501 03:40:43.070558   69580 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/client.key
	I0501 03:40:43.070631   69580 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/apiserver.key.760b883a
	I0501 03:40:43.070670   69580 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/proxy-client.key
	I0501 03:40:43.070804   69580 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem (1338 bytes)
	W0501 03:40:43.070852   69580 certs.go:480] ignoring /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724_empty.pem, impossibly tiny 0 bytes
	I0501 03:40:43.070865   69580 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem (1679 bytes)
	I0501 03:40:43.070914   69580 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem (1078 bytes)
	I0501 03:40:43.070955   69580 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem (1123 bytes)
	I0501 03:40:43.070985   69580 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem (1675 bytes)
	I0501 03:40:43.071044   69580 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem (1708 bytes)
	I0501 03:40:43.071869   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0501 03:40:43.110078   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0501 03:40:43.164382   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0501 03:40:43.197775   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0501 03:40:43.230575   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0501 03:40:43.260059   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0501 03:40:43.288704   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0501 03:40:43.315417   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0501 03:40:43.363440   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0501 03:40:43.396043   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem --> /usr/share/ca-certificates/20724.pem (1338 bytes)
	I0501 03:40:43.425997   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem --> /usr/share/ca-certificates/207242.pem (1708 bytes)
	I0501 03:40:43.456927   69580 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0501 03:40:43.478177   69580 ssh_runner.go:195] Run: openssl version
	I0501 03:40:43.484513   69580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0501 03:40:43.497230   69580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:40:43.504025   69580 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  1 02:08 /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:40:43.504112   69580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:40:43.513309   69580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0501 03:40:43.528592   69580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20724.pem && ln -fs /usr/share/ca-certificates/20724.pem /etc/ssl/certs/20724.pem"
	I0501 03:40:43.544560   69580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20724.pem
	I0501 03:40:43.550975   69580 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  1 02:20 /usr/share/ca-certificates/20724.pem
	I0501 03:40:43.551047   69580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20724.pem
	I0501 03:40:43.559214   69580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20724.pem /etc/ssl/certs/51391683.0"
	I0501 03:40:43.575362   69580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/207242.pem && ln -fs /usr/share/ca-certificates/207242.pem /etc/ssl/certs/207242.pem"
	I0501 03:40:43.587848   69580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/207242.pem
	I0501 03:40:43.593131   69580 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  1 02:20 /usr/share/ca-certificates/207242.pem
	I0501 03:40:43.593183   69580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/207242.pem
	I0501 03:40:43.600365   69580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/207242.pem /etc/ssl/certs/3ec20f2e.0"
	I0501 03:40:43.613912   69580 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0501 03:40:43.619576   69580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0501 03:40:43.628551   69580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0501 03:40:43.637418   69580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0501 03:40:43.645060   69580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0501 03:40:43.654105   69580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0501 03:40:43.663501   69580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
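
	Each "openssl x509 -noout -in ... -checkend 86400" call above asks one question: does the certificate expire within the next 24 hours? The same check expressed in Go with crypto/x509 (the certificate path is one of the paths from the log; reading and parsing locally is an illustration, since the real check runs openssl on the remote machine):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires within d.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if soon {
			fmt.Println("certificate expires within 24h: regenerate it")
		} else {
			fmt.Println("certificate is still valid beyond 24h")
		}
	}
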
	I0501 03:40:43.670855   69580 kubeadm.go:391] StartCluster: {Name:old-k8s-version-503971 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-503971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.104 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 03:40:43.670937   69580 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0501 03:40:43.670982   69580 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0501 03:40:43.720350   69580 cri.go:89] found id: ""
	I0501 03:40:43.720419   69580 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0501 03:40:43.732518   69580 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0501 03:40:43.732544   69580 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0501 03:40:43.732552   69580 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0501 03:40:43.732612   69580 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0501 03:40:43.743804   69580 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0501 03:40:43.745071   69580 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-503971" does not appear in /home/jenkins/minikube-integration/18779-13391/kubeconfig
	I0501 03:40:43.745785   69580 kubeconfig.go:62] /home/jenkins/minikube-integration/18779-13391/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-503971" cluster setting kubeconfig missing "old-k8s-version-503971" context setting]
	I0501 03:40:43.747054   69580 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/kubeconfig: {Name:mk5d1131557f885800877622151c0915337cb23a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:40:43.748989   69580 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0501 03:40:43.760349   69580 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.104
	I0501 03:40:43.760389   69580 kubeadm.go:1154] stopping kube-system containers ...
	I0501 03:40:43.760403   69580 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0501 03:40:43.760473   69580 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0501 03:40:43.804745   69580 cri.go:89] found id: ""
	I0501 03:40:43.804841   69580 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0501 03:40:43.825960   69580 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0501 03:40:43.838038   69580 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0501 03:40:43.838062   69580 kubeadm.go:156] found existing configuration files:
	
	I0501 03:40:43.838115   69580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0501 03:40:43.849075   69580 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0501 03:40:43.849164   69580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0501 03:40:43.860634   69580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0501 03:40:43.871244   69580 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0501 03:40:43.871313   69580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0501 03:40:43.882184   69580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0501 03:40:43.893193   69580 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0501 03:40:43.893254   69580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0501 03:40:43.904257   69580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0501 03:40:43.915414   69580 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0501 03:40:43.915492   69580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0501 03:40:43.927372   69580 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0501 03:40:43.939117   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:44.098502   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:45.150125   69580 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.051581029s)
	I0501 03:40:45.150161   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:45.443307   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:45.563369   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:45.678620   69580 api_server.go:52] waiting for apiserver process to appear ...
	I0501 03:40:45.678731   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:46.179675   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:43.419480   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:43.419952   68640 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:40:43.419980   68640 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:40:43.419907   70664 retry.go:31] will retry after 2.329320519s: waiting for machine to come up
	I0501 03:40:45.751405   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:45.751871   68640 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:40:45.751902   68640 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:40:45.751822   70664 retry.go:31] will retry after 3.262804058s: waiting for machine to come up
	I0501 03:40:45.659845   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:48.157145   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:48.013520   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:50.514729   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:46.679449   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:47.179179   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:47.678890   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:48.179190   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:48.679276   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:49.179698   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:49.679121   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:50.179723   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:50.679603   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:51.179094   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:49.016460   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:49.016856   68640 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:40:49.016878   68640 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:40:49.016826   70664 retry.go:31] will retry after 3.440852681s: waiting for machine to come up
	I0501 03:40:52.461349   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:52.461771   68640 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:40:52.461800   68640 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:40:52.461722   70664 retry.go:31] will retry after 4.871322728s: waiting for machine to come up
	I0501 03:40:50.157703   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:52.655677   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:53.011851   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:55.510458   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:51.679850   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:52.179568   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:52.679100   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:53.179470   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:53.679115   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:54.178815   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:54.679769   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:55.179576   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:55.678864   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:56.179617   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:57.335855   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.336228   68640 main.go:141] libmachine: (no-preload-892672) Found IP for machine: 192.168.39.144
	I0501 03:40:57.336263   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has current primary IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.336281   68640 main.go:141] libmachine: (no-preload-892672) Reserving static IP address...
	I0501 03:40:57.336629   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "no-preload-892672", mac: "52:54:00:c7:6d:9a", ip: "192.168.39.144"} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:40:57.336649   68640 main.go:141] libmachine: (no-preload-892672) DBG | skip adding static IP to network mk-no-preload-892672 - found existing host DHCP lease matching {name: "no-preload-892672", mac: "52:54:00:c7:6d:9a", ip: "192.168.39.144"}
	I0501 03:40:57.336661   68640 main.go:141] libmachine: (no-preload-892672) Reserved static IP address: 192.168.39.144
	I0501 03:40:57.336671   68640 main.go:141] libmachine: (no-preload-892672) Waiting for SSH to be available...
	I0501 03:40:57.336680   68640 main.go:141] libmachine: (no-preload-892672) DBG | Getting to WaitForSSH function...
	I0501 03:40:57.338862   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.339135   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:40:57.339163   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.339268   68640 main.go:141] libmachine: (no-preload-892672) DBG | Using SSH client type: external
	I0501 03:40:57.339296   68640 main.go:141] libmachine: (no-preload-892672) DBG | Using SSH private key: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/no-preload-892672/id_rsa (-rw-------)
	I0501 03:40:57.339328   68640 main.go:141] libmachine: (no-preload-892672) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.144 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18779-13391/.minikube/machines/no-preload-892672/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0501 03:40:57.339341   68640 main.go:141] libmachine: (no-preload-892672) DBG | About to run SSH command:
	I0501 03:40:57.339370   68640 main.go:141] libmachine: (no-preload-892672) DBG | exit 0
	I0501 03:40:57.466775   68640 main.go:141] libmachine: (no-preload-892672) DBG | SSH cmd err, output: <nil>: 
	I0501 03:40:57.467183   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetConfigRaw
	I0501 03:40:57.467890   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetIP
	I0501 03:40:57.470097   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.470527   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:40:57.470555   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.470767   68640 profile.go:143] Saving config to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/no-preload-892672/config.json ...
	I0501 03:40:57.470929   68640 machine.go:94] provisionDockerMachine start ...
	I0501 03:40:57.470950   68640 main.go:141] libmachine: (no-preload-892672) Calling .DriverName
	I0501 03:40:57.471177   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:40:57.473301   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.473599   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:40:57.473626   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.473724   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHPort
	I0501 03:40:57.473863   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:40:57.474032   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:40:57.474181   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHUsername
	I0501 03:40:57.474337   68640 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:57.474545   68640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I0501 03:40:57.474558   68640 main.go:141] libmachine: About to run SSH command:
	hostname
	I0501 03:40:57.591733   68640 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0501 03:40:57.591766   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetMachineName
	I0501 03:40:57.592016   68640 buildroot.go:166] provisioning hostname "no-preload-892672"
	I0501 03:40:57.592048   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetMachineName
	I0501 03:40:57.592308   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:40:57.595192   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.595593   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:40:57.595618   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.595697   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHPort
	I0501 03:40:57.595891   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:40:57.596041   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:40:57.596192   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHUsername
	I0501 03:40:57.596376   68640 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:57.596544   68640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I0501 03:40:57.596559   68640 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-892672 && echo "no-preload-892672" | sudo tee /etc/hostname
	I0501 03:40:57.727738   68640 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-892672
	
	I0501 03:40:57.727770   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:40:57.730673   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.731033   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:40:57.731066   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.731202   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHPort
	I0501 03:40:57.731383   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:40:57.731577   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:40:57.731744   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHUsername
	I0501 03:40:57.731936   68640 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:57.732155   68640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I0501 03:40:57.732173   68640 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-892672' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-892672/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-892672' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0501 03:40:57.857465   68640 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0501 03:40:57.857492   68640 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18779-13391/.minikube CaCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18779-13391/.minikube}
	I0501 03:40:57.857515   68640 buildroot.go:174] setting up certificates
	I0501 03:40:57.857524   68640 provision.go:84] configureAuth start
	I0501 03:40:57.857532   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetMachineName
	I0501 03:40:57.857791   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetIP
	I0501 03:40:57.860530   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.860881   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:40:57.860911   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.861035   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:40:57.863122   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.863445   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:40:57.863472   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.863565   68640 provision.go:143] copyHostCerts
	I0501 03:40:57.863614   68640 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem, removing ...
	I0501 03:40:57.863624   68640 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem
	I0501 03:40:57.863689   68640 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem (1078 bytes)
	I0501 03:40:57.863802   68640 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem, removing ...
	I0501 03:40:57.863814   68640 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem
	I0501 03:40:57.863843   68640 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem (1123 bytes)
	I0501 03:40:57.863928   68640 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem, removing ...
	I0501 03:40:57.863938   68640 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem
	I0501 03:40:57.863962   68640 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem (1675 bytes)
	I0501 03:40:57.864040   68640 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem org=jenkins.no-preload-892672 san=[127.0.0.1 192.168.39.144 localhost minikube no-preload-892672]
	I0501 03:40:54.658003   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:56.658041   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:58.125270   68640 provision.go:177] copyRemoteCerts
	I0501 03:40:58.125321   68640 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0501 03:40:58.125342   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:40:58.127890   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:58.128299   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:40:58.128330   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:58.128469   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHPort
	I0501 03:40:58.128645   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:40:58.128809   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHUsername
	I0501 03:40:58.128941   68640 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/no-preload-892672/id_rsa Username:docker}
	I0501 03:40:58.222112   68640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0501 03:40:58.249760   68640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0501 03:40:58.277574   68640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0501 03:40:58.304971   68640 provision.go:87] duration metric: took 447.420479ms to configureAuth
	I0501 03:40:58.305017   68640 buildroot.go:189] setting minikube options for container-runtime
	I0501 03:40:58.305270   68640 config.go:182] Loaded profile config "no-preload-892672": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 03:40:58.305434   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:40:58.308098   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:58.308487   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:40:58.308528   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:58.308658   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHPort
	I0501 03:40:58.308857   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:40:58.309025   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:40:58.309173   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHUsername
	I0501 03:40:58.309354   68640 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:58.309510   68640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I0501 03:40:58.309526   68640 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0501 03:40:58.609833   68640 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0501 03:40:58.609859   68640 machine.go:97] duration metric: took 1.138916322s to provisionDockerMachine
	I0501 03:40:58.609873   68640 start.go:293] postStartSetup for "no-preload-892672" (driver="kvm2")
	I0501 03:40:58.609885   68640 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0501 03:40:58.609905   68640 main.go:141] libmachine: (no-preload-892672) Calling .DriverName
	I0501 03:40:58.610271   68640 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0501 03:40:58.610307   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:40:58.612954   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:58.613308   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:40:58.613322   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:58.613485   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHPort
	I0501 03:40:58.613683   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:40:58.613871   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHUsername
	I0501 03:40:58.614005   68640 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/no-preload-892672/id_rsa Username:docker}
	I0501 03:40:58.702752   68640 ssh_runner.go:195] Run: cat /etc/os-release
	I0501 03:40:58.707441   68640 info.go:137] Remote host: Buildroot 2023.02.9
	I0501 03:40:58.707468   68640 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13391/.minikube/addons for local assets ...
	I0501 03:40:58.707577   68640 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13391/.minikube/files for local assets ...
	I0501 03:40:58.707646   68640 filesync.go:149] local asset: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem -> 207242.pem in /etc/ssl/certs
	I0501 03:40:58.707728   68640 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0501 03:40:58.718247   68640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem --> /etc/ssl/certs/207242.pem (1708 bytes)
	I0501 03:40:58.745184   68640 start.go:296] duration metric: took 135.29943ms for postStartSetup
	I0501 03:40:58.745218   68640 fix.go:56] duration metric: took 24.96116093s for fixHost
	I0501 03:40:58.745236   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:40:58.747809   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:58.748228   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:40:58.748261   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:58.748380   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHPort
	I0501 03:40:58.748591   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:40:58.748747   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:40:58.748870   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHUsername
	I0501 03:40:58.749049   68640 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:58.749262   68640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I0501 03:40:58.749275   68640 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0501 03:40:58.867651   68640 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714534858.808639015
	
	I0501 03:40:58.867676   68640 fix.go:216] guest clock: 1714534858.808639015
	I0501 03:40:58.867686   68640 fix.go:229] Guest: 2024-05-01 03:40:58.808639015 +0000 UTC Remote: 2024-05-01 03:40:58.745221709 +0000 UTC m=+370.854832040 (delta=63.417306ms)
	I0501 03:40:58.867735   68640 fix.go:200] guest clock delta is within tolerance: 63.417306ms
	I0501 03:40:58.867746   68640 start.go:83] releasing machines lock for "no-preload-892672", held for 25.083724737s
	I0501 03:40:58.867770   68640 main.go:141] libmachine: (no-preload-892672) Calling .DriverName
	I0501 03:40:58.868053   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetIP
	I0501 03:40:58.871193   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:58.871618   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:40:58.871664   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:58.871815   68640 main.go:141] libmachine: (no-preload-892672) Calling .DriverName
	I0501 03:40:58.872441   68640 main.go:141] libmachine: (no-preload-892672) Calling .DriverName
	I0501 03:40:58.872665   68640 main.go:141] libmachine: (no-preload-892672) Calling .DriverName
	I0501 03:40:58.872750   68640 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0501 03:40:58.872787   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:40:58.872918   68640 ssh_runner.go:195] Run: cat /version.json
	I0501 03:40:58.872946   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:40:58.875797   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:58.875976   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:58.876230   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:40:58.876341   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:58.876377   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHPort
	I0501 03:40:58.876502   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:40:58.876539   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:58.876587   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:40:58.876756   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHUsername
	I0501 03:40:58.876894   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHPort
	I0501 03:40:58.876969   68640 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/no-preload-892672/id_rsa Username:docker}
	I0501 03:40:58.877057   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:40:58.877246   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHUsername
	I0501 03:40:58.877424   68640 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/no-preload-892672/id_rsa Username:docker}
	I0501 03:40:58.983384   68640 ssh_runner.go:195] Run: systemctl --version
	I0501 03:40:58.991625   68640 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0501 03:40:59.143916   68640 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0501 03:40:59.151065   68640 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0501 03:40:59.151124   68640 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0501 03:40:59.168741   68640 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0501 03:40:59.168763   68640 start.go:494] detecting cgroup driver to use...
	I0501 03:40:59.168825   68640 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0501 03:40:59.188524   68640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 03:40:59.205602   68640 docker.go:217] disabling cri-docker service (if available) ...
	I0501 03:40:59.205668   68640 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0501 03:40:59.221173   68640 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0501 03:40:59.236546   68640 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0501 03:40:59.364199   68640 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0501 03:40:59.533188   68640 docker.go:233] disabling docker service ...
	I0501 03:40:59.533266   68640 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0501 03:40:59.549488   68640 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0501 03:40:59.562910   68640 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0501 03:40:59.705451   68640 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0501 03:40:59.843226   68640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0501 03:40:59.858878   68640 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 03:40:59.882729   68640 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0501 03:40:59.882808   68640 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:59.895678   68640 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0501 03:40:59.895763   68640 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:59.908439   68640 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:59.921319   68640 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:59.934643   68640 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0501 03:40:59.947416   68640 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:59.959887   68640 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:59.981849   68640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:59.994646   68640 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0501 03:41:00.006059   68640 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0501 03:41:00.006133   68640 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0501 03:41:00.024850   68640 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0501 03:41:00.036834   68640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:41:00.161283   68640 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0501 03:41:00.312304   68640 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0501 03:41:00.312375   68640 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0501 03:41:00.317980   68640 start.go:562] Will wait 60s for crictl version
	I0501 03:41:00.318043   68640 ssh_runner.go:195] Run: which crictl
	I0501 03:41:00.322780   68640 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0501 03:41:00.362830   68640 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0501 03:41:00.362920   68640 ssh_runner.go:195] Run: crio --version
	I0501 03:41:00.399715   68640 ssh_runner.go:195] Run: crio --version
	I0501 03:41:00.432510   68640 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0501 03:40:57.511719   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:00.013693   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:56.679034   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:57.179062   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:57.679579   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:58.179221   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:58.679728   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:59.178851   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:59.679647   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:00.179397   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:00.678839   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:01.179679   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:00.433777   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetIP
	I0501 03:41:00.436557   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:41:00.436892   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:41:00.436920   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:41:00.437124   68640 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0501 03:41:00.441861   68640 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0501 03:41:00.455315   68640 kubeadm.go:877] updating cluster {Name:no-preload-892672 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:no-preload-892672 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.144 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0501 03:41:00.455417   68640 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0501 03:41:00.455462   68640 ssh_runner.go:195] Run: sudo crictl images --output json
	I0501 03:41:00.496394   68640 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0501 03:41:00.496422   68640 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.0 registry.k8s.io/kube-controller-manager:v1.30.0 registry.k8s.io/kube-scheduler:v1.30.0 registry.k8s.io/kube-proxy:v1.30.0 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0501 03:41:00.496508   68640 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 03:41:00.496532   68640 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0501 03:41:00.496551   68640 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0
	I0501 03:41:00.496581   68640 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0501 03:41:00.496679   68640 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0
	I0501 03:41:00.496701   68640 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0501 03:41:00.496736   68640 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0501 03:41:00.496529   68640 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0
	I0501 03:41:00.498207   68640 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0
	I0501 03:41:00.498227   68640 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0501 03:41:00.498246   68640 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0
	I0501 03:41:00.498250   68640 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0
	I0501 03:41:00.498270   68640 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0501 03:41:00.498254   68640 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0501 03:41:00.498298   68640 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0501 03:41:00.498477   68640 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 03:41:00.617430   68640 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.0
	I0501 03:41:00.621346   68640 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.0
	I0501 03:41:00.622759   68640 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0501 03:41:00.628313   68640 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.0
	I0501 03:41:00.629087   68640 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0501 03:41:00.633625   68640 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0501 03:41:00.652130   68640 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.0
	I0501 03:41:00.722500   68640 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.0" does not exist at hash "c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0" in container runtime
	I0501 03:41:00.722554   68640 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.0
	I0501 03:41:00.722623   68640 ssh_runner.go:195] Run: which crictl
	I0501 03:41:00.796476   68640 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.0" does not exist at hash "259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced" in container runtime
	I0501 03:41:00.796530   68640 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.0
	I0501 03:41:00.796580   68640 ssh_runner.go:195] Run: which crictl
	I0501 03:41:00.944235   68640 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.0" does not exist at hash "c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b" in container runtime
	I0501 03:41:00.944262   68640 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0501 03:41:00.944289   68640 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0501 03:41:00.944297   68640 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0501 03:41:00.944305   68640 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0501 03:41:00.944325   68640 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0501 03:41:00.944344   68640 ssh_runner.go:195] Run: which crictl
	I0501 03:41:00.944357   68640 ssh_runner.go:195] Run: which crictl
	I0501 03:41:00.944398   68640 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.0" needs transfer: "registry.k8s.io/kube-proxy:v1.30.0" does not exist at hash "a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b" in container runtime
	I0501 03:41:00.944348   68640 ssh_runner.go:195] Run: which crictl
	I0501 03:41:00.944434   68640 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.0
	I0501 03:41:00.944422   68640 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0
	I0501 03:41:00.944452   68640 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0
	I0501 03:41:00.944464   68640 ssh_runner.go:195] Run: which crictl
	I0501 03:41:00.998765   68640 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.0
	I0501 03:41:00.998791   68640 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0
	I0501 03:41:00.998846   68640 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0501 03:41:00.998891   68640 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0501 03:41:01.017469   68640 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.0
	I0501 03:41:01.017494   68640 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0
	I0501 03:41:01.017584   68640 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0501 03:41:01.018040   68640 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0501 03:41:01.105445   68640 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0501 03:41:01.105517   68640 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0
	I0501 03:41:01.105560   68640 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0
	I0501 03:41:01.105583   68640 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.0 (exists)
	I0501 03:41:01.105595   68640 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0501 03:41:01.105635   68640 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0501 03:41:01.105645   68640 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0501 03:41:01.105734   68640 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.0 (exists)
	I0501 03:41:01.105814   68640 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0
	I0501 03:41:01.105888   68640 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.0
	I0501 03:41:01.120943   68640 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0501 03:41:01.121044   68640 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0501 03:41:01.127975   68640 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0501 03:41:01.359381   68640 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 03:40:59.156924   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:01.659307   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:03.661498   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:02.511652   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:05.011220   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:01.679527   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:02.179435   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:02.679626   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:03.179351   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:03.679618   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:04.179426   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:04.678853   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:05.179143   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:05.679065   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:06.179513   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:04.315680   68640 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.30.0: (3.210016587s)
	I0501 03:41:04.315725   68640 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.0 (exists)
	I0501 03:41:04.315756   68640 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.30.0: (3.209843913s)
	I0501 03:41:04.315784   68640 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1: (3.194721173s)
	I0501 03:41:04.315799   68640 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0: (3.210139611s)
	I0501 03:41:04.315812   68640 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.0 (exists)
	I0501 03:41:04.315813   68640 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0501 03:41:04.315813   68640 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0 from cache
	I0501 03:41:04.315844   68640 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.956432506s)
	I0501 03:41:04.315859   68640 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0501 03:41:04.315902   68640 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0501 03:41:04.315905   68640 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0501 03:41:04.315927   68640 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 03:41:04.315962   68640 ssh_runner.go:195] Run: which crictl
	I0501 03:41:05.691351   68640 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0: (1.375419764s)
	I0501 03:41:05.691394   68640 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0 from cache
	I0501 03:41:05.691418   68640 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0501 03:41:05.691467   68640 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0501 03:41:05.691477   68640 ssh_runner.go:235] Completed: which crictl: (1.375499162s)
	I0501 03:41:05.691529   68640 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 03:41:06.159381   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:08.659756   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:07.012126   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:09.511459   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:06.679246   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:07.179629   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:07.679601   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:08.179634   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:08.679603   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:09.179675   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:09.678837   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:10.178860   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:10.679638   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:11.179802   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:09.757005   68640 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (4.065509843s)
	I0501 03:41:09.757044   68640 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0501 03:41:09.757079   68640 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0501 03:41:09.757093   68640 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (4.065539206s)
	I0501 03:41:09.757137   68640 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0501 03:41:09.757158   68640 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0501 03:41:09.757222   68640 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0501 03:41:12.125691   68640 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0: (2.368504788s)
	I0501 03:41:12.125729   68640 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0 from cache
	I0501 03:41:12.125726   68640 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.368475622s)
	I0501 03:41:12.125755   68640 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0501 03:41:12.125754   68640 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.0
	I0501 03:41:12.125817   68640 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0
	I0501 03:41:11.157019   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:13.157632   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:11.513027   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:14.013463   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:11.679355   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:12.178847   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:12.679660   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:13.179641   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:13.678808   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:14.178955   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:14.679651   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:15.179623   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:15.678862   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:16.179775   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:14.315765   68640 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0: (2.18991878s)
	I0501 03:41:14.315791   68640 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0 from cache
	I0501 03:41:14.315835   68640 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0501 03:41:14.315911   68640 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0501 03:41:16.401221   68640 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.085281928s)
	I0501 03:41:16.401261   68640 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0501 03:41:16.401291   68640 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0501 03:41:16.401335   68640 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0501 03:41:17.152926   68640 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0501 03:41:17.152969   68640 cache_images.go:123] Successfully loaded all cached images
	I0501 03:41:17.152976   68640 cache_images.go:92] duration metric: took 16.656540612s to LoadCachedImages
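
	[Editor's note] The LoadCachedImages sequence above follows a simple pattern per image: stat the tarball on the guest (skip the copy when it already exists), then load it into the CRI-O image store with "sudo podman load -i". The sketch below is a minimal local illustration of that flow, not minikube's actual ssh_runner; the tarball path in main and the use of a local shell instead of an SSH session are assumptions for illustration only.

package main

import (
	"fmt"
	"os/exec"
)

// loadCachedImage mirrors the pattern visible in the log above:
// verify the image tarball is present on the node ("copy: skipping ... (exists)"),
// then load it into the container runtime with "podman load".
func loadCachedImage(tarball string) error {
	// stat -c "%s %y" <tarball> -- size and mtime, as in the log.
	if out, err := exec.Command("sudo", "stat", "-c", "%s %y", tarball).CombinedOutput(); err != nil {
		return fmt.Errorf("stat %s: %v: %s", tarball, err, out)
	}
	// sudo podman load -i <tarball>
	if out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput(); err != nil {
		return fmt.Errorf("podman load %s: %v: %s", tarball, err, out)
	}
	return nil
}

func main() {
	// Illustrative tarball path, matching the layout seen in the log.
	if err := loadCachedImage("/var/lib/minikube/images/kube-apiserver_v1.30.0"); err != nil {
		fmt.Println(err)
	}
}
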
	I0501 03:41:17.152989   68640 kubeadm.go:928] updating node { 192.168.39.144 8443 v1.30.0 crio true true} ...
	I0501 03:41:17.153119   68640 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-892672 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.144
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:no-preload-892672 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0501 03:41:17.153241   68640 ssh_runner.go:195] Run: crio config
	I0501 03:41:17.207153   68640 cni.go:84] Creating CNI manager for ""
	I0501 03:41:17.207181   68640 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0501 03:41:17.207196   68640 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0501 03:41:17.207225   68640 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.144 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-892672 NodeName:no-preload-892672 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.144"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.144 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0501 03:41:17.207407   68640 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.144
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-892672"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.144
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.144"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0501 03:41:17.207488   68640 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0501 03:41:17.221033   68640 binaries.go:44] Found k8s binaries, skipping transfer
	I0501 03:41:17.221099   68640 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0501 03:41:17.232766   68640 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0501 03:41:17.252543   68640 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0501 03:41:17.272030   68640 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0501 03:41:17.291541   68640 ssh_runner.go:195] Run: grep 192.168.39.144	control-plane.minikube.internal$ /etc/hosts
	I0501 03:41:17.295801   68640 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.144	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0501 03:41:17.309880   68640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:41:17.432917   68640 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 03:41:17.452381   68640 certs.go:68] Setting up /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/no-preload-892672 for IP: 192.168.39.144
	I0501 03:41:17.452406   68640 certs.go:194] generating shared ca certs ...
	I0501 03:41:17.452425   68640 certs.go:226] acquiring lock for ca certs: {Name:mk85a75d14c00780d4bf09cd9bdaf0f96f1b8fce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:41:17.452606   68640 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key
	I0501 03:41:17.452655   68640 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key
	I0501 03:41:17.452669   68640 certs.go:256] generating profile certs ...
	I0501 03:41:17.452746   68640 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/no-preload-892672/client.key
	I0501 03:41:17.452809   68640 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/no-preload-892672/apiserver.key.3644a8af
	I0501 03:41:17.452848   68640 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/no-preload-892672/proxy-client.key
	I0501 03:41:17.452963   68640 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem (1338 bytes)
	W0501 03:41:17.453007   68640 certs.go:480] ignoring /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724_empty.pem, impossibly tiny 0 bytes
	I0501 03:41:17.453021   68640 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem (1679 bytes)
	I0501 03:41:17.453050   68640 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem (1078 bytes)
	I0501 03:41:17.453083   68640 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem (1123 bytes)
	I0501 03:41:17.453116   68640 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem (1675 bytes)
	I0501 03:41:17.453166   68640 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem (1708 bytes)
	I0501 03:41:17.453767   68640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0501 03:41:17.490616   68640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0501 03:41:17.545217   68640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0501 03:41:17.576908   68640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0501 03:41:17.607371   68640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/no-preload-892672/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0501 03:41:17.657675   68640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/no-preload-892672/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0501 03:41:17.684681   68640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/no-preload-892672/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0501 03:41:17.716319   68640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/no-preload-892672/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0501 03:41:17.745731   68640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0501 03:41:17.770939   68640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem --> /usr/share/ca-certificates/20724.pem (1338 bytes)
	I0501 03:41:17.796366   68640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem --> /usr/share/ca-certificates/207242.pem (1708 bytes)
	I0501 03:41:17.823301   68640 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0501 03:41:17.841496   68640 ssh_runner.go:195] Run: openssl version
	I0501 03:41:17.848026   68640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0501 03:41:17.860734   68640 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:41:17.865978   68640 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  1 02:08 /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:41:17.866037   68640 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:41:17.872644   68640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0501 03:41:17.886241   68640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20724.pem && ln -fs /usr/share/ca-certificates/20724.pem /etc/ssl/certs/20724.pem"
	I0501 03:41:17.899619   68640 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20724.pem
	I0501 03:41:17.904664   68640 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  1 02:20 /usr/share/ca-certificates/20724.pem
	I0501 03:41:17.904701   68640 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20724.pem
	I0501 03:41:17.910799   68640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20724.pem /etc/ssl/certs/51391683.0"
	I0501 03:41:17.923007   68640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/207242.pem && ln -fs /usr/share/ca-certificates/207242.pem /etc/ssl/certs/207242.pem"
	I0501 03:41:15.657403   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:18.156777   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:16.511834   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:18.512735   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:20.513144   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:16.679614   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:17.179604   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:17.679100   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:18.179166   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:18.679202   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:19.179631   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:19.679583   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:20.179584   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:20.679493   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:21.178945   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:17.935647   68640 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/207242.pem
	I0501 03:41:17.942147   68640 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  1 02:20 /usr/share/ca-certificates/207242.pem
	I0501 03:41:17.942187   68640 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/207242.pem
	I0501 03:41:17.948468   68640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/207242.pem /etc/ssl/certs/3ec20f2e.0"
	I0501 03:41:17.962737   68640 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0501 03:41:17.968953   68640 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0501 03:41:17.975849   68640 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0501 03:41:17.982324   68640 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0501 03:41:17.988930   68640 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0501 03:41:17.995221   68640 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0501 03:41:18.001868   68640 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
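
	[Editor's note] The "openssl x509 -noout -in <cert> -checkend 86400" calls above check that each control-plane certificate remains valid for at least 24 hours. Below is a hedged Go equivalent of that check; the certificate path used in main is an illustrative assumption, not a claim about minikube's implementation.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// mirroring "openssl x509 -noout -in <cert> -checkend 86400" from the log.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// Illustrative path; the log checks several certs under /var/lib/minikube/certs.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
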
	I0501 03:41:18.008701   68640 kubeadm.go:391] StartCluster: {Name:no-preload-892672 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:no-preload-892672 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.144 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 03:41:18.008831   68640 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0501 03:41:18.008893   68640 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0501 03:41:18.056939   68640 cri.go:89] found id: ""
	I0501 03:41:18.057005   68640 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0501 03:41:18.070898   68640 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0501 03:41:18.070921   68640 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0501 03:41:18.070926   68640 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0501 03:41:18.070968   68640 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0501 03:41:18.083907   68640 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0501 03:41:18.085116   68640 kubeconfig.go:125] found "no-preload-892672" server: "https://192.168.39.144:8443"
	I0501 03:41:18.088582   68640 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0501 03:41:18.101426   68640 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.144
	I0501 03:41:18.101471   68640 kubeadm.go:1154] stopping kube-system containers ...
	I0501 03:41:18.101493   68640 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0501 03:41:18.101543   68640 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0501 03:41:18.153129   68640 cri.go:89] found id: ""
	I0501 03:41:18.153193   68640 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0501 03:41:18.173100   68640 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0501 03:41:18.188443   68640 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0501 03:41:18.188463   68640 kubeadm.go:156] found existing configuration files:
	
	I0501 03:41:18.188509   68640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0501 03:41:18.202153   68640 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0501 03:41:18.202204   68640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0501 03:41:18.215390   68640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0501 03:41:18.227339   68640 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0501 03:41:18.227404   68640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0501 03:41:18.239160   68640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0501 03:41:18.251992   68640 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0501 03:41:18.252053   68640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0501 03:41:18.265088   68640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0501 03:41:18.277922   68640 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0501 03:41:18.277983   68640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0501 03:41:18.291307   68640 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0501 03:41:18.304879   68640 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:41:18.417921   68640 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:41:19.350848   68640 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:41:19.586348   68640 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:41:19.761056   68640 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:41:19.867315   68640 api_server.go:52] waiting for apiserver process to appear ...
	I0501 03:41:19.867435   68640 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:20.368520   68640 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:20.868444   68640 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:20.913411   68640 api_server.go:72] duration metric: took 1.046095165s to wait for apiserver process to appear ...
	I0501 03:41:20.913444   68640 api_server.go:88] waiting for apiserver healthz status ...
	I0501 03:41:20.913469   68640 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0501 03:41:20.914000   68640 api_server.go:269] stopped: https://192.168.39.144:8443/healthz: Get "https://192.168.39.144:8443/healthz": dial tcp 192.168.39.144:8443: connect: connection refused
	I0501 03:41:21.414544   68640 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
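
	[Editor's note] The repeated "Checking apiserver healthz at https://192.168.39.144:8443/healthz" lines are a poll loop: the check is retried roughly every 500ms until /healthz returns 200, with early attempts failing as "connection refused", then 403 for anonymous access, then 500 while post-start hooks finish. The sketch below is a minimal hedged illustration of such a poll; the endpoint URL and the unverified TLS configuration are assumptions for the example, not minikube's actual client setup.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
// or the deadline passes, echoing the retry pattern in the log
// (connection refused -> 403 -> 500 -> ok).
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Skipping TLS verification is an illustrative shortcut only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Println("healthz status:", resp.StatusCode)
		} else {
			fmt.Println("healthz error:", err)
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence seen in the log
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.144:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
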
	I0501 03:41:20.658333   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:23.157298   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:23.011395   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:25.012164   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:21.678785   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:22.179435   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:22.679615   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:23.179610   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:23.679473   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:24.179613   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:24.679672   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:25.179400   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:25.679793   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:26.179809   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:24.166756   68640 api_server.go:279] https://192.168.39.144:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0501 03:41:24.166786   68640 api_server.go:103] status: https://192.168.39.144:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0501 03:41:24.166807   68640 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0501 03:41:24.205679   68640 api_server.go:279] https://192.168.39.144:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0501 03:41:24.205713   68640 api_server.go:103] status: https://192.168.39.144:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0501 03:41:24.414055   68640 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0501 03:41:24.420468   68640 api_server.go:279] https://192.168.39.144:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:41:24.420502   68640 api_server.go:103] status: https://192.168.39.144:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:41:24.914021   68640 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0501 03:41:24.919717   68640 api_server.go:279] https://192.168.39.144:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:41:24.919754   68640 api_server.go:103] status: https://192.168.39.144:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:41:25.414015   68640 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0501 03:41:25.422149   68640 api_server.go:279] https://192.168.39.144:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:41:25.422180   68640 api_server.go:103] status: https://192.168.39.144:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:41:25.913751   68640 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0501 03:41:25.917839   68640 api_server.go:279] https://192.168.39.144:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:41:25.917865   68640 api_server.go:103] status: https://192.168.39.144:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:41:26.414458   68640 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0501 03:41:26.419346   68640 api_server.go:279] https://192.168.39.144:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:41:26.419367   68640 api_server.go:103] status: https://192.168.39.144:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:41:26.913912   68640 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0501 03:41:26.918504   68640 api_server.go:279] https://192.168.39.144:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:41:26.918537   68640 api_server.go:103] status: https://192.168.39.144:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:41:27.413693   68640 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0501 03:41:27.421752   68640 api_server.go:279] https://192.168.39.144:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:41:27.421776   68640 api_server.go:103] status: https://192.168.39.144:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:41:27.913582   68640 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0501 03:41:27.918116   68640 api_server.go:279] https://192.168.39.144:8443/healthz returned 200:
	ok
	I0501 03:41:27.927764   68640 api_server.go:141] control plane version: v1.30.0
	I0501 03:41:27.927790   68640 api_server.go:131] duration metric: took 7.014339409s to wait for apiserver health ...
	I0501 03:41:27.927799   68640 cni.go:84] Creating CNI manager for ""
	I0501 03:41:27.927805   68640 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0501 03:41:27.929889   68640 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0501 03:41:27.931210   68640 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
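The poll loop above keeps requesting https://192.168.39.144:8443/healthz roughly every 500ms; each 500 response lists the post-start hooks, with [-]poststarthook/apiservice-discovery-controller as the one still failing, until the endpoint finally returns 200 and the control-plane version (v1.30.0) is read before bridge CNI configuration begins. A minimal Go sketch of that kind of readiness poll follows (not minikube's own code; the insecure TLS client, URL, and timings are illustrative assumptions):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // waitHealthy polls url until it returns HTTP 200 or the deadline passes.
    // Illustrative sketch only, not minikube's actual implementation.
    func waitHealthy(url string, timeout time.Duration) error {
    	client := &http.Client{
    		// The apiserver's serving cert is not trusted here; verification is
    		// skipped purely for the sake of the example.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    		Timeout:   5 * time.Second,
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil // healthz answered "ok"
    			}
    			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
    		}
    		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence seen in the log
    	}
    	return fmt.Errorf("apiserver never became healthy within %s", timeout)
    }

    func main() {
    	if err := waitHealthy("https://192.168.39.144:8443/healthz", 2*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }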
	I0501 03:41:25.158177   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:27.656879   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:27.511692   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:30.010468   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:26.679430   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:27.179043   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:27.678801   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:28.179629   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:28.679111   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:29.179599   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:29.679624   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:30.179585   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:30.679442   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:31.179530   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:27.945852   68640 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0501 03:41:27.968311   68640 system_pods.go:43] waiting for kube-system pods to appear ...
	I0501 03:41:27.981571   68640 system_pods.go:59] 8 kube-system pods found
	I0501 03:41:27.981609   68640 system_pods.go:61] "coredns-7db6d8ff4d-v8bqq" [bf389521-9f19-4f2b-83a5-6d469c7ce0fe] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0501 03:41:27.981615   68640 system_pods.go:61] "etcd-no-preload-892672" [108fce6d-03f3-4bb9-a410-a58c58e8f186] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0501 03:41:27.981621   68640 system_pods.go:61] "kube-apiserver-no-preload-892672" [a18b7242-1865-4a67-aab6-c6cc19552326] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0501 03:41:27.981629   68640 system_pods.go:61] "kube-controller-manager-no-preload-892672" [318d39e1-5265-42e5-a3d5-4408b7b73542] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0501 03:41:27.981636   68640 system_pods.go:61] "kube-proxy-dwvdl" [f7a97598-aaa1-4df5-8d6a-8f6286568ad6] Running
	I0501 03:41:27.981642   68640 system_pods.go:61] "kube-scheduler-no-preload-892672" [cbf1c183-16df-42c8-b1c8-b9adf3c25a7a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0501 03:41:27.981647   68640 system_pods.go:61] "metrics-server-569cc877fc-k8jnl" [1dd0fb29-4d90-41c8-9de2-d163eeb0247b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0501 03:41:27.981651   68640 system_pods.go:61] "storage-provisioner" [fc703ab1-f14b-4766-8ee2-a43477d3df21] Running
	I0501 03:41:27.981657   68640 system_pods.go:74] duration metric: took 13.322893ms to wait for pod list to return data ...
	I0501 03:41:27.981667   68640 node_conditions.go:102] verifying NodePressure condition ...
	I0501 03:41:27.985896   68640 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 03:41:27.985931   68640 node_conditions.go:123] node cpu capacity is 2
	I0501 03:41:27.985944   68640 node_conditions.go:105] duration metric: took 4.271726ms to run NodePressure ...
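The lines above list the eight kube-system pods and then read the node's ephemeral-storage and CPU capacity before moving on to the kubeadm addon phase. A rough client-go sketch of those two lookups, assuming a reachable cluster and an illustrative kubeconfig path:

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Kubeconfig path is illustrative, not taken from minikube internals.
    	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	clientset, err := kubernetes.NewForConfig(config)
    	if err != nil {
    		panic(err)
    	}

    	// List kube-system pods, as the "waiting for kube-system pods to appear" step does.
    	pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("%d kube-system pods found\n", len(pods.Items))

    	// Read node capacity, analogous to the node_conditions checks in the log.
    	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, n := range nodes.Items {
    		cpu := n.Status.Capacity[corev1.ResourceCPU]
    		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    		fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
    	}
    }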
	I0501 03:41:27.985966   68640 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:41:28.269675   68640 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0501 03:41:28.276487   68640 kubeadm.go:733] kubelet initialised
	I0501 03:41:28.276512   68640 kubeadm.go:734] duration metric: took 6.808875ms waiting for restarted kubelet to initialise ...
	I0501 03:41:28.276522   68640 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 03:41:28.287109   68640 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-v8bqq" in "kube-system" namespace to be "Ready" ...
	I0501 03:41:28.297143   68640 pod_ready.go:97] node "no-preload-892672" hosting pod "coredns-7db6d8ff4d-v8bqq" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-892672" has status "Ready":"False"
	I0501 03:41:28.297185   68640 pod_ready.go:81] duration metric: took 10.040841ms for pod "coredns-7db6d8ff4d-v8bqq" in "kube-system" namespace to be "Ready" ...
	E0501 03:41:28.297198   68640 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-892672" hosting pod "coredns-7db6d8ff4d-v8bqq" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-892672" has status "Ready":"False"
	I0501 03:41:28.297206   68640 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	I0501 03:41:28.307648   68640 pod_ready.go:97] node "no-preload-892672" hosting pod "etcd-no-preload-892672" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-892672" has status "Ready":"False"
	I0501 03:41:28.307682   68640 pod_ready.go:81] duration metric: took 10.464199ms for pod "etcd-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	E0501 03:41:28.307695   68640 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-892672" hosting pod "etcd-no-preload-892672" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-892672" has status "Ready":"False"
	I0501 03:41:28.307707   68640 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	I0501 03:41:30.319652   68640 pod_ready.go:102] pod "kube-apiserver-no-preload-892672" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:32.821375   68640 pod_ready.go:102] pod "kube-apiserver-no-preload-892672" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:29.657167   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:32.157549   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:32.012009   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:34.511543   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:31.679423   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:32.179628   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:32.679456   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:33.179336   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:33.679221   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:34.178900   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:34.679236   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:35.179595   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:35.679520   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:36.179639   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:35.317202   68640 pod_ready.go:102] pod "kube-apiserver-no-preload-892672" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:37.318125   68640 pod_ready.go:92] pod "kube-apiserver-no-preload-892672" in "kube-system" namespace has status "Ready":"True"
	I0501 03:41:37.318157   68640 pod_ready.go:81] duration metric: took 9.010440772s for pod "kube-apiserver-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	I0501 03:41:37.318170   68640 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	I0501 03:41:37.327390   68640 pod_ready.go:92] pod "kube-controller-manager-no-preload-892672" in "kube-system" namespace has status "Ready":"True"
	I0501 03:41:37.327412   68640 pod_ready.go:81] duration metric: took 9.233689ms for pod "kube-controller-manager-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	I0501 03:41:37.327425   68640 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-dwvdl" in "kube-system" namespace to be "Ready" ...
	I0501 03:41:37.333971   68640 pod_ready.go:92] pod "kube-proxy-dwvdl" in "kube-system" namespace has status "Ready":"True"
	I0501 03:41:37.333994   68640 pod_ready.go:81] duration metric: took 6.561014ms for pod "kube-proxy-dwvdl" in "kube-system" namespace to be "Ready" ...
	I0501 03:41:37.334006   68640 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	I0501 03:41:37.338637   68640 pod_ready.go:92] pod "kube-scheduler-no-preload-892672" in "kube-system" namespace has status "Ready":"True"
	I0501 03:41:37.338657   68640 pod_ready.go:81] duration metric: took 4.644395ms for pod "kube-scheduler-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	I0501 03:41:37.338665   68640 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace to be "Ready" ...
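Each pod_ready.go entry above is one round of polling a pod's Ready condition until it reports True or the 4m0s budget runs out. A small client-go sketch of that single check, with the kubeconfig path and pod name taken from this log purely as examples:

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the named pod currently has its Ready condition
    // set to True, which is what the repeated pod_ready.go waits poll for.
    func podReady(clientset *kubernetes.Clientset, namespace, name string) (bool, error) {
    	pod, err := clientset.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, cond := range pod.Status.Conditions {
    		if cond.Type == corev1.PodReady {
    			return cond.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	clientset, err := kubernetes.NewForConfig(config)
    	if err != nil {
    		panic(err)
    	}
    	ready, err := podReady(clientset, "kube-system", "kube-apiserver-no-preload-892672")
    	fmt.Println(ready, err)
    }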
	I0501 03:41:34.657958   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:36.658191   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:36.512234   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:39.012636   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:36.678883   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:37.179198   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:37.679101   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:38.179088   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:38.679354   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:39.179163   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:39.678809   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:40.179768   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:40.679046   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:41.179618   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:39.346054   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:41.346434   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:39.157142   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:41.656902   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:41.510939   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:43.511571   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:45.511959   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:41.679751   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:42.178848   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:42.679525   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:43.179706   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:43.679665   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:44.179053   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:44.679615   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:45.178830   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:45.679547   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:41:45.679620   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:41:45.718568   69580 cri.go:89] found id: ""
	I0501 03:41:45.718597   69580 logs.go:276] 0 containers: []
	W0501 03:41:45.718611   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:41:45.718619   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:41:45.718678   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:41:45.755572   69580 cri.go:89] found id: ""
	I0501 03:41:45.755596   69580 logs.go:276] 0 containers: []
	W0501 03:41:45.755604   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:41:45.755609   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:41:45.755654   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:41:45.793411   69580 cri.go:89] found id: ""
	I0501 03:41:45.793440   69580 logs.go:276] 0 containers: []
	W0501 03:41:45.793450   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:41:45.793458   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:41:45.793526   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:41:45.834547   69580 cri.go:89] found id: ""
	I0501 03:41:45.834572   69580 logs.go:276] 0 containers: []
	W0501 03:41:45.834579   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:41:45.834585   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:41:45.834668   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:41:45.873293   69580 cri.go:89] found id: ""
	I0501 03:41:45.873321   69580 logs.go:276] 0 containers: []
	W0501 03:41:45.873332   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:41:45.873348   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:41:45.873411   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:41:45.911703   69580 cri.go:89] found id: ""
	I0501 03:41:45.911734   69580 logs.go:276] 0 containers: []
	W0501 03:41:45.911745   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:41:45.911766   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:41:45.911826   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:41:45.949577   69580 cri.go:89] found id: ""
	I0501 03:41:45.949602   69580 logs.go:276] 0 containers: []
	W0501 03:41:45.949610   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:41:45.949616   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:41:45.949666   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:41:45.986174   69580 cri.go:89] found id: ""
	I0501 03:41:45.986199   69580 logs.go:276] 0 containers: []
	W0501 03:41:45.986207   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:41:45.986216   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:41:45.986228   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:41:46.041028   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:41:46.041064   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:41:46.057097   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:41:46.057126   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:41:46.195021   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:41:46.195042   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:41:46.195055   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:41:46.261153   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:41:46.261197   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
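The repeated cri.go/logs.go sequence above queries the CRI for each expected control-plane container by name and finds none, which is why the gather falls back to kubelet, dmesg, CRI-O, and container-status output. A short sketch of the same crictl query driven from Go (hypothetical helper; it assumes crictl and sudo are available on the host, as in the commands logged above):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // listContainers mirrors the "sudo crictl ps -a --quiet --name=<name>" calls
    // in the log and returns the container IDs crictl prints, one per line.
    // An empty slice corresponds to the "0 containers" messages above.
    func listContainers(name string) ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(strings.TrimSpace(string(out))), nil
    }

    func main() {
    	for _, name := range []string{"kube-apiserver", "etcd", "coredns"} {
    		ids, err := listContainers(name)
    		if err != nil {
    			fmt.Println("crictl failed:", err)
    			continue
    		}
    		fmt.Printf("%d containers found matching %q\n", len(ids), name)
    	}
    }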
	I0501 03:41:43.845096   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:45.845950   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:47.849620   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:44.157041   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:46.158028   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:48.658062   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:48.011975   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:50.512345   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:48.809274   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:48.824295   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:41:48.824369   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:41:48.869945   69580 cri.go:89] found id: ""
	I0501 03:41:48.869975   69580 logs.go:276] 0 containers: []
	W0501 03:41:48.869985   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:41:48.869993   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:41:48.870053   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:41:48.918088   69580 cri.go:89] found id: ""
	I0501 03:41:48.918113   69580 logs.go:276] 0 containers: []
	W0501 03:41:48.918122   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:41:48.918131   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:41:48.918190   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:41:48.958102   69580 cri.go:89] found id: ""
	I0501 03:41:48.958132   69580 logs.go:276] 0 containers: []
	W0501 03:41:48.958143   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:41:48.958149   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:41:48.958207   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:41:48.997163   69580 cri.go:89] found id: ""
	I0501 03:41:48.997194   69580 logs.go:276] 0 containers: []
	W0501 03:41:48.997211   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:41:48.997218   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:41:48.997284   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:41:49.040132   69580 cri.go:89] found id: ""
	I0501 03:41:49.040156   69580 logs.go:276] 0 containers: []
	W0501 03:41:49.040164   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:41:49.040170   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:41:49.040228   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:41:49.079680   69580 cri.go:89] found id: ""
	I0501 03:41:49.079712   69580 logs.go:276] 0 containers: []
	W0501 03:41:49.079724   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:41:49.079732   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:41:49.079790   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:41:49.120577   69580 cri.go:89] found id: ""
	I0501 03:41:49.120610   69580 logs.go:276] 0 containers: []
	W0501 03:41:49.120623   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:41:49.120630   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:41:49.120700   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:41:49.167098   69580 cri.go:89] found id: ""
	I0501 03:41:49.167123   69580 logs.go:276] 0 containers: []
	W0501 03:41:49.167133   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:41:49.167141   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:41:49.167152   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:41:49.242834   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:41:49.242868   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:41:49.264011   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:41:49.264033   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:41:49.367711   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:41:49.367739   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:41:49.367764   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:41:49.441925   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:41:49.441964   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:41:50.346009   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:52.346333   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:51.156287   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:53.657588   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:53.010720   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:55.012329   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:51.986536   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:52.001651   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:41:52.001734   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:41:52.039550   69580 cri.go:89] found id: ""
	I0501 03:41:52.039571   69580 logs.go:276] 0 containers: []
	W0501 03:41:52.039579   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:41:52.039584   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:41:52.039636   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:41:52.082870   69580 cri.go:89] found id: ""
	I0501 03:41:52.082892   69580 logs.go:276] 0 containers: []
	W0501 03:41:52.082900   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:41:52.082905   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:41:52.082949   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:41:52.126970   69580 cri.go:89] found id: ""
	I0501 03:41:52.126996   69580 logs.go:276] 0 containers: []
	W0501 03:41:52.127009   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:41:52.127014   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:41:52.127076   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:41:52.169735   69580 cri.go:89] found id: ""
	I0501 03:41:52.169761   69580 logs.go:276] 0 containers: []
	W0501 03:41:52.169769   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:41:52.169774   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:41:52.169826   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:41:52.207356   69580 cri.go:89] found id: ""
	I0501 03:41:52.207392   69580 logs.go:276] 0 containers: []
	W0501 03:41:52.207404   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:41:52.207412   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:41:52.207472   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:41:52.250074   69580 cri.go:89] found id: ""
	I0501 03:41:52.250102   69580 logs.go:276] 0 containers: []
	W0501 03:41:52.250113   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:41:52.250121   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:41:52.250180   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:41:52.290525   69580 cri.go:89] found id: ""
	I0501 03:41:52.290550   69580 logs.go:276] 0 containers: []
	W0501 03:41:52.290558   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:41:52.290564   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:41:52.290610   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:41:52.336058   69580 cri.go:89] found id: ""
	I0501 03:41:52.336084   69580 logs.go:276] 0 containers: []
	W0501 03:41:52.336092   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:41:52.336103   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:41:52.336118   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:41:52.392738   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:41:52.392773   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:41:52.408475   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:41:52.408503   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:41:52.493567   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:41:52.493594   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:41:52.493608   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:41:52.566550   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:41:52.566583   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:41:55.117129   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:55.134840   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:41:55.134918   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:41:55.193990   69580 cri.go:89] found id: ""
	I0501 03:41:55.194019   69580 logs.go:276] 0 containers: []
	W0501 03:41:55.194029   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:41:55.194038   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:41:55.194100   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:41:55.261710   69580 cri.go:89] found id: ""
	I0501 03:41:55.261743   69580 logs.go:276] 0 containers: []
	W0501 03:41:55.261754   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:41:55.261761   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:41:55.261823   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:41:55.302432   69580 cri.go:89] found id: ""
	I0501 03:41:55.302468   69580 logs.go:276] 0 containers: []
	W0501 03:41:55.302480   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:41:55.302488   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:41:55.302550   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:41:55.346029   69580 cri.go:89] found id: ""
	I0501 03:41:55.346058   69580 logs.go:276] 0 containers: []
	W0501 03:41:55.346067   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:41:55.346073   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:41:55.346117   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:41:55.393206   69580 cri.go:89] found id: ""
	I0501 03:41:55.393229   69580 logs.go:276] 0 containers: []
	W0501 03:41:55.393236   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:41:55.393242   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:41:55.393295   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:41:55.437908   69580 cri.go:89] found id: ""
	I0501 03:41:55.437940   69580 logs.go:276] 0 containers: []
	W0501 03:41:55.437952   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:41:55.437960   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:41:55.438020   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:41:55.480439   69580 cri.go:89] found id: ""
	I0501 03:41:55.480468   69580 logs.go:276] 0 containers: []
	W0501 03:41:55.480480   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:41:55.480488   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:41:55.480589   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:41:55.524782   69580 cri.go:89] found id: ""
	I0501 03:41:55.524811   69580 logs.go:276] 0 containers: []
	W0501 03:41:55.524819   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:41:55.524828   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:41:55.524840   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:41:55.604337   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:41:55.604373   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:41:55.649427   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:41:55.649455   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:41:55.707928   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:41:55.707976   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:41:55.723289   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:41:55.723316   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:41:55.805146   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:41:54.347203   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:56.847806   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:55.658387   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:58.156886   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:57.511280   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:59.511460   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:58.306145   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:58.322207   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:41:58.322280   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:41:58.370291   69580 cri.go:89] found id: ""
	I0501 03:41:58.370319   69580 logs.go:276] 0 containers: []
	W0501 03:41:58.370331   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:41:58.370338   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:41:58.370417   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:41:58.421230   69580 cri.go:89] found id: ""
	I0501 03:41:58.421256   69580 logs.go:276] 0 containers: []
	W0501 03:41:58.421264   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:41:58.421270   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:41:58.421317   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:41:58.463694   69580 cri.go:89] found id: ""
	I0501 03:41:58.463724   69580 logs.go:276] 0 containers: []
	W0501 03:41:58.463735   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:41:58.463743   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:41:58.463797   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:41:58.507756   69580 cri.go:89] found id: ""
	I0501 03:41:58.507785   69580 logs.go:276] 0 containers: []
	W0501 03:41:58.507791   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:41:58.507797   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:41:58.507870   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:41:58.554852   69580 cri.go:89] found id: ""
	I0501 03:41:58.554884   69580 logs.go:276] 0 containers: []
	W0501 03:41:58.554895   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:41:58.554903   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:41:58.554969   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:41:58.602467   69580 cri.go:89] found id: ""
	I0501 03:41:58.602495   69580 logs.go:276] 0 containers: []
	W0501 03:41:58.602505   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:41:58.602511   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:41:58.602561   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:41:58.652718   69580 cri.go:89] found id: ""
	I0501 03:41:58.652749   69580 logs.go:276] 0 containers: []
	W0501 03:41:58.652759   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:41:58.652766   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:41:58.652837   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:41:58.694351   69580 cri.go:89] found id: ""
	I0501 03:41:58.694377   69580 logs.go:276] 0 containers: []
	W0501 03:41:58.694385   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:41:58.694393   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:41:58.694434   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:41:58.779878   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:41:58.779911   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:41:58.826733   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:41:58.826768   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:41:58.883808   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:41:58.883842   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:41:58.900463   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:41:58.900495   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:41:58.991346   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:41:59.345807   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:01.846099   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:00.157131   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:02.157204   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:01.511711   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:03.512536   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:01.492396   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:01.508620   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:01.508756   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:01.555669   69580 cri.go:89] found id: ""
	I0501 03:42:01.555696   69580 logs.go:276] 0 containers: []
	W0501 03:42:01.555712   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:01.555720   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:01.555782   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:01.597591   69580 cri.go:89] found id: ""
	I0501 03:42:01.597615   69580 logs.go:276] 0 containers: []
	W0501 03:42:01.597626   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:01.597635   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:01.597693   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:01.636259   69580 cri.go:89] found id: ""
	I0501 03:42:01.636286   69580 logs.go:276] 0 containers: []
	W0501 03:42:01.636297   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:01.636305   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:01.636361   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:01.684531   69580 cri.go:89] found id: ""
	I0501 03:42:01.684562   69580 logs.go:276] 0 containers: []
	W0501 03:42:01.684572   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:01.684579   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:01.684647   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:01.725591   69580 cri.go:89] found id: ""
	I0501 03:42:01.725621   69580 logs.go:276] 0 containers: []
	W0501 03:42:01.725628   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:01.725652   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:01.725718   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:01.767868   69580 cri.go:89] found id: ""
	I0501 03:42:01.767901   69580 logs.go:276] 0 containers: []
	W0501 03:42:01.767910   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:01.767917   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:01.767977   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:01.817590   69580 cri.go:89] found id: ""
	I0501 03:42:01.817618   69580 logs.go:276] 0 containers: []
	W0501 03:42:01.817629   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:01.817637   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:01.817697   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:01.863549   69580 cri.go:89] found id: ""
	I0501 03:42:01.863576   69580 logs.go:276] 0 containers: []
	W0501 03:42:01.863586   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:01.863595   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:01.863607   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:01.879134   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:01.879162   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:01.967015   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:01.967043   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:01.967059   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:02.051576   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:02.051614   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:02.095614   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:02.095644   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:04.652974   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:04.671018   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:04.671103   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:04.712392   69580 cri.go:89] found id: ""
	I0501 03:42:04.712425   69580 logs.go:276] 0 containers: []
	W0501 03:42:04.712435   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:04.712442   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:04.712503   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:04.756854   69580 cri.go:89] found id: ""
	I0501 03:42:04.756881   69580 logs.go:276] 0 containers: []
	W0501 03:42:04.756893   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:04.756900   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:04.756962   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:04.797665   69580 cri.go:89] found id: ""
	I0501 03:42:04.797694   69580 logs.go:276] 0 containers: []
	W0501 03:42:04.797703   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:04.797709   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:04.797756   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:04.838441   69580 cri.go:89] found id: ""
	I0501 03:42:04.838472   69580 logs.go:276] 0 containers: []
	W0501 03:42:04.838483   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:04.838491   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:04.838556   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:04.879905   69580 cri.go:89] found id: ""
	I0501 03:42:04.879935   69580 logs.go:276] 0 containers: []
	W0501 03:42:04.879945   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:04.879952   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:04.880012   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:04.924759   69580 cri.go:89] found id: ""
	I0501 03:42:04.924792   69580 logs.go:276] 0 containers: []
	W0501 03:42:04.924804   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:04.924813   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:04.924879   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:04.965638   69580 cri.go:89] found id: ""
	I0501 03:42:04.965663   69580 logs.go:276] 0 containers: []
	W0501 03:42:04.965670   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:04.965676   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:04.965721   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:05.013127   69580 cri.go:89] found id: ""
	I0501 03:42:05.013153   69580 logs.go:276] 0 containers: []
	W0501 03:42:05.013163   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:05.013173   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:05.013185   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:05.108388   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:05.108409   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:05.108422   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:05.198239   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:05.198281   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:05.241042   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:05.241076   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:05.299017   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:05.299069   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:04.345910   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:06.346830   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:04.657438   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:06.657707   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:06.011511   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:08.016548   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:10.510503   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:07.815458   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:07.832047   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:07.832125   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:07.882950   69580 cri.go:89] found id: ""
	I0501 03:42:07.882985   69580 logs.go:276] 0 containers: []
	W0501 03:42:07.882996   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:07.883002   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:07.883051   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:07.928086   69580 cri.go:89] found id: ""
	I0501 03:42:07.928111   69580 logs.go:276] 0 containers: []
	W0501 03:42:07.928119   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:07.928124   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:07.928177   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:07.976216   69580 cri.go:89] found id: ""
	I0501 03:42:07.976250   69580 logs.go:276] 0 containers: []
	W0501 03:42:07.976268   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:07.976274   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:07.976331   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:08.019903   69580 cri.go:89] found id: ""
	I0501 03:42:08.019932   69580 logs.go:276] 0 containers: []
	W0501 03:42:08.019943   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:08.019951   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:08.020009   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:08.075980   69580 cri.go:89] found id: ""
	I0501 03:42:08.076004   69580 logs.go:276] 0 containers: []
	W0501 03:42:08.076012   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:08.076018   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:08.076065   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:08.114849   69580 cri.go:89] found id: ""
	I0501 03:42:08.114881   69580 logs.go:276] 0 containers: []
	W0501 03:42:08.114891   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:08.114897   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:08.114955   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:08.159427   69580 cri.go:89] found id: ""
	I0501 03:42:08.159457   69580 logs.go:276] 0 containers: []
	W0501 03:42:08.159468   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:08.159476   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:08.159543   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:08.200117   69580 cri.go:89] found id: ""
	I0501 03:42:08.200151   69580 logs.go:276] 0 containers: []
	W0501 03:42:08.200163   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:08.200182   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:08.200197   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:08.281926   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:08.281972   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:08.331393   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:08.331429   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:08.386758   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:08.386793   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:08.402551   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:08.402581   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:08.489678   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:10.990653   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:11.007879   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:11.007958   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:11.049842   69580 cri.go:89] found id: ""
	I0501 03:42:11.049867   69580 logs.go:276] 0 containers: []
	W0501 03:42:11.049879   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:11.049885   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:11.049933   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:11.091946   69580 cri.go:89] found id: ""
	I0501 03:42:11.091980   69580 logs.go:276] 0 containers: []
	W0501 03:42:11.091992   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:11.092000   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:11.092079   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:11.140100   69580 cri.go:89] found id: ""
	I0501 03:42:11.140129   69580 logs.go:276] 0 containers: []
	W0501 03:42:11.140138   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:11.140144   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:11.140207   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:11.182796   69580 cri.go:89] found id: ""
	I0501 03:42:11.182821   69580 logs.go:276] 0 containers: []
	W0501 03:42:11.182832   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:11.182838   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:11.182896   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:11.222985   69580 cri.go:89] found id: ""
	I0501 03:42:11.223016   69580 logs.go:276] 0 containers: []
	W0501 03:42:11.223027   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:11.223033   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:11.223114   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:11.265793   69580 cri.go:89] found id: ""
	I0501 03:42:11.265818   69580 logs.go:276] 0 containers: []
	W0501 03:42:11.265830   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:11.265838   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:11.265913   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:11.309886   69580 cri.go:89] found id: ""
	I0501 03:42:11.309912   69580 logs.go:276] 0 containers: []
	W0501 03:42:11.309924   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:11.309931   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:11.309989   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:11.357757   69580 cri.go:89] found id: ""
	I0501 03:42:11.357791   69580 logs.go:276] 0 containers: []
	W0501 03:42:11.357803   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:11.357823   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:11.357839   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:11.412668   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:11.412704   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:11.428380   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:11.428422   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0501 03:42:08.347511   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:10.846691   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:09.156632   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:11.158047   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:13.657603   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:12.512713   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:15.011382   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	W0501 03:42:11.521898   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:11.521924   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:11.521940   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:11.607081   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:11.607116   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:14.153054   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:14.173046   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:14.173150   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:14.219583   69580 cri.go:89] found id: ""
	I0501 03:42:14.219605   69580 logs.go:276] 0 containers: []
	W0501 03:42:14.219613   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:14.219619   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:14.219664   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:14.260316   69580 cri.go:89] found id: ""
	I0501 03:42:14.260349   69580 logs.go:276] 0 containers: []
	W0501 03:42:14.260357   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:14.260366   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:14.260420   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:14.305049   69580 cri.go:89] found id: ""
	I0501 03:42:14.305085   69580 logs.go:276] 0 containers: []
	W0501 03:42:14.305109   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:14.305117   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:14.305198   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:14.359589   69580 cri.go:89] found id: ""
	I0501 03:42:14.359614   69580 logs.go:276] 0 containers: []
	W0501 03:42:14.359622   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:14.359628   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:14.359672   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:14.403867   69580 cri.go:89] found id: ""
	I0501 03:42:14.403895   69580 logs.go:276] 0 containers: []
	W0501 03:42:14.403904   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:14.403910   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:14.403987   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:14.446626   69580 cri.go:89] found id: ""
	I0501 03:42:14.446655   69580 logs.go:276] 0 containers: []
	W0501 03:42:14.446675   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:14.446683   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:14.446754   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:14.490983   69580 cri.go:89] found id: ""
	I0501 03:42:14.491016   69580 logs.go:276] 0 containers: []
	W0501 03:42:14.491028   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:14.491036   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:14.491117   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:14.534180   69580 cri.go:89] found id: ""
	I0501 03:42:14.534205   69580 logs.go:276] 0 containers: []
	W0501 03:42:14.534213   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:14.534221   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:14.534236   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:14.621433   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:14.621491   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:14.680265   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:14.680310   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:14.738943   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:14.738983   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:14.754145   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:14.754176   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:14.839974   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:13.347081   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:15.847072   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:17.847749   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:16.157433   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:18.158120   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:17.017276   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:19.514339   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:17.340948   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:17.360007   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:17.360068   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:17.403201   69580 cri.go:89] found id: ""
	I0501 03:42:17.403231   69580 logs.go:276] 0 containers: []
	W0501 03:42:17.403239   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:17.403245   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:17.403301   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:17.442940   69580 cri.go:89] found id: ""
	I0501 03:42:17.442966   69580 logs.go:276] 0 containers: []
	W0501 03:42:17.442975   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:17.442981   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:17.443038   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:17.487219   69580 cri.go:89] found id: ""
	I0501 03:42:17.487248   69580 logs.go:276] 0 containers: []
	W0501 03:42:17.487259   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:17.487267   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:17.487324   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:17.528551   69580 cri.go:89] found id: ""
	I0501 03:42:17.528583   69580 logs.go:276] 0 containers: []
	W0501 03:42:17.528593   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:17.528601   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:17.528668   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:17.577005   69580 cri.go:89] found id: ""
	I0501 03:42:17.577041   69580 logs.go:276] 0 containers: []
	W0501 03:42:17.577052   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:17.577061   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:17.577132   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:17.618924   69580 cri.go:89] found id: ""
	I0501 03:42:17.618949   69580 logs.go:276] 0 containers: []
	W0501 03:42:17.618957   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:17.618963   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:17.619022   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:17.660487   69580 cri.go:89] found id: ""
	I0501 03:42:17.660514   69580 logs.go:276] 0 containers: []
	W0501 03:42:17.660525   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:17.660532   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:17.660592   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:17.701342   69580 cri.go:89] found id: ""
	I0501 03:42:17.701370   69580 logs.go:276] 0 containers: []
	W0501 03:42:17.701378   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:17.701387   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:17.701400   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:17.757034   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:17.757069   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:17.772955   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:17.772984   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:17.888062   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:17.888088   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:17.888101   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:17.969274   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:17.969312   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:20.521053   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:20.536065   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:20.536141   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:20.577937   69580 cri.go:89] found id: ""
	I0501 03:42:20.577967   69580 logs.go:276] 0 containers: []
	W0501 03:42:20.577977   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:20.577986   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:20.578055   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:20.626690   69580 cri.go:89] found id: ""
	I0501 03:42:20.626714   69580 logs.go:276] 0 containers: []
	W0501 03:42:20.626722   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:20.626728   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:20.626809   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:20.670849   69580 cri.go:89] found id: ""
	I0501 03:42:20.670872   69580 logs.go:276] 0 containers: []
	W0501 03:42:20.670881   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:20.670886   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:20.670946   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:20.711481   69580 cri.go:89] found id: ""
	I0501 03:42:20.711511   69580 logs.go:276] 0 containers: []
	W0501 03:42:20.711522   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:20.711531   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:20.711596   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:20.753413   69580 cri.go:89] found id: ""
	I0501 03:42:20.753443   69580 logs.go:276] 0 containers: []
	W0501 03:42:20.753452   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:20.753459   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:20.753536   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:20.791424   69580 cri.go:89] found id: ""
	I0501 03:42:20.791452   69580 logs.go:276] 0 containers: []
	W0501 03:42:20.791461   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:20.791466   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:20.791526   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:20.833718   69580 cri.go:89] found id: ""
	I0501 03:42:20.833740   69580 logs.go:276] 0 containers: []
	W0501 03:42:20.833748   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:20.833752   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:20.833799   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:20.879788   69580 cri.go:89] found id: ""
	I0501 03:42:20.879818   69580 logs.go:276] 0 containers: []
	W0501 03:42:20.879828   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:20.879839   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:20.879855   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:20.895266   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:20.895304   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:20.976429   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:20.976452   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:20.976465   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:21.063573   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:21.063611   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:21.113510   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:21.113543   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:20.346735   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:22.347096   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:20.658642   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:22.659841   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:22.011045   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:24.012756   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:23.672203   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:23.687849   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:23.687946   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:23.731428   69580 cri.go:89] found id: ""
	I0501 03:42:23.731455   69580 logs.go:276] 0 containers: []
	W0501 03:42:23.731467   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:23.731473   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:23.731534   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:23.772219   69580 cri.go:89] found id: ""
	I0501 03:42:23.772248   69580 logs.go:276] 0 containers: []
	W0501 03:42:23.772259   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:23.772266   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:23.772369   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:23.837203   69580 cri.go:89] found id: ""
	I0501 03:42:23.837235   69580 logs.go:276] 0 containers: []
	W0501 03:42:23.837247   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:23.837255   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:23.837317   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:23.884681   69580 cri.go:89] found id: ""
	I0501 03:42:23.884709   69580 logs.go:276] 0 containers: []
	W0501 03:42:23.884716   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:23.884722   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:23.884783   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:23.927544   69580 cri.go:89] found id: ""
	I0501 03:42:23.927576   69580 logs.go:276] 0 containers: []
	W0501 03:42:23.927584   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:23.927590   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:23.927652   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:23.970428   69580 cri.go:89] found id: ""
	I0501 03:42:23.970457   69580 logs.go:276] 0 containers: []
	W0501 03:42:23.970467   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:23.970476   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:23.970541   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:24.010545   69580 cri.go:89] found id: ""
	I0501 03:42:24.010573   69580 logs.go:276] 0 containers: []
	W0501 03:42:24.010583   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:24.010593   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:24.010653   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:24.053547   69580 cri.go:89] found id: ""
	I0501 03:42:24.053574   69580 logs.go:276] 0 containers: []
	W0501 03:42:24.053582   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:24.053591   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:24.053602   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:24.108416   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:24.108452   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:24.124052   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:24.124083   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:24.209024   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:24.209048   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:24.209063   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:24.291644   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:24.291693   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:24.846439   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:26.846750   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:25.157009   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:27.657022   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:26.510679   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:28.511049   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:30.511542   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:26.840623   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:26.856231   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:26.856320   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:26.897988   69580 cri.go:89] found id: ""
	I0501 03:42:26.898022   69580 logs.go:276] 0 containers: []
	W0501 03:42:26.898033   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:26.898041   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:26.898109   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:26.937608   69580 cri.go:89] found id: ""
	I0501 03:42:26.937638   69580 logs.go:276] 0 containers: []
	W0501 03:42:26.937660   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:26.937668   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:26.937731   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:26.979799   69580 cri.go:89] found id: ""
	I0501 03:42:26.979836   69580 logs.go:276] 0 containers: []
	W0501 03:42:26.979847   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:26.979854   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:26.979922   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:27.018863   69580 cri.go:89] found id: ""
	I0501 03:42:27.018896   69580 logs.go:276] 0 containers: []
	W0501 03:42:27.018903   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:27.018909   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:27.018959   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:27.057864   69580 cri.go:89] found id: ""
	I0501 03:42:27.057893   69580 logs.go:276] 0 containers: []
	W0501 03:42:27.057904   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:27.057912   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:27.057982   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:27.102909   69580 cri.go:89] found id: ""
	I0501 03:42:27.102939   69580 logs.go:276] 0 containers: []
	W0501 03:42:27.102950   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:27.102958   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:27.103019   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:27.148292   69580 cri.go:89] found id: ""
	I0501 03:42:27.148326   69580 logs.go:276] 0 containers: []
	W0501 03:42:27.148336   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:27.148344   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:27.148407   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:27.197557   69580 cri.go:89] found id: ""
	I0501 03:42:27.197581   69580 logs.go:276] 0 containers: []
	W0501 03:42:27.197588   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:27.197596   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:27.197609   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:27.281768   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:27.281793   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:27.281806   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:27.361496   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:27.361528   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:27.407640   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:27.407675   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:27.472533   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:27.472576   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:29.987773   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:30.003511   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:30.003619   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:30.049330   69580 cri.go:89] found id: ""
	I0501 03:42:30.049363   69580 logs.go:276] 0 containers: []
	W0501 03:42:30.049377   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:30.049384   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:30.049439   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:30.088521   69580 cri.go:89] found id: ""
	I0501 03:42:30.088549   69580 logs.go:276] 0 containers: []
	W0501 03:42:30.088560   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:30.088568   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:30.088624   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:30.132731   69580 cri.go:89] found id: ""
	I0501 03:42:30.132765   69580 logs.go:276] 0 containers: []
	W0501 03:42:30.132777   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:30.132784   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:30.132847   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:30.178601   69580 cri.go:89] found id: ""
	I0501 03:42:30.178639   69580 logs.go:276] 0 containers: []
	W0501 03:42:30.178648   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:30.178656   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:30.178714   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:30.230523   69580 cri.go:89] found id: ""
	I0501 03:42:30.230549   69580 logs.go:276] 0 containers: []
	W0501 03:42:30.230561   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:30.230569   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:30.230632   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:30.289234   69580 cri.go:89] found id: ""
	I0501 03:42:30.289262   69580 logs.go:276] 0 containers: []
	W0501 03:42:30.289270   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:30.289277   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:30.289342   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:30.332596   69580 cri.go:89] found id: ""
	I0501 03:42:30.332627   69580 logs.go:276] 0 containers: []
	W0501 03:42:30.332637   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:30.332644   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:30.332710   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:30.383871   69580 cri.go:89] found id: ""
	I0501 03:42:30.383901   69580 logs.go:276] 0 containers: []
	W0501 03:42:30.383908   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:30.383917   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:30.383929   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:30.464382   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:30.464404   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:30.464417   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:30.550604   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:30.550637   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:30.594927   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:30.594959   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:30.648392   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:30.648426   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:28.847271   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:31.345865   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:29.657316   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:31.657435   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:32.511887   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:35.011677   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:33.167591   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:33.183804   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:33.183874   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:33.223501   69580 cri.go:89] found id: ""
	I0501 03:42:33.223525   69580 logs.go:276] 0 containers: []
	W0501 03:42:33.223532   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:33.223539   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:33.223600   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:33.268674   69580 cri.go:89] found id: ""
	I0501 03:42:33.268705   69580 logs.go:276] 0 containers: []
	W0501 03:42:33.268741   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:33.268749   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:33.268807   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:33.310613   69580 cri.go:89] found id: ""
	I0501 03:42:33.310655   69580 logs.go:276] 0 containers: []
	W0501 03:42:33.310666   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:33.310674   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:33.310737   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:33.353156   69580 cri.go:89] found id: ""
	I0501 03:42:33.353177   69580 logs.go:276] 0 containers: []
	W0501 03:42:33.353184   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:33.353189   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:33.353237   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:33.389702   69580 cri.go:89] found id: ""
	I0501 03:42:33.389730   69580 logs.go:276] 0 containers: []
	W0501 03:42:33.389743   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:33.389751   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:33.389817   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:33.431244   69580 cri.go:89] found id: ""
	I0501 03:42:33.431275   69580 logs.go:276] 0 containers: []
	W0501 03:42:33.431290   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:33.431298   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:33.431384   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:33.472382   69580 cri.go:89] found id: ""
	I0501 03:42:33.472412   69580 logs.go:276] 0 containers: []
	W0501 03:42:33.472423   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:33.472431   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:33.472519   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:33.517042   69580 cri.go:89] found id: ""
	I0501 03:42:33.517064   69580 logs.go:276] 0 containers: []
	W0501 03:42:33.517071   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:33.517079   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:33.517091   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:33.573343   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:33.573372   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:33.588932   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:33.588963   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:33.674060   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:33.674090   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:33.674106   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:33.756635   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:33.756684   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:36.300909   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:36.320407   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:36.320474   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:36.367236   69580 cri.go:89] found id: ""
	I0501 03:42:36.367261   69580 logs.go:276] 0 containers: []
	W0501 03:42:36.367269   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:36.367274   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:36.367335   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:36.406440   69580 cri.go:89] found id: ""
	I0501 03:42:36.406471   69580 logs.go:276] 0 containers: []
	W0501 03:42:36.406482   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:36.406489   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:36.406552   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:36.443931   69580 cri.go:89] found id: ""
	I0501 03:42:36.443957   69580 logs.go:276] 0 containers: []
	W0501 03:42:36.443964   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:36.443969   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:36.444024   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:33.844832   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:35.845476   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:37.846291   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:34.156976   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:36.657001   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:38.657056   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:37.510534   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:39.511335   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:36.486169   69580 cri.go:89] found id: ""
	I0501 03:42:36.486200   69580 logs.go:276] 0 containers: []
	W0501 03:42:36.486213   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:36.486220   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:36.486276   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:36.532211   69580 cri.go:89] found id: ""
	I0501 03:42:36.532237   69580 logs.go:276] 0 containers: []
	W0501 03:42:36.532246   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:36.532251   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:36.532311   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:36.571889   69580 cri.go:89] found id: ""
	I0501 03:42:36.571921   69580 logs.go:276] 0 containers: []
	W0501 03:42:36.571933   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:36.571940   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:36.572000   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:36.612126   69580 cri.go:89] found id: ""
	I0501 03:42:36.612159   69580 logs.go:276] 0 containers: []
	W0501 03:42:36.612170   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:36.612177   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:36.612238   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:36.654067   69580 cri.go:89] found id: ""
	I0501 03:42:36.654096   69580 logs.go:276] 0 containers: []
	W0501 03:42:36.654106   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:36.654117   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:36.654129   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:36.740205   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:36.740226   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:36.740237   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:36.821403   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:36.821437   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:36.874829   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:36.874867   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:36.928312   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:36.928342   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:39.444598   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:39.460086   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:39.460151   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:39.500833   69580 cri.go:89] found id: ""
	I0501 03:42:39.500859   69580 logs.go:276] 0 containers: []
	W0501 03:42:39.500870   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:39.500879   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:39.500936   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:39.544212   69580 cri.go:89] found id: ""
	I0501 03:42:39.544238   69580 logs.go:276] 0 containers: []
	W0501 03:42:39.544248   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:39.544260   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:39.544326   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:39.582167   69580 cri.go:89] found id: ""
	I0501 03:42:39.582200   69580 logs.go:276] 0 containers: []
	W0501 03:42:39.582218   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:39.582231   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:39.582296   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:39.624811   69580 cri.go:89] found id: ""
	I0501 03:42:39.624837   69580 logs.go:276] 0 containers: []
	W0501 03:42:39.624848   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:39.624855   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:39.624913   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:39.666001   69580 cri.go:89] found id: ""
	I0501 03:42:39.666030   69580 logs.go:276] 0 containers: []
	W0501 03:42:39.666041   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:39.666048   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:39.666111   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:39.708790   69580 cri.go:89] found id: ""
	I0501 03:42:39.708820   69580 logs.go:276] 0 containers: []
	W0501 03:42:39.708831   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:39.708839   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:39.708896   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:39.750585   69580 cri.go:89] found id: ""
	I0501 03:42:39.750609   69580 logs.go:276] 0 containers: []
	W0501 03:42:39.750617   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:39.750622   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:39.750670   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:39.798576   69580 cri.go:89] found id: ""
	I0501 03:42:39.798612   69580 logs.go:276] 0 containers: []
	W0501 03:42:39.798624   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:39.798636   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:39.798651   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:39.891759   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:39.891782   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:39.891797   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:39.974419   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:39.974462   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:40.020700   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:40.020728   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:40.073946   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:40.073980   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:40.345975   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:42.350579   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:40.657403   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:42.658271   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:41.511780   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:43.512428   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:42.590933   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:42.606044   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:42.606120   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:42.653074   69580 cri.go:89] found id: ""
	I0501 03:42:42.653104   69580 logs.go:276] 0 containers: []
	W0501 03:42:42.653115   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:42.653123   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:42.653195   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:42.693770   69580 cri.go:89] found id: ""
	I0501 03:42:42.693809   69580 logs.go:276] 0 containers: []
	W0501 03:42:42.693821   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:42.693829   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:42.693885   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:42.739087   69580 cri.go:89] found id: ""
	I0501 03:42:42.739115   69580 logs.go:276] 0 containers: []
	W0501 03:42:42.739125   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:42.739133   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:42.739196   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:42.779831   69580 cri.go:89] found id: ""
	I0501 03:42:42.779863   69580 logs.go:276] 0 containers: []
	W0501 03:42:42.779876   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:42.779885   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:42.779950   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:42.826759   69580 cri.go:89] found id: ""
	I0501 03:42:42.826791   69580 logs.go:276] 0 containers: []
	W0501 03:42:42.826799   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:42.826804   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:42.826854   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:42.872602   69580 cri.go:89] found id: ""
	I0501 03:42:42.872629   69580 logs.go:276] 0 containers: []
	W0501 03:42:42.872640   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:42.872648   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:42.872707   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:42.913833   69580 cri.go:89] found id: ""
	I0501 03:42:42.913862   69580 logs.go:276] 0 containers: []
	W0501 03:42:42.913872   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:42.913879   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:42.913936   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:42.953629   69580 cri.go:89] found id: ""
	I0501 03:42:42.953657   69580 logs.go:276] 0 containers: []
	W0501 03:42:42.953667   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:42.953679   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:42.953695   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:42.968420   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:42.968447   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:43.046840   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:43.046874   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:43.046898   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:43.135453   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:43.135492   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:43.184103   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:43.184141   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:45.738246   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:45.753193   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:45.753258   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:45.791191   69580 cri.go:89] found id: ""
	I0501 03:42:45.791216   69580 logs.go:276] 0 containers: []
	W0501 03:42:45.791224   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:45.791236   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:45.791285   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:45.831935   69580 cri.go:89] found id: ""
	I0501 03:42:45.831967   69580 logs.go:276] 0 containers: []
	W0501 03:42:45.831978   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:45.831986   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:45.832041   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:45.869492   69580 cri.go:89] found id: ""
	I0501 03:42:45.869517   69580 logs.go:276] 0 containers: []
	W0501 03:42:45.869529   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:45.869536   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:45.869593   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:45.910642   69580 cri.go:89] found id: ""
	I0501 03:42:45.910672   69580 logs.go:276] 0 containers: []
	W0501 03:42:45.910682   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:45.910691   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:45.910754   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:45.951489   69580 cri.go:89] found id: ""
	I0501 03:42:45.951518   69580 logs.go:276] 0 containers: []
	W0501 03:42:45.951528   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:45.951535   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:45.951582   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:45.991388   69580 cri.go:89] found id: ""
	I0501 03:42:45.991410   69580 logs.go:276] 0 containers: []
	W0501 03:42:45.991418   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:45.991423   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:45.991467   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:46.036524   69580 cri.go:89] found id: ""
	I0501 03:42:46.036546   69580 logs.go:276] 0 containers: []
	W0501 03:42:46.036553   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:46.036560   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:46.036622   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:46.087472   69580 cri.go:89] found id: ""
	I0501 03:42:46.087495   69580 logs.go:276] 0 containers: []
	W0501 03:42:46.087504   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:46.087513   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:46.087526   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:46.101283   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:46.101314   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:46.176459   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:46.176491   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:46.176506   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:46.261921   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:46.261956   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:46.309879   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:46.309910   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:44.846042   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:47.349023   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:44.658318   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:47.155780   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:46.011347   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:48.511156   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:50.512175   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:48.867064   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:48.884082   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:48.884192   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:48.929681   69580 cri.go:89] found id: ""
	I0501 03:42:48.929708   69580 logs.go:276] 0 containers: []
	W0501 03:42:48.929716   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:48.929722   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:48.929789   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:48.977850   69580 cri.go:89] found id: ""
	I0501 03:42:48.977882   69580 logs.go:276] 0 containers: []
	W0501 03:42:48.977894   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:48.977901   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:48.977962   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:49.022590   69580 cri.go:89] found id: ""
	I0501 03:42:49.022619   69580 logs.go:276] 0 containers: []
	W0501 03:42:49.022629   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:49.022637   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:49.022706   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:49.064092   69580 cri.go:89] found id: ""
	I0501 03:42:49.064122   69580 logs.go:276] 0 containers: []
	W0501 03:42:49.064143   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:49.064152   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:49.064220   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:49.103962   69580 cri.go:89] found id: ""
	I0501 03:42:49.103990   69580 logs.go:276] 0 containers: []
	W0501 03:42:49.104002   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:49.104009   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:49.104070   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:49.144566   69580 cri.go:89] found id: ""
	I0501 03:42:49.144596   69580 logs.go:276] 0 containers: []
	W0501 03:42:49.144604   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:49.144610   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:49.144669   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:49.183110   69580 cri.go:89] found id: ""
	I0501 03:42:49.183141   69580 logs.go:276] 0 containers: []
	W0501 03:42:49.183161   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:49.183166   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:49.183239   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:49.225865   69580 cri.go:89] found id: ""
	I0501 03:42:49.225890   69580 logs.go:276] 0 containers: []
	W0501 03:42:49.225902   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:49.225912   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:49.225926   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:49.312967   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:49.313005   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:49.361171   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:49.361206   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:49.418731   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:49.418780   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:49.436976   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:49.437007   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:49.517994   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:49.848517   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:52.346908   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:49.160713   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:51.656444   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:53.659040   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:53.011092   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:55.011811   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:52.018675   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:52.033946   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:52.034022   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:52.081433   69580 cri.go:89] found id: ""
	I0501 03:42:52.081465   69580 logs.go:276] 0 containers: []
	W0501 03:42:52.081477   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:52.081485   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:52.081544   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:52.123914   69580 cri.go:89] found id: ""
	I0501 03:42:52.123947   69580 logs.go:276] 0 containers: []
	W0501 03:42:52.123958   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:52.123966   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:52.124023   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:52.164000   69580 cri.go:89] found id: ""
	I0501 03:42:52.164020   69580 logs.go:276] 0 containers: []
	W0501 03:42:52.164027   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:52.164033   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:52.164086   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:52.205984   69580 cri.go:89] found id: ""
	I0501 03:42:52.206011   69580 logs.go:276] 0 containers: []
	W0501 03:42:52.206023   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:52.206031   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:52.206096   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:52.252743   69580 cri.go:89] found id: ""
	I0501 03:42:52.252766   69580 logs.go:276] 0 containers: []
	W0501 03:42:52.252774   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:52.252779   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:52.252839   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:52.296814   69580 cri.go:89] found id: ""
	I0501 03:42:52.296838   69580 logs.go:276] 0 containers: []
	W0501 03:42:52.296856   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:52.296864   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:52.296928   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:52.335996   69580 cri.go:89] found id: ""
	I0501 03:42:52.336023   69580 logs.go:276] 0 containers: []
	W0501 03:42:52.336034   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:52.336042   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:52.336105   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:52.377470   69580 cri.go:89] found id: ""
	I0501 03:42:52.377498   69580 logs.go:276] 0 containers: []
	W0501 03:42:52.377513   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:52.377524   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:52.377540   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:52.432644   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:52.432680   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:52.447518   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:52.447552   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:52.530967   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:52.530992   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:52.531005   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:52.612280   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:52.612327   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:55.170134   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:55.185252   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:55.185328   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:55.227741   69580 cri.go:89] found id: ""
	I0501 03:42:55.227764   69580 logs.go:276] 0 containers: []
	W0501 03:42:55.227771   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:55.227777   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:55.227820   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:55.270796   69580 cri.go:89] found id: ""
	I0501 03:42:55.270823   69580 logs.go:276] 0 containers: []
	W0501 03:42:55.270834   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:55.270840   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:55.270898   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:55.312146   69580 cri.go:89] found id: ""
	I0501 03:42:55.312171   69580 logs.go:276] 0 containers: []
	W0501 03:42:55.312180   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:55.312190   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:55.312236   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:55.354410   69580 cri.go:89] found id: ""
	I0501 03:42:55.354436   69580 logs.go:276] 0 containers: []
	W0501 03:42:55.354445   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:55.354450   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:55.354509   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:55.393550   69580 cri.go:89] found id: ""
	I0501 03:42:55.393580   69580 logs.go:276] 0 containers: []
	W0501 03:42:55.393589   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:55.393594   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:55.393651   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:55.431468   69580 cri.go:89] found id: ""
	I0501 03:42:55.431497   69580 logs.go:276] 0 containers: []
	W0501 03:42:55.431507   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:55.431514   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:55.431566   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:55.470491   69580 cri.go:89] found id: ""
	I0501 03:42:55.470513   69580 logs.go:276] 0 containers: []
	W0501 03:42:55.470520   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:55.470526   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:55.470571   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:55.509849   69580 cri.go:89] found id: ""
	I0501 03:42:55.509875   69580 logs.go:276] 0 containers: []
	W0501 03:42:55.509885   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:55.509894   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:55.509909   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:55.566680   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:55.566762   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:55.584392   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:55.584423   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:55.663090   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:55.663116   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:55.663131   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:55.741459   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:55.741494   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:54.846549   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:56.848989   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:56.156918   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:58.157016   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:57.012980   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:59.513719   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:58.294435   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:58.310204   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:58.310267   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:58.350292   69580 cri.go:89] found id: ""
	I0501 03:42:58.350322   69580 logs.go:276] 0 containers: []
	W0501 03:42:58.350334   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:58.350343   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:58.350431   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:58.395998   69580 cri.go:89] found id: ""
	I0501 03:42:58.396029   69580 logs.go:276] 0 containers: []
	W0501 03:42:58.396041   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:58.396049   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:58.396131   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:58.434371   69580 cri.go:89] found id: ""
	I0501 03:42:58.434414   69580 logs.go:276] 0 containers: []
	W0501 03:42:58.434427   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:58.434434   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:58.434493   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:58.473457   69580 cri.go:89] found id: ""
	I0501 03:42:58.473489   69580 logs.go:276] 0 containers: []
	W0501 03:42:58.473499   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:58.473507   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:58.473572   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:58.515172   69580 cri.go:89] found id: ""
	I0501 03:42:58.515201   69580 logs.go:276] 0 containers: []
	W0501 03:42:58.515212   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:58.515221   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:58.515291   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:58.560305   69580 cri.go:89] found id: ""
	I0501 03:42:58.560333   69580 logs.go:276] 0 containers: []
	W0501 03:42:58.560341   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:58.560348   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:58.560407   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:58.617980   69580 cri.go:89] found id: ""
	I0501 03:42:58.618005   69580 logs.go:276] 0 containers: []
	W0501 03:42:58.618013   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:58.618019   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:58.618080   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:58.659800   69580 cri.go:89] found id: ""
	I0501 03:42:58.659827   69580 logs.go:276] 0 containers: []
	W0501 03:42:58.659838   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:58.659848   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:58.659862   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:58.718134   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:58.718169   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:58.733972   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:58.734001   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:58.813055   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:58.813082   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:58.813099   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:58.897293   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:58.897331   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:01.442980   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:01.459602   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:01.459687   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:58.849599   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:01.346264   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:00.157322   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:02.657002   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:02.012753   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:04.510896   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:01.502817   69580 cri.go:89] found id: ""
	I0501 03:43:01.502848   69580 logs.go:276] 0 containers: []
	W0501 03:43:01.502857   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:01.502863   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:01.502924   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:01.547251   69580 cri.go:89] found id: ""
	I0501 03:43:01.547289   69580 logs.go:276] 0 containers: []
	W0501 03:43:01.547301   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:01.547308   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:01.547376   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:01.590179   69580 cri.go:89] found id: ""
	I0501 03:43:01.590211   69580 logs.go:276] 0 containers: []
	W0501 03:43:01.590221   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:01.590228   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:01.590296   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:01.628772   69580 cri.go:89] found id: ""
	I0501 03:43:01.628814   69580 logs.go:276] 0 containers: []
	W0501 03:43:01.628826   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:01.628834   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:01.628893   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:01.677414   69580 cri.go:89] found id: ""
	I0501 03:43:01.677440   69580 logs.go:276] 0 containers: []
	W0501 03:43:01.677448   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:01.677453   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:01.677500   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:01.723107   69580 cri.go:89] found id: ""
	I0501 03:43:01.723139   69580 logs.go:276] 0 containers: []
	W0501 03:43:01.723152   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:01.723160   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:01.723225   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:01.771846   69580 cri.go:89] found id: ""
	I0501 03:43:01.771873   69580 logs.go:276] 0 containers: []
	W0501 03:43:01.771883   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:01.771890   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:01.771952   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:01.818145   69580 cri.go:89] found id: ""
	I0501 03:43:01.818179   69580 logs.go:276] 0 containers: []
	W0501 03:43:01.818191   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:01.818202   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:01.818218   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:01.881502   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:01.881546   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:01.897580   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:01.897614   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:01.981959   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:01.981980   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:01.981996   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:02.066228   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:02.066269   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:04.609855   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:04.626885   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:04.626962   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:04.668248   69580 cri.go:89] found id: ""
	I0501 03:43:04.668277   69580 logs.go:276] 0 containers: []
	W0501 03:43:04.668290   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:04.668298   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:04.668364   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:04.711032   69580 cri.go:89] found id: ""
	I0501 03:43:04.711057   69580 logs.go:276] 0 containers: []
	W0501 03:43:04.711068   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:04.711076   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:04.711136   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:04.754197   69580 cri.go:89] found id: ""
	I0501 03:43:04.754232   69580 logs.go:276] 0 containers: []
	W0501 03:43:04.754241   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:04.754248   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:04.754317   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:04.801062   69580 cri.go:89] found id: ""
	I0501 03:43:04.801089   69580 logs.go:276] 0 containers: []
	W0501 03:43:04.801097   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:04.801103   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:04.801163   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:04.849425   69580 cri.go:89] found id: ""
	I0501 03:43:04.849454   69580 logs.go:276] 0 containers: []
	W0501 03:43:04.849465   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:04.849473   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:04.849536   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:04.892555   69580 cri.go:89] found id: ""
	I0501 03:43:04.892589   69580 logs.go:276] 0 containers: []
	W0501 03:43:04.892597   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:04.892603   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:04.892661   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:04.934101   69580 cri.go:89] found id: ""
	I0501 03:43:04.934129   69580 logs.go:276] 0 containers: []
	W0501 03:43:04.934137   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:04.934142   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:04.934191   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:04.985720   69580 cri.go:89] found id: ""
	I0501 03:43:04.985747   69580 logs.go:276] 0 containers: []
	W0501 03:43:04.985760   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:04.985773   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:04.985789   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:05.060634   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:05.060692   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:05.082007   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:05.082036   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:05.164613   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:05.164636   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:05.164652   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:05.244064   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:05.244103   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
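	The block above appears to be minikube's log gatherer running against a node whose control plane never came up: every crictl query for kube-apiserver, etcd, coredns, the scheduler, kube-proxy and the controller manager returns an empty ID list, so it falls back to collecting kubelet, dmesg, CRI-O and container-status output. The same check can be repeated by hand on the node (for example via `minikube ssh`); the crictl invocation below is the one shown in the log, the interpretation in the comment is an assumption:

	    # run on the node; identical to the command in the log above
	    sudo crictl ps -a --quiet --name=kube-apiserver
	    # empty output corresponds to the `found id: ""` lines in this log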
	I0501 03:43:03.845495   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:06.346757   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:05.157929   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:07.657094   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:06.511168   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:08.511512   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:10.511984   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:07.793867   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:07.811161   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:07.811236   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:07.850738   69580 cri.go:89] found id: ""
	I0501 03:43:07.850765   69580 logs.go:276] 0 containers: []
	W0501 03:43:07.850775   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:07.850782   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:07.850841   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:07.892434   69580 cri.go:89] found id: ""
	I0501 03:43:07.892466   69580 logs.go:276] 0 containers: []
	W0501 03:43:07.892476   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:07.892483   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:07.892543   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:07.934093   69580 cri.go:89] found id: ""
	I0501 03:43:07.934122   69580 logs.go:276] 0 containers: []
	W0501 03:43:07.934133   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:07.934141   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:07.934200   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:07.976165   69580 cri.go:89] found id: ""
	I0501 03:43:07.976196   69580 logs.go:276] 0 containers: []
	W0501 03:43:07.976205   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:07.976216   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:07.976278   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:08.016925   69580 cri.go:89] found id: ""
	I0501 03:43:08.016956   69580 logs.go:276] 0 containers: []
	W0501 03:43:08.016968   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:08.016975   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:08.017038   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:08.063385   69580 cri.go:89] found id: ""
	I0501 03:43:08.063438   69580 logs.go:276] 0 containers: []
	W0501 03:43:08.063454   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:08.063465   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:08.063551   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:08.103586   69580 cri.go:89] found id: ""
	I0501 03:43:08.103610   69580 logs.go:276] 0 containers: []
	W0501 03:43:08.103618   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:08.103628   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:08.103672   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:08.142564   69580 cri.go:89] found id: ""
	I0501 03:43:08.142594   69580 logs.go:276] 0 containers: []
	W0501 03:43:08.142605   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:08.142617   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:08.142635   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:08.231532   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:08.231556   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:08.231571   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:08.311009   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:08.311053   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:08.357841   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:08.357877   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:08.409577   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:08.409610   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
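	Each "describe nodes" attempt above fails with connection refused on localhost:8443, the port a kubeadm-style control plane serves the kube-apiserver on, which is consistent with the empty crictl listings: the apiserver container is simply not running. A quick, generic way to confirm that on the node is sketched below; these commands are not taken from the log and are only an illustration:

	    # is anything listening on the apiserver port?
	    sudo ss -ltnp | grep 8443 || echo "nothing listening on 8443"
	    # if something is listening, probe the health endpoint instead of getting connection refused
	    curl -ksS https://localhost:8443/healthz || true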
	I0501 03:43:10.924898   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:10.941525   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:10.941591   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:11.009214   69580 cri.go:89] found id: ""
	I0501 03:43:11.009238   69580 logs.go:276] 0 containers: []
	W0501 03:43:11.009247   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:11.009255   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:11.009316   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:11.072233   69580 cri.go:89] found id: ""
	I0501 03:43:11.072259   69580 logs.go:276] 0 containers: []
	W0501 03:43:11.072267   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:11.072273   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:11.072327   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:11.111662   69580 cri.go:89] found id: ""
	I0501 03:43:11.111691   69580 logs.go:276] 0 containers: []
	W0501 03:43:11.111701   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:11.111708   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:11.111765   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:11.151540   69580 cri.go:89] found id: ""
	I0501 03:43:11.151570   69580 logs.go:276] 0 containers: []
	W0501 03:43:11.151580   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:11.151594   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:11.151656   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:11.194030   69580 cri.go:89] found id: ""
	I0501 03:43:11.194064   69580 logs.go:276] 0 containers: []
	W0501 03:43:11.194076   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:11.194083   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:11.194146   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:11.233010   69580 cri.go:89] found id: ""
	I0501 03:43:11.233045   69580 logs.go:276] 0 containers: []
	W0501 03:43:11.233056   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:11.233063   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:11.233117   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:11.270979   69580 cri.go:89] found id: ""
	I0501 03:43:11.271009   69580 logs.go:276] 0 containers: []
	W0501 03:43:11.271019   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:11.271026   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:11.271088   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:11.312338   69580 cri.go:89] found id: ""
	I0501 03:43:11.312369   69580 logs.go:276] 0 containers: []
	W0501 03:43:11.312381   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:11.312393   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:11.312408   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:11.364273   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:11.364307   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:11.418603   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:11.418634   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:11.433409   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:11.433438   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0501 03:43:08.349537   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:10.845566   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:12.846699   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:10.157910   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:12.657859   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:12.512669   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:15.013314   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	W0501 03:43:11.511243   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:11.511265   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:11.511280   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:14.089834   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:14.104337   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:14.104419   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:14.148799   69580 cri.go:89] found id: ""
	I0501 03:43:14.148826   69580 logs.go:276] 0 containers: []
	W0501 03:43:14.148833   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:14.148839   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:14.148904   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:14.191330   69580 cri.go:89] found id: ""
	I0501 03:43:14.191366   69580 logs.go:276] 0 containers: []
	W0501 03:43:14.191378   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:14.191386   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:14.191448   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:14.245978   69580 cri.go:89] found id: ""
	I0501 03:43:14.246010   69580 logs.go:276] 0 containers: []
	W0501 03:43:14.246018   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:14.246024   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:14.246093   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:14.287188   69580 cri.go:89] found id: ""
	I0501 03:43:14.287215   69580 logs.go:276] 0 containers: []
	W0501 03:43:14.287223   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:14.287228   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:14.287276   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:14.328060   69580 cri.go:89] found id: ""
	I0501 03:43:14.328093   69580 logs.go:276] 0 containers: []
	W0501 03:43:14.328104   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:14.328113   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:14.328179   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:14.370734   69580 cri.go:89] found id: ""
	I0501 03:43:14.370765   69580 logs.go:276] 0 containers: []
	W0501 03:43:14.370776   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:14.370783   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:14.370837   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:14.414690   69580 cri.go:89] found id: ""
	I0501 03:43:14.414713   69580 logs.go:276] 0 containers: []
	W0501 03:43:14.414721   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:14.414726   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:14.414790   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:14.459030   69580 cri.go:89] found id: ""
	I0501 03:43:14.459060   69580 logs.go:276] 0 containers: []
	W0501 03:43:14.459072   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:14.459083   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:14.459098   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:14.519728   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:14.519761   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:14.535841   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:14.535871   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:14.615203   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:14.615231   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:14.615249   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:14.707677   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:14.707725   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:15.345927   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:17.846732   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:14.657956   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:17.156935   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:17.512424   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:20.012471   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
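	The interleaved pod_ready lines come from three other test runs (PIDs 68640, 68864 and 69237) polling their metrics-server pods until the Ready condition turns True. Roughly the same check can be expressed with kubectl; the pod name below is the one from this log, while the jsonpath expression is an illustrative assumption and the kube context is omitted:

	    # prints "False" until the pod's Ready condition flips, matching the lines above
	    kubectl -n kube-system get pod metrics-server-569cc877fc-k8jnl \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'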
	I0501 03:43:17.254918   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:17.270643   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:17.270698   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:17.310692   69580 cri.go:89] found id: ""
	I0501 03:43:17.310724   69580 logs.go:276] 0 containers: []
	W0501 03:43:17.310732   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:17.310739   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:17.310806   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:17.349932   69580 cri.go:89] found id: ""
	I0501 03:43:17.349959   69580 logs.go:276] 0 containers: []
	W0501 03:43:17.349969   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:17.349976   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:17.350040   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:17.393073   69580 cri.go:89] found id: ""
	I0501 03:43:17.393099   69580 logs.go:276] 0 containers: []
	W0501 03:43:17.393109   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:17.393116   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:17.393176   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:17.429736   69580 cri.go:89] found id: ""
	I0501 03:43:17.429763   69580 logs.go:276] 0 containers: []
	W0501 03:43:17.429773   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:17.429787   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:17.429858   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:17.473052   69580 cri.go:89] found id: ""
	I0501 03:43:17.473085   69580 logs.go:276] 0 containers: []
	W0501 03:43:17.473097   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:17.473105   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:17.473168   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:17.514035   69580 cri.go:89] found id: ""
	I0501 03:43:17.514062   69580 logs.go:276] 0 containers: []
	W0501 03:43:17.514071   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:17.514078   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:17.514126   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:17.553197   69580 cri.go:89] found id: ""
	I0501 03:43:17.553225   69580 logs.go:276] 0 containers: []
	W0501 03:43:17.553234   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:17.553240   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:17.553300   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:17.592170   69580 cri.go:89] found id: ""
	I0501 03:43:17.592192   69580 logs.go:276] 0 containers: []
	W0501 03:43:17.592199   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:17.592208   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:17.592220   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:17.647549   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:17.647584   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:17.663084   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:17.663114   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:17.748357   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:17.748385   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:17.748401   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:17.832453   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:17.832491   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:20.375927   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:20.391840   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:20.391918   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:20.434158   69580 cri.go:89] found id: ""
	I0501 03:43:20.434185   69580 logs.go:276] 0 containers: []
	W0501 03:43:20.434193   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:20.434198   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:20.434254   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:20.477209   69580 cri.go:89] found id: ""
	I0501 03:43:20.477237   69580 logs.go:276] 0 containers: []
	W0501 03:43:20.477253   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:20.477259   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:20.477309   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:20.517227   69580 cri.go:89] found id: ""
	I0501 03:43:20.517260   69580 logs.go:276] 0 containers: []
	W0501 03:43:20.517270   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:20.517282   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:20.517340   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:20.555771   69580 cri.go:89] found id: ""
	I0501 03:43:20.555802   69580 logs.go:276] 0 containers: []
	W0501 03:43:20.555812   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:20.555820   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:20.555866   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:20.598177   69580 cri.go:89] found id: ""
	I0501 03:43:20.598200   69580 logs.go:276] 0 containers: []
	W0501 03:43:20.598213   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:20.598218   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:20.598326   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:20.637336   69580 cri.go:89] found id: ""
	I0501 03:43:20.637364   69580 logs.go:276] 0 containers: []
	W0501 03:43:20.637373   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:20.637378   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:20.637435   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:20.687736   69580 cri.go:89] found id: ""
	I0501 03:43:20.687761   69580 logs.go:276] 0 containers: []
	W0501 03:43:20.687768   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:20.687782   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:20.687840   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:20.726102   69580 cri.go:89] found id: ""
	I0501 03:43:20.726135   69580 logs.go:276] 0 containers: []
	W0501 03:43:20.726143   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:20.726154   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:20.726169   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:20.780874   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:20.780905   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:20.795798   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:20.795836   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:20.882337   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:20.882367   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:20.882381   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:20.962138   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:20.962188   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:20.345887   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:22.346061   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:19.157165   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:21.657358   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:22.015676   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:24.511682   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:23.512174   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:23.528344   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:23.528417   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:23.567182   69580 cri.go:89] found id: ""
	I0501 03:43:23.567212   69580 logs.go:276] 0 containers: []
	W0501 03:43:23.567222   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:23.567230   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:23.567291   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:23.607522   69580 cri.go:89] found id: ""
	I0501 03:43:23.607556   69580 logs.go:276] 0 containers: []
	W0501 03:43:23.607567   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:23.607574   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:23.607637   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:23.650932   69580 cri.go:89] found id: ""
	I0501 03:43:23.650959   69580 logs.go:276] 0 containers: []
	W0501 03:43:23.650970   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:23.650976   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:23.651035   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:23.695392   69580 cri.go:89] found id: ""
	I0501 03:43:23.695419   69580 logs.go:276] 0 containers: []
	W0501 03:43:23.695428   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:23.695436   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:23.695514   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:23.736577   69580 cri.go:89] found id: ""
	I0501 03:43:23.736607   69580 logs.go:276] 0 containers: []
	W0501 03:43:23.736619   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:23.736627   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:23.736685   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:23.776047   69580 cri.go:89] found id: ""
	I0501 03:43:23.776070   69580 logs.go:276] 0 containers: []
	W0501 03:43:23.776077   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:23.776082   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:23.776134   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:23.813896   69580 cri.go:89] found id: ""
	I0501 03:43:23.813934   69580 logs.go:276] 0 containers: []
	W0501 03:43:23.813943   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:23.813949   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:23.813997   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:23.858898   69580 cri.go:89] found id: ""
	I0501 03:43:23.858925   69580 logs.go:276] 0 containers: []
	W0501 03:43:23.858936   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:23.858947   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:23.858964   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:23.901796   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:23.901850   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:23.957009   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:23.957040   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:23.972811   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:23.972839   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:24.055535   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:24.055557   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:24.055576   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:24.845310   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:26.847397   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:24.157453   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:26.661073   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:27.012181   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:29.511387   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:26.640114   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:26.657217   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:26.657285   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:26.701191   69580 cri.go:89] found id: ""
	I0501 03:43:26.701218   69580 logs.go:276] 0 containers: []
	W0501 03:43:26.701227   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:26.701232   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:26.701287   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:26.740710   69580 cri.go:89] found id: ""
	I0501 03:43:26.740737   69580 logs.go:276] 0 containers: []
	W0501 03:43:26.740745   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:26.740750   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:26.740808   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:26.778682   69580 cri.go:89] found id: ""
	I0501 03:43:26.778710   69580 logs.go:276] 0 containers: []
	W0501 03:43:26.778724   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:26.778730   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:26.778789   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:26.822143   69580 cri.go:89] found id: ""
	I0501 03:43:26.822190   69580 logs.go:276] 0 containers: []
	W0501 03:43:26.822201   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:26.822209   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:26.822270   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:26.865938   69580 cri.go:89] found id: ""
	I0501 03:43:26.865976   69580 logs.go:276] 0 containers: []
	W0501 03:43:26.865988   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:26.865996   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:26.866058   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:26.914939   69580 cri.go:89] found id: ""
	I0501 03:43:26.914969   69580 logs.go:276] 0 containers: []
	W0501 03:43:26.914979   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:26.914986   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:26.915043   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:26.961822   69580 cri.go:89] found id: ""
	I0501 03:43:26.961850   69580 logs.go:276] 0 containers: []
	W0501 03:43:26.961860   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:26.961867   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:26.961920   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:27.005985   69580 cri.go:89] found id: ""
	I0501 03:43:27.006012   69580 logs.go:276] 0 containers: []
	W0501 03:43:27.006021   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:27.006032   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:27.006046   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:27.058265   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:27.058303   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:27.076270   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:27.076308   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:27.152627   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:27.152706   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:27.152728   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:27.229638   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:27.229678   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
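	When the loop keeps repeating like this, the gathered CRI-O and kubelet journals are the most likely place to show why the v1.20.0 static pods never start. One hedged way to narrow the 400-line dumps the log gatherer requests down to warnings and errors (generic journalctl flags, not taken from the log):

	    # mirror the journalctl calls above, but keep only warning-or-worse entries
	    sudo journalctl -u crio -n 400 --no-pager -p warning
	    sudo journalctl -u kubelet -n 400 --no-pager -p warning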
	I0501 03:43:29.775960   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:29.792849   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:29.792925   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:29.832508   69580 cri.go:89] found id: ""
	I0501 03:43:29.832537   69580 logs.go:276] 0 containers: []
	W0501 03:43:29.832551   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:29.832559   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:29.832617   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:29.873160   69580 cri.go:89] found id: ""
	I0501 03:43:29.873188   69580 logs.go:276] 0 containers: []
	W0501 03:43:29.873199   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:29.873207   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:29.873271   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:29.919431   69580 cri.go:89] found id: ""
	I0501 03:43:29.919459   69580 logs.go:276] 0 containers: []
	W0501 03:43:29.919468   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:29.919474   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:29.919533   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:29.967944   69580 cri.go:89] found id: ""
	I0501 03:43:29.967976   69580 logs.go:276] 0 containers: []
	W0501 03:43:29.967987   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:29.967995   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:29.968060   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:30.011626   69580 cri.go:89] found id: ""
	I0501 03:43:30.011657   69580 logs.go:276] 0 containers: []
	W0501 03:43:30.011669   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:30.011678   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:30.011743   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:30.051998   69580 cri.go:89] found id: ""
	I0501 03:43:30.052020   69580 logs.go:276] 0 containers: []
	W0501 03:43:30.052028   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:30.052034   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:30.052095   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:30.094140   69580 cri.go:89] found id: ""
	I0501 03:43:30.094164   69580 logs.go:276] 0 containers: []
	W0501 03:43:30.094172   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:30.094179   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:30.094253   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:30.132363   69580 cri.go:89] found id: ""
	I0501 03:43:30.132391   69580 logs.go:276] 0 containers: []
	W0501 03:43:30.132399   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:30.132411   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:30.132422   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:30.221368   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:30.221410   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:30.271279   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:30.271317   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:30.325549   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:30.325586   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:30.345337   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:30.345376   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:30.427552   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:29.347108   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:31.846435   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:29.156483   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:31.156871   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:33.157355   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:32.015498   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:34.511190   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:32.928667   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:32.945489   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:32.945557   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:32.989604   69580 cri.go:89] found id: ""
	I0501 03:43:32.989628   69580 logs.go:276] 0 containers: []
	W0501 03:43:32.989636   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:32.989642   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:32.989701   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:33.030862   69580 cri.go:89] found id: ""
	I0501 03:43:33.030892   69580 logs.go:276] 0 containers: []
	W0501 03:43:33.030903   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:33.030912   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:33.030977   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:33.079795   69580 cri.go:89] found id: ""
	I0501 03:43:33.079827   69580 logs.go:276] 0 containers: []
	W0501 03:43:33.079835   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:33.079841   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:33.079898   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:33.120612   69580 cri.go:89] found id: ""
	I0501 03:43:33.120636   69580 logs.go:276] 0 containers: []
	W0501 03:43:33.120644   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:33.120649   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:33.120694   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:33.161824   69580 cri.go:89] found id: ""
	I0501 03:43:33.161851   69580 logs.go:276] 0 containers: []
	W0501 03:43:33.161861   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:33.161868   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:33.161924   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:33.200068   69580 cri.go:89] found id: ""
	I0501 03:43:33.200098   69580 logs.go:276] 0 containers: []
	W0501 03:43:33.200107   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:33.200113   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:33.200175   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:33.239314   69580 cri.go:89] found id: ""
	I0501 03:43:33.239341   69580 logs.go:276] 0 containers: []
	W0501 03:43:33.239351   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:33.239359   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:33.239427   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:33.281381   69580 cri.go:89] found id: ""
	I0501 03:43:33.281408   69580 logs.go:276] 0 containers: []
	W0501 03:43:33.281419   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:33.281431   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:33.281447   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:33.297992   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:33.298047   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:33.383273   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:33.383292   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:33.383303   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:33.465256   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:33.465289   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:33.509593   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:33.509621   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:36.065074   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:36.081361   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:36.081429   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:36.130394   69580 cri.go:89] found id: ""
	I0501 03:43:36.130436   69580 logs.go:276] 0 containers: []
	W0501 03:43:36.130448   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:36.130456   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:36.130524   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:36.171013   69580 cri.go:89] found id: ""
	I0501 03:43:36.171038   69580 logs.go:276] 0 containers: []
	W0501 03:43:36.171046   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:36.171052   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:36.171099   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:36.215372   69580 cri.go:89] found id: ""
	I0501 03:43:36.215411   69580 logs.go:276] 0 containers: []
	W0501 03:43:36.215424   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:36.215431   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:36.215493   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:36.257177   69580 cri.go:89] found id: ""
	I0501 03:43:36.257204   69580 logs.go:276] 0 containers: []
	W0501 03:43:36.257216   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:36.257223   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:36.257293   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:36.299035   69580 cri.go:89] found id: ""
	I0501 03:43:36.299066   69580 logs.go:276] 0 containers: []
	W0501 03:43:36.299085   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:36.299094   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:36.299166   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:36.339060   69580 cri.go:89] found id: ""
	I0501 03:43:36.339087   69580 logs.go:276] 0 containers: []
	W0501 03:43:36.339097   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:36.339105   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:36.339163   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:36.379982   69580 cri.go:89] found id: ""
	I0501 03:43:36.380016   69580 logs.go:276] 0 containers: []
	W0501 03:43:36.380028   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:36.380037   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:36.380100   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:36.419702   69580 cri.go:89] found id: ""
	I0501 03:43:36.419734   69580 logs.go:276] 0 containers: []
	W0501 03:43:36.419746   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:36.419758   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:36.419780   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:33.846499   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:35.846579   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:37.852802   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:35.159724   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:37.657040   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:36.516601   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:39.012001   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:36.472553   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:36.472774   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:36.488402   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:36.488439   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:36.566390   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:36.566433   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:36.566446   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:36.643493   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:36.643527   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:39.199060   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:39.216612   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:39.216695   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:39.262557   69580 cri.go:89] found id: ""
	I0501 03:43:39.262581   69580 logs.go:276] 0 containers: []
	W0501 03:43:39.262589   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:39.262595   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:39.262642   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:39.331051   69580 cri.go:89] found id: ""
	I0501 03:43:39.331076   69580 logs.go:276] 0 containers: []
	W0501 03:43:39.331093   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:39.331098   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:39.331162   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:39.382033   69580 cri.go:89] found id: ""
	I0501 03:43:39.382058   69580 logs.go:276] 0 containers: []
	W0501 03:43:39.382066   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:39.382071   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:39.382122   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:39.424019   69580 cri.go:89] found id: ""
	I0501 03:43:39.424049   69580 logs.go:276] 0 containers: []
	W0501 03:43:39.424058   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:39.424064   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:39.424120   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:39.465787   69580 cri.go:89] found id: ""
	I0501 03:43:39.465833   69580 logs.go:276] 0 containers: []
	W0501 03:43:39.465846   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:39.465855   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:39.465916   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:39.507746   69580 cri.go:89] found id: ""
	I0501 03:43:39.507781   69580 logs.go:276] 0 containers: []
	W0501 03:43:39.507791   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:39.507798   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:39.507861   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:39.550737   69580 cri.go:89] found id: ""
	I0501 03:43:39.550768   69580 logs.go:276] 0 containers: []
	W0501 03:43:39.550775   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:39.550781   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:39.550831   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:39.592279   69580 cri.go:89] found id: ""
	I0501 03:43:39.592329   69580 logs.go:276] 0 containers: []
	W0501 03:43:39.592343   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:39.592356   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:39.592373   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:39.648858   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:39.648896   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:39.665316   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:39.665343   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:39.743611   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:39.743632   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:39.743646   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:39.829285   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:39.829322   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:40.347121   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:42.845466   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:39.657888   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:41.657976   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:41.512061   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:44.017693   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:42.374457   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:42.389944   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:42.390002   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:42.431270   69580 cri.go:89] found id: ""
	I0501 03:43:42.431294   69580 logs.go:276] 0 containers: []
	W0501 03:43:42.431302   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:42.431308   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:42.431366   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:42.470515   69580 cri.go:89] found id: ""
	I0501 03:43:42.470546   69580 logs.go:276] 0 containers: []
	W0501 03:43:42.470558   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:42.470566   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:42.470619   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:42.518472   69580 cri.go:89] found id: ""
	I0501 03:43:42.518494   69580 logs.go:276] 0 containers: []
	W0501 03:43:42.518501   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:42.518506   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:42.518555   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:42.562192   69580 cri.go:89] found id: ""
	I0501 03:43:42.562220   69580 logs.go:276] 0 containers: []
	W0501 03:43:42.562231   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:42.562239   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:42.562300   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:42.599372   69580 cri.go:89] found id: ""
	I0501 03:43:42.599403   69580 logs.go:276] 0 containers: []
	W0501 03:43:42.599414   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:42.599422   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:42.599483   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:42.636738   69580 cri.go:89] found id: ""
	I0501 03:43:42.636766   69580 logs.go:276] 0 containers: []
	W0501 03:43:42.636777   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:42.636786   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:42.636845   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:42.682087   69580 cri.go:89] found id: ""
	I0501 03:43:42.682115   69580 logs.go:276] 0 containers: []
	W0501 03:43:42.682125   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:42.682133   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:42.682198   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:42.724280   69580 cri.go:89] found id: ""
	I0501 03:43:42.724316   69580 logs.go:276] 0 containers: []
	W0501 03:43:42.724328   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:42.724340   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:42.724354   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:42.771667   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:42.771702   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:42.827390   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:42.827428   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:42.843452   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:42.843480   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:42.925544   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:42.925563   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:42.925577   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
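Every control-plane query in the cycle above comes back empty (crictl finds no kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy or kube-controller-manager containers), and each "describe nodes" attempt fails with "The connection to the server localhost:8443 was refused", so the v1.20.0 control plane never came up on this node and the log-gathering loop keeps retrying. A minimal manual check, assuming SSH access to the node (e.g. via minikube ssh); the curl health probe is an assumption and does not appear in this log:

  sudo crictl ps -a --name=kube-apiserver          # empty output means no apiserver container was ever created
  curl -k https://localhost:8443/healthz           # assumed probe; connection refused would match the kubectl error above
  sudo journalctl -u kubelet -n 100 --no-pager     # kubelet logs usually show why the static pods did not start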
	I0501 03:43:45.515104   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:45.529545   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:45.529619   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:45.573451   69580 cri.go:89] found id: ""
	I0501 03:43:45.573475   69580 logs.go:276] 0 containers: []
	W0501 03:43:45.573483   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:45.573489   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:45.573536   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:45.613873   69580 cri.go:89] found id: ""
	I0501 03:43:45.613897   69580 logs.go:276] 0 containers: []
	W0501 03:43:45.613905   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:45.613910   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:45.613954   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:45.660195   69580 cri.go:89] found id: ""
	I0501 03:43:45.660215   69580 logs.go:276] 0 containers: []
	W0501 03:43:45.660221   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:45.660226   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:45.660284   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:45.703539   69580 cri.go:89] found id: ""
	I0501 03:43:45.703566   69580 logs.go:276] 0 containers: []
	W0501 03:43:45.703574   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:45.703580   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:45.703637   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:45.754635   69580 cri.go:89] found id: ""
	I0501 03:43:45.754659   69580 logs.go:276] 0 containers: []
	W0501 03:43:45.754668   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:45.754675   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:45.754738   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:45.800836   69580 cri.go:89] found id: ""
	I0501 03:43:45.800866   69580 logs.go:276] 0 containers: []
	W0501 03:43:45.800884   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:45.800892   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:45.800955   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:45.859057   69580 cri.go:89] found id: ""
	I0501 03:43:45.859084   69580 logs.go:276] 0 containers: []
	W0501 03:43:45.859092   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:45.859098   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:45.859145   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:45.913173   69580 cri.go:89] found id: ""
	I0501 03:43:45.913204   69580 logs.go:276] 0 containers: []
	W0501 03:43:45.913216   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:45.913227   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:45.913243   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:45.930050   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:45.930087   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:46.006047   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:46.006081   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:46.006097   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:46.086630   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:46.086666   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:46.134635   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:46.134660   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:45.347071   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:47.845983   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:44.157143   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:46.157880   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:48.656747   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:46.510981   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:48.512854   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
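The interleaved pod_ready lines come from three parallel test processes (PIDs 68640, 68864 and 69237), each polling its own metrics-server pod, while a fourth process (69580) runs the control-plane diagnostics above; none of the metrics-server pods ever reports Ready. A rough equivalent of that poll, assuming kubectl access to the same context; the k8s-app=metrics-server label selector is an assumption, not taken from this log:

  kubectl -n kube-system get pods -l k8s-app=metrics-server \
    -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'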
	I0501 03:43:48.690330   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:48.705024   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:48.705093   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:48.750244   69580 cri.go:89] found id: ""
	I0501 03:43:48.750278   69580 logs.go:276] 0 containers: []
	W0501 03:43:48.750299   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:48.750307   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:48.750377   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:48.791231   69580 cri.go:89] found id: ""
	I0501 03:43:48.791264   69580 logs.go:276] 0 containers: []
	W0501 03:43:48.791276   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:48.791283   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:48.791348   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:48.834692   69580 cri.go:89] found id: ""
	I0501 03:43:48.834720   69580 logs.go:276] 0 containers: []
	W0501 03:43:48.834731   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:48.834739   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:48.834809   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:48.877383   69580 cri.go:89] found id: ""
	I0501 03:43:48.877415   69580 logs.go:276] 0 containers: []
	W0501 03:43:48.877424   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:48.877430   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:48.877479   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:48.919728   69580 cri.go:89] found id: ""
	I0501 03:43:48.919756   69580 logs.go:276] 0 containers: []
	W0501 03:43:48.919767   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:48.919775   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:48.919836   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:48.962090   69580 cri.go:89] found id: ""
	I0501 03:43:48.962122   69580 logs.go:276] 0 containers: []
	W0501 03:43:48.962137   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:48.962144   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:48.962205   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:48.998456   69580 cri.go:89] found id: ""
	I0501 03:43:48.998487   69580 logs.go:276] 0 containers: []
	W0501 03:43:48.998498   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:48.998506   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:48.998566   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:49.042591   69580 cri.go:89] found id: ""
	I0501 03:43:49.042623   69580 logs.go:276] 0 containers: []
	W0501 03:43:49.042633   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:49.042645   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:49.042661   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:49.088533   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:49.088571   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:49.145252   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:49.145288   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:49.163093   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:49.163120   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:49.240805   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:49.240831   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:49.240844   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:49.848864   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:52.347128   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:50.656790   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:52.658130   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:51.011713   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:53.510598   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:55.512900   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:51.825530   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:51.839596   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:51.839669   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:51.879493   69580 cri.go:89] found id: ""
	I0501 03:43:51.879516   69580 logs.go:276] 0 containers: []
	W0501 03:43:51.879524   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:51.879530   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:51.879585   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:51.921577   69580 cri.go:89] found id: ""
	I0501 03:43:51.921608   69580 logs.go:276] 0 containers: []
	W0501 03:43:51.921620   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:51.921627   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:51.921693   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:51.961000   69580 cri.go:89] found id: ""
	I0501 03:43:51.961028   69580 logs.go:276] 0 containers: []
	W0501 03:43:51.961037   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:51.961043   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:51.961103   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:52.006087   69580 cri.go:89] found id: ""
	I0501 03:43:52.006118   69580 logs.go:276] 0 containers: []
	W0501 03:43:52.006129   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:52.006137   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:52.006201   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:52.047196   69580 cri.go:89] found id: ""
	I0501 03:43:52.047228   69580 logs.go:276] 0 containers: []
	W0501 03:43:52.047239   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:52.047250   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:52.047319   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:52.086380   69580 cri.go:89] found id: ""
	I0501 03:43:52.086423   69580 logs.go:276] 0 containers: []
	W0501 03:43:52.086434   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:52.086442   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:52.086499   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:52.128824   69580 cri.go:89] found id: ""
	I0501 03:43:52.128851   69580 logs.go:276] 0 containers: []
	W0501 03:43:52.128861   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:52.128868   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:52.128933   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:52.168743   69580 cri.go:89] found id: ""
	I0501 03:43:52.168769   69580 logs.go:276] 0 containers: []
	W0501 03:43:52.168776   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:52.168788   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:52.168802   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:52.184391   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:52.184419   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:52.268330   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:52.268368   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:52.268386   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:52.350556   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:52.350586   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:52.395930   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:52.395967   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:54.952879   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:54.968440   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:54.968517   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:55.008027   69580 cri.go:89] found id: ""
	I0501 03:43:55.008056   69580 logs.go:276] 0 containers: []
	W0501 03:43:55.008067   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:55.008074   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:55.008137   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:55.048848   69580 cri.go:89] found id: ""
	I0501 03:43:55.048869   69580 logs.go:276] 0 containers: []
	W0501 03:43:55.048877   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:55.048882   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:55.048931   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:55.085886   69580 cri.go:89] found id: ""
	I0501 03:43:55.085910   69580 logs.go:276] 0 containers: []
	W0501 03:43:55.085919   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:55.085924   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:55.085971   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:55.119542   69580 cri.go:89] found id: ""
	I0501 03:43:55.119567   69580 logs.go:276] 0 containers: []
	W0501 03:43:55.119574   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:55.119580   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:55.119636   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:55.158327   69580 cri.go:89] found id: ""
	I0501 03:43:55.158357   69580 logs.go:276] 0 containers: []
	W0501 03:43:55.158367   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:55.158374   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:55.158449   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:55.200061   69580 cri.go:89] found id: ""
	I0501 03:43:55.200085   69580 logs.go:276] 0 containers: []
	W0501 03:43:55.200093   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:55.200100   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:55.200146   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:55.239446   69580 cri.go:89] found id: ""
	I0501 03:43:55.239476   69580 logs.go:276] 0 containers: []
	W0501 03:43:55.239487   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:55.239493   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:55.239557   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:55.275593   69580 cri.go:89] found id: ""
	I0501 03:43:55.275623   69580 logs.go:276] 0 containers: []
	W0501 03:43:55.275635   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:55.275646   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:55.275662   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:55.356701   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:55.356724   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:55.356740   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:55.437445   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:55.437483   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:55.489024   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:55.489051   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:55.548083   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:55.548114   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:54.845529   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:57.348771   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:55.158591   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:57.657361   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:58.010099   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:00.010511   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:58.067063   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:58.080485   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:58.080539   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:58.121459   69580 cri.go:89] found id: ""
	I0501 03:43:58.121488   69580 logs.go:276] 0 containers: []
	W0501 03:43:58.121498   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:58.121505   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:58.121562   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:58.161445   69580 cri.go:89] found id: ""
	I0501 03:43:58.161479   69580 logs.go:276] 0 containers: []
	W0501 03:43:58.161489   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:58.161499   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:58.161560   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:58.203216   69580 cri.go:89] found id: ""
	I0501 03:43:58.203238   69580 logs.go:276] 0 containers: []
	W0501 03:43:58.203246   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:58.203251   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:58.203297   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:58.239496   69580 cri.go:89] found id: ""
	I0501 03:43:58.239526   69580 logs.go:276] 0 containers: []
	W0501 03:43:58.239538   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:58.239546   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:58.239605   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:58.280331   69580 cri.go:89] found id: ""
	I0501 03:43:58.280359   69580 logs.go:276] 0 containers: []
	W0501 03:43:58.280370   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:58.280378   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:58.280438   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:58.318604   69580 cri.go:89] found id: ""
	I0501 03:43:58.318634   69580 logs.go:276] 0 containers: []
	W0501 03:43:58.318646   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:58.318653   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:58.318712   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:58.359360   69580 cri.go:89] found id: ""
	I0501 03:43:58.359383   69580 logs.go:276] 0 containers: []
	W0501 03:43:58.359392   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:58.359398   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:58.359446   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:58.401172   69580 cri.go:89] found id: ""
	I0501 03:43:58.401202   69580 logs.go:276] 0 containers: []
	W0501 03:43:58.401211   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:58.401220   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:58.401232   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:58.416877   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:58.416907   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:58.489812   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:58.489835   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:58.489849   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:58.574971   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:58.575004   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:58.619526   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:58.619557   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:01.173759   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:01.187838   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:01.187922   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:01.227322   69580 cri.go:89] found id: ""
	I0501 03:44:01.227355   69580 logs.go:276] 0 containers: []
	W0501 03:44:01.227366   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:01.227372   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:01.227432   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:01.268418   69580 cri.go:89] found id: ""
	I0501 03:44:01.268453   69580 logs.go:276] 0 containers: []
	W0501 03:44:01.268465   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:01.268472   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:01.268530   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:01.314641   69580 cri.go:89] found id: ""
	I0501 03:44:01.314667   69580 logs.go:276] 0 containers: []
	W0501 03:44:01.314675   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:01.314681   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:01.314739   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:01.361237   69580 cri.go:89] found id: ""
	I0501 03:44:01.361272   69580 logs.go:276] 0 containers: []
	W0501 03:44:01.361288   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:01.361294   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:01.361348   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:01.400650   69580 cri.go:89] found id: ""
	I0501 03:44:01.400676   69580 logs.go:276] 0 containers: []
	W0501 03:44:01.400684   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:01.400690   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:01.400739   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:01.447998   69580 cri.go:89] found id: ""
	I0501 03:44:01.448023   69580 logs.go:276] 0 containers: []
	W0501 03:44:01.448032   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:01.448040   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:01.448101   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:59.845726   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:02.345826   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:00.155851   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:02.155998   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:02.010828   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:04.014801   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:01.492172   69580 cri.go:89] found id: ""
	I0501 03:44:01.492199   69580 logs.go:276] 0 containers: []
	W0501 03:44:01.492207   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:01.492213   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:01.492265   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:01.538589   69580 cri.go:89] found id: ""
	I0501 03:44:01.538617   69580 logs.go:276] 0 containers: []
	W0501 03:44:01.538628   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:01.538638   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:01.538653   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:01.592914   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:01.592952   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:01.611706   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:01.611754   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:01.693469   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:01.693488   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:01.693501   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:01.774433   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:01.774470   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:04.321593   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:04.335428   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:04.335497   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:04.378479   69580 cri.go:89] found id: ""
	I0501 03:44:04.378505   69580 logs.go:276] 0 containers: []
	W0501 03:44:04.378516   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:04.378525   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:04.378585   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:04.420025   69580 cri.go:89] found id: ""
	I0501 03:44:04.420050   69580 logs.go:276] 0 containers: []
	W0501 03:44:04.420059   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:04.420065   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:04.420113   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:04.464009   69580 cri.go:89] found id: ""
	I0501 03:44:04.464039   69580 logs.go:276] 0 containers: []
	W0501 03:44:04.464047   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:04.464052   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:04.464113   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:04.502039   69580 cri.go:89] found id: ""
	I0501 03:44:04.502069   69580 logs.go:276] 0 containers: []
	W0501 03:44:04.502081   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:04.502088   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:04.502150   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:04.544566   69580 cri.go:89] found id: ""
	I0501 03:44:04.544593   69580 logs.go:276] 0 containers: []
	W0501 03:44:04.544605   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:04.544614   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:04.544672   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:04.584067   69580 cri.go:89] found id: ""
	I0501 03:44:04.584095   69580 logs.go:276] 0 containers: []
	W0501 03:44:04.584104   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:04.584112   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:04.584174   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:04.625165   69580 cri.go:89] found id: ""
	I0501 03:44:04.625197   69580 logs.go:276] 0 containers: []
	W0501 03:44:04.625210   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:04.625219   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:04.625292   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:04.667796   69580 cri.go:89] found id: ""
	I0501 03:44:04.667830   69580 logs.go:276] 0 containers: []
	W0501 03:44:04.667839   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:04.667850   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:04.667868   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:04.722269   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:04.722303   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:04.738232   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:04.738265   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:04.821551   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:04.821578   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:04.821595   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:04.902575   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:04.902618   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:04.346197   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:06.845552   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:04.157333   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:06.157366   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:08.656837   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:06.513484   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:09.012004   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:07.449793   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:07.466348   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:07.466450   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:07.510325   69580 cri.go:89] found id: ""
	I0501 03:44:07.510352   69580 logs.go:276] 0 containers: []
	W0501 03:44:07.510363   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:07.510371   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:07.510450   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:07.550722   69580 cri.go:89] found id: ""
	I0501 03:44:07.550748   69580 logs.go:276] 0 containers: []
	W0501 03:44:07.550756   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:07.550762   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:07.550810   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:07.589592   69580 cri.go:89] found id: ""
	I0501 03:44:07.589617   69580 logs.go:276] 0 containers: []
	W0501 03:44:07.589625   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:07.589630   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:07.589678   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:07.631628   69580 cri.go:89] found id: ""
	I0501 03:44:07.631655   69580 logs.go:276] 0 containers: []
	W0501 03:44:07.631662   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:07.631668   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:07.631726   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:07.674709   69580 cri.go:89] found id: ""
	I0501 03:44:07.674743   69580 logs.go:276] 0 containers: []
	W0501 03:44:07.674753   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:07.674760   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:07.674811   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:07.714700   69580 cri.go:89] found id: ""
	I0501 03:44:07.714767   69580 logs.go:276] 0 containers: []
	W0501 03:44:07.714788   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:07.714797   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:07.714856   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:07.753440   69580 cri.go:89] found id: ""
	I0501 03:44:07.753467   69580 logs.go:276] 0 containers: []
	W0501 03:44:07.753478   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:07.753485   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:07.753549   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:07.791579   69580 cri.go:89] found id: ""
	I0501 03:44:07.791606   69580 logs.go:276] 0 containers: []
	W0501 03:44:07.791617   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:07.791628   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:07.791644   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:07.845568   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:07.845606   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:07.861861   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:07.861885   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:07.941719   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:07.941743   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:07.941757   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:08.022684   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:08.022720   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
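The loop above is minikube's control-plane diagnosis pass on a node whose apiserver is not running: it probes for each expected component's container via crictl, then gathers kubelet, dmesg, CRI-O, and node-describe output, and the same cycle repeats below with later timestamps. A minimal sketch of the equivalent manual checks inside the guest, assuming the same tools and paths that appear in the log (crictl, journalctl, and the bundled kubectl under /var/lib/minikube/binaries):

  # Reproduce the checks logged above by hand (run inside the minikube VM).
  sudo crictl ps -a --quiet --name=kube-apiserver      # empty output => no apiserver container
  sudo journalctl -u kubelet -n 400                    # recent kubelet logs
  sudo journalctl -u crio -n 400                       # recent CRI-O logs
  sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
    --kubeconfig=/var/lib/minikube/kubeconfig          # refused on localhost:8443 while the apiserver is down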
	I0501 03:44:10.575417   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:10.593408   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:10.593468   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:10.641322   69580 cri.go:89] found id: ""
	I0501 03:44:10.641357   69580 logs.go:276] 0 containers: []
	W0501 03:44:10.641370   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:10.641378   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:10.641442   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:10.686330   69580 cri.go:89] found id: ""
	I0501 03:44:10.686358   69580 logs.go:276] 0 containers: []
	W0501 03:44:10.686368   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:10.686377   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:10.686458   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:10.734414   69580 cri.go:89] found id: ""
	I0501 03:44:10.734444   69580 logs.go:276] 0 containers: []
	W0501 03:44:10.734456   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:10.734463   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:10.734527   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:10.776063   69580 cri.go:89] found id: ""
	I0501 03:44:10.776095   69580 logs.go:276] 0 containers: []
	W0501 03:44:10.776106   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:10.776113   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:10.776176   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:10.819035   69580 cri.go:89] found id: ""
	I0501 03:44:10.819065   69580 logs.go:276] 0 containers: []
	W0501 03:44:10.819076   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:10.819084   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:10.819150   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:10.868912   69580 cri.go:89] found id: ""
	I0501 03:44:10.868938   69580 logs.go:276] 0 containers: []
	W0501 03:44:10.868946   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:10.868952   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:10.869000   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:10.910517   69580 cri.go:89] found id: ""
	I0501 03:44:10.910549   69580 logs.go:276] 0 containers: []
	W0501 03:44:10.910572   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:10.910581   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:10.910678   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:10.949267   69580 cri.go:89] found id: ""
	I0501 03:44:10.949297   69580 logs.go:276] 0 containers: []
	W0501 03:44:10.949306   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:10.949314   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:10.949327   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:11.004731   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:11.004779   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:11.022146   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:11.022174   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:11.108992   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:11.109020   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:11.109035   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:11.192571   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:11.192605   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:08.846431   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:11.346295   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:10.657938   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:13.156112   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:11.012040   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:13.512166   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:15.512232   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:13.739336   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:13.758622   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:13.758721   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:13.805395   69580 cri.go:89] found id: ""
	I0501 03:44:13.805423   69580 logs.go:276] 0 containers: []
	W0501 03:44:13.805434   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:13.805442   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:13.805523   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:13.847372   69580 cri.go:89] found id: ""
	I0501 03:44:13.847400   69580 logs.go:276] 0 containers: []
	W0501 03:44:13.847409   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:13.847417   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:13.847474   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:13.891842   69580 cri.go:89] found id: ""
	I0501 03:44:13.891867   69580 logs.go:276] 0 containers: []
	W0501 03:44:13.891874   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:13.891880   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:13.891935   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:13.933382   69580 cri.go:89] found id: ""
	I0501 03:44:13.933411   69580 logs.go:276] 0 containers: []
	W0501 03:44:13.933422   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:13.933430   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:13.933490   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:13.973955   69580 cri.go:89] found id: ""
	I0501 03:44:13.973980   69580 logs.go:276] 0 containers: []
	W0501 03:44:13.973991   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:13.974000   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:13.974053   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:14.015202   69580 cri.go:89] found id: ""
	I0501 03:44:14.015226   69580 logs.go:276] 0 containers: []
	W0501 03:44:14.015234   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:14.015240   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:14.015287   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:14.057441   69580 cri.go:89] found id: ""
	I0501 03:44:14.057471   69580 logs.go:276] 0 containers: []
	W0501 03:44:14.057483   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:14.057491   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:14.057551   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:14.099932   69580 cri.go:89] found id: ""
	I0501 03:44:14.099961   69580 logs.go:276] 0 containers: []
	W0501 03:44:14.099972   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:14.099983   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:14.099996   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:14.160386   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:14.160418   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:14.176880   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:14.176908   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:14.272137   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:14.272155   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:14.272168   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:14.366523   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:14.366571   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:13.349770   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:15.351345   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:17.845182   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:15.156569   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:17.157994   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:17.512836   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:20.012034   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:16.914394   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:16.930976   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:16.931038   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:16.977265   69580 cri.go:89] found id: ""
	I0501 03:44:16.977294   69580 logs.go:276] 0 containers: []
	W0501 03:44:16.977303   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:16.977309   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:16.977363   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:17.015656   69580 cri.go:89] found id: ""
	I0501 03:44:17.015686   69580 logs.go:276] 0 containers: []
	W0501 03:44:17.015694   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:17.015700   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:17.015768   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:17.056079   69580 cri.go:89] found id: ""
	I0501 03:44:17.056111   69580 logs.go:276] 0 containers: []
	W0501 03:44:17.056121   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:17.056129   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:17.056188   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:17.099504   69580 cri.go:89] found id: ""
	I0501 03:44:17.099528   69580 logs.go:276] 0 containers: []
	W0501 03:44:17.099536   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:17.099542   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:17.099606   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:17.141371   69580 cri.go:89] found id: ""
	I0501 03:44:17.141401   69580 logs.go:276] 0 containers: []
	W0501 03:44:17.141410   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:17.141417   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:17.141484   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:17.184143   69580 cri.go:89] found id: ""
	I0501 03:44:17.184167   69580 logs.go:276] 0 containers: []
	W0501 03:44:17.184179   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:17.184193   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:17.184246   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:17.224012   69580 cri.go:89] found id: ""
	I0501 03:44:17.224049   69580 logs.go:276] 0 containers: []
	W0501 03:44:17.224061   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:17.224069   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:17.224136   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:17.268185   69580 cri.go:89] found id: ""
	I0501 03:44:17.268216   69580 logs.go:276] 0 containers: []
	W0501 03:44:17.268224   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:17.268233   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:17.268248   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:17.351342   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:17.351392   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:17.398658   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:17.398689   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:17.452476   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:17.452517   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:17.468734   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:17.468771   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:17.558971   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:20.059342   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:20.075707   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:20.075791   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:20.114436   69580 cri.go:89] found id: ""
	I0501 03:44:20.114472   69580 logs.go:276] 0 containers: []
	W0501 03:44:20.114486   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:20.114495   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:20.114562   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:20.155607   69580 cri.go:89] found id: ""
	I0501 03:44:20.155638   69580 logs.go:276] 0 containers: []
	W0501 03:44:20.155649   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:20.155657   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:20.155715   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:20.198188   69580 cri.go:89] found id: ""
	I0501 03:44:20.198218   69580 logs.go:276] 0 containers: []
	W0501 03:44:20.198227   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:20.198234   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:20.198291   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:20.237183   69580 cri.go:89] found id: ""
	I0501 03:44:20.237213   69580 logs.go:276] 0 containers: []
	W0501 03:44:20.237223   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:20.237232   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:20.237286   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:20.279289   69580 cri.go:89] found id: ""
	I0501 03:44:20.279320   69580 logs.go:276] 0 containers: []
	W0501 03:44:20.279332   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:20.279341   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:20.279409   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:20.334066   69580 cri.go:89] found id: ""
	I0501 03:44:20.334091   69580 logs.go:276] 0 containers: []
	W0501 03:44:20.334112   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:20.334121   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:20.334181   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:20.385740   69580 cri.go:89] found id: ""
	I0501 03:44:20.385775   69580 logs.go:276] 0 containers: []
	W0501 03:44:20.385785   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:20.385796   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:20.385860   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:20.425151   69580 cri.go:89] found id: ""
	I0501 03:44:20.425176   69580 logs.go:276] 0 containers: []
	W0501 03:44:20.425183   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:20.425193   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:20.425214   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:20.472563   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:20.472605   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:20.526589   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:20.526626   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:20.541978   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:20.542013   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:20.619513   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:20.619540   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:20.619555   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:19.846208   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:22.345166   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:19.658986   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:22.156821   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:23.159267   68864 pod_ready.go:81] duration metric: took 4m0.009511824s for pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace to be "Ready" ...
	E0501 03:44:23.159296   68864 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0501 03:44:23.159308   68864 pod_ready.go:38] duration metric: took 4m7.423794373s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
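The four-minute wait that just expired polls each target pod's Ready condition through the apiserver. A hypothetical way to inspect the same condition by hand, assuming the conventional k8s-app=metrics-server label and a kubeconfig already pointed at this cluster:

  # Hypothetical manual check of the condition pod_ready.go is polling.
  kubectl -n kube-system get pod -l k8s-app=metrics-server \
    -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'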
	I0501 03:44:23.159327   68864 api_server.go:52] waiting for apiserver process to appear ...
	I0501 03:44:23.159362   68864 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:23.159422   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:23.225563   68864 cri.go:89] found id: "a96815c49ac45b34c359d7171b30be5c25cee80fae08d947fc004d479693df00"
	I0501 03:44:23.225590   68864 cri.go:89] found id: ""
	I0501 03:44:23.225607   68864 logs.go:276] 1 containers: [a96815c49ac45b34c359d7171b30be5c25cee80fae08d947fc004d479693df00]
	I0501 03:44:23.225663   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:23.231542   68864 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:23.231598   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:23.290847   68864 cri.go:89] found id: "d109948ffbbddd2e38484d83d2260690afce3c51e16abff0fe8713e844bfdcb3"
	I0501 03:44:23.290871   68864 cri.go:89] found id: ""
	I0501 03:44:23.290878   68864 logs.go:276] 1 containers: [d109948ffbbddd2e38484d83d2260690afce3c51e16abff0fe8713e844bfdcb3]
	I0501 03:44:23.290926   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:23.295697   68864 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:23.295755   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:23.348625   68864 cri.go:89] found id: "e3c74de489af3d78a4b0cf620ed16e1fba7c9ce1c7021a15c5b4bd241b7a934a"
	I0501 03:44:23.348652   68864 cri.go:89] found id: ""
	I0501 03:44:23.348661   68864 logs.go:276] 1 containers: [e3c74de489af3d78a4b0cf620ed16e1fba7c9ce1c7021a15c5b4bd241b7a934a]
	I0501 03:44:23.348717   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:23.355801   68864 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:23.355896   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:23.409428   68864 cri.go:89] found id: "1813f35574f4f0398e7d482fe77c5d7f8490726666a277035bda0a5cff80af6e"
	I0501 03:44:23.409461   68864 cri.go:89] found id: ""
	I0501 03:44:23.409471   68864 logs.go:276] 1 containers: [1813f35574f4f0398e7d482fe77c5d7f8490726666a277035bda0a5cff80af6e]
	I0501 03:44:23.409530   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:23.416480   68864 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:23.416560   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:23.466642   68864 cri.go:89] found id: "94afdb03c382269209154b2e73edbf200662e89960c248ba775a0ef2b8fbe6b1"
	I0501 03:44:23.466672   68864 cri.go:89] found id: ""
	I0501 03:44:23.466681   68864 logs.go:276] 1 containers: [94afdb03c382269209154b2e73edbf200662e89960c248ba775a0ef2b8fbe6b1]
	I0501 03:44:23.466739   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:23.472831   68864 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:23.472906   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:23.524815   68864 cri.go:89] found id: "7e7158f7ff3922e97ba5ee97108acc1d0ba0350dff2f9aa56c2f71519108791c"
	I0501 03:44:23.524841   68864 cri.go:89] found id: ""
	I0501 03:44:23.524850   68864 logs.go:276] 1 containers: [7e7158f7ff3922e97ba5ee97108acc1d0ba0350dff2f9aa56c2f71519108791c]
	I0501 03:44:23.524902   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:23.532092   68864 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:23.532161   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:23.577262   68864 cri.go:89] found id: ""
	I0501 03:44:23.577292   68864 logs.go:276] 0 containers: []
	W0501 03:44:23.577305   68864 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:23.577312   68864 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0501 03:44:23.577374   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0501 03:44:23.623597   68864 cri.go:89] found id: "f9a8d2f0f9453969b410fb7f880f6cf4c7e6e2f3c17a687583906efe4181dd17"
	I0501 03:44:23.623626   68864 cri.go:89] found id: "aaae36261c5ce1accf3bfb5993be5a256a037a640d5f6f30de8384900dda06b2"
	I0501 03:44:23.623632   68864 cri.go:89] found id: ""
	I0501 03:44:23.623640   68864 logs.go:276] 2 containers: [f9a8d2f0f9453969b410fb7f880f6cf4c7e6e2f3c17a687583906efe4181dd17 aaae36261c5ce1accf3bfb5993be5a256a037a640d5f6f30de8384900dda06b2]
	I0501 03:44:23.623702   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:23.630189   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:23.635673   68864 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:23.635694   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:22.012084   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:24.511736   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:23.203031   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:23.219964   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:23.220043   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:23.264287   69580 cri.go:89] found id: ""
	I0501 03:44:23.264315   69580 logs.go:276] 0 containers: []
	W0501 03:44:23.264323   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:23.264328   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:23.264395   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:23.310337   69580 cri.go:89] found id: ""
	I0501 03:44:23.310366   69580 logs.go:276] 0 containers: []
	W0501 03:44:23.310375   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:23.310383   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:23.310461   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:23.364550   69580 cri.go:89] found id: ""
	I0501 03:44:23.364577   69580 logs.go:276] 0 containers: []
	W0501 03:44:23.364588   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:23.364596   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:23.364676   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:23.412620   69580 cri.go:89] found id: ""
	I0501 03:44:23.412647   69580 logs.go:276] 0 containers: []
	W0501 03:44:23.412657   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:23.412665   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:23.412726   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:23.461447   69580 cri.go:89] found id: ""
	I0501 03:44:23.461477   69580 logs.go:276] 0 containers: []
	W0501 03:44:23.461488   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:23.461496   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:23.461558   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:23.514868   69580 cri.go:89] found id: ""
	I0501 03:44:23.514896   69580 logs.go:276] 0 containers: []
	W0501 03:44:23.514915   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:23.514924   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:23.514984   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:23.559171   69580 cri.go:89] found id: ""
	I0501 03:44:23.559200   69580 logs.go:276] 0 containers: []
	W0501 03:44:23.559210   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:23.559218   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:23.559284   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:23.601713   69580 cri.go:89] found id: ""
	I0501 03:44:23.601740   69580 logs.go:276] 0 containers: []
	W0501 03:44:23.601749   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:23.601760   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:23.601772   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:23.656147   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:23.656187   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:23.673507   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:23.673545   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:23.771824   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:23.771846   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:23.771861   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:23.861128   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:23.861161   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:26.406507   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:26.421836   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:26.421894   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:26.462758   69580 cri.go:89] found id: ""
	I0501 03:44:26.462785   69580 logs.go:276] 0 containers: []
	W0501 03:44:26.462796   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:26.462804   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:26.462860   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:24.346534   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:26.847370   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:24.220047   68864 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:24.220087   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:24.279596   68864 logs.go:123] Gathering logs for kube-apiserver [a96815c49ac45b34c359d7171b30be5c25cee80fae08d947fc004d479693df00] ...
	I0501 03:44:24.279633   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a96815c49ac45b34c359d7171b30be5c25cee80fae08d947fc004d479693df00"
	I0501 03:44:24.336092   68864 logs.go:123] Gathering logs for etcd [d109948ffbbddd2e38484d83d2260690afce3c51e16abff0fe8713e844bfdcb3] ...
	I0501 03:44:24.336128   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d109948ffbbddd2e38484d83d2260690afce3c51e16abff0fe8713e844bfdcb3"
	I0501 03:44:24.396117   68864 logs.go:123] Gathering logs for storage-provisioner [f9a8d2f0f9453969b410fb7f880f6cf4c7e6e2f3c17a687583906efe4181dd17] ...
	I0501 03:44:24.396145   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9a8d2f0f9453969b410fb7f880f6cf4c7e6e2f3c17a687583906efe4181dd17"
	I0501 03:44:24.443608   68864 logs.go:123] Gathering logs for storage-provisioner [aaae36261c5ce1accf3bfb5993be5a256a037a640d5f6f30de8384900dda06b2] ...
	I0501 03:44:24.443644   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aaae36261c5ce1accf3bfb5993be5a256a037a640d5f6f30de8384900dda06b2"
	I0501 03:44:24.499533   68864 logs.go:123] Gathering logs for kube-controller-manager [7e7158f7ff3922e97ba5ee97108acc1d0ba0350dff2f9aa56c2f71519108791c] ...
	I0501 03:44:24.499560   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e7158f7ff3922e97ba5ee97108acc1d0ba0350dff2f9aa56c2f71519108791c"
	I0501 03:44:24.562990   68864 logs.go:123] Gathering logs for container status ...
	I0501 03:44:24.563028   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:24.622630   68864 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:24.622671   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:24.641106   68864 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:24.641145   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0501 03:44:24.781170   68864 logs.go:123] Gathering logs for coredns [e3c74de489af3d78a4b0cf620ed16e1fba7c9ce1c7021a15c5b4bd241b7a934a] ...
	I0501 03:44:24.781203   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3c74de489af3d78a4b0cf620ed16e1fba7c9ce1c7021a15c5b4bd241b7a934a"
	I0501 03:44:24.824616   68864 logs.go:123] Gathering logs for kube-scheduler [1813f35574f4f0398e7d482fe77c5d7f8490726666a277035bda0a5cff80af6e] ...
	I0501 03:44:24.824643   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1813f35574f4f0398e7d482fe77c5d7f8490726666a277035bda0a5cff80af6e"
	I0501 03:44:24.871956   68864 logs.go:123] Gathering logs for kube-proxy [94afdb03c382269209154b2e73edbf200662e89960c248ba775a0ef2b8fbe6b1] ...
	I0501 03:44:24.871992   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94afdb03c382269209154b2e73edbf200662e89960c248ba775a0ef2b8fbe6b1"
	I0501 03:44:27.424582   68864 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:27.447490   68864 api_server.go:72] duration metric: took 4m19.445111196s to wait for apiserver process to appear ...
	I0501 03:44:27.447522   68864 api_server.go:88] waiting for apiserver healthz status ...
	I0501 03:44:27.447555   68864 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:27.447601   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:27.494412   68864 cri.go:89] found id: "a96815c49ac45b34c359d7171b30be5c25cee80fae08d947fc004d479693df00"
	I0501 03:44:27.494437   68864 cri.go:89] found id: ""
	I0501 03:44:27.494445   68864 logs.go:276] 1 containers: [a96815c49ac45b34c359d7171b30be5c25cee80fae08d947fc004d479693df00]
	I0501 03:44:27.494490   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:27.503782   68864 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:27.503853   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:27.550991   68864 cri.go:89] found id: "d109948ffbbddd2e38484d83d2260690afce3c51e16abff0fe8713e844bfdcb3"
	I0501 03:44:27.551018   68864 cri.go:89] found id: ""
	I0501 03:44:27.551026   68864 logs.go:276] 1 containers: [d109948ffbbddd2e38484d83d2260690afce3c51e16abff0fe8713e844bfdcb3]
	I0501 03:44:27.551073   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:27.556919   68864 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:27.556983   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:27.606005   68864 cri.go:89] found id: "e3c74de489af3d78a4b0cf620ed16e1fba7c9ce1c7021a15c5b4bd241b7a934a"
	I0501 03:44:27.606033   68864 cri.go:89] found id: ""
	I0501 03:44:27.606042   68864 logs.go:276] 1 containers: [e3c74de489af3d78a4b0cf620ed16e1fba7c9ce1c7021a15c5b4bd241b7a934a]
	I0501 03:44:27.606100   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:27.611639   68864 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:27.611706   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:27.661151   68864 cri.go:89] found id: "1813f35574f4f0398e7d482fe77c5d7f8490726666a277035bda0a5cff80af6e"
	I0501 03:44:27.661172   68864 cri.go:89] found id: ""
	I0501 03:44:27.661179   68864 logs.go:276] 1 containers: [1813f35574f4f0398e7d482fe77c5d7f8490726666a277035bda0a5cff80af6e]
	I0501 03:44:27.661278   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:27.666443   68864 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:27.666514   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:27.712387   68864 cri.go:89] found id: "94afdb03c382269209154b2e73edbf200662e89960c248ba775a0ef2b8fbe6b1"
	I0501 03:44:27.712416   68864 cri.go:89] found id: ""
	I0501 03:44:27.712424   68864 logs.go:276] 1 containers: [94afdb03c382269209154b2e73edbf200662e89960c248ba775a0ef2b8fbe6b1]
	I0501 03:44:27.712480   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:27.717280   68864 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:27.717342   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:27.767124   68864 cri.go:89] found id: "7e7158f7ff3922e97ba5ee97108acc1d0ba0350dff2f9aa56c2f71519108791c"
	I0501 03:44:27.767154   68864 cri.go:89] found id: ""
	I0501 03:44:27.767163   68864 logs.go:276] 1 containers: [7e7158f7ff3922e97ba5ee97108acc1d0ba0350dff2f9aa56c2f71519108791c]
	I0501 03:44:27.767215   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:27.773112   68864 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:27.773183   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:27.829966   68864 cri.go:89] found id: ""
	I0501 03:44:27.829991   68864 logs.go:276] 0 containers: []
	W0501 03:44:27.829999   68864 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:27.830005   68864 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0501 03:44:27.830056   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0501 03:44:27.873391   68864 cri.go:89] found id: "f9a8d2f0f9453969b410fb7f880f6cf4c7e6e2f3c17a687583906efe4181dd17"
	I0501 03:44:27.873415   68864 cri.go:89] found id: "aaae36261c5ce1accf3bfb5993be5a256a037a640d5f6f30de8384900dda06b2"
	I0501 03:44:27.873419   68864 cri.go:89] found id: ""
	I0501 03:44:27.873426   68864 logs.go:276] 2 containers: [f9a8d2f0f9453969b410fb7f880f6cf4c7e6e2f3c17a687583906efe4181dd17 aaae36261c5ce1accf3bfb5993be5a256a037a640d5f6f30de8384900dda06b2]
	I0501 03:44:27.873473   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:27.878537   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:27.883518   68864 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:27.883543   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0501 03:44:28.012337   68864 logs.go:123] Gathering logs for kube-apiserver [a96815c49ac45b34c359d7171b30be5c25cee80fae08d947fc004d479693df00] ...
	I0501 03:44:28.012377   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a96815c49ac45b34c359d7171b30be5c25cee80fae08d947fc004d479693df00"
	I0501 03:44:28.063686   68864 logs.go:123] Gathering logs for etcd [d109948ffbbddd2e38484d83d2260690afce3c51e16abff0fe8713e844bfdcb3] ...
	I0501 03:44:28.063715   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d109948ffbbddd2e38484d83d2260690afce3c51e16abff0fe8713e844bfdcb3"
	I0501 03:44:28.116507   68864 logs.go:123] Gathering logs for storage-provisioner [aaae36261c5ce1accf3bfb5993be5a256a037a640d5f6f30de8384900dda06b2] ...
	I0501 03:44:28.116535   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aaae36261c5ce1accf3bfb5993be5a256a037a640d5f6f30de8384900dda06b2"
	I0501 03:44:28.165593   68864 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:28.165636   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:28.595278   68864 logs.go:123] Gathering logs for container status ...
	I0501 03:44:28.595333   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:28.645790   68864 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:28.645836   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:28.662952   68864 logs.go:123] Gathering logs for coredns [e3c74de489af3d78a4b0cf620ed16e1fba7c9ce1c7021a15c5b4bd241b7a934a] ...
	I0501 03:44:28.662984   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3c74de489af3d78a4b0cf620ed16e1fba7c9ce1c7021a15c5b4bd241b7a934a"
	I0501 03:44:28.710273   68864 logs.go:123] Gathering logs for kube-scheduler [1813f35574f4f0398e7d482fe77c5d7f8490726666a277035bda0a5cff80af6e] ...
	I0501 03:44:28.710302   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1813f35574f4f0398e7d482fe77c5d7f8490726666a277035bda0a5cff80af6e"
	I0501 03:44:28.761838   68864 logs.go:123] Gathering logs for kube-proxy [94afdb03c382269209154b2e73edbf200662e89960c248ba775a0ef2b8fbe6b1] ...
	I0501 03:44:28.761872   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94afdb03c382269209154b2e73edbf200662e89960c248ba775a0ef2b8fbe6b1"
	I0501 03:44:28.810775   68864 logs.go:123] Gathering logs for kube-controller-manager [7e7158f7ff3922e97ba5ee97108acc1d0ba0350dff2f9aa56c2f71519108791c] ...
	I0501 03:44:28.810808   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e7158f7ff3922e97ba5ee97108acc1d0ba0350dff2f9aa56c2f71519108791c"
	I0501 03:44:27.012119   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:29.510651   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:26.505067   69580 cri.go:89] found id: ""
	I0501 03:44:26.505098   69580 logs.go:276] 0 containers: []
	W0501 03:44:26.505110   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:26.505121   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:26.505182   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:26.544672   69580 cri.go:89] found id: ""
	I0501 03:44:26.544699   69580 logs.go:276] 0 containers: []
	W0501 03:44:26.544711   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:26.544717   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:26.544764   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:26.590579   69580 cri.go:89] found id: ""
	I0501 03:44:26.590605   69580 logs.go:276] 0 containers: []
	W0501 03:44:26.590614   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:26.590620   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:26.590670   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:26.637887   69580 cri.go:89] found id: ""
	I0501 03:44:26.637920   69580 logs.go:276] 0 containers: []
	W0501 03:44:26.637930   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:26.637939   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:26.637998   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:26.686778   69580 cri.go:89] found id: ""
	I0501 03:44:26.686807   69580 logs.go:276] 0 containers: []
	W0501 03:44:26.686815   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:26.686821   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:26.686882   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:26.729020   69580 cri.go:89] found id: ""
	I0501 03:44:26.729045   69580 logs.go:276] 0 containers: []
	W0501 03:44:26.729054   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:26.729060   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:26.729124   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:26.769022   69580 cri.go:89] found id: ""
	I0501 03:44:26.769043   69580 logs.go:276] 0 containers: []
	W0501 03:44:26.769051   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:26.769059   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:26.769073   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:26.854985   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:26.855011   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:26.855024   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:26.937031   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:26.937063   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:27.006267   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:27.006301   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:27.080503   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:27.080545   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:29.598176   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:29.614465   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:29.614523   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:29.662384   69580 cri.go:89] found id: ""
	I0501 03:44:29.662421   69580 logs.go:276] 0 containers: []
	W0501 03:44:29.662433   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:29.662439   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:29.662483   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:29.705262   69580 cri.go:89] found id: ""
	I0501 03:44:29.705286   69580 logs.go:276] 0 containers: []
	W0501 03:44:29.705295   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:29.705300   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:29.705345   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:29.752308   69580 cri.go:89] found id: ""
	I0501 03:44:29.752335   69580 logs.go:276] 0 containers: []
	W0501 03:44:29.752343   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:29.752349   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:29.752403   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:29.802702   69580 cri.go:89] found id: ""
	I0501 03:44:29.802729   69580 logs.go:276] 0 containers: []
	W0501 03:44:29.802741   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:29.802749   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:29.802814   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:29.854112   69580 cri.go:89] found id: ""
	I0501 03:44:29.854138   69580 logs.go:276] 0 containers: []
	W0501 03:44:29.854149   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:29.854157   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:29.854217   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:29.898447   69580 cri.go:89] found id: ""
	I0501 03:44:29.898470   69580 logs.go:276] 0 containers: []
	W0501 03:44:29.898480   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:29.898486   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:29.898545   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:29.938832   69580 cri.go:89] found id: ""
	I0501 03:44:29.938862   69580 logs.go:276] 0 containers: []
	W0501 03:44:29.938873   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:29.938881   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:29.938948   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:29.987697   69580 cri.go:89] found id: ""
	I0501 03:44:29.987721   69580 logs.go:276] 0 containers: []
	W0501 03:44:29.987730   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:29.987738   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:29.987753   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:30.042446   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:30.042473   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:30.095358   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:30.095389   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:30.110745   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:30.110782   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:30.190923   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:30.190951   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:30.190965   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:29.346013   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:31.347513   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:28.868838   68864 logs.go:123] Gathering logs for storage-provisioner [f9a8d2f0f9453969b410fb7f880f6cf4c7e6e2f3c17a687583906efe4181dd17] ...
	I0501 03:44:28.868876   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9a8d2f0f9453969b410fb7f880f6cf4c7e6e2f3c17a687583906efe4181dd17"
	I0501 03:44:28.912436   68864 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:28.912474   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:31.469456   68864 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I0501 03:44:31.478498   68864 api_server.go:279] https://192.168.50.218:8443/healthz returned 200:
	ok
	I0501 03:44:31.479838   68864 api_server.go:141] control plane version: v1.30.0
	I0501 03:44:31.479861   68864 api_server.go:131] duration metric: took 4.032331979s to wait for apiserver health ...
	I0501 03:44:31.479869   68864 system_pods.go:43] waiting for kube-system pods to appear ...
	I0501 03:44:31.479889   68864 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:31.479930   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:31.531068   68864 cri.go:89] found id: "a96815c49ac45b34c359d7171b30be5c25cee80fae08d947fc004d479693df00"
	I0501 03:44:31.531088   68864 cri.go:89] found id: ""
	I0501 03:44:31.531095   68864 logs.go:276] 1 containers: [a96815c49ac45b34c359d7171b30be5c25cee80fae08d947fc004d479693df00]
	I0501 03:44:31.531137   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:31.536216   68864 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:31.536292   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:31.584155   68864 cri.go:89] found id: "d109948ffbbddd2e38484d83d2260690afce3c51e16abff0fe8713e844bfdcb3"
	I0501 03:44:31.584183   68864 cri.go:89] found id: ""
	I0501 03:44:31.584194   68864 logs.go:276] 1 containers: [d109948ffbbddd2e38484d83d2260690afce3c51e16abff0fe8713e844bfdcb3]
	I0501 03:44:31.584250   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:31.589466   68864 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:31.589528   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:31.639449   68864 cri.go:89] found id: "e3c74de489af3d78a4b0cf620ed16e1fba7c9ce1c7021a15c5b4bd241b7a934a"
	I0501 03:44:31.639476   68864 cri.go:89] found id: ""
	I0501 03:44:31.639484   68864 logs.go:276] 1 containers: [e3c74de489af3d78a4b0cf620ed16e1fba7c9ce1c7021a15c5b4bd241b7a934a]
	I0501 03:44:31.639535   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:31.644684   68864 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:31.644750   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:31.702095   68864 cri.go:89] found id: "1813f35574f4f0398e7d482fe77c5d7f8490726666a277035bda0a5cff80af6e"
	I0501 03:44:31.702119   68864 cri.go:89] found id: ""
	I0501 03:44:31.702125   68864 logs.go:276] 1 containers: [1813f35574f4f0398e7d482fe77c5d7f8490726666a277035bda0a5cff80af6e]
	I0501 03:44:31.702173   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:31.707443   68864 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:31.707508   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:31.758582   68864 cri.go:89] found id: "94afdb03c382269209154b2e73edbf200662e89960c248ba775a0ef2b8fbe6b1"
	I0501 03:44:31.758603   68864 cri.go:89] found id: ""
	I0501 03:44:31.758610   68864 logs.go:276] 1 containers: [94afdb03c382269209154b2e73edbf200662e89960c248ba775a0ef2b8fbe6b1]
	I0501 03:44:31.758656   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:31.764261   68864 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:31.764325   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:31.813385   68864 cri.go:89] found id: "7e7158f7ff3922e97ba5ee97108acc1d0ba0350dff2f9aa56c2f71519108791c"
	I0501 03:44:31.813407   68864 cri.go:89] found id: ""
	I0501 03:44:31.813414   68864 logs.go:276] 1 containers: [7e7158f7ff3922e97ba5ee97108acc1d0ba0350dff2f9aa56c2f71519108791c]
	I0501 03:44:31.813457   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:31.818289   68864 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:31.818348   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:31.862788   68864 cri.go:89] found id: ""
	I0501 03:44:31.862814   68864 logs.go:276] 0 containers: []
	W0501 03:44:31.862824   68864 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:31.862832   68864 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0501 03:44:31.862890   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0501 03:44:31.912261   68864 cri.go:89] found id: "f9a8d2f0f9453969b410fb7f880f6cf4c7e6e2f3c17a687583906efe4181dd17"
	I0501 03:44:31.912284   68864 cri.go:89] found id: "aaae36261c5ce1accf3bfb5993be5a256a037a640d5f6f30de8384900dda06b2"
	I0501 03:44:31.912298   68864 cri.go:89] found id: ""
	I0501 03:44:31.912312   68864 logs.go:276] 2 containers: [f9a8d2f0f9453969b410fb7f880f6cf4c7e6e2f3c17a687583906efe4181dd17 aaae36261c5ce1accf3bfb5993be5a256a037a640d5f6f30de8384900dda06b2]
	I0501 03:44:31.912367   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:31.917696   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:31.922432   68864 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:31.922450   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:32.332797   68864 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:32.332836   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:32.396177   68864 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:32.396214   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0501 03:44:32.511915   68864 logs.go:123] Gathering logs for etcd [d109948ffbbddd2e38484d83d2260690afce3c51e16abff0fe8713e844bfdcb3] ...
	I0501 03:44:32.511953   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d109948ffbbddd2e38484d83d2260690afce3c51e16abff0fe8713e844bfdcb3"
	I0501 03:44:32.564447   68864 logs.go:123] Gathering logs for kube-proxy [94afdb03c382269209154b2e73edbf200662e89960c248ba775a0ef2b8fbe6b1] ...
	I0501 03:44:32.564475   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94afdb03c382269209154b2e73edbf200662e89960c248ba775a0ef2b8fbe6b1"
	I0501 03:44:32.610196   68864 logs.go:123] Gathering logs for kube-controller-manager [7e7158f7ff3922e97ba5ee97108acc1d0ba0350dff2f9aa56c2f71519108791c] ...
	I0501 03:44:32.610235   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e7158f7ff3922e97ba5ee97108acc1d0ba0350dff2f9aa56c2f71519108791c"
	I0501 03:44:32.665262   68864 logs.go:123] Gathering logs for storage-provisioner [aaae36261c5ce1accf3bfb5993be5a256a037a640d5f6f30de8384900dda06b2] ...
	I0501 03:44:32.665314   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aaae36261c5ce1accf3bfb5993be5a256a037a640d5f6f30de8384900dda06b2"
	I0501 03:44:32.707346   68864 logs.go:123] Gathering logs for container status ...
	I0501 03:44:32.707377   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:32.757693   68864 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:32.757726   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:32.775720   68864 logs.go:123] Gathering logs for kube-apiserver [a96815c49ac45b34c359d7171b30be5c25cee80fae08d947fc004d479693df00] ...
	I0501 03:44:32.775759   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a96815c49ac45b34c359d7171b30be5c25cee80fae08d947fc004d479693df00"
	I0501 03:44:32.831002   68864 logs.go:123] Gathering logs for coredns [e3c74de489af3d78a4b0cf620ed16e1fba7c9ce1c7021a15c5b4bd241b7a934a] ...
	I0501 03:44:32.831039   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3c74de489af3d78a4b0cf620ed16e1fba7c9ce1c7021a15c5b4bd241b7a934a"
	I0501 03:44:32.878365   68864 logs.go:123] Gathering logs for kube-scheduler [1813f35574f4f0398e7d482fe77c5d7f8490726666a277035bda0a5cff80af6e] ...
	I0501 03:44:32.878416   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1813f35574f4f0398e7d482fe77c5d7f8490726666a277035bda0a5cff80af6e"
	I0501 03:44:32.935752   68864 logs.go:123] Gathering logs for storage-provisioner [f9a8d2f0f9453969b410fb7f880f6cf4c7e6e2f3c17a687583906efe4181dd17] ...
	I0501 03:44:32.935791   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9a8d2f0f9453969b410fb7f880f6cf4c7e6e2f3c17a687583906efe4181dd17"
	I0501 03:44:35.492575   68864 system_pods.go:59] 8 kube-system pods found
	I0501 03:44:35.492603   68864 system_pods.go:61] "coredns-7db6d8ff4d-sjplt" [6701ee8e-0630-4332-b01c-26741ed3a7b7] Running
	I0501 03:44:35.492607   68864 system_pods.go:61] "etcd-embed-certs-277128" [744d7481-dd80-4435-90da-685fc16a76a4] Running
	I0501 03:44:35.492612   68864 system_pods.go:61] "kube-apiserver-embed-certs-277128" [2f1d0f09-b270-4cdc-af3c-e17e3a244bbf] Running
	I0501 03:44:35.492616   68864 system_pods.go:61] "kube-controller-manager-embed-certs-277128" [729aa138-dc0d-4616-97d4-1592453971b8] Running
	I0501 03:44:35.492619   68864 system_pods.go:61] "kube-proxy-phx7x" [56c0381e-c140-4f69-bbe4-09d393db8b23] Running
	I0501 03:44:35.492621   68864 system_pods.go:61] "kube-scheduler-embed-certs-277128" [cc56e9bc-7f21-4f57-a65a-73a10ffd7145] Running
	I0501 03:44:35.492627   68864 system_pods.go:61] "metrics-server-569cc877fc-p8j59" [f8ad6c24-dd5d-4515-9052-c9aca7412b55] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0501 03:44:35.492631   68864 system_pods.go:61] "storage-provisioner" [785be666-58d5-4b9d-92fd-bcacdbdebeb2] Running
	I0501 03:44:35.492638   68864 system_pods.go:74] duration metric: took 4.012764043s to wait for pod list to return data ...
	I0501 03:44:35.492644   68864 default_sa.go:34] waiting for default service account to be created ...
	I0501 03:44:35.494580   68864 default_sa.go:45] found service account: "default"
	I0501 03:44:35.494599   68864 default_sa.go:55] duration metric: took 1.949121ms for default service account to be created ...
	I0501 03:44:35.494606   68864 system_pods.go:116] waiting for k8s-apps to be running ...
	I0501 03:44:35.499484   68864 system_pods.go:86] 8 kube-system pods found
	I0501 03:44:35.499507   68864 system_pods.go:89] "coredns-7db6d8ff4d-sjplt" [6701ee8e-0630-4332-b01c-26741ed3a7b7] Running
	I0501 03:44:35.499514   68864 system_pods.go:89] "etcd-embed-certs-277128" [744d7481-dd80-4435-90da-685fc16a76a4] Running
	I0501 03:44:35.499519   68864 system_pods.go:89] "kube-apiserver-embed-certs-277128" [2f1d0f09-b270-4cdc-af3c-e17e3a244bbf] Running
	I0501 03:44:35.499523   68864 system_pods.go:89] "kube-controller-manager-embed-certs-277128" [729aa138-dc0d-4616-97d4-1592453971b8] Running
	I0501 03:44:35.499526   68864 system_pods.go:89] "kube-proxy-phx7x" [56c0381e-c140-4f69-bbe4-09d393db8b23] Running
	I0501 03:44:35.499531   68864 system_pods.go:89] "kube-scheduler-embed-certs-277128" [cc56e9bc-7f21-4f57-a65a-73a10ffd7145] Running
	I0501 03:44:35.499537   68864 system_pods.go:89] "metrics-server-569cc877fc-p8j59" [f8ad6c24-dd5d-4515-9052-c9aca7412b55] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0501 03:44:35.499544   68864 system_pods.go:89] "storage-provisioner" [785be666-58d5-4b9d-92fd-bcacdbdebeb2] Running
	I0501 03:44:35.499550   68864 system_pods.go:126] duration metric: took 4.939659ms to wait for k8s-apps to be running ...
	I0501 03:44:35.499559   68864 system_svc.go:44] waiting for kubelet service to be running ....
	I0501 03:44:35.499599   68864 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 03:44:35.518471   68864 system_svc.go:56] duration metric: took 18.902776ms WaitForService to wait for kubelet
	I0501 03:44:35.518498   68864 kubeadm.go:576] duration metric: took 4m27.516125606s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0501 03:44:35.518521   68864 node_conditions.go:102] verifying NodePressure condition ...
	I0501 03:44:35.521936   68864 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 03:44:35.521956   68864 node_conditions.go:123] node cpu capacity is 2
	I0501 03:44:35.521966   68864 node_conditions.go:105] duration metric: took 3.439997ms to run NodePressure ...
	I0501 03:44:35.521976   68864 start.go:240] waiting for startup goroutines ...
	I0501 03:44:35.521983   68864 start.go:245] waiting for cluster config update ...
	I0501 03:44:35.521994   68864 start.go:254] writing updated cluster config ...
	I0501 03:44:35.522311   68864 ssh_runner.go:195] Run: rm -f paused
	I0501 03:44:35.572130   68864 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0501 03:44:35.573709   68864 out.go:177] * Done! kubectl is now configured to use "embed-certs-277128" cluster and "default" namespace by default
	I0501 03:44:31.512755   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:34.011892   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:32.772208   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:32.791063   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:32.791145   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:32.856883   69580 cri.go:89] found id: ""
	I0501 03:44:32.856909   69580 logs.go:276] 0 containers: []
	W0501 03:44:32.856920   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:32.856927   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:32.856988   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:32.928590   69580 cri.go:89] found id: ""
	I0501 03:44:32.928625   69580 logs.go:276] 0 containers: []
	W0501 03:44:32.928637   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:32.928644   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:32.928707   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:32.978068   69580 cri.go:89] found id: ""
	I0501 03:44:32.978100   69580 logs.go:276] 0 containers: []
	W0501 03:44:32.978113   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:32.978120   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:32.978184   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:33.018873   69580 cri.go:89] found id: ""
	I0501 03:44:33.018897   69580 logs.go:276] 0 containers: []
	W0501 03:44:33.018905   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:33.018911   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:33.018970   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:33.060633   69580 cri.go:89] found id: ""
	I0501 03:44:33.060661   69580 logs.go:276] 0 containers: []
	W0501 03:44:33.060673   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:33.060681   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:33.060735   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:33.099862   69580 cri.go:89] found id: ""
	I0501 03:44:33.099891   69580 logs.go:276] 0 containers: []
	W0501 03:44:33.099900   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:33.099906   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:33.099953   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:33.139137   69580 cri.go:89] found id: ""
	I0501 03:44:33.139163   69580 logs.go:276] 0 containers: []
	W0501 03:44:33.139171   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:33.139177   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:33.139224   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:33.178800   69580 cri.go:89] found id: ""
	I0501 03:44:33.178826   69580 logs.go:276] 0 containers: []
	W0501 03:44:33.178834   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:33.178842   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:33.178856   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:33.233811   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:33.233842   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:33.248931   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:33.248958   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:33.325530   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:33.325551   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:33.325563   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:33.412071   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:33.412103   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:35.954706   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:35.970256   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:35.970333   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:36.010417   69580 cri.go:89] found id: ""
	I0501 03:44:36.010443   69580 logs.go:276] 0 containers: []
	W0501 03:44:36.010452   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:36.010459   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:36.010524   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:36.051571   69580 cri.go:89] found id: ""
	I0501 03:44:36.051600   69580 logs.go:276] 0 containers: []
	W0501 03:44:36.051611   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:36.051619   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:36.051683   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:36.092148   69580 cri.go:89] found id: ""
	I0501 03:44:36.092176   69580 logs.go:276] 0 containers: []
	W0501 03:44:36.092185   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:36.092190   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:36.092247   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:36.136243   69580 cri.go:89] found id: ""
	I0501 03:44:36.136282   69580 logs.go:276] 0 containers: []
	W0501 03:44:36.136290   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:36.136296   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:36.136342   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:36.178154   69580 cri.go:89] found id: ""
	I0501 03:44:36.178183   69580 logs.go:276] 0 containers: []
	W0501 03:44:36.178193   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:36.178200   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:36.178264   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:36.217050   69580 cri.go:89] found id: ""
	I0501 03:44:36.217077   69580 logs.go:276] 0 containers: []
	W0501 03:44:36.217089   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:36.217096   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:36.217172   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:36.260438   69580 cri.go:89] found id: ""
	I0501 03:44:36.260470   69580 logs.go:276] 0 containers: []
	W0501 03:44:36.260481   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:36.260488   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:36.260546   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:36.303410   69580 cri.go:89] found id: ""
	I0501 03:44:36.303436   69580 logs.go:276] 0 containers: []
	W0501 03:44:36.303448   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:36.303459   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:36.303475   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:36.390427   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:36.390468   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:36.433631   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:36.433663   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:33.845863   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:35.847896   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:36.012448   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:38.510722   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:39.005005   69237 pod_ready.go:81] duration metric: took 4m0.000783466s for pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace to be "Ready" ...
	E0501 03:44:39.005036   69237 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace to be "Ready" (will not retry!)
	I0501 03:44:39.005057   69237 pod_ready.go:38] duration metric: took 4m8.020392425s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 03:44:39.005089   69237 kubeadm.go:591] duration metric: took 4m17.941775807s to restartPrimaryControlPlane
	W0501 03:44:39.005175   69237 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0501 03:44:39.005208   69237 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0501 03:44:36.486334   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:36.486365   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:36.502145   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:36.502175   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:36.586733   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:39.087607   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:39.102475   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:39.102552   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:39.141916   69580 cri.go:89] found id: ""
	I0501 03:44:39.141947   69580 logs.go:276] 0 containers: []
	W0501 03:44:39.141958   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:39.141964   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:39.142012   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:39.188472   69580 cri.go:89] found id: ""
	I0501 03:44:39.188501   69580 logs.go:276] 0 containers: []
	W0501 03:44:39.188512   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:39.188520   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:39.188582   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:39.243282   69580 cri.go:89] found id: ""
	I0501 03:44:39.243306   69580 logs.go:276] 0 containers: []
	W0501 03:44:39.243313   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:39.243318   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:39.243377   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:39.288254   69580 cri.go:89] found id: ""
	I0501 03:44:39.288284   69580 logs.go:276] 0 containers: []
	W0501 03:44:39.288296   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:39.288304   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:39.288379   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:39.330846   69580 cri.go:89] found id: ""
	I0501 03:44:39.330879   69580 logs.go:276] 0 containers: []
	W0501 03:44:39.330892   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:39.330901   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:39.330969   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:39.377603   69580 cri.go:89] found id: ""
	I0501 03:44:39.377632   69580 logs.go:276] 0 containers: []
	W0501 03:44:39.377642   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:39.377649   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:39.377710   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:39.421545   69580 cri.go:89] found id: ""
	I0501 03:44:39.421574   69580 logs.go:276] 0 containers: []
	W0501 03:44:39.421585   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:39.421594   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:39.421653   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:39.463394   69580 cri.go:89] found id: ""
	I0501 03:44:39.463424   69580 logs.go:276] 0 containers: []
	W0501 03:44:39.463435   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:39.463447   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:39.463464   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:39.552196   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:39.552218   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:39.552229   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:39.648509   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:39.648549   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:39.702829   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:39.702866   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:39.757712   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:39.757746   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:38.347120   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:40.355310   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:42.847346   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:42.273443   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:42.289788   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:42.289856   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:42.336802   69580 cri.go:89] found id: ""
	I0501 03:44:42.336833   69580 logs.go:276] 0 containers: []
	W0501 03:44:42.336846   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:42.336854   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:42.336919   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:42.387973   69580 cri.go:89] found id: ""
	I0501 03:44:42.388017   69580 logs.go:276] 0 containers: []
	W0501 03:44:42.388028   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:42.388036   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:42.388103   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:42.444866   69580 cri.go:89] found id: ""
	I0501 03:44:42.444895   69580 logs.go:276] 0 containers: []
	W0501 03:44:42.444906   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:42.444914   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:42.444987   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:42.493647   69580 cri.go:89] found id: ""
	I0501 03:44:42.493676   69580 logs.go:276] 0 containers: []
	W0501 03:44:42.493686   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:42.493692   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:42.493748   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:42.535046   69580 cri.go:89] found id: ""
	I0501 03:44:42.535075   69580 logs.go:276] 0 containers: []
	W0501 03:44:42.535086   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:42.535093   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:42.535161   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:42.579453   69580 cri.go:89] found id: ""
	I0501 03:44:42.579486   69580 logs.go:276] 0 containers: []
	W0501 03:44:42.579499   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:42.579507   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:42.579568   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:42.621903   69580 cri.go:89] found id: ""
	I0501 03:44:42.621931   69580 logs.go:276] 0 containers: []
	W0501 03:44:42.621942   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:42.621950   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:42.622009   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:42.666202   69580 cri.go:89] found id: ""
	I0501 03:44:42.666232   69580 logs.go:276] 0 containers: []
	W0501 03:44:42.666243   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:42.666257   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:42.666272   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:42.736032   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:42.736078   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:42.750773   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:42.750799   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:42.836942   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:42.836975   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:42.836997   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:42.930660   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:42.930695   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:45.479619   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:45.495112   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:45.495174   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:45.536693   69580 cri.go:89] found id: ""
	I0501 03:44:45.536722   69580 logs.go:276] 0 containers: []
	W0501 03:44:45.536730   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:45.536737   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:45.536785   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:45.577838   69580 cri.go:89] found id: ""
	I0501 03:44:45.577866   69580 logs.go:276] 0 containers: []
	W0501 03:44:45.577876   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:45.577894   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:45.577958   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:45.615842   69580 cri.go:89] found id: ""
	I0501 03:44:45.615868   69580 logs.go:276] 0 containers: []
	W0501 03:44:45.615879   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:45.615892   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:45.615953   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:45.654948   69580 cri.go:89] found id: ""
	I0501 03:44:45.654972   69580 logs.go:276] 0 containers: []
	W0501 03:44:45.654980   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:45.654986   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:45.655042   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:45.695104   69580 cri.go:89] found id: ""
	I0501 03:44:45.695129   69580 logs.go:276] 0 containers: []
	W0501 03:44:45.695138   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:45.695145   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:45.695212   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:45.737609   69580 cri.go:89] found id: ""
	I0501 03:44:45.737633   69580 logs.go:276] 0 containers: []
	W0501 03:44:45.737641   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:45.737647   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:45.737693   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:45.778655   69580 cri.go:89] found id: ""
	I0501 03:44:45.778685   69580 logs.go:276] 0 containers: []
	W0501 03:44:45.778696   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:45.778702   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:45.778781   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:45.819430   69580 cri.go:89] found id: ""
	I0501 03:44:45.819452   69580 logs.go:276] 0 containers: []
	W0501 03:44:45.819460   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:45.819469   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:45.819485   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:45.875879   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:45.875911   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:45.892035   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:45.892062   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:45.975803   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:45.975836   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:45.975853   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:46.058183   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:46.058222   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:45.345465   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:47.346947   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:48.604991   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:48.621226   69580 kubeadm.go:591] duration metric: took 4m4.888665162s to restartPrimaryControlPlane
	W0501 03:44:48.621351   69580 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0501 03:44:48.621407   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0501 03:44:49.654748   69580 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.033320548s)
	I0501 03:44:49.654838   69580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 03:44:49.671511   69580 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0501 03:44:49.684266   69580 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0501 03:44:49.697079   69580 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0501 03:44:49.697101   69580 kubeadm.go:156] found existing configuration files:
	
	I0501 03:44:49.697159   69580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0501 03:44:49.710609   69580 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0501 03:44:49.710692   69580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0501 03:44:49.723647   69580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0501 03:44:49.736855   69580 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0501 03:44:49.737023   69580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0501 03:44:49.748842   69580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0501 03:44:49.760856   69580 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0501 03:44:49.760923   69580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0501 03:44:49.772685   69580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0501 03:44:49.784035   69580 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0501 03:44:49.784114   69580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0501 03:44:49.795699   69580 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0501 03:44:49.869387   69580 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0501 03:44:49.869481   69580 kubeadm.go:309] [preflight] Running pre-flight checks
	I0501 03:44:50.028858   69580 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0501 03:44:50.028999   69580 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0501 03:44:50.029182   69580 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0501 03:44:50.242773   69580 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0501 03:44:50.244816   69580 out.go:204]   - Generating certificates and keys ...
	I0501 03:44:50.244918   69580 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0501 03:44:50.245008   69580 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0501 03:44:50.245111   69580 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0501 03:44:50.245216   69580 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0501 03:44:50.245331   69580 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0501 03:44:50.245424   69580 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0501 03:44:50.245490   69580 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0501 03:44:50.245556   69580 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0501 03:44:50.245629   69580 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0501 03:44:50.245724   69580 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0501 03:44:50.245784   69580 kubeadm.go:309] [certs] Using the existing "sa" key
	I0501 03:44:50.245877   69580 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0501 03:44:50.501955   69580 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0501 03:44:50.683749   69580 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0501 03:44:50.905745   69580 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0501 03:44:51.005912   69580 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0501 03:44:51.025470   69580 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0501 03:44:51.029411   69580 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0501 03:44:51.029859   69580 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0501 03:44:51.181498   69580 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0501 03:44:51.183222   69580 out.go:204]   - Booting up control plane ...
	I0501 03:44:51.183334   69580 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0501 03:44:51.200394   69580 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0501 03:44:51.201612   69580 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0501 03:44:51.202445   69580 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0501 03:44:51.204681   69580 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0501 03:44:49.847629   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:52.345383   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:54.346479   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:56.348560   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:58.846207   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:01.345790   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:03.847746   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:06.346172   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:08.346693   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:10.846797   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:11.778923   69237 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.773690939s)
	I0501 03:45:11.778992   69237 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 03:45:11.796337   69237 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0501 03:45:11.810167   69237 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0501 03:45:11.822425   69237 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0501 03:45:11.822457   69237 kubeadm.go:156] found existing configuration files:
	
	I0501 03:45:11.822514   69237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0501 03:45:11.834539   69237 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0501 03:45:11.834596   69237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0501 03:45:11.848336   69237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0501 03:45:11.860459   69237 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0501 03:45:11.860535   69237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0501 03:45:11.873903   69237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0501 03:45:11.887353   69237 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0501 03:45:11.887427   69237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0501 03:45:11.900805   69237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0501 03:45:11.912512   69237 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0501 03:45:11.912572   69237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0501 03:45:11.924870   69237 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0501 03:45:12.149168   69237 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0501 03:45:13.348651   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:15.847148   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:20.882309   69237 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0501 03:45:20.882382   69237 kubeadm.go:309] [preflight] Running pre-flight checks
	I0501 03:45:20.882472   69237 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0501 03:45:20.882602   69237 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0501 03:45:20.882741   69237 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0501 03:45:20.882836   69237 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0501 03:45:20.884733   69237 out.go:204]   - Generating certificates and keys ...
	I0501 03:45:20.884837   69237 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0501 03:45:20.884894   69237 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0501 03:45:20.884996   69237 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0501 03:45:20.885106   69237 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0501 03:45:20.885209   69237 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0501 03:45:20.885316   69237 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0501 03:45:20.885400   69237 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0501 03:45:20.885483   69237 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0501 03:45:20.885590   69237 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0501 03:45:20.885702   69237 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0501 03:45:20.885759   69237 kubeadm.go:309] [certs] Using the existing "sa" key
	I0501 03:45:20.885838   69237 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0501 03:45:20.885915   69237 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0501 03:45:20.885996   69237 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0501 03:45:20.886074   69237 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0501 03:45:20.886164   69237 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0501 03:45:20.886233   69237 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0501 03:45:20.886362   69237 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0501 03:45:20.886492   69237 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0501 03:45:20.888113   69237 out.go:204]   - Booting up control plane ...
	I0501 03:45:20.888194   69237 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0501 03:45:20.888264   69237 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0501 03:45:20.888329   69237 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0501 03:45:20.888458   69237 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0501 03:45:20.888570   69237 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0501 03:45:20.888627   69237 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0501 03:45:20.888777   69237 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0501 03:45:20.888863   69237 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0501 03:45:20.888964   69237 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 502.867448ms
	I0501 03:45:20.889080   69237 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0501 03:45:20.889177   69237 kubeadm.go:309] [api-check] The API server is healthy after 5.503139909s
	I0501 03:45:20.889335   69237 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0501 03:45:20.889506   69237 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0501 03:45:20.889579   69237 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0501 03:45:20.889817   69237 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-715118 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0501 03:45:20.889868   69237 kubeadm.go:309] [bootstrap-token] Using token: 2vhvw6.gdesonhc2twrukzt
	I0501 03:45:20.892253   69237 out.go:204]   - Configuring RBAC rules ...
	I0501 03:45:20.892395   69237 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0501 03:45:20.892475   69237 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0501 03:45:20.892652   69237 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0501 03:45:20.892812   69237 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0501 03:45:20.892931   69237 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0501 03:45:20.893040   69237 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0501 03:45:20.893201   69237 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0501 03:45:20.893264   69237 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0501 03:45:20.893309   69237 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0501 03:45:20.893319   69237 kubeadm.go:309] 
	I0501 03:45:20.893367   69237 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0501 03:45:20.893373   69237 kubeadm.go:309] 
	I0501 03:45:20.893450   69237 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0501 03:45:20.893458   69237 kubeadm.go:309] 
	I0501 03:45:20.893481   69237 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0501 03:45:20.893544   69237 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0501 03:45:20.893591   69237 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0501 03:45:20.893597   69237 kubeadm.go:309] 
	I0501 03:45:20.893643   69237 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0501 03:45:20.893650   69237 kubeadm.go:309] 
	I0501 03:45:20.893685   69237 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0501 03:45:20.893690   69237 kubeadm.go:309] 
	I0501 03:45:20.893741   69237 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0501 03:45:20.893805   69237 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0501 03:45:20.893858   69237 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0501 03:45:20.893863   69237 kubeadm.go:309] 
	I0501 03:45:20.893946   69237 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0501 03:45:20.894035   69237 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0501 03:45:20.894045   69237 kubeadm.go:309] 
	I0501 03:45:20.894139   69237 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token 2vhvw6.gdesonhc2twrukzt \
	I0501 03:45:20.894267   69237 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:bd94cc6acfb95fe36f70b12c6a1f2980601b8235e7c4dea65cebdf35eb514754 \
	I0501 03:45:20.894294   69237 kubeadm.go:309] 	--control-plane 
	I0501 03:45:20.894301   69237 kubeadm.go:309] 
	I0501 03:45:20.894368   69237 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0501 03:45:20.894375   69237 kubeadm.go:309] 
	I0501 03:45:20.894498   69237 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token 2vhvw6.gdesonhc2twrukzt \
	I0501 03:45:20.894605   69237 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:bd94cc6acfb95fe36f70b12c6a1f2980601b8235e7c4dea65cebdf35eb514754 
	I0501 03:45:20.894616   69237 cni.go:84] Creating CNI manager for ""
	I0501 03:45:20.894623   69237 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0501 03:45:20.896151   69237 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0501 03:45:18.346276   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:20.846958   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:20.897443   69237 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0501 03:45:20.911935   69237 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0501 03:45:20.941109   69237 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0501 03:45:20.941193   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:20.941249   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-715118 minikube.k8s.io/updated_at=2024_05_01T03_45_20_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e minikube.k8s.io/name=default-k8s-diff-port-715118 minikube.k8s.io/primary=true
	I0501 03:45:20.971300   69237 ops.go:34] apiserver oom_adj: -16
	I0501 03:45:21.143744   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:21.643800   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:22.144096   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:22.643852   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:23.144726   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:23.644174   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:24.143735   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:24.643947   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:25.143871   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:25.644557   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:23.345774   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:25.346189   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:27.348026   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:26.144443   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:26.643761   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:27.144691   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:27.644445   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:28.144006   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:28.643904   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:29.144077   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:29.644690   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:30.144692   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:30.644604   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:31.207553   69580 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0501 03:45:31.208328   69580 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0501 03:45:31.208516   69580 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0501 03:45:29.845785   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:32.348020   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:31.144517   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:31.644673   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:32.143793   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:32.644380   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:33.144729   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:33.644415   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:33.752056   69237 kubeadm.go:1107] duration metric: took 12.810918189s to wait for elevateKubeSystemPrivileges
	W0501 03:45:33.752096   69237 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0501 03:45:33.752105   69237 kubeadm.go:393] duration metric: took 5m12.753721662s to StartCluster
	I0501 03:45:33.752124   69237 settings.go:142] acquiring lock: {Name:mkcfe08bcadd45d99c00ed1eaf4e9c226a489fe2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:45:33.752219   69237 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18779-13391/kubeconfig
	I0501 03:45:33.753829   69237 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/kubeconfig: {Name:mk5d1131557f885800877622151c0915337cb23a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:45:33.754094   69237 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.158 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0501 03:45:33.755764   69237 out.go:177] * Verifying Kubernetes components...
	I0501 03:45:33.754178   69237 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0501 03:45:33.754310   69237 config.go:182] Loaded profile config "default-k8s-diff-port-715118": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 03:45:33.757144   69237 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:45:33.757151   69237 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-715118"
	I0501 03:45:33.757172   69237 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-715118"
	I0501 03:45:33.757189   69237 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-715118"
	I0501 03:45:33.757213   69237 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-715118"
	I0501 03:45:33.757221   69237 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-715118"
	W0501 03:45:33.757230   69237 addons.go:243] addon metrics-server should already be in state true
	I0501 03:45:33.757264   69237 host.go:66] Checking if "default-k8s-diff-port-715118" exists ...
	I0501 03:45:33.757180   69237 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-715118"
	W0501 03:45:33.757295   69237 addons.go:243] addon storage-provisioner should already be in state true
	I0501 03:45:33.757355   69237 host.go:66] Checking if "default-k8s-diff-port-715118" exists ...
	I0501 03:45:33.757596   69237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:45:33.757624   69237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:45:33.757630   69237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:45:33.757762   69237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:45:33.757808   69237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:45:33.757662   69237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:45:33.773846   69237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44313
	I0501 03:45:33.774442   69237 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:45:33.775002   69237 main.go:141] libmachine: Using API Version  1
	I0501 03:45:33.775024   69237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:45:33.775438   69237 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:45:33.776086   69237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:45:33.776117   69237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:45:33.777715   69237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37079
	I0501 03:45:33.777835   69237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38097
	I0501 03:45:33.778170   69237 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:45:33.778240   69237 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:45:33.778701   69237 main.go:141] libmachine: Using API Version  1
	I0501 03:45:33.778734   69237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:45:33.778778   69237 main.go:141] libmachine: Using API Version  1
	I0501 03:45:33.778795   69237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:45:33.779142   69237 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:45:33.779150   69237 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:45:33.779427   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetState
	I0501 03:45:33.779721   69237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:45:33.779769   69237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:45:33.783493   69237 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-715118"
	W0501 03:45:33.783519   69237 addons.go:243] addon default-storageclass should already be in state true
	I0501 03:45:33.783551   69237 host.go:66] Checking if "default-k8s-diff-port-715118" exists ...
	I0501 03:45:33.783922   69237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:45:33.783965   69237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:45:33.795373   69237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43491
	I0501 03:45:33.795988   69237 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:45:33.796557   69237 main.go:141] libmachine: Using API Version  1
	I0501 03:45:33.796579   69237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:45:33.796931   69237 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:45:33.797093   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetState
	I0501 03:45:33.797155   69237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46755
	I0501 03:45:33.797806   69237 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:45:33.798383   69237 main.go:141] libmachine: Using API Version  1
	I0501 03:45:33.798442   69237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:45:33.798848   69237 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:45:33.799052   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetState
	I0501 03:45:33.799105   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .DriverName
	I0501 03:45:33.801809   69237 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0501 03:45:33.800600   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .DriverName
	I0501 03:45:33.803752   69237 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0501 03:45:33.803779   69237 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0501 03:45:33.803800   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHHostname
	I0501 03:45:33.805235   69237 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 03:45:33.804172   69237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35709
	I0501 03:45:33.806635   69237 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0501 03:45:33.806651   69237 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0501 03:45:33.806670   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHHostname
	I0501 03:45:33.806889   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:45:33.806967   69237 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:45:33.807292   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:45:33.807426   69237 main.go:141] libmachine: Using API Version  1
	I0501 03:45:33.807428   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHPort
	I0501 03:45:33.807437   69237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:45:33.807449   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:45:33.807578   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:45:33.807680   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHUsername
	I0501 03:45:33.807799   69237 sshutil.go:53] new ssh client: &{IP:192.168.72.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/default-k8s-diff-port-715118/id_rsa Username:docker}
	I0501 03:45:33.808171   69237 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:45:33.808625   69237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:45:33.808660   69237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:45:33.810668   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:45:33.811266   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:45:33.811297   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:45:33.811595   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHPort
	I0501 03:45:33.811794   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:45:33.811983   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHUsername
	I0501 03:45:33.812124   69237 sshutil.go:53] new ssh client: &{IP:192.168.72.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/default-k8s-diff-port-715118/id_rsa Username:docker}
	I0501 03:45:33.825315   69237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35461
	I0501 03:45:33.825891   69237 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:45:33.826334   69237 main.go:141] libmachine: Using API Version  1
	I0501 03:45:33.826351   69237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:45:33.826679   69237 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:45:33.826912   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetState
	I0501 03:45:33.828659   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .DriverName
	I0501 03:45:33.828931   69237 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0501 03:45:33.828946   69237 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0501 03:45:33.828963   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHHostname
	I0501 03:45:33.832151   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:45:33.832632   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:45:33.832656   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:45:33.832863   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHPort
	I0501 03:45:33.833010   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:45:33.833146   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHUsername
	I0501 03:45:33.833302   69237 sshutil.go:53] new ssh client: &{IP:192.168.72.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/default-k8s-diff-port-715118/id_rsa Username:docker}
	I0501 03:45:34.014287   69237 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 03:45:34.047199   69237 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-715118" to be "Ready" ...
	I0501 03:45:34.069000   69237 node_ready.go:49] node "default-k8s-diff-port-715118" has status "Ready":"True"
	I0501 03:45:34.069023   69237 node_ready.go:38] duration metric: took 21.790599ms for node "default-k8s-diff-port-715118" to be "Ready" ...
	I0501 03:45:34.069033   69237 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 03:45:34.077182   69237 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-bg755" in "kube-system" namespace to be "Ready" ...
	I0501 03:45:34.151001   69237 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0501 03:45:34.166362   69237 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0501 03:45:34.166385   69237 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0501 03:45:34.214624   69237 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0501 03:45:34.329110   69237 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0501 03:45:34.329133   69237 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0501 03:45:34.436779   69237 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0501 03:45:34.436804   69237 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0501 03:45:34.611410   69237 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0501 03:45:34.698997   69237 main.go:141] libmachine: Making call to close driver server
	I0501 03:45:34.699026   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .Close
	I0501 03:45:34.699321   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | Closing plugin on server side
	I0501 03:45:34.699389   69237 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:45:34.699408   69237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:45:34.699423   69237 main.go:141] libmachine: Making call to close driver server
	I0501 03:45:34.699437   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .Close
	I0501 03:45:34.699684   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | Closing plugin on server side
	I0501 03:45:34.699726   69237 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:45:34.699734   69237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:45:34.708143   69237 main.go:141] libmachine: Making call to close driver server
	I0501 03:45:34.708171   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .Close
	I0501 03:45:34.708438   69237 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:45:34.708457   69237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:45:34.708463   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | Closing plugin on server side
	I0501 03:45:35.510225   69237 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.295555956s)
	I0501 03:45:35.510274   69237 main.go:141] libmachine: Making call to close driver server
	I0501 03:45:35.510286   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .Close
	I0501 03:45:35.510700   69237 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:45:35.510721   69237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:45:35.510732   69237 main.go:141] libmachine: Making call to close driver server
	I0501 03:45:35.510728   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | Closing plugin on server side
	I0501 03:45:35.510740   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .Close
	I0501 03:45:35.510961   69237 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:45:35.510979   69237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:45:35.510983   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | Closing plugin on server side
	I0501 03:45:35.845633   69237 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.234178466s)
	I0501 03:45:35.845691   69237 main.go:141] libmachine: Making call to close driver server
	I0501 03:45:35.845708   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .Close
	I0501 03:45:35.845997   69237 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:45:35.846017   69237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:45:35.846027   69237 main.go:141] libmachine: Making call to close driver server
	I0501 03:45:35.846026   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | Closing plugin on server side
	I0501 03:45:35.846036   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .Close
	I0501 03:45:35.847736   69237 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:45:35.847767   69237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:45:35.847781   69237 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-715118"
	I0501 03:45:35.847786   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | Closing plugin on server side
	I0501 03:45:35.849438   69237 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0501 03:45:36.209029   69580 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0501 03:45:36.209300   69580 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0501 03:45:34.848699   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:37.338985   68640 pod_ready.go:81] duration metric: took 4m0.000306796s for pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace to be "Ready" ...
	E0501 03:45:37.339010   68640 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace to be "Ready" (will not retry!)
	I0501 03:45:37.339029   68640 pod_ready.go:38] duration metric: took 4m9.062496127s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 03:45:37.339089   68640 kubeadm.go:591] duration metric: took 4m19.268153875s to restartPrimaryControlPlane
	W0501 03:45:37.339148   68640 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0501 03:45:37.339176   68640 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0501 03:45:35.851156   69237 addons.go:505] duration metric: took 2.096980743s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0501 03:45:36.085176   69237 pod_ready.go:102] pod "coredns-7db6d8ff4d-bg755" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:36.585390   69237 pod_ready.go:92] pod "coredns-7db6d8ff4d-bg755" in "kube-system" namespace has status "Ready":"True"
	I0501 03:45:36.585415   69237 pod_ready.go:81] duration metric: took 2.508204204s for pod "coredns-7db6d8ff4d-bg755" in "kube-system" namespace to be "Ready" ...
	I0501 03:45:36.585428   69237 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-mp6f5" in "kube-system" namespace to be "Ready" ...
	I0501 03:45:36.594575   69237 pod_ready.go:92] pod "coredns-7db6d8ff4d-mp6f5" in "kube-system" namespace has status "Ready":"True"
	I0501 03:45:36.594600   69237 pod_ready.go:81] duration metric: took 9.163923ms for pod "coredns-7db6d8ff4d-mp6f5" in "kube-system" namespace to be "Ready" ...
	I0501 03:45:36.594613   69237 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	I0501 03:45:36.606784   69237 pod_ready.go:92] pod "etcd-default-k8s-diff-port-715118" in "kube-system" namespace has status "Ready":"True"
	I0501 03:45:36.606807   69237 pod_ready.go:81] duration metric: took 12.186129ms for pod "etcd-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	I0501 03:45:36.606819   69237 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	I0501 03:45:36.617373   69237 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-715118" in "kube-system" namespace has status "Ready":"True"
	I0501 03:45:36.617394   69237 pod_ready.go:81] duration metric: took 10.566278ms for pod "kube-apiserver-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	I0501 03:45:36.617404   69237 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	I0501 03:45:36.622441   69237 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-715118" in "kube-system" namespace has status "Ready":"True"
	I0501 03:45:36.622460   69237 pod_ready.go:81] duration metric: took 5.049948ms for pod "kube-controller-manager-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	I0501 03:45:36.622469   69237 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2knrp" in "kube-system" namespace to be "Ready" ...
	I0501 03:45:36.981490   69237 pod_ready.go:92] pod "kube-proxy-2knrp" in "kube-system" namespace has status "Ready":"True"
	I0501 03:45:36.981513   69237 pod_ready.go:81] duration metric: took 359.038927ms for pod "kube-proxy-2knrp" in "kube-system" namespace to be "Ready" ...
	I0501 03:45:36.981523   69237 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	I0501 03:45:37.381970   69237 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-715118" in "kube-system" namespace has status "Ready":"True"
	I0501 03:45:37.381999   69237 pod_ready.go:81] duration metric: took 400.468372ms for pod "kube-scheduler-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	I0501 03:45:37.382011   69237 pod_ready.go:38] duration metric: took 3.312967983s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 03:45:37.382028   69237 api_server.go:52] waiting for apiserver process to appear ...
	I0501 03:45:37.382091   69237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:45:37.401961   69237 api_server.go:72] duration metric: took 3.647829991s to wait for apiserver process to appear ...
	I0501 03:45:37.401992   69237 api_server.go:88] waiting for apiserver healthz status ...
	I0501 03:45:37.402016   69237 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8444/healthz ...
	I0501 03:45:37.407177   69237 api_server.go:279] https://192.168.72.158:8444/healthz returned 200:
	ok
	I0501 03:45:37.408020   69237 api_server.go:141] control plane version: v1.30.0
	I0501 03:45:37.408037   69237 api_server.go:131] duration metric: took 6.036621ms to wait for apiserver health ...
	I0501 03:45:37.408046   69237 system_pods.go:43] waiting for kube-system pods to appear ...
	I0501 03:45:37.586052   69237 system_pods.go:59] 9 kube-system pods found
	I0501 03:45:37.586081   69237 system_pods.go:61] "coredns-7db6d8ff4d-bg755" [884d489a-bc1e-442c-8e00-4616f983d3e9] Running
	I0501 03:45:37.586085   69237 system_pods.go:61] "coredns-7db6d8ff4d-mp6f5" [4c8550d0-0029-48f1-a892-1800f6639c75] Running
	I0501 03:45:37.586090   69237 system_pods.go:61] "etcd-default-k8s-diff-port-715118" [12be9bec-1d84-49ee-898c-499ff75a8026] Running
	I0501 03:45:37.586094   69237 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-715118" [ae9a476b-03cf-4d4d-9990-5e760db82e60] Running
	I0501 03:45:37.586098   69237 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-715118" [542bbe50-58b6-40fb-b81b-0cc2444a3401] Running
	I0501 03:45:37.586101   69237 system_pods.go:61] "kube-proxy-2knrp" [cf1406ff-8a6e-49bb-b180-1e72f4b54fbf] Running
	I0501 03:45:37.586104   69237 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-715118" [d24f02a2-67a9-4f28-9acc-445e0e74a68d] Running
	I0501 03:45:37.586109   69237 system_pods.go:61] "metrics-server-569cc877fc-xwxx9" [a66f5df4-355c-47f0-8b6e-da29e1c4394e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0501 03:45:37.586113   69237 system_pods.go:61] "storage-provisioner" [debb3a59-143a-46d3-87da-c2403e264861] Running
	I0501 03:45:37.586123   69237 system_pods.go:74] duration metric: took 178.07045ms to wait for pod list to return data ...
	I0501 03:45:37.586132   69237 default_sa.go:34] waiting for default service account to be created ...
	I0501 03:45:37.780696   69237 default_sa.go:45] found service account: "default"
	I0501 03:45:37.780720   69237 default_sa.go:55] duration metric: took 194.582743ms for default service account to be created ...
	I0501 03:45:37.780728   69237 system_pods.go:116] waiting for k8s-apps to be running ...
	I0501 03:45:37.985342   69237 system_pods.go:86] 9 kube-system pods found
	I0501 03:45:37.985368   69237 system_pods.go:89] "coredns-7db6d8ff4d-bg755" [884d489a-bc1e-442c-8e00-4616f983d3e9] Running
	I0501 03:45:37.985374   69237 system_pods.go:89] "coredns-7db6d8ff4d-mp6f5" [4c8550d0-0029-48f1-a892-1800f6639c75] Running
	I0501 03:45:37.985378   69237 system_pods.go:89] "etcd-default-k8s-diff-port-715118" [12be9bec-1d84-49ee-898c-499ff75a8026] Running
	I0501 03:45:37.985383   69237 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-715118" [ae9a476b-03cf-4d4d-9990-5e760db82e60] Running
	I0501 03:45:37.985387   69237 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-715118" [542bbe50-58b6-40fb-b81b-0cc2444a3401] Running
	I0501 03:45:37.985391   69237 system_pods.go:89] "kube-proxy-2knrp" [cf1406ff-8a6e-49bb-b180-1e72f4b54fbf] Running
	I0501 03:45:37.985395   69237 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-715118" [d24f02a2-67a9-4f28-9acc-445e0e74a68d] Running
	I0501 03:45:37.985401   69237 system_pods.go:89] "metrics-server-569cc877fc-xwxx9" [a66f5df4-355c-47f0-8b6e-da29e1c4394e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0501 03:45:37.985405   69237 system_pods.go:89] "storage-provisioner" [debb3a59-143a-46d3-87da-c2403e264861] Running
	I0501 03:45:37.985412   69237 system_pods.go:126] duration metric: took 204.679545ms to wait for k8s-apps to be running ...
	I0501 03:45:37.985418   69237 system_svc.go:44] waiting for kubelet service to be running ....
	I0501 03:45:37.985463   69237 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 03:45:38.002421   69237 system_svc.go:56] duration metric: took 16.992346ms WaitForService to wait for kubelet
	I0501 03:45:38.002458   69237 kubeadm.go:576] duration metric: took 4.248332952s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0501 03:45:38.002477   69237 node_conditions.go:102] verifying NodePressure condition ...
	I0501 03:45:38.181465   69237 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 03:45:38.181496   69237 node_conditions.go:123] node cpu capacity is 2
	I0501 03:45:38.181510   69237 node_conditions.go:105] duration metric: took 179.027834ms to run NodePressure ...
	I0501 03:45:38.181524   69237 start.go:240] waiting for startup goroutines ...
	I0501 03:45:38.181534   69237 start.go:245] waiting for cluster config update ...
	I0501 03:45:38.181547   69237 start.go:254] writing updated cluster config ...
	I0501 03:45:38.181810   69237 ssh_runner.go:195] Run: rm -f paused
	I0501 03:45:38.244075   69237 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0501 03:45:38.246261   69237 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-715118" cluster and "default" namespace by default
	I0501 03:45:46.209837   69580 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0501 03:45:46.210120   69580 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0501 03:46:06.211471   69580 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0501 03:46:06.211673   69580 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0501 03:46:09.967666   68640 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.628454657s)
	I0501 03:46:09.967737   68640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 03:46:09.985802   68640 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0501 03:46:09.996494   68640 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0501 03:46:10.006956   68640 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0501 03:46:10.006979   68640 kubeadm.go:156] found existing configuration files:
	
	I0501 03:46:10.007025   68640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0501 03:46:10.017112   68640 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0501 03:46:10.017174   68640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0501 03:46:10.027747   68640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0501 03:46:10.037853   68640 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0501 03:46:10.037910   68640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0501 03:46:10.048023   68640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0501 03:46:10.057354   68640 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0501 03:46:10.057408   68640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0501 03:46:10.067352   68640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0501 03:46:10.076696   68640 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0501 03:46:10.076741   68640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0501 03:46:10.086799   68640 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0501 03:46:10.150816   68640 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0501 03:46:10.150871   68640 kubeadm.go:309] [preflight] Running pre-flight checks
	I0501 03:46:10.325430   68640 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0501 03:46:10.325546   68640 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0501 03:46:10.325669   68640 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0501 03:46:10.581934   68640 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0501 03:46:10.585119   68640 out.go:204]   - Generating certificates and keys ...
	I0501 03:46:10.585221   68640 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0501 03:46:10.585319   68640 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0501 03:46:10.585416   68640 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0501 03:46:10.585522   68640 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0501 03:46:10.585620   68640 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0501 03:46:10.585695   68640 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0501 03:46:10.585781   68640 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0501 03:46:10.585861   68640 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0501 03:46:10.585959   68640 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0501 03:46:10.586064   68640 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0501 03:46:10.586116   68640 kubeadm.go:309] [certs] Using the existing "sa" key
	I0501 03:46:10.586208   68640 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0501 03:46:10.789482   68640 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0501 03:46:10.991219   68640 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0501 03:46:11.194897   68640 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0501 03:46:11.411926   68640 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0501 03:46:11.994791   68640 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0501 03:46:11.995468   68640 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0501 03:46:11.998463   68640 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0501 03:46:12.000394   68640 out.go:204]   - Booting up control plane ...
	I0501 03:46:12.000521   68640 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0501 03:46:12.000632   68640 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0501 03:46:12.000721   68640 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0501 03:46:12.022371   68640 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0501 03:46:12.023628   68640 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0501 03:46:12.023709   68640 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0501 03:46:12.178475   68640 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0501 03:46:12.178615   68640 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0501 03:46:12.680307   68640 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 502.179909ms
	I0501 03:46:12.680409   68640 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0501 03:46:18.182830   68640 kubeadm.go:309] [api-check] The API server is healthy after 5.502227274s
	I0501 03:46:18.197822   68640 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0501 03:46:18.217282   68640 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0501 03:46:18.247591   68640 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0501 03:46:18.247833   68640 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-892672 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0501 03:46:18.259687   68640 kubeadm.go:309] [bootstrap-token] Using token: 8rc6kt.ele1oeavg6hezahw
	I0501 03:46:18.261204   68640 out.go:204]   - Configuring RBAC rules ...
	I0501 03:46:18.261333   68640 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0501 03:46:18.272461   68640 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0501 03:46:18.284615   68640 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0501 03:46:18.288686   68640 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0501 03:46:18.292005   68640 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0501 03:46:18.295772   68640 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0501 03:46:18.591035   68640 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0501 03:46:19.028299   68640 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0501 03:46:19.598192   68640 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0501 03:46:19.598219   68640 kubeadm.go:309] 
	I0501 03:46:19.598323   68640 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0501 03:46:19.598337   68640 kubeadm.go:309] 
	I0501 03:46:19.598490   68640 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0501 03:46:19.598514   68640 kubeadm.go:309] 
	I0501 03:46:19.598542   68640 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0501 03:46:19.598604   68640 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0501 03:46:19.598648   68640 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0501 03:46:19.598673   68640 kubeadm.go:309] 
	I0501 03:46:19.598771   68640 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0501 03:46:19.598784   68640 kubeadm.go:309] 
	I0501 03:46:19.598850   68640 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0501 03:46:19.598860   68640 kubeadm.go:309] 
	I0501 03:46:19.598963   68640 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0501 03:46:19.599069   68640 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0501 03:46:19.599167   68640 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0501 03:46:19.599183   68640 kubeadm.go:309] 
	I0501 03:46:19.599283   68640 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0501 03:46:19.599389   68640 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0501 03:46:19.599400   68640 kubeadm.go:309] 
	I0501 03:46:19.599500   68640 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 8rc6kt.ele1oeavg6hezahw \
	I0501 03:46:19.599626   68640 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:bd94cc6acfb95fe36f70b12c6a1f2980601b8235e7c4dea65cebdf35eb514754 \
	I0501 03:46:19.599666   68640 kubeadm.go:309] 	--control-plane 
	I0501 03:46:19.599676   68640 kubeadm.go:309] 
	I0501 03:46:19.599779   68640 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0501 03:46:19.599807   68640 kubeadm.go:309] 
	I0501 03:46:19.599931   68640 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 8rc6kt.ele1oeavg6hezahw \
	I0501 03:46:19.600079   68640 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:bd94cc6acfb95fe36f70b12c6a1f2980601b8235e7c4dea65cebdf35eb514754 
	I0501 03:46:19.600763   68640 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0501 03:46:19.600786   68640 cni.go:84] Creating CNI manager for ""
	I0501 03:46:19.600792   68640 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0501 03:46:19.602473   68640 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0501 03:46:19.603816   68640 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0501 03:46:19.621706   68640 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
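The bridge CNI step above only drops a conflist into /etc/cni/net.d on the node. A minimal Go sketch of that write follows; the JSON is an illustrative bridge/host-local conflist showing the format, an assumption rather than the exact 496-byte file the log transfers.

    package main

    import "os"

    // An illustrative CNI conflist using the standard bridge and host-local
    // plugins. Field values here are assumptions, not minikube's shipped file.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        }
      ]
    }`

    func main() {
        // Destination path matches the one used in the log line above.
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
            panic(err)
        }
    }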
	I0501 03:46:19.649643   68640 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0501 03:46:19.649762   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:19.649787   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-892672 minikube.k8s.io/updated_at=2024_05_01T03_46_19_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e minikube.k8s.io/name=no-preload-892672 minikube.k8s.io/primary=true
	I0501 03:46:19.892482   68640 ops.go:34] apiserver oom_adj: -16
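The oom_adj value of -16 logged above is read straight from procfs for the kube-apiserver process. A minimal Go sketch of the same read; discovering the pid (the log uses pgrep) is left out, and the hard-coded pid is a placeholder, not minikube's code.

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // readOOMAdj returns the oom_adj value for a given pid, exactly the file
    // the log's `cat /proc/$(pgrep kube-apiserver)/oom_adj` inspects.
    func readOOMAdj(pid int) (string, error) {
        b, err := os.ReadFile(fmt.Sprintf("/proc/%d/oom_adj", pid))
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(b)), nil
    }

    func main() {
        v, err := readOOMAdj(1234) // hypothetical kube-apiserver pid
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("oom_adj:", v)
    }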
	I0501 03:46:19.892631   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:20.393436   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:20.893412   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:21.393634   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:21.893273   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:22.393031   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:22.893498   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:23.393599   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:23.893024   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:24.393544   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:24.893431   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:25.393290   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:25.892718   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:26.392928   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:26.893101   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:27.393045   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:27.892722   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:28.393102   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:28.892871   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:29.392650   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:29.893034   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:30.393561   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:30.893661   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:31.393235   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:31.892889   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:32.393263   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:32.893427   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:33.046965   68640 kubeadm.go:1107] duration metric: took 13.397277159s to wait for elevateKubeSystemPrivileges
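The run of `kubectl get sa default` calls above is a simple poll: retry roughly every 500ms until the default ServiceAccount exists (the command exits 0) or a deadline passes. A minimal Go sketch of that pattern; the helper itself, its interval and its timeout are assumptions inferred from the log's cadence, not minikube's implementation.

    package main

    import (
        "context"
        "fmt"
        "os/exec"
        "time"
    )

    // waitForDefaultSA polls `kubectl get sa default` until it exits 0 or the
    // context expires.
    func waitForDefaultSA(ctx context.Context, kubeconfig string) error {
        ticker := time.NewTicker(500 * time.Millisecond)
        defer ticker.Stop()
        for {
            cmd := exec.CommandContext(ctx, "kubectl", "get", "sa", "default", "--kubeconfig", kubeconfig)
            if err := cmd.Run(); err == nil {
                return nil // the default ServiceAccount is now visible
            }
            select {
            case <-ctx.Done():
                return fmt.Errorf("default ServiceAccount never appeared: %w", ctx.Err())
            case <-ticker.C:
            }
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
        defer cancel()
        if err := waitForDefaultSA(ctx, "/var/lib/minikube/kubeconfig"); err != nil {
            fmt.Println(err)
        }
    }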
	W0501 03:46:33.047010   68640 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0501 03:46:33.047020   68640 kubeadm.go:393] duration metric: took 5m15.038324633s to StartCluster
	I0501 03:46:33.047042   68640 settings.go:142] acquiring lock: {Name:mkcfe08bcadd45d99c00ed1eaf4e9c226a489fe2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:46:33.047126   68640 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18779-13391/kubeconfig
	I0501 03:46:33.048731   68640 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/kubeconfig: {Name:mk5d1131557f885800877622151c0915337cb23a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:46:33.048988   68640 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.144 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0501 03:46:33.050376   68640 out.go:177] * Verifying Kubernetes components...
	I0501 03:46:33.049030   68640 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0501 03:46:33.049253   68640 config.go:182] Loaded profile config "no-preload-892672": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 03:46:33.051595   68640 addons.go:69] Setting storage-provisioner=true in profile "no-preload-892672"
	I0501 03:46:33.051616   68640 addons.go:69] Setting metrics-server=true in profile "no-preload-892672"
	I0501 03:46:33.051639   68640 addons.go:234] Setting addon storage-provisioner=true in "no-preload-892672"
	I0501 03:46:33.051644   68640 addons.go:234] Setting addon metrics-server=true in "no-preload-892672"
	W0501 03:46:33.051649   68640 addons.go:243] addon storage-provisioner should already be in state true
	W0501 03:46:33.051653   68640 addons.go:243] addon metrics-server should already be in state true
	I0501 03:46:33.051675   68640 host.go:66] Checking if "no-preload-892672" exists ...
	I0501 03:46:33.051680   68640 host.go:66] Checking if "no-preload-892672" exists ...
	I0501 03:46:33.051599   68640 addons.go:69] Setting default-storageclass=true in profile "no-preload-892672"
	I0501 03:46:33.051760   68640 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-892672"
	I0501 03:46:33.051600   68640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:46:33.052016   68640 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:46:33.052047   68640 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:46:33.052064   68640 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:46:33.052095   68640 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:46:33.052110   68640 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:46:33.052135   68640 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:46:33.068515   68640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46407
	I0501 03:46:33.069115   68640 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:46:33.069702   68640 main.go:141] libmachine: Using API Version  1
	I0501 03:46:33.069728   68640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:46:33.070085   68640 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:46:33.070731   68640 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:46:33.070763   68640 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:46:33.072166   68640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46609
	I0501 03:46:33.072179   68640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46109
	I0501 03:46:33.072632   68640 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:46:33.072770   68640 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:46:33.073161   68640 main.go:141] libmachine: Using API Version  1
	I0501 03:46:33.073180   68640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:46:33.073318   68640 main.go:141] libmachine: Using API Version  1
	I0501 03:46:33.073333   68640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:46:33.073467   68640 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:46:33.073893   68640 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:46:33.074056   68640 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:46:33.074065   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetState
	I0501 03:46:33.074092   68640 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:46:33.077976   68640 addons.go:234] Setting addon default-storageclass=true in "no-preload-892672"
	W0501 03:46:33.077997   68640 addons.go:243] addon default-storageclass should already be in state true
	I0501 03:46:33.078110   68640 host.go:66] Checking if "no-preload-892672" exists ...
	I0501 03:46:33.078535   68640 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:46:33.078566   68640 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:46:33.092605   68640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39707
	I0501 03:46:33.092996   68640 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:46:33.093578   68640 main.go:141] libmachine: Using API Version  1
	I0501 03:46:33.093597   68640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:46:33.093602   68640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36671
	I0501 03:46:33.093778   68640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34727
	I0501 03:46:33.093893   68640 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:46:33.094117   68640 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:46:33.094169   68640 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:46:33.094250   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetState
	I0501 03:46:33.094577   68640 main.go:141] libmachine: Using API Version  1
	I0501 03:46:33.094602   68640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:46:33.094986   68640 main.go:141] libmachine: Using API Version  1
	I0501 03:46:33.095004   68640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:46:33.095062   68640 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:46:33.095389   68640 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:46:33.096401   68640 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:46:33.096423   68640 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:46:33.096665   68640 main.go:141] libmachine: (no-preload-892672) Calling .DriverName
	I0501 03:46:33.096678   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetState
	I0501 03:46:33.098465   68640 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 03:46:33.099842   68640 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0501 03:46:33.099861   68640 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0501 03:46:33.099879   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:46:33.098734   68640 main.go:141] libmachine: (no-preload-892672) Calling .DriverName
	I0501 03:46:33.101305   68640 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0501 03:46:33.102491   68640 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0501 03:46:33.102512   68640 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0501 03:46:33.102531   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:46:33.103006   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:46:33.103617   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:46:33.103641   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:46:33.103799   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHPort
	I0501 03:46:33.103977   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:46:33.104143   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHUsername
	I0501 03:46:33.104272   68640 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/no-preload-892672/id_rsa Username:docker}
	I0501 03:46:33.105452   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:46:33.105795   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:46:33.105821   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:46:33.106142   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHPort
	I0501 03:46:33.106290   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:46:33.106392   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHUsername
	I0501 03:46:33.106511   68640 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/no-preload-892672/id_rsa Username:docker}
	I0501 03:46:33.113012   68640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42615
	I0501 03:46:33.113365   68640 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:46:33.113813   68640 main.go:141] libmachine: Using API Version  1
	I0501 03:46:33.113834   68640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:46:33.114127   68640 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:46:33.114304   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetState
	I0501 03:46:33.115731   68640 main.go:141] libmachine: (no-preload-892672) Calling .DriverName
	I0501 03:46:33.115997   68640 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0501 03:46:33.116010   68640 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0501 03:46:33.116023   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:46:33.119272   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:46:33.119644   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:46:33.119661   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:46:33.119845   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHPort
	I0501 03:46:33.120223   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:46:33.120358   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHUsername
	I0501 03:46:33.120449   68640 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/no-preload-892672/id_rsa Username:docker}
	I0501 03:46:33.296711   68640 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 03:46:33.342215   68640 node_ready.go:35] waiting up to 6m0s for node "no-preload-892672" to be "Ready" ...
	I0501 03:46:33.355677   68640 node_ready.go:49] node "no-preload-892672" has status "Ready":"True"
	I0501 03:46:33.355707   68640 node_ready.go:38] duration metric: took 13.392381ms for node "no-preload-892672" to be "Ready" ...
	I0501 03:46:33.355718   68640 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 03:46:33.367706   68640 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-57k52" in "kube-system" namespace to be "Ready" ...
	I0501 03:46:33.413444   68640 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0501 03:46:33.418869   68640 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0501 03:46:33.439284   68640 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0501 03:46:33.439318   68640 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0501 03:46:33.512744   68640 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0501 03:46:33.512768   68640 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0501 03:46:33.594777   68640 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0501 03:46:33.594798   68640 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0501 03:46:33.658506   68640 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0501 03:46:34.013890   68640 main.go:141] libmachine: Making call to close driver server
	I0501 03:46:34.013919   68640 main.go:141] libmachine: (no-preload-892672) Calling .Close
	I0501 03:46:34.014023   68640 main.go:141] libmachine: Making call to close driver server
	I0501 03:46:34.014056   68640 main.go:141] libmachine: (no-preload-892672) Calling .Close
	I0501 03:46:34.014250   68640 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:46:34.014284   68640 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:46:34.014297   68640 main.go:141] libmachine: Making call to close driver server
	I0501 03:46:34.014306   68640 main.go:141] libmachine: (no-preload-892672) Calling .Close
	I0501 03:46:34.014353   68640 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:46:34.014370   68640 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:46:34.014383   68640 main.go:141] libmachine: Making call to close driver server
	I0501 03:46:34.014393   68640 main.go:141] libmachine: (no-preload-892672) Calling .Close
	I0501 03:46:34.014642   68640 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:46:34.014664   68640 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:46:34.016263   68640 main.go:141] libmachine: (no-preload-892672) DBG | Closing plugin on server side
	I0501 03:46:34.016263   68640 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:46:34.016288   68640 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:46:34.031961   68640 main.go:141] libmachine: Making call to close driver server
	I0501 03:46:34.031996   68640 main.go:141] libmachine: (no-preload-892672) Calling .Close
	I0501 03:46:34.032303   68640 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:46:34.032324   68640 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:46:34.260154   68640 main.go:141] libmachine: Making call to close driver server
	I0501 03:46:34.260180   68640 main.go:141] libmachine: (no-preload-892672) Calling .Close
	I0501 03:46:34.260600   68640 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:46:34.260629   68640 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:46:34.260641   68640 main.go:141] libmachine: Making call to close driver server
	I0501 03:46:34.260650   68640 main.go:141] libmachine: (no-preload-892672) Calling .Close
	I0501 03:46:34.260876   68640 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:46:34.260888   68640 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:46:34.260899   68640 addons.go:470] Verifying addon metrics-server=true in "no-preload-892672"
	I0501 03:46:34.262520   68640 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0501 03:46:34.264176   68640 addons.go:505] duration metric: took 1.215147486s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0501 03:46:35.384910   68640 pod_ready.go:102] pod "coredns-7db6d8ff4d-57k52" in "kube-system" namespace has status "Ready":"False"
	I0501 03:46:36.377298   68640 pod_ready.go:92] pod "coredns-7db6d8ff4d-57k52" in "kube-system" namespace has status "Ready":"True"
	I0501 03:46:36.377321   68640 pod_ready.go:81] duration metric: took 3.009581117s for pod "coredns-7db6d8ff4d-57k52" in "kube-system" namespace to be "Ready" ...
	I0501 03:46:36.377331   68640 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-c6lnj" in "kube-system" namespace to be "Ready" ...
	I0501 03:46:36.383022   68640 pod_ready.go:92] pod "coredns-7db6d8ff4d-c6lnj" in "kube-system" namespace has status "Ready":"True"
	I0501 03:46:36.383042   68640 pod_ready.go:81] duration metric: took 5.704691ms for pod "coredns-7db6d8ff4d-c6lnj" in "kube-system" namespace to be "Ready" ...
	I0501 03:46:36.383051   68640 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	I0501 03:46:36.387456   68640 pod_ready.go:92] pod "etcd-no-preload-892672" in "kube-system" namespace has status "Ready":"True"
	I0501 03:46:36.387476   68640 pod_ready.go:81] duration metric: took 4.418883ms for pod "etcd-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	I0501 03:46:36.387485   68640 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	I0501 03:46:36.392348   68640 pod_ready.go:92] pod "kube-apiserver-no-preload-892672" in "kube-system" namespace has status "Ready":"True"
	I0501 03:46:36.392366   68640 pod_ready.go:81] duration metric: took 4.874928ms for pod "kube-apiserver-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	I0501 03:46:36.392375   68640 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	I0501 03:46:36.397155   68640 pod_ready.go:92] pod "kube-controller-manager-no-preload-892672" in "kube-system" namespace has status "Ready":"True"
	I0501 03:46:36.397175   68640 pod_ready.go:81] duration metric: took 4.794583ms for pod "kube-controller-manager-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	I0501 03:46:36.397185   68640 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-czsqz" in "kube-system" namespace to be "Ready" ...
	I0501 03:46:36.774003   68640 pod_ready.go:92] pod "kube-proxy-czsqz" in "kube-system" namespace has status "Ready":"True"
	I0501 03:46:36.774025   68640 pod_ready.go:81] duration metric: took 376.83321ms for pod "kube-proxy-czsqz" in "kube-system" namespace to be "Ready" ...
	I0501 03:46:36.774036   68640 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	I0501 03:46:37.171504   68640 pod_ready.go:92] pod "kube-scheduler-no-preload-892672" in "kube-system" namespace has status "Ready":"True"
	I0501 03:46:37.171526   68640 pod_ready.go:81] duration metric: took 397.484706ms for pod "kube-scheduler-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	I0501 03:46:37.171535   68640 pod_ready.go:38] duration metric: took 3.815806043s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 03:46:37.171549   68640 api_server.go:52] waiting for apiserver process to appear ...
	I0501 03:46:37.171609   68640 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:46:37.189446   68640 api_server.go:72] duration metric: took 4.140414812s to wait for apiserver process to appear ...
	I0501 03:46:37.189473   68640 api_server.go:88] waiting for apiserver healthz status ...
	I0501 03:46:37.189494   68640 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0501 03:46:37.195052   68640 api_server.go:279] https://192.168.39.144:8443/healthz returned 200:
	ok
	I0501 03:46:37.196163   68640 api_server.go:141] control plane version: v1.30.0
	I0501 03:46:37.196183   68640 api_server.go:131] duration metric: took 6.703804ms to wait for apiserver health ...
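The healthz wait above boils down to polling the apiserver's /healthz endpoint until it answers 200 "ok". A minimal Go sketch of an equivalent poll; the URL is taken from the log, while skipping TLS verification is an assumption made only because the sketch has no access to the cluster's CA bundle.

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitForHealthz polls the apiserver /healthz endpoint until it returns "ok"
    // or the timeout elapses.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout:   2 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK && string(body) == "ok" {
                    return nil
                }
            }
            time.Sleep(time.Second)
        }
        return fmt.Errorf("apiserver at %s never became healthy", url)
    }

    func main() {
        if err := waitForHealthz("https://192.168.39.144:8443/healthz", 4*time.Minute); err != nil {
            fmt.Println(err)
        }
    }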
	I0501 03:46:37.196191   68640 system_pods.go:43] waiting for kube-system pods to appear ...
	I0501 03:46:37.375742   68640 system_pods.go:59] 9 kube-system pods found
	I0501 03:46:37.375775   68640 system_pods.go:61] "coredns-7db6d8ff4d-57k52" [f98cb358-71ba-49c5-8213-0f3160c6e38b] Running
	I0501 03:46:37.375784   68640 system_pods.go:61] "coredns-7db6d8ff4d-c6lnj" [f8b8c1f1-7696-43f2-98be-339f99963e7c] Running
	I0501 03:46:37.375789   68640 system_pods.go:61] "etcd-no-preload-892672" [5f92eb1b-6611-4663-95f0-8c071a3a37c9] Running
	I0501 03:46:37.375796   68640 system_pods.go:61] "kube-apiserver-no-preload-892672" [90bcaa82-61b0-49d5-b50c-76288b099683] Running
	I0501 03:46:37.375804   68640 system_pods.go:61] "kube-controller-manager-no-preload-892672" [f80af654-aa81-4cd2-b5ce-4f31f6e49e9f] Running
	I0501 03:46:37.375809   68640 system_pods.go:61] "kube-proxy-czsqz" [4254b019-b6c8-4ff9-a361-c96eaf20dc65] Running
	I0501 03:46:37.375813   68640 system_pods.go:61] "kube-scheduler-no-preload-892672" [6753a5df-86d1-47bf-9514-6b8352acf969] Running
	I0501 03:46:37.375824   68640 system_pods.go:61] "metrics-server-569cc877fc-5m5qf" [a1ec3e6c-fe90-4168-b0ec-54f82f17b46d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0501 03:46:37.375830   68640 system_pods.go:61] "storage-provisioner" [b55b7e8b-4de0-40f8-96ff-bf0b550699d1] Running
	I0501 03:46:37.375841   68640 system_pods.go:74] duration metric: took 179.642731ms to wait for pod list to return data ...
	I0501 03:46:37.375857   68640 default_sa.go:34] waiting for default service account to be created ...
	I0501 03:46:37.572501   68640 default_sa.go:45] found service account: "default"
	I0501 03:46:37.572530   68640 default_sa.go:55] duration metric: took 196.664812ms for default service account to be created ...
	I0501 03:46:37.572542   68640 system_pods.go:116] waiting for k8s-apps to be running ...
	I0501 03:46:37.778012   68640 system_pods.go:86] 9 kube-system pods found
	I0501 03:46:37.778053   68640 system_pods.go:89] "coredns-7db6d8ff4d-57k52" [f98cb358-71ba-49c5-8213-0f3160c6e38b] Running
	I0501 03:46:37.778062   68640 system_pods.go:89] "coredns-7db6d8ff4d-c6lnj" [f8b8c1f1-7696-43f2-98be-339f99963e7c] Running
	I0501 03:46:37.778068   68640 system_pods.go:89] "etcd-no-preload-892672" [5f92eb1b-6611-4663-95f0-8c071a3a37c9] Running
	I0501 03:46:37.778075   68640 system_pods.go:89] "kube-apiserver-no-preload-892672" [90bcaa82-61b0-49d5-b50c-76288b099683] Running
	I0501 03:46:37.778082   68640 system_pods.go:89] "kube-controller-manager-no-preload-892672" [f80af654-aa81-4cd2-b5ce-4f31f6e49e9f] Running
	I0501 03:46:37.778088   68640 system_pods.go:89] "kube-proxy-czsqz" [4254b019-b6c8-4ff9-a361-c96eaf20dc65] Running
	I0501 03:46:37.778094   68640 system_pods.go:89] "kube-scheduler-no-preload-892672" [6753a5df-86d1-47bf-9514-6b8352acf969] Running
	I0501 03:46:37.778104   68640 system_pods.go:89] "metrics-server-569cc877fc-5m5qf" [a1ec3e6c-fe90-4168-b0ec-54f82f17b46d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0501 03:46:37.778112   68640 system_pods.go:89] "storage-provisioner" [b55b7e8b-4de0-40f8-96ff-bf0b550699d1] Running
	I0501 03:46:37.778127   68640 system_pods.go:126] duration metric: took 205.578312ms to wait for k8s-apps to be running ...
	I0501 03:46:37.778148   68640 system_svc.go:44] waiting for kubelet service to be running ....
	I0501 03:46:37.778215   68640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 03:46:37.794660   68640 system_svc.go:56] duration metric: took 16.509214ms WaitForService to wait for kubelet
	I0501 03:46:37.794694   68640 kubeadm.go:576] duration metric: took 4.745668881s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0501 03:46:37.794721   68640 node_conditions.go:102] verifying NodePressure condition ...
	I0501 03:46:37.972621   68640 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 03:46:37.972647   68640 node_conditions.go:123] node cpu capacity is 2
	I0501 03:46:37.972660   68640 node_conditions.go:105] duration metric: took 177.933367ms to run NodePressure ...
	I0501 03:46:37.972676   68640 start.go:240] waiting for startup goroutines ...
	I0501 03:46:37.972684   68640 start.go:245] waiting for cluster config update ...
	I0501 03:46:37.972699   68640 start.go:254] writing updated cluster config ...
	I0501 03:46:37.972951   68640 ssh_runner.go:195] Run: rm -f paused
	I0501 03:46:38.023054   68640 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0501 03:46:38.025098   68640 out.go:177] * Done! kubectl is now configured to use "no-preload-892672" cluster and "default" namespace by default
	I0501 03:46:46.214470   69580 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0501 03:46:46.214695   69580 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0501 03:46:46.214721   69580 kubeadm.go:309] 
	I0501 03:46:46.214770   69580 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0501 03:46:46.214837   69580 kubeadm.go:309] 		timed out waiting for the condition
	I0501 03:46:46.214875   69580 kubeadm.go:309] 
	I0501 03:46:46.214936   69580 kubeadm.go:309] 	This error is likely caused by:
	I0501 03:46:46.214983   69580 kubeadm.go:309] 		- The kubelet is not running
	I0501 03:46:46.215076   69580 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0501 03:46:46.215084   69580 kubeadm.go:309] 
	I0501 03:46:46.215169   69580 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0501 03:46:46.215201   69580 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0501 03:46:46.215233   69580 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0501 03:46:46.215239   69580 kubeadm.go:309] 
	I0501 03:46:46.215380   69580 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0501 03:46:46.215489   69580 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0501 03:46:46.215505   69580 kubeadm.go:309] 
	I0501 03:46:46.215657   69580 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0501 03:46:46.215782   69580 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0501 03:46:46.215882   69580 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0501 03:46:46.215972   69580 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0501 03:46:46.215984   69580 kubeadm.go:309] 
	I0501 03:46:46.217243   69580 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0501 03:46:46.217352   69580 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0501 03:46:46.217426   69580 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
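The failing kubelet-check described above is nothing more than an HTTP GET against the kubelet's local healthz endpoint; "connection refused" means no process is listening on 127.0.0.1:10248. A minimal Go sketch of the equivalent probe, with the endpoint taken from the log and everything else assumed for illustration.

    package main

    import (
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{Timeout: 2 * time.Second}
        resp, err := client.Get("http://localhost:10248/healthz")
        if err != nil {
            // e.g. "dial tcp 127.0.0.1:10248: connect: connection refused"
            fmt.Println("kubelet not healthy:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("kubelet healthz: %d %s\n", resp.StatusCode, body)
    }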
	W0501 03:46:46.217550   69580 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0501 03:46:46.217611   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0501 03:46:47.375634   69580 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.157990231s)
	I0501 03:46:47.375723   69580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 03:46:47.392333   69580 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0501 03:46:47.404983   69580 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0501 03:46:47.405007   69580 kubeadm.go:156] found existing configuration files:
	
	I0501 03:46:47.405054   69580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0501 03:46:47.417437   69580 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0501 03:46:47.417501   69580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0501 03:46:47.429929   69580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0501 03:46:47.441141   69580 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0501 03:46:47.441215   69580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0501 03:46:47.453012   69580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0501 03:46:47.463702   69580 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0501 03:46:47.463759   69580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0501 03:46:47.474783   69580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0501 03:46:47.485793   69580 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0501 03:46:47.485853   69580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0501 03:46:47.497706   69580 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0501 03:46:47.588221   69580 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0501 03:46:47.588340   69580 kubeadm.go:309] [preflight] Running pre-flight checks
	I0501 03:46:47.759631   69580 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0501 03:46:47.759801   69580 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0501 03:46:47.759949   69580 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0501 03:46:47.978077   69580 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0501 03:46:47.980130   69580 out.go:204]   - Generating certificates and keys ...
	I0501 03:46:47.980240   69580 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0501 03:46:47.980323   69580 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0501 03:46:47.980455   69580 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0501 03:46:47.980579   69580 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0501 03:46:47.980679   69580 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0501 03:46:47.980771   69580 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0501 03:46:47.980864   69580 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0501 03:46:47.981256   69580 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0501 03:46:47.981616   69580 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0501 03:46:47.981858   69580 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0501 03:46:47.981907   69580 kubeadm.go:309] [certs] Using the existing "sa" key
	I0501 03:46:47.981991   69580 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0501 03:46:48.100377   69580 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0501 03:46:48.463892   69580 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0501 03:46:48.521991   69580 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0501 03:46:48.735222   69580 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0501 03:46:48.753098   69580 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0501 03:46:48.756950   69580 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0501 03:46:48.757379   69580 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0501 03:46:48.937039   69580 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0501 03:46:48.939065   69580 out.go:204]   - Booting up control plane ...
	I0501 03:46:48.939183   69580 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0501 03:46:48.961380   69580 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0501 03:46:48.962890   69580 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0501 03:46:48.963978   69580 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0501 03:46:48.971754   69580 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0501 03:47:28.974873   69580 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0501 03:47:28.975296   69580 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0501 03:47:28.975545   69580 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0501 03:47:33.976469   69580 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0501 03:47:33.976699   69580 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0501 03:47:43.977443   69580 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0501 03:47:43.977663   69580 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0501 03:48:03.979113   69580 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0501 03:48:03.979409   69580 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0501 03:48:43.982479   69580 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0501 03:48:43.982781   69580 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0501 03:48:43.983363   69580 kubeadm.go:309] 
	I0501 03:48:43.983427   69580 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0501 03:48:43.983484   69580 kubeadm.go:309] 		timed out waiting for the condition
	I0501 03:48:43.983490   69580 kubeadm.go:309] 
	I0501 03:48:43.983520   69580 kubeadm.go:309] 	This error is likely caused by:
	I0501 03:48:43.983547   69580 kubeadm.go:309] 		- The kubelet is not running
	I0501 03:48:43.983633   69580 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0501 03:48:43.983637   69580 kubeadm.go:309] 
	I0501 03:48:43.983721   69580 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0501 03:48:43.983748   69580 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0501 03:48:43.983774   69580 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0501 03:48:43.983778   69580 kubeadm.go:309] 
	I0501 03:48:43.983861   69580 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0501 03:48:43.983928   69580 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0501 03:48:43.983932   69580 kubeadm.go:309] 
	I0501 03:48:43.984023   69580 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0501 03:48:43.984094   69580 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0501 03:48:43.984155   69580 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0501 03:48:43.984212   69580 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0501 03:48:43.984216   69580 kubeadm.go:309] 
	I0501 03:48:43.985577   69580 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0501 03:48:43.985777   69580 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0501 03:48:43.985875   69580 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0501 03:48:43.985971   69580 kubeadm.go:393] duration metric: took 8m0.315126498s to StartCluster
	I0501 03:48:43.986025   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:48:43.986092   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:48:44.038296   69580 cri.go:89] found id: ""
	I0501 03:48:44.038328   69580 logs.go:276] 0 containers: []
	W0501 03:48:44.038339   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:48:44.038346   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:48:44.038426   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:48:44.081855   69580 cri.go:89] found id: ""
	I0501 03:48:44.081891   69580 logs.go:276] 0 containers: []
	W0501 03:48:44.081904   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:48:44.081913   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:48:44.081996   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:48:44.131400   69580 cri.go:89] found id: ""
	I0501 03:48:44.131435   69580 logs.go:276] 0 containers: []
	W0501 03:48:44.131445   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:48:44.131451   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:48:44.131519   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:48:44.178274   69580 cri.go:89] found id: ""
	I0501 03:48:44.178302   69580 logs.go:276] 0 containers: []
	W0501 03:48:44.178310   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:48:44.178316   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:48:44.178376   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:48:44.223087   69580 cri.go:89] found id: ""
	I0501 03:48:44.223115   69580 logs.go:276] 0 containers: []
	W0501 03:48:44.223125   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:48:44.223133   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:48:44.223196   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:48:44.266093   69580 cri.go:89] found id: ""
	I0501 03:48:44.266122   69580 logs.go:276] 0 containers: []
	W0501 03:48:44.266135   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:48:44.266143   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:48:44.266204   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:48:44.307766   69580 cri.go:89] found id: ""
	I0501 03:48:44.307795   69580 logs.go:276] 0 containers: []
	W0501 03:48:44.307806   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:48:44.307813   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:48:44.307876   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:48:44.348548   69580 cri.go:89] found id: ""
	I0501 03:48:44.348576   69580 logs.go:276] 0 containers: []
	W0501 03:48:44.348585   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:48:44.348594   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:48:44.348614   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:48:44.394160   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:48:44.394209   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:48:44.449845   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:48:44.449879   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:48:44.467663   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:48:44.467694   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:48:44.556150   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:48:44.556183   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:48:44.556199   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0501 03:48:44.661110   69580 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0501 03:48:44.661169   69580 out.go:239] * 
	W0501 03:48:44.661226   69580 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0501 03:48:44.661246   69580 out.go:239] * 
	W0501 03:48:44.662064   69580 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0501 03:48:44.665608   69580 out.go:177] 
	W0501 03:48:44.666799   69580 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0501 03:48:44.666851   69580 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0501 03:48:44.666870   69580 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0501 03:48:44.668487   69580 out.go:177] 
	
	
	==> CRI-O <==
	May 01 03:48:46 old-k8s-version-503971 crio[647]: time="2024-05-01 03:48:46.576697965Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714535326576668560,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6438c598-3f5f-4d84-b055-3a7cfbe00d61 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 03:48:46 old-k8s-version-503971 crio[647]: time="2024-05-01 03:48:46.577426165Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=44dc1542-c452-4e97-889f-fe88316990be name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:48:46 old-k8s-version-503971 crio[647]: time="2024-05-01 03:48:46.577509700Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=44dc1542-c452-4e97-889f-fe88316990be name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:48:46 old-k8s-version-503971 crio[647]: time="2024-05-01 03:48:46.577549430Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=44dc1542-c452-4e97-889f-fe88316990be name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:48:46 old-k8s-version-503971 crio[647]: time="2024-05-01 03:48:46.614359048Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=98cf916d-5c5b-4132-af5c-d69745c52267 name=/runtime.v1.RuntimeService/Version
	May 01 03:48:46 old-k8s-version-503971 crio[647]: time="2024-05-01 03:48:46.614454059Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=98cf916d-5c5b-4132-af5c-d69745c52267 name=/runtime.v1.RuntimeService/Version
	May 01 03:48:46 old-k8s-version-503971 crio[647]: time="2024-05-01 03:48:46.623555255Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a79d603e-0214-4847-a269-4d7ac06f2bbf name=/runtime.v1.ImageService/ImageFsInfo
	May 01 03:48:46 old-k8s-version-503971 crio[647]: time="2024-05-01 03:48:46.623965988Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714535326623947070,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a79d603e-0214-4847-a269-4d7ac06f2bbf name=/runtime.v1.ImageService/ImageFsInfo
	May 01 03:48:46 old-k8s-version-503971 crio[647]: time="2024-05-01 03:48:46.624734019Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7c783364-8c79-4e32-8a19-8f689826a832 name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:48:46 old-k8s-version-503971 crio[647]: time="2024-05-01 03:48:46.624816881Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7c783364-8c79-4e32-8a19-8f689826a832 name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:48:46 old-k8s-version-503971 crio[647]: time="2024-05-01 03:48:46.624850762Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=7c783364-8c79-4e32-8a19-8f689826a832 name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:48:46 old-k8s-version-503971 crio[647]: time="2024-05-01 03:48:46.661580183Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ab0debb6-8d83-4351-90dd-258b03fdb23a name=/runtime.v1.RuntimeService/Version
	May 01 03:48:46 old-k8s-version-503971 crio[647]: time="2024-05-01 03:48:46.661676269Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ab0debb6-8d83-4351-90dd-258b03fdb23a name=/runtime.v1.RuntimeService/Version
	May 01 03:48:46 old-k8s-version-503971 crio[647]: time="2024-05-01 03:48:46.663307764Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8582a106-72c0-4566-ab9d-e817bd0f3430 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 03:48:46 old-k8s-version-503971 crio[647]: time="2024-05-01 03:48:46.663680734Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714535326663660060,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8582a106-72c0-4566-ab9d-e817bd0f3430 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 03:48:46 old-k8s-version-503971 crio[647]: time="2024-05-01 03:48:46.664470155Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b91649e1-8992-4afc-90d4-4468a38d5664 name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:48:46 old-k8s-version-503971 crio[647]: time="2024-05-01 03:48:46.664551489Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b91649e1-8992-4afc-90d4-4468a38d5664 name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:48:46 old-k8s-version-503971 crio[647]: time="2024-05-01 03:48:46.664586985Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=b91649e1-8992-4afc-90d4-4468a38d5664 name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:48:46 old-k8s-version-503971 crio[647]: time="2024-05-01 03:48:46.697170313Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=10ace72b-3f3b-4ece-a20c-28c8d0589ec2 name=/runtime.v1.RuntimeService/Version
	May 01 03:48:46 old-k8s-version-503971 crio[647]: time="2024-05-01 03:48:46.697272671Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=10ace72b-3f3b-4ece-a20c-28c8d0589ec2 name=/runtime.v1.RuntimeService/Version
	May 01 03:48:46 old-k8s-version-503971 crio[647]: time="2024-05-01 03:48:46.698637945Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f456069c-725d-41b3-b02e-e751f5ebdbcb name=/runtime.v1.ImageService/ImageFsInfo
	May 01 03:48:46 old-k8s-version-503971 crio[647]: time="2024-05-01 03:48:46.698967321Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714535326698948556,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f456069c-725d-41b3-b02e-e751f5ebdbcb name=/runtime.v1.ImageService/ImageFsInfo
	May 01 03:48:46 old-k8s-version-503971 crio[647]: time="2024-05-01 03:48:46.699474101Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ea4da723-e639-46af-b1aa-e9e944630465 name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:48:46 old-k8s-version-503971 crio[647]: time="2024-05-01 03:48:46.699552623Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ea4da723-e639-46af-b1aa-e9e944630465 name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:48:46 old-k8s-version-503971 crio[647]: time="2024-05-01 03:48:46.699588069Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=ea4da723-e639-46af-b1aa-e9e944630465 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[May 1 03:40] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.055665] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.051850] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.015816] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.551540] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.720618] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.127424] systemd-fstab-generator[566]: Ignoring "noauto" option for root device
	[  +0.059671] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.072683] systemd-fstab-generator[579]: Ignoring "noauto" option for root device
	[  +0.239117] systemd-fstab-generator[592]: Ignoring "noauto" option for root device
	[  +0.162286] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +0.321649] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +7.891142] systemd-fstab-generator[842]: Ignoring "noauto" option for root device
	[  +0.068807] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.309273] systemd-fstab-generator[969]: Ignoring "noauto" option for root device
	[ +12.277413] kauditd_printk_skb: 46 callbacks suppressed
	[May 1 03:44] systemd-fstab-generator[5009]: Ignoring "noauto" option for root device
	[May 1 03:46] systemd-fstab-generator[5290]: Ignoring "noauto" option for root device
	[  +0.082733] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 03:48:46 up 8 min,  0 users,  load average: 0.03, 0.12, 0.08
	Linux old-k8s-version-503971 5.10.207 #1 SMP Tue Apr 30 22:38:43 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	May 01 03:48:45 old-k8s-version-503971 kubelet[5466]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	May 01 03:48:45 old-k8s-version-503971 kubelet[5466]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	May 01 03:48:45 old-k8s-version-503971 kubelet[5466]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000baa6a0, 0xc000b0dea0)
	May 01 03:48:45 old-k8s-version-503971 kubelet[5466]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	May 01 03:48:45 old-k8s-version-503971 kubelet[5466]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	May 01 03:48:45 old-k8s-version-503971 kubelet[5466]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	May 01 03:48:45 old-k8s-version-503971 kubelet[5466]: goroutine 48 [select]:
	May 01 03:48:45 old-k8s-version-503971 kubelet[5466]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*controlBuffer).get(0xc000051450, 0x1, 0x0, 0x0, 0x0, 0x0)
	May 01 03:48:45 old-k8s-version-503971 kubelet[5466]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:395 +0x125
	May 01 03:48:45 old-k8s-version-503971 kubelet[5466]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc000affda0, 0x0, 0x0)
	May 01 03:48:45 old-k8s-version-503971 kubelet[5466]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:513 +0x1d3
	May 01 03:48:45 old-k8s-version-503971 kubelet[5466]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc000ad81c0)
	May 01 03:48:45 old-k8s-version-503971 kubelet[5466]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	May 01 03:48:45 old-k8s-version-503971 kubelet[5466]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	May 01 03:48:45 old-k8s-version-503971 kubelet[5466]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	May 01 03:48:45 old-k8s-version-503971 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	May 01 03:48:45 old-k8s-version-503971 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	May 01 03:48:46 old-k8s-version-503971 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	May 01 03:48:46 old-k8s-version-503971 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	May 01 03:48:46 old-k8s-version-503971 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	May 01 03:48:46 old-k8s-version-503971 kubelet[5543]: I0501 03:48:46.375538    5543 server.go:416] Version: v1.20.0
	May 01 03:48:46 old-k8s-version-503971 kubelet[5543]: I0501 03:48:46.375790    5543 server.go:837] Client rotation is on, will bootstrap in background
	May 01 03:48:46 old-k8s-version-503971 kubelet[5543]: I0501 03:48:46.377930    5543 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	May 01 03:48:46 old-k8s-version-503971 kubelet[5543]: I0501 03:48:46.379571    5543 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	May 01 03:48:46 old-k8s-version-503971 kubelet[5543]: W0501 03:48:46.379703    5543 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-503971 -n old-k8s-version-503971
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-503971 -n old-k8s-version-503971: exit status 2 (246.175297ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-503971" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (726.86s)
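The kubeadm output above shows the kubelet never answered on http://localhost:10248/healthz and no control-plane containers were found, and the run exits with K8S_KUBELET_NOT_RUNNING plus a suggestion to inspect the kubelet journal and retry with the systemd cgroup driver. A minimal reproduction sketch of those same checks, assuming the old-k8s-version-503971 profile from this run and using minikube ssh (this is not part of the test harness, just the commands the log itself points at):

	# check whether the kubelet is running and why it keeps restarting (restart counter is at 20 above)
	minikube -p old-k8s-version-503971 ssh "sudo systemctl status kubelet"
	minikube -p old-k8s-version-503971 ssh "sudo journalctl -xeu kubelet"
	# list any control-plane containers CRI-O managed to start (the post-mortem above found none)
	minikube -p old-k8s-version-503971 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	# retry with the cgroup-driver hint printed in the Suggestion line
	minikube start -p old-k8s-version-503971 --extra-config=kubelet.cgroup-driver=systemd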

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.4s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0501 03:44:56.198960   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/functional-960026/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-277128 -n embed-certs-277128
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-05-01 03:53:36.152544946 +0000 UTC m=+6391.258461236
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-277128 -n embed-certs-277128
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-277128 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-277128 logs -n 25: (2.209599968s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p cert-options-582976                                 | cert-options-582976          | jenkins | v1.33.0 | 01 May 24 03:30 UTC | 01 May 24 03:30 UTC |
	| delete  | -p pause-542495                                        | pause-542495                 | jenkins | v1.33.0 | 01 May 24 03:30 UTC | 01 May 24 03:30 UTC |
	| start   | -p no-preload-892672                                   | no-preload-892672            | jenkins | v1.33.0 | 01 May 24 03:30 UTC | 01 May 24 03:32 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-277128                                  | embed-certs-277128           | jenkins | v1.33.0 | 01 May 24 03:30 UTC | 01 May 24 03:32 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-046243                           | kubernetes-upgrade-046243    | jenkins | v1.33.0 | 01 May 24 03:30 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-046243                           | kubernetes-upgrade-046243    | jenkins | v1.33.0 | 01 May 24 03:30 UTC | 01 May 24 03:32 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-046243                           | kubernetes-upgrade-046243    | jenkins | v1.33.0 | 01 May 24 03:32 UTC | 01 May 24 03:32 UTC |
	| delete  | -p                                                     | disable-driver-mounts-483221 | jenkins | v1.33.0 | 01 May 24 03:32 UTC | 01 May 24 03:32 UTC |
	|         | disable-driver-mounts-483221                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-715118 | jenkins | v1.33.0 | 01 May 24 03:32 UTC | 01 May 24 03:33 UTC |
	|         | default-k8s-diff-port-715118                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-892672             | no-preload-892672            | jenkins | v1.33.0 | 01 May 24 03:32 UTC | 01 May 24 03:32 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-892672                                   | no-preload-892672            | jenkins | v1.33.0 | 01 May 24 03:32 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-277128            | embed-certs-277128           | jenkins | v1.33.0 | 01 May 24 03:32 UTC | 01 May 24 03:32 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-277128                                  | embed-certs-277128           | jenkins | v1.33.0 | 01 May 24 03:32 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-715118  | default-k8s-diff-port-715118 | jenkins | v1.33.0 | 01 May 24 03:33 UTC | 01 May 24 03:33 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-715118 | jenkins | v1.33.0 | 01 May 24 03:33 UTC |                     |
	|         | default-k8s-diff-port-715118                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-892672                  | no-preload-892672            | jenkins | v1.33.0 | 01 May 24 03:34 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-277128                 | embed-certs-277128           | jenkins | v1.33.0 | 01 May 24 03:34 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-892672                                   | no-preload-892672            | jenkins | v1.33.0 | 01 May 24 03:34 UTC | 01 May 24 03:46 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-503971        | old-k8s-version-503971       | jenkins | v1.33.0 | 01 May 24 03:34 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-277128                                  | embed-certs-277128           | jenkins | v1.33.0 | 01 May 24 03:34 UTC | 01 May 24 03:44 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-715118       | default-k8s-diff-port-715118 | jenkins | v1.33.0 | 01 May 24 03:35 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-715118 | jenkins | v1.33.0 | 01 May 24 03:35 UTC | 01 May 24 03:45 UTC |
	|         | default-k8s-diff-port-715118                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-503971                              | old-k8s-version-503971       | jenkins | v1.33.0 | 01 May 24 03:36 UTC | 01 May 24 03:36 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-503971             | old-k8s-version-503971       | jenkins | v1.33.0 | 01 May 24 03:36 UTC | 01 May 24 03:36 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-503971                              | old-k8s-version-503971       | jenkins | v1.33.0 | 01 May 24 03:36 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/01 03:36:41
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0501 03:36:41.470152   69580 out.go:291] Setting OutFile to fd 1 ...
	I0501 03:36:41.470256   69580 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 03:36:41.470264   69580 out.go:304] Setting ErrFile to fd 2...
	I0501 03:36:41.470268   69580 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 03:36:41.470484   69580 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18779-13391/.minikube/bin
	I0501 03:36:41.470989   69580 out.go:298] Setting JSON to false
	I0501 03:36:41.471856   69580 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8345,"bootTime":1714526257,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0501 03:36:41.471911   69580 start.go:139] virtualization: kvm guest
	I0501 03:36:41.473901   69580 out.go:177] * [old-k8s-version-503971] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0501 03:36:41.474994   69580 out.go:177]   - MINIKUBE_LOCATION=18779
	I0501 03:36:41.475003   69580 notify.go:220] Checking for updates...
	I0501 03:36:41.477150   69580 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0501 03:36:41.478394   69580 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18779-13391/kubeconfig
	I0501 03:36:41.479462   69580 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18779-13391/.minikube
	I0501 03:36:41.480507   69580 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0501 03:36:41.481543   69580 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0501 03:36:41.482907   69580 config.go:182] Loaded profile config "old-k8s-version-503971": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0501 03:36:41.483279   69580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:36:41.483311   69580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:36:41.497758   69580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40255
	I0501 03:36:41.498090   69580 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:36:41.498591   69580 main.go:141] libmachine: Using API Version  1
	I0501 03:36:41.498616   69580 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:36:41.498891   69580 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:36:41.499055   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .DriverName
	I0501 03:36:41.500675   69580 out.go:177] * Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	I0501 03:36:41.501716   69580 driver.go:392] Setting default libvirt URI to qemu:///system
	I0501 03:36:41.501974   69580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:36:41.502024   69580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:36:41.515991   69580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40591
	I0501 03:36:41.516392   69580 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:36:41.516826   69580 main.go:141] libmachine: Using API Version  1
	I0501 03:36:41.516846   69580 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:36:41.517120   69580 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:36:41.517281   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .DriverName
	I0501 03:36:41.551130   69580 out.go:177] * Using the kvm2 driver based on existing profile
	I0501 03:36:41.552244   69580 start.go:297] selected driver: kvm2
	I0501 03:36:41.552253   69580 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-503971 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-503971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.104 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 03:36:41.552369   69580 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0501 03:36:41.553004   69580 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0501 03:36:41.553071   69580 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18779-13391/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0501 03:36:41.567362   69580 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0501 03:36:41.567736   69580 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0501 03:36:41.567815   69580 cni.go:84] Creating CNI manager for ""
	I0501 03:36:41.567832   69580 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0501 03:36:41.567881   69580 start.go:340] cluster config:
	{Name:old-k8s-version-503971 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-503971 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.104 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 03:36:41.568012   69580 iso.go:125] acquiring lock: {Name:mkcd2496aadb29931e179193d707c731bfda98b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0501 03:36:41.569791   69580 out.go:177] * Starting "old-k8s-version-503971" primary control-plane node in "old-k8s-version-503971" cluster
	I0501 03:36:38.886755   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:36:41.571352   69580 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0501 03:36:41.571389   69580 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0501 03:36:41.571408   69580 cache.go:56] Caching tarball of preloaded images
	I0501 03:36:41.571478   69580 preload.go:173] Found /home/jenkins/minikube-integration/18779-13391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0501 03:36:41.571490   69580 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0501 03:36:41.571588   69580 profile.go:143] Saving config to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/config.json ...
	I0501 03:36:41.571775   69580 start.go:360] acquireMachinesLock for old-k8s-version-503971: {Name:mkc4548e5afe1d4b0a833f6c522103562b0cefff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
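(Annotation) The preload.go/cache.go lines above show the pattern followed at this step: look for the v1.20.0 CRI-O preload tarball in the local cache and skip the download when it already exists. A minimal Go sketch of that check, assuming a hypothetical ensurePreload helper and download callback (names and the generalized path are illustrative, not minikube's actual API):

	package main

	import (
		"fmt"
		"os"
	)

	// ensurePreload is an illustrative stand-in for the cache check logged
	// above: if the preload tarball already exists locally, report it and
	// skip the download; otherwise fall back to the supplied download func.
	func ensurePreload(path string, download func(string) error) error {
		if _, err := os.Stat(path); err == nil {
			fmt.Println("found local preload, skipping download:", path)
			return nil
		}
		return download(path)
	}

	func main() {
		// Path generalized from the log's cache location.
		tarball := os.ExpandEnv("$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4")
		_ = ensurePreload(tarball, func(p string) error {
			fmt.Println("would download:", p) // placeholder; no real download here
			return nil
		})
	}
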
	I0501 03:36:44.966689   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:36:48.038769   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:36:54.118675   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:36:57.190716   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:37:03.270653   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:37:06.342693   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:37:12.422726   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:37:15.494702   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:37:21.574646   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:37:24.646711   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:37:30.726724   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:37:33.798628   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:37:39.878657   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:37:42.950647   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:37:49.030699   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:37:52.102665   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:37:58.182647   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:38:01.254620   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:38:07.334707   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:38:10.406670   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:38:16.486684   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:38:19.558714   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:38:25.638642   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:38:28.710687   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:38:34.790659   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:38:37.862651   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:38:43.942639   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:38:47.014729   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:38:53.094674   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:38:56.166684   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:39:02.246662   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:39:05.318633   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:39:11.398705   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:39:14.470640   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:39:20.550642   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:39:23.622701   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:39:32.707273   68864 start.go:364] duration metric: took 4m38.787656406s to acquireMachinesLock for "embed-certs-277128"
	I0501 03:39:32.707327   68864 start.go:96] Skipping create...Using existing machine configuration
	I0501 03:39:32.707336   68864 fix.go:54] fixHost starting: 
	I0501 03:39:32.707655   68864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:39:32.707697   68864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:39:32.722689   68864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35015
	I0501 03:39:32.723061   68864 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:39:32.723536   68864 main.go:141] libmachine: Using API Version  1
	I0501 03:39:32.723557   68864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:39:32.723848   68864 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:39:32.724041   68864 main.go:141] libmachine: (embed-certs-277128) Calling .DriverName
	I0501 03:39:32.724164   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetState
	I0501 03:39:32.725542   68864 fix.go:112] recreateIfNeeded on embed-certs-277128: state=Stopped err=<nil>
	I0501 03:39:32.725569   68864 main.go:141] libmachine: (embed-certs-277128) Calling .DriverName
	W0501 03:39:32.725714   68864 fix.go:138] unexpected machine state, will restart: <nil>
	I0501 03:39:32.727403   68864 out.go:177] * Restarting existing kvm2 VM for "embed-certs-277128" ...
	I0501 03:39:29.702654   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:39:32.704906   68640 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0501 03:39:32.704940   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetMachineName
	I0501 03:39:32.705254   68640 buildroot.go:166] provisioning hostname "no-preload-892672"
	I0501 03:39:32.705278   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetMachineName
	I0501 03:39:32.705499   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:39:32.707128   68640 machine.go:97] duration metric: took 4m44.649178925s to provisionDockerMachine
	I0501 03:39:32.707171   68640 fix.go:56] duration metric: took 4m44.67002247s for fixHost
	I0501 03:39:32.707178   68640 start.go:83] releasing machines lock for "no-preload-892672", held for 4m44.670048235s
	W0501 03:39:32.707201   68640 start.go:713] error starting host: provision: host is not running
	W0501 03:39:32.707293   68640 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0501 03:39:32.707305   68640 start.go:728] Will try again in 5 seconds ...
	I0501 03:39:32.728616   68864 main.go:141] libmachine: (embed-certs-277128) Calling .Start
	I0501 03:39:32.728768   68864 main.go:141] libmachine: (embed-certs-277128) Ensuring networks are active...
	I0501 03:39:32.729434   68864 main.go:141] libmachine: (embed-certs-277128) Ensuring network default is active
	I0501 03:39:32.729789   68864 main.go:141] libmachine: (embed-certs-277128) Ensuring network mk-embed-certs-277128 is active
	I0501 03:39:32.730218   68864 main.go:141] libmachine: (embed-certs-277128) Getting domain xml...
	I0501 03:39:32.730972   68864 main.go:141] libmachine: (embed-certs-277128) Creating domain...
	I0501 03:39:37.711605   68640 start.go:360] acquireMachinesLock for no-preload-892672: {Name:mkc4548e5afe1d4b0a833f6c522103562b0cefff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0501 03:39:33.914124   68864 main.go:141] libmachine: (embed-certs-277128) Waiting to get IP...
	I0501 03:39:33.915022   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:33.915411   68864 main.go:141] libmachine: (embed-certs-277128) DBG | unable to find current IP address of domain embed-certs-277128 in network mk-embed-certs-277128
	I0501 03:39:33.915473   68864 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:39:33.915391   70171 retry.go:31] will retry after 278.418743ms: waiting for machine to come up
	I0501 03:39:34.195933   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:34.196470   68864 main.go:141] libmachine: (embed-certs-277128) DBG | unable to find current IP address of domain embed-certs-277128 in network mk-embed-certs-277128
	I0501 03:39:34.196497   68864 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:39:34.196417   70171 retry.go:31] will retry after 375.593174ms: waiting for machine to come up
	I0501 03:39:34.574178   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:34.574666   68864 main.go:141] libmachine: (embed-certs-277128) DBG | unable to find current IP address of domain embed-certs-277128 in network mk-embed-certs-277128
	I0501 03:39:34.574689   68864 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:39:34.574617   70171 retry.go:31] will retry after 377.853045ms: waiting for machine to come up
	I0501 03:39:34.954022   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:34.954436   68864 main.go:141] libmachine: (embed-certs-277128) DBG | unable to find current IP address of domain embed-certs-277128 in network mk-embed-certs-277128
	I0501 03:39:34.954465   68864 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:39:34.954375   70171 retry.go:31] will retry after 374.024178ms: waiting for machine to come up
	I0501 03:39:35.330087   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:35.330514   68864 main.go:141] libmachine: (embed-certs-277128) DBG | unable to find current IP address of domain embed-certs-277128 in network mk-embed-certs-277128
	I0501 03:39:35.330545   68864 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:39:35.330478   70171 retry.go:31] will retry after 488.296666ms: waiting for machine to come up
	I0501 03:39:35.820177   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:35.820664   68864 main.go:141] libmachine: (embed-certs-277128) DBG | unable to find current IP address of domain embed-certs-277128 in network mk-embed-certs-277128
	I0501 03:39:35.820692   68864 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:39:35.820629   70171 retry.go:31] will retry after 665.825717ms: waiting for machine to come up
	I0501 03:39:36.488492   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:36.488910   68864 main.go:141] libmachine: (embed-certs-277128) DBG | unable to find current IP address of domain embed-certs-277128 in network mk-embed-certs-277128
	I0501 03:39:36.488941   68864 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:39:36.488860   70171 retry.go:31] will retry after 1.04269192s: waiting for machine to come up
	I0501 03:39:37.532622   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:37.533006   68864 main.go:141] libmachine: (embed-certs-277128) DBG | unable to find current IP address of domain embed-certs-277128 in network mk-embed-certs-277128
	I0501 03:39:37.533032   68864 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:39:37.532968   70171 retry.go:31] will retry after 1.348239565s: waiting for machine to come up
	I0501 03:39:38.882895   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:38.883364   68864 main.go:141] libmachine: (embed-certs-277128) DBG | unable to find current IP address of domain embed-certs-277128 in network mk-embed-certs-277128
	I0501 03:39:38.883396   68864 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:39:38.883301   70171 retry.go:31] will retry after 1.718495999s: waiting for machine to come up
	I0501 03:39:40.604329   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:40.604760   68864 main.go:141] libmachine: (embed-certs-277128) DBG | unable to find current IP address of domain embed-certs-277128 in network mk-embed-certs-277128
	I0501 03:39:40.604791   68864 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:39:40.604703   70171 retry.go:31] will retry after 2.237478005s: waiting for machine to come up
	I0501 03:39:42.843398   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:42.843920   68864 main.go:141] libmachine: (embed-certs-277128) DBG | unable to find current IP address of domain embed-certs-277128 in network mk-embed-certs-277128
	I0501 03:39:42.843949   68864 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:39:42.843869   70171 retry.go:31] will retry after 2.618059388s: waiting for machine to come up
	I0501 03:39:45.465576   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:45.465968   68864 main.go:141] libmachine: (embed-certs-277128) DBG | unable to find current IP address of domain embed-certs-277128 in network mk-embed-certs-277128
	I0501 03:39:45.465992   68864 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:39:45.465928   70171 retry.go:31] will retry after 2.895120972s: waiting for machine to come up
	I0501 03:39:48.362239   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:48.362651   68864 main.go:141] libmachine: (embed-certs-277128) DBG | unable to find current IP address of domain embed-certs-277128 in network mk-embed-certs-277128
	I0501 03:39:48.362683   68864 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:39:48.362617   70171 retry.go:31] will retry after 2.857441112s: waiting for machine to come up
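(Annotation) The retry.go lines above trace a poll-with-growing-delay loop: libmachine repeatedly checks for the VM's DHCP lease and sleeps a progressively longer, jittered interval (roughly 0.28s up to ~2.9s here) until an IP appears or a deadline passes. A minimal Go sketch of that pattern, with lookupIP standing in for the real lease query; the function name, delay growth, and timeout are assumptions for illustration only:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// waitForIP polls lookupIP until it returns an address or the timeout
	// elapses, sleeping a jittered, growing delay between attempts. It
	// mirrors the "will retry after ..." progression in the log; it is a
	// sketch, not minikube's retry implementation.
	func waitForIP(lookupIP func() (string, error), timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		base := 250 * time.Millisecond
		for attempt := 1; time.Now().Before(deadline); attempt++ {
			if ip, err := lookupIP(); err == nil {
				return ip, nil
			}
			sleep := base + time.Duration(rand.Int63n(int64(base))) // add jitter
			fmt.Printf("attempt %d: no IP yet, will retry after %s\n", attempt, sleep)
			time.Sleep(sleep)
			if base < 2*time.Second { // grow the base delay, capped near 2s
				base += base / 2
			}
		}
		return "", errors.New("timed out waiting for machine to come up")
	}

	func main() {
		start := time.Now()
		ip, err := waitForIP(func() (string, error) {
			if time.Since(start) < 3*time.Second { // pretend the lease takes ~3s
				return "", errors.New("no lease yet")
			}
			return "192.168.50.218", nil
		}, 30*time.Second)
		fmt.Println(ip, err)
	}
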
	I0501 03:39:52.791989   69237 start.go:364] duration metric: took 4m2.036138912s to acquireMachinesLock for "default-k8s-diff-port-715118"
	I0501 03:39:52.792059   69237 start.go:96] Skipping create...Using existing machine configuration
	I0501 03:39:52.792071   69237 fix.go:54] fixHost starting: 
	I0501 03:39:52.792454   69237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:39:52.792492   69237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:39:52.809707   69237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46419
	I0501 03:39:52.810075   69237 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:39:52.810544   69237 main.go:141] libmachine: Using API Version  1
	I0501 03:39:52.810564   69237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:39:52.810881   69237 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:39:52.811067   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .DriverName
	I0501 03:39:52.811217   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetState
	I0501 03:39:52.812787   69237 fix.go:112] recreateIfNeeded on default-k8s-diff-port-715118: state=Stopped err=<nil>
	I0501 03:39:52.812820   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .DriverName
	W0501 03:39:52.812969   69237 fix.go:138] unexpected machine state, will restart: <nil>
	I0501 03:39:52.815136   69237 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-715118" ...
	I0501 03:39:51.223450   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.223938   68864 main.go:141] libmachine: (embed-certs-277128) Found IP for machine: 192.168.50.218
	I0501 03:39:51.223965   68864 main.go:141] libmachine: (embed-certs-277128) Reserving static IP address...
	I0501 03:39:51.223982   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has current primary IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.224375   68864 main.go:141] libmachine: (embed-certs-277128) Reserved static IP address: 192.168.50.218
	I0501 03:39:51.224440   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "embed-certs-277128", mac: "52:54:00:96:11:7d", ip: "192.168.50.218"} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:51.224454   68864 main.go:141] libmachine: (embed-certs-277128) Waiting for SSH to be available...
	I0501 03:39:51.224491   68864 main.go:141] libmachine: (embed-certs-277128) DBG | skip adding static IP to network mk-embed-certs-277128 - found existing host DHCP lease matching {name: "embed-certs-277128", mac: "52:54:00:96:11:7d", ip: "192.168.50.218"}
	I0501 03:39:51.224507   68864 main.go:141] libmachine: (embed-certs-277128) DBG | Getting to WaitForSSH function...
	I0501 03:39:51.226437   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.226733   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:51.226764   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.226863   68864 main.go:141] libmachine: (embed-certs-277128) DBG | Using SSH client type: external
	I0501 03:39:51.226886   68864 main.go:141] libmachine: (embed-certs-277128) DBG | Using SSH private key: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/embed-certs-277128/id_rsa (-rw-------)
	I0501 03:39:51.226917   68864 main.go:141] libmachine: (embed-certs-277128) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.218 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18779-13391/.minikube/machines/embed-certs-277128/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0501 03:39:51.226930   68864 main.go:141] libmachine: (embed-certs-277128) DBG | About to run SSH command:
	I0501 03:39:51.226941   68864 main.go:141] libmachine: (embed-certs-277128) DBG | exit 0
	I0501 03:39:51.354225   68864 main.go:141] libmachine: (embed-certs-277128) DBG | SSH cmd err, output: <nil>: 
	I0501 03:39:51.354641   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetConfigRaw
	I0501 03:39:51.355337   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetIP
	I0501 03:39:51.357934   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.358265   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:51.358302   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.358584   68864 profile.go:143] Saving config to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/embed-certs-277128/config.json ...
	I0501 03:39:51.358753   68864 machine.go:94] provisionDockerMachine start ...
	I0501 03:39:51.358771   68864 main.go:141] libmachine: (embed-certs-277128) Calling .DriverName
	I0501 03:39:51.358940   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHHostname
	I0501 03:39:51.361202   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.361564   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:51.361600   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.361711   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHPort
	I0501 03:39:51.361884   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:39:51.362054   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:39:51.362170   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHUsername
	I0501 03:39:51.362344   68864 main.go:141] libmachine: Using SSH client type: native
	I0501 03:39:51.362572   68864 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.218 22 <nil> <nil>}
	I0501 03:39:51.362586   68864 main.go:141] libmachine: About to run SSH command:
	hostname
	I0501 03:39:51.467448   68864 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0501 03:39:51.467480   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetMachineName
	I0501 03:39:51.467740   68864 buildroot.go:166] provisioning hostname "embed-certs-277128"
	I0501 03:39:51.467772   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetMachineName
	I0501 03:39:51.467953   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHHostname
	I0501 03:39:51.470653   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.471022   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:51.471044   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.471159   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHPort
	I0501 03:39:51.471341   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:39:51.471482   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:39:51.471590   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHUsername
	I0501 03:39:51.471729   68864 main.go:141] libmachine: Using SSH client type: native
	I0501 03:39:51.471913   68864 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.218 22 <nil> <nil>}
	I0501 03:39:51.471934   68864 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-277128 && echo "embed-certs-277128" | sudo tee /etc/hostname
	I0501 03:39:51.594372   68864 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-277128
	
	I0501 03:39:51.594422   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHHostname
	I0501 03:39:51.596978   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.597307   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:51.597334   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.597495   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHPort
	I0501 03:39:51.597710   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:39:51.597865   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:39:51.597971   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHUsername
	I0501 03:39:51.598097   68864 main.go:141] libmachine: Using SSH client type: native
	I0501 03:39:51.598250   68864 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.218 22 <nil> <nil>}
	I0501 03:39:51.598271   68864 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-277128' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-277128/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-277128' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0501 03:39:51.712791   68864 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0501 03:39:51.712825   68864 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18779-13391/.minikube CaCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18779-13391/.minikube}
	I0501 03:39:51.712850   68864 buildroot.go:174] setting up certificates
	I0501 03:39:51.712860   68864 provision.go:84] configureAuth start
	I0501 03:39:51.712869   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetMachineName
	I0501 03:39:51.713158   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetIP
	I0501 03:39:51.715577   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.715885   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:51.715918   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.716040   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHHostname
	I0501 03:39:51.718057   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.718342   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:51.718367   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.718550   68864 provision.go:143] copyHostCerts
	I0501 03:39:51.718612   68864 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem, removing ...
	I0501 03:39:51.718622   68864 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem
	I0501 03:39:51.718685   68864 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem (1078 bytes)
	I0501 03:39:51.718790   68864 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem, removing ...
	I0501 03:39:51.718798   68864 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem
	I0501 03:39:51.718823   68864 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem (1123 bytes)
	I0501 03:39:51.718881   68864 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem, removing ...
	I0501 03:39:51.718888   68864 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem
	I0501 03:39:51.718907   68864 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem (1675 bytes)
	I0501 03:39:51.718957   68864 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem org=jenkins.embed-certs-277128 san=[127.0.0.1 192.168.50.218 embed-certs-277128 localhost minikube]
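(Annotation) The provision.go "generating server cert" line above lists the SANs baked into the machine's server certificate (127.0.0.1, 192.168.50.218, embed-certs-277128, localhost, minikube). The following Go sketch issues a certificate with the same SANs using the standard crypto/x509 package; it self-signs for brevity, whereas minikube signs with its cluster CA (ca.pem/ca-key.pem), and the 26280h lifetime simply echoes the CertExpiration value in the config dump above:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Key and certificate template carrying the SANs from the log line.
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-277128"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration:26280h0m0s
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"embed-certs-277128", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.218")},
		}
		// Self-signed here; minikube would sign with its CA key instead.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
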
	I0501 03:39:52.100402   68864 provision.go:177] copyRemoteCerts
	I0501 03:39:52.100459   68864 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0501 03:39:52.100494   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHHostname
	I0501 03:39:52.103133   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:52.103363   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:52.103391   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:52.103522   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHPort
	I0501 03:39:52.103694   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:39:52.103790   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHUsername
	I0501 03:39:52.103874   68864 sshutil.go:53] new ssh client: &{IP:192.168.50.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/embed-certs-277128/id_rsa Username:docker}
	I0501 03:39:52.186017   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0501 03:39:52.211959   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0501 03:39:52.237362   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0501 03:39:52.264036   68864 provision.go:87] duration metric: took 551.163591ms to configureAuth
	I0501 03:39:52.264060   68864 buildroot.go:189] setting minikube options for container-runtime
	I0501 03:39:52.264220   68864 config.go:182] Loaded profile config "embed-certs-277128": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 03:39:52.264290   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHHostname
	I0501 03:39:52.266809   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:52.267117   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:52.267140   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:52.267336   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHPort
	I0501 03:39:52.267529   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:39:52.267713   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:39:52.267863   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHUsername
	I0501 03:39:52.268096   68864 main.go:141] libmachine: Using SSH client type: native
	I0501 03:39:52.268273   68864 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.218 22 <nil> <nil>}
	I0501 03:39:52.268290   68864 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0501 03:39:52.543539   68864 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0501 03:39:52.543569   68864 machine.go:97] duration metric: took 1.184800934s to provisionDockerMachine
	I0501 03:39:52.543585   68864 start.go:293] postStartSetup for "embed-certs-277128" (driver="kvm2")
	I0501 03:39:52.543600   68864 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0501 03:39:52.543621   68864 main.go:141] libmachine: (embed-certs-277128) Calling .DriverName
	I0501 03:39:52.543974   68864 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0501 03:39:52.544007   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHHostname
	I0501 03:39:52.546566   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:52.546918   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:52.546955   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:52.547108   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHPort
	I0501 03:39:52.547310   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:39:52.547480   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHUsername
	I0501 03:39:52.547622   68864 sshutil.go:53] new ssh client: &{IP:192.168.50.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/embed-certs-277128/id_rsa Username:docker}
	I0501 03:39:52.636313   68864 ssh_runner.go:195] Run: cat /etc/os-release
	I0501 03:39:52.641408   68864 info.go:137] Remote host: Buildroot 2023.02.9
	I0501 03:39:52.641435   68864 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13391/.minikube/addons for local assets ...
	I0501 03:39:52.641514   68864 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13391/.minikube/files for local assets ...
	I0501 03:39:52.641598   68864 filesync.go:149] local asset: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem -> 207242.pem in /etc/ssl/certs
	I0501 03:39:52.641708   68864 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0501 03:39:52.653421   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem --> /etc/ssl/certs/207242.pem (1708 bytes)
	I0501 03:39:52.681796   68864 start.go:296] duration metric: took 138.197388ms for postStartSetup
	I0501 03:39:52.681840   68864 fix.go:56] duration metric: took 19.974504059s for fixHost
	I0501 03:39:52.681866   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHHostname
	I0501 03:39:52.684189   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:52.684447   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:52.684475   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:52.684691   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHPort
	I0501 03:39:52.684901   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:39:52.685077   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:39:52.685226   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHUsername
	I0501 03:39:52.685393   68864 main.go:141] libmachine: Using SSH client type: native
	I0501 03:39:52.685556   68864 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.218 22 <nil> <nil>}
	I0501 03:39:52.685568   68864 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0501 03:39:52.791802   68864 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714534792.758254619
	
	I0501 03:39:52.791830   68864 fix.go:216] guest clock: 1714534792.758254619
	I0501 03:39:52.791841   68864 fix.go:229] Guest: 2024-05-01 03:39:52.758254619 +0000 UTC Remote: 2024-05-01 03:39:52.681844878 +0000 UTC m=+298.906990848 (delta=76.409741ms)
	I0501 03:39:52.791886   68864 fix.go:200] guest clock delta is within tolerance: 76.409741ms
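The clock check above asks the guest for its wall-clock time with sub-second precision and compares it with the host-side timestamp recorded when fixHost finished; the ~76 ms delta is inside minikube's tolerance, so no resync is needed. A rough sketch of the guest side of the probe (the comparison itself happens in minikube's Go code, not in shell):

    # on the guest (the command minikube runs over SSH)
    date +%s.%N            # e.g. 1714534792.758254619, as in the log
    # the host subtracts its own timestamp from this value to obtain the reported delta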
	I0501 03:39:52.791892   68864 start.go:83] releasing machines lock for "embed-certs-277128", held for 20.08458366s
	I0501 03:39:52.791918   68864 main.go:141] libmachine: (embed-certs-277128) Calling .DriverName
	I0501 03:39:52.792188   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetIP
	I0501 03:39:52.794820   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:52.795217   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:52.795256   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:52.795427   68864 main.go:141] libmachine: (embed-certs-277128) Calling .DriverName
	I0501 03:39:52.795971   68864 main.go:141] libmachine: (embed-certs-277128) Calling .DriverName
	I0501 03:39:52.796142   68864 main.go:141] libmachine: (embed-certs-277128) Calling .DriverName
	I0501 03:39:52.796235   68864 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0501 03:39:52.796285   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHHostname
	I0501 03:39:52.796324   68864 ssh_runner.go:195] Run: cat /version.json
	I0501 03:39:52.796346   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHHostname
	I0501 03:39:52.799128   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:52.799153   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:52.799536   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:52.799570   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:52.799617   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:52.799647   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:52.799779   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHPort
	I0501 03:39:52.799878   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHPort
	I0501 03:39:52.799961   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:39:52.800048   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:39:52.800117   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHUsername
	I0501 03:39:52.800189   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHUsername
	I0501 03:39:52.800243   68864 sshutil.go:53] new ssh client: &{IP:192.168.50.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/embed-certs-277128/id_rsa Username:docker}
	I0501 03:39:52.800299   68864 sshutil.go:53] new ssh client: &{IP:192.168.50.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/embed-certs-277128/id_rsa Username:docker}
	I0501 03:39:52.901147   68864 ssh_runner.go:195] Run: systemctl --version
	I0501 03:39:52.908399   68864 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0501 03:39:53.065012   68864 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0501 03:39:53.073635   68864 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0501 03:39:53.073724   68864 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0501 03:39:53.096146   68864 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0501 03:39:53.096179   68864 start.go:494] detecting cgroup driver to use...
	I0501 03:39:53.096253   68864 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0501 03:39:53.118525   68864 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 03:39:53.136238   68864 docker.go:217] disabling cri-docker service (if available) ...
	I0501 03:39:53.136301   68864 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0501 03:39:53.152535   68864 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0501 03:39:53.171415   68864 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0501 03:39:53.297831   68864 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0501 03:39:53.479469   68864 docker.go:233] disabling docker service ...
	I0501 03:39:53.479552   68864 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0501 03:39:53.497271   68864 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0501 03:39:53.512645   68864 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0501 03:39:53.658448   68864 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0501 03:39:53.787528   68864 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0501 03:39:53.804078   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 03:39:53.836146   68864 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0501 03:39:53.836206   68864 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:39:53.853846   68864 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0501 03:39:53.853915   68864 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:39:53.866319   68864 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:39:53.878410   68864 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:39:53.890304   68864 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0501 03:39:53.903821   68864 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:39:53.916750   68864 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:39:53.938933   68864 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
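Taken together, the sed edits above leave the CRI-O drop-in with roughly the following settings. This is an illustrative reconstruction showing only the keys those commands touch, not a dump of the real file on the node:

    $ grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged' /etc/crio/crio.conf.d/02-crio.conf
    pause_image = "registry.k8s.io/pause:3.9"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]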
	I0501 03:39:53.952103   68864 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0501 03:39:53.964833   68864 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0501 03:39:53.964893   68864 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0501 03:39:53.983039   68864 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0501 03:39:53.995830   68864 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:39:54.156748   68864 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0501 03:39:54.306973   68864 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0501 03:39:54.307051   68864 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0501 03:39:54.313515   68864 start.go:562] Will wait 60s for crictl version
	I0501 03:39:54.313569   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:39:54.317943   68864 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0501 03:39:54.356360   68864 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
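The version probe above is simply crictl talking to the socket configured in /etc/crictl.yaml; run by hand on the guest it would look like:

    $ sudo /usr/bin/crictl version
    Version:  0.1.0
    RuntimeName:  cri-o
    RuntimeVersion:  1.29.1
    RuntimeApiVersion:  v1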
	I0501 03:39:54.356437   68864 ssh_runner.go:195] Run: crio --version
	I0501 03:39:54.391717   68864 ssh_runner.go:195] Run: crio --version
	I0501 03:39:54.428403   68864 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0501 03:39:52.816428   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .Start
	I0501 03:39:52.816592   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Ensuring networks are active...
	I0501 03:39:52.817317   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Ensuring network default is active
	I0501 03:39:52.817668   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Ensuring network mk-default-k8s-diff-port-715118 is active
	I0501 03:39:52.818040   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Getting domain xml...
	I0501 03:39:52.818777   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Creating domain...
	I0501 03:39:54.069624   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting to get IP...
	I0501 03:39:54.070436   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:39:54.070855   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | unable to find current IP address of domain default-k8s-diff-port-715118 in network mk-default-k8s-diff-port-715118
	I0501 03:39:54.070891   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | I0501 03:39:54.070820   70304 retry.go:31] will retry after 260.072623ms: waiting for machine to come up
	I0501 03:39:54.332646   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:39:54.333077   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | unable to find current IP address of domain default-k8s-diff-port-715118 in network mk-default-k8s-diff-port-715118
	I0501 03:39:54.333115   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | I0501 03:39:54.333047   70304 retry.go:31] will retry after 270.897102ms: waiting for machine to come up
	I0501 03:39:54.605705   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:39:54.606102   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | unable to find current IP address of domain default-k8s-diff-port-715118 in network mk-default-k8s-diff-port-715118
	I0501 03:39:54.606155   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | I0501 03:39:54.606070   70304 retry.go:31] will retry after 417.613249ms: waiting for machine to come up
	I0501 03:39:55.025827   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:39:55.026340   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | unable to find current IP address of domain default-k8s-diff-port-715118 in network mk-default-k8s-diff-port-715118
	I0501 03:39:55.026371   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | I0501 03:39:55.026291   70304 retry.go:31] will retry after 428.515161ms: waiting for machine to come up
	I0501 03:39:55.456828   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:39:55.457443   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | unable to find current IP address of domain default-k8s-diff-port-715118 in network mk-default-k8s-diff-port-715118
	I0501 03:39:55.457480   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | I0501 03:39:55.457405   70304 retry.go:31] will retry after 701.294363ms: waiting for machine to come up
	I0501 03:39:54.429689   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetIP
	I0501 03:39:54.432488   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:54.432817   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:54.432858   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:54.433039   68864 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0501 03:39:54.437866   68864 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0501 03:39:54.451509   68864 kubeadm.go:877] updating cluster {Name:embed-certs-277128 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.0 ClusterName:embed-certs-277128 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.218 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0501 03:39:54.451615   68864 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0501 03:39:54.451665   68864 ssh_runner.go:195] Run: sudo crictl images --output json
	I0501 03:39:54.494304   68864 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0501 03:39:54.494379   68864 ssh_runner.go:195] Run: which lz4
	I0501 03:39:54.499090   68864 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0501 03:39:54.503970   68864 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0501 03:39:54.503992   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0501 03:39:56.216407   68864 crio.go:462] duration metric: took 1.717351739s to copy over tarball
	I0501 03:39:56.216488   68864 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0501 03:39:58.703133   68864 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.48661051s)
	I0501 03:39:58.703161   68864 crio.go:469] duration metric: took 2.486721448s to extract the tarball
	I0501 03:39:58.703171   68864 ssh_runner.go:146] rm: /preloaded.tar.lz4
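The preload flow above is: scp the lz4 tarball matching the requested Kubernetes version and runtime to the guest, unpack it into /var so CRI-O's image store is pre-populated, then delete the tarball. As plain shell on the guest (same commands as in the log):

    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    rm /preloaded.tar.lz4
    sudo crictl images --output json   # should now list the control-plane images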
	I0501 03:39:58.751431   68864 ssh_runner.go:195] Run: sudo crictl images --output json
	I0501 03:39:58.800353   68864 crio.go:514] all images are preloaded for cri-o runtime.
	I0501 03:39:58.800379   68864 cache_images.go:84] Images are preloaded, skipping loading
	I0501 03:39:58.800389   68864 kubeadm.go:928] updating node { 192.168.50.218 8443 v1.30.0 crio true true} ...
	I0501 03:39:58.800516   68864 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-277128 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.218
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:embed-certs-277128 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
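This drop-in is written a few steps below as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf; the empty ExecStart= clears the unit's default command, and the second ExecStart= replaces it with the minikube-shipped kubelet binary plus the node-specific flags. As with any systemd override, it only takes effect after a reload, which is exactly what the log does shortly afterwards:

    sudo systemctl daemon-reload
    sudo systemctl start kubelet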
	I0501 03:39:58.800598   68864 ssh_runner.go:195] Run: crio config
	I0501 03:39:56.159966   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:39:56.160373   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | unable to find current IP address of domain default-k8s-diff-port-715118 in network mk-default-k8s-diff-port-715118
	I0501 03:39:56.160404   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | I0501 03:39:56.160334   70304 retry.go:31] will retry after 774.079459ms: waiting for machine to come up
	I0501 03:39:56.936654   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:39:56.937201   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | unable to find current IP address of domain default-k8s-diff-port-715118 in network mk-default-k8s-diff-port-715118
	I0501 03:39:56.937232   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | I0501 03:39:56.937161   70304 retry.go:31] will retry after 877.420181ms: waiting for machine to come up
	I0501 03:39:57.816002   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:39:57.816467   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | unable to find current IP address of domain default-k8s-diff-port-715118 in network mk-default-k8s-diff-port-715118
	I0501 03:39:57.816501   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | I0501 03:39:57.816425   70304 retry.go:31] will retry after 1.477997343s: waiting for machine to come up
	I0501 03:39:59.296533   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:39:59.296970   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | unable to find current IP address of domain default-k8s-diff-port-715118 in network mk-default-k8s-diff-port-715118
	I0501 03:39:59.296995   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | I0501 03:39:59.296922   70304 retry.go:31] will retry after 1.199617253s: waiting for machine to come up
	I0501 03:40:00.498388   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:00.498817   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | unable to find current IP address of domain default-k8s-diff-port-715118 in network mk-default-k8s-diff-port-715118
	I0501 03:40:00.498845   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | I0501 03:40:00.498770   70304 retry.go:31] will retry after 2.227608697s: waiting for machine to come up
	I0501 03:39:58.855600   68864 cni.go:84] Creating CNI manager for ""
	I0501 03:39:58.855630   68864 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0501 03:39:58.855650   68864 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0501 03:39:58.855686   68864 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.218 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-277128 NodeName:embed-certs-277128 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.218"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.218 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0501 03:39:58.855826   68864 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.218
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-277128"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.218
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.218"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0501 03:39:58.855890   68864 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0501 03:39:58.868074   68864 binaries.go:44] Found k8s binaries, skipping transfer
	I0501 03:39:58.868145   68864 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0501 03:39:58.879324   68864 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0501 03:39:58.897572   68864 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0501 03:39:58.918416   68864 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0501 03:39:58.940317   68864 ssh_runner.go:195] Run: grep 192.168.50.218	control-plane.minikube.internal$ /etc/hosts
	I0501 03:39:58.944398   68864 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.218	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
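Both host-entry updates (host.minikube.internal earlier and control-plane.minikube.internal here) use the same replace-then-copy pattern: strip any stale line for the name, append the fresh mapping, write the result to a temp file, then copy it back over /etc/hosts with sudo. The same command, reformatted for readability:

    {
      grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
      echo "192.168.50.218	control-plane.minikube.internal"
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts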
	I0501 03:39:58.959372   68864 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:39:59.094172   68864 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 03:39:59.113612   68864 certs.go:68] Setting up /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/embed-certs-277128 for IP: 192.168.50.218
	I0501 03:39:59.113653   68864 certs.go:194] generating shared ca certs ...
	I0501 03:39:59.113669   68864 certs.go:226] acquiring lock for ca certs: {Name:mk85a75d14c00780d4bf09cd9bdaf0f96f1b8fce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:39:59.113863   68864 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key
	I0501 03:39:59.113919   68864 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key
	I0501 03:39:59.113931   68864 certs.go:256] generating profile certs ...
	I0501 03:39:59.114044   68864 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/embed-certs-277128/client.key
	I0501 03:39:59.114117   68864 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/embed-certs-277128/apiserver.key.65584253
	I0501 03:39:59.114166   68864 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/embed-certs-277128/proxy-client.key
	I0501 03:39:59.114325   68864 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem (1338 bytes)
	W0501 03:39:59.114369   68864 certs.go:480] ignoring /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724_empty.pem, impossibly tiny 0 bytes
	I0501 03:39:59.114383   68864 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem (1679 bytes)
	I0501 03:39:59.114430   68864 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem (1078 bytes)
	I0501 03:39:59.114466   68864 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem (1123 bytes)
	I0501 03:39:59.114497   68864 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem (1675 bytes)
	I0501 03:39:59.114550   68864 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem (1708 bytes)
	I0501 03:39:59.115448   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0501 03:39:59.155890   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0501 03:39:59.209160   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0501 03:39:59.251552   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0501 03:39:59.288100   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/embed-certs-277128/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0501 03:39:59.325437   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/embed-certs-277128/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0501 03:39:59.352593   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/embed-certs-277128/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0501 03:39:59.378992   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/embed-certs-277128/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0501 03:39:59.405517   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem --> /usr/share/ca-certificates/20724.pem (1338 bytes)
	I0501 03:39:59.431253   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem --> /usr/share/ca-certificates/207242.pem (1708 bytes)
	I0501 03:39:59.457155   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0501 03:39:59.483696   68864 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0501 03:39:59.502758   68864 ssh_runner.go:195] Run: openssl version
	I0501 03:39:59.509307   68864 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20724.pem && ln -fs /usr/share/ca-certificates/20724.pem /etc/ssl/certs/20724.pem"
	I0501 03:39:59.521438   68864 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20724.pem
	I0501 03:39:59.526658   68864 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  1 02:20 /usr/share/ca-certificates/20724.pem
	I0501 03:39:59.526706   68864 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20724.pem
	I0501 03:39:59.533201   68864 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20724.pem /etc/ssl/certs/51391683.0"
	I0501 03:39:59.546837   68864 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/207242.pem && ln -fs /usr/share/ca-certificates/207242.pem /etc/ssl/certs/207242.pem"
	I0501 03:39:59.560612   68864 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/207242.pem
	I0501 03:39:59.565545   68864 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  1 02:20 /usr/share/ca-certificates/207242.pem
	I0501 03:39:59.565589   68864 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/207242.pem
	I0501 03:39:59.571737   68864 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/207242.pem /etc/ssl/certs/3ec20f2e.0"
	I0501 03:39:59.584602   68864 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0501 03:39:59.599088   68864 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:39:59.604230   68864 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  1 02:08 /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:39:59.604296   68864 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:39:59.610536   68864 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
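The pattern above is OpenSSL's hashed-symlink layout: each CA staged under /usr/share/ca-certificates is linked into /etc/ssl/certs and additionally gets a subject-hash symlink there, which is how TLS verification locates it. Done by hand for the last file (the hash matches the b5213941.0 link in the log):

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"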
	I0501 03:39:59.624810   68864 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0501 03:39:59.629692   68864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0501 03:39:59.636209   68864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0501 03:39:59.642907   68864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0501 03:39:59.649491   68864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0501 03:39:59.655702   68864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0501 03:39:59.661884   68864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
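Each -checkend probe above asks OpenSSL whether the certificate expires within the next 86400 seconds (24 hours); exit status 0 means it stays valid at least that long, non-zero means it is about to expire and the caller can regenerate it. A minimal guard using one of the paths from the log:

    if ! openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
      echo "certificate expires within 24h; needs regenerating" >&2
    fi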
	I0501 03:39:59.668075   68864 kubeadm.go:391] StartCluster: {Name:embed-certs-277128 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.0 ClusterName:embed-certs-277128 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.218 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 03:39:59.668209   68864 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0501 03:39:59.668255   68864 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0501 03:39:59.712172   68864 cri.go:89] found id: ""
	I0501 03:39:59.712262   68864 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0501 03:39:59.723825   68864 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0501 03:39:59.723848   68864 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0501 03:39:59.723854   68864 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0501 03:39:59.723890   68864 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0501 03:39:59.735188   68864 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0501 03:39:59.736670   68864 kubeconfig.go:125] found "embed-certs-277128" server: "https://192.168.50.218:8443"
	I0501 03:39:59.739665   68864 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0501 03:39:59.750292   68864 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.218
	I0501 03:39:59.750329   68864 kubeadm.go:1154] stopping kube-system containers ...
	I0501 03:39:59.750339   68864 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0501 03:39:59.750388   68864 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0501 03:39:59.791334   68864 cri.go:89] found id: ""
	I0501 03:39:59.791436   68864 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0501 03:39:59.809162   68864 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0501 03:39:59.820979   68864 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0501 03:39:59.821013   68864 kubeadm.go:156] found existing configuration files:
	
	I0501 03:39:59.821072   68864 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0501 03:39:59.832368   68864 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0501 03:39:59.832443   68864 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0501 03:39:59.843920   68864 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0501 03:39:59.855489   68864 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0501 03:39:59.855562   68864 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0501 03:39:59.867337   68864 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0501 03:39:59.878582   68864 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0501 03:39:59.878659   68864 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0501 03:39:59.890049   68864 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0501 03:39:59.901054   68864 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0501 03:39:59.901110   68864 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0501 03:39:59.912900   68864 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0501 03:39:59.925358   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:00.065105   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:00.861756   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:01.089790   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:01.158944   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
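Because this is a restart of an existing control plane, minikube drives kubeadm phase by phase instead of running a full kubeadm init. Stripped of the PATH wrapper, the sequence run above is:

    kubeadm init phase certs all         --config /var/tmp/minikube/kubeadm.yaml
    kubeadm init phase kubeconfig all    --config /var/tmp/minikube/kubeadm.yaml
    kubeadm init phase kubelet-start     --config /var/tmp/minikube/kubeadm.yaml
    kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
    kubeadm init phase etcd local        --config /var/tmp/minikube/kubeadm.yaml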
	I0501 03:40:01.249842   68864 api_server.go:52] waiting for apiserver process to appear ...
	I0501 03:40:01.250063   68864 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:01.750273   68864 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:02.250155   68864 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:02.291774   68864 api_server.go:72] duration metric: took 1.041932793s to wait for apiserver process to appear ...
	I0501 03:40:02.291807   68864 api_server.go:88] waiting for apiserver healthz status ...
	I0501 03:40:02.291831   68864 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I0501 03:40:02.292377   68864 api_server.go:269] stopped: https://192.168.50.218:8443/healthz: Get "https://192.168.50.218:8443/healthz": dial tcp 192.168.50.218:8443: connect: connection refused
	I0501 03:40:02.792584   68864 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I0501 03:40:02.727799   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:02.728314   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | unable to find current IP address of domain default-k8s-diff-port-715118 in network mk-default-k8s-diff-port-715118
	I0501 03:40:02.728347   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | I0501 03:40:02.728270   70304 retry.go:31] will retry after 1.844071576s: waiting for machine to come up
	I0501 03:40:04.574870   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:04.575326   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | unable to find current IP address of domain default-k8s-diff-port-715118 in network mk-default-k8s-diff-port-715118
	I0501 03:40:04.575349   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | I0501 03:40:04.575278   70304 retry.go:31] will retry after 2.989286916s: waiting for machine to come up
	I0501 03:40:04.843311   68864 api_server.go:279] https://192.168.50.218:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0501 03:40:04.843360   68864 api_server.go:103] status: https://192.168.50.218:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0501 03:40:04.843377   68864 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I0501 03:40:04.899616   68864 api_server.go:279] https://192.168.50.218:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0501 03:40:04.899655   68864 api_server.go:103] status: https://192.168.50.218:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0501 03:40:05.292097   68864 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I0501 03:40:05.300803   68864 api_server.go:279] https://192.168.50.218:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:40:05.300843   68864 api_server.go:103] status: https://192.168.50.218:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:40:05.792151   68864 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I0501 03:40:05.797124   68864 api_server.go:279] https://192.168.50.218:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:40:05.797158   68864 api_server.go:103] status: https://192.168.50.218:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:40:06.292821   68864 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I0501 03:40:06.297912   68864 api_server.go:279] https://192.168.50.218:8443/healthz returned 200:
	ok
	I0501 03:40:06.305165   68864 api_server.go:141] control plane version: v1.30.0
	I0501 03:40:06.305199   68864 api_server.go:131] duration metric: took 4.013383351s to wait for apiserver health ...
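The ~4s wait recorded above is minikube polling the apiserver's /healthz endpoint roughly every 500ms until it stops returning 500 (the rbac/bootstrap-roles post-start hook is the last check to clear in these dumps). Below is a minimal, hypothetical Go sketch of that kind of poll loop, not minikube's api_server.go; for brevity it skips TLS verification, whereas the real check trusts the cluster CA.

// healthzwait.go - illustrative poll of an apiserver /healthz endpoint until it
// returns HTTP 200 or the timeout expires. Hypothetical sketch, not minikube code.
// TLS verification is skipped here for brevity only.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // demo only
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver answered "ok"
			}
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence seen in the log
	}
	return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.50.218:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}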
	I0501 03:40:06.305211   68864 cni.go:84] Creating CNI manager for ""
	I0501 03:40:06.305220   68864 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0501 03:40:06.306925   68864 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0501 03:40:06.308450   68864 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0501 03:40:06.325186   68864 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
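The 496-byte /etc/cni/net.d/1-k8s.conflist copied above is the bridge CNI configuration announced two lines earlier. The log does not reproduce the file, so the sketch below only writes an illustrative bridge+portmap conflist of the general shape such a file takes; the JSON content (subnet, plugin options) and the output path are assumptions for demonstration, not the exact file minikube copied.

// writeconflist.go - illustrative only: writes a generic bridge+portmap CNI
// conflist similar in shape to what a bridge CNI setup places in /etc/cni/net.d.
// The JSON below is an assumed example, not the exact 496-byte file from the log.
package main

import (
	"log"
	"os"
)

const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
`

func main() {
	// Writes to a scratch path; the real target in the log is /etc/cni/net.d/1-k8s.conflist.
	if err := os.WriteFile("1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		log.Fatal(err)
	}
}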
	I0501 03:40:06.380997   68864 system_pods.go:43] waiting for kube-system pods to appear ...
	I0501 03:40:06.394134   68864 system_pods.go:59] 8 kube-system pods found
	I0501 03:40:06.394178   68864 system_pods.go:61] "coredns-7db6d8ff4d-sjplt" [6701ee8e-0630-4332-b01c-26741ed3a7b7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0501 03:40:06.394191   68864 system_pods.go:61] "etcd-embed-certs-277128" [744d7481-dd80-4435-90da-685fc16a76a4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0501 03:40:06.394206   68864 system_pods.go:61] "kube-apiserver-embed-certs-277128" [2f1d0f09-b270-4cdc-af3c-e17e3a244bbf] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0501 03:40:06.394215   68864 system_pods.go:61] "kube-controller-manager-embed-certs-277128" [729aa138-dc0d-4616-97d4-1592453971b8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0501 03:40:06.394222   68864 system_pods.go:61] "kube-proxy-phx7x" [56c0381e-c140-4f69-bbe4-09d393db8b23] Running
	I0501 03:40:06.394232   68864 system_pods.go:61] "kube-scheduler-embed-certs-277128" [cc56e9bc-7f21-4f57-a65a-73a10ffd7145] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0501 03:40:06.394253   68864 system_pods.go:61] "metrics-server-569cc877fc-p8j59" [f8ad6c24-dd5d-4515-9052-c9aca7412b55] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0501 03:40:06.394258   68864 system_pods.go:61] "storage-provisioner" [785be666-58d5-4b9d-92fd-bcacdbdebeb2] Running
	I0501 03:40:06.394273   68864 system_pods.go:74] duration metric: took 13.25246ms to wait for pod list to return data ...
	I0501 03:40:06.394293   68864 node_conditions.go:102] verifying NodePressure condition ...
	I0501 03:40:06.399912   68864 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 03:40:06.399950   68864 node_conditions.go:123] node cpu capacity is 2
	I0501 03:40:06.399974   68864 node_conditions.go:105] duration metric: took 5.664461ms to run NodePressure ...
	I0501 03:40:06.399996   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:06.675573   68864 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0501 03:40:06.680567   68864 kubeadm.go:733] kubelet initialised
	I0501 03:40:06.680591   68864 kubeadm.go:734] duration metric: took 4.987942ms waiting for restarted kubelet to initialise ...
	I0501 03:40:06.680598   68864 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 03:40:06.687295   68864 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-sjplt" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:06.692224   68864 pod_ready.go:97] node "embed-certs-277128" hosting pod "coredns-7db6d8ff4d-sjplt" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:06.692248   68864 pod_ready.go:81] duration metric: took 4.930388ms for pod "coredns-7db6d8ff4d-sjplt" in "kube-system" namespace to be "Ready" ...
	E0501 03:40:06.692258   68864 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-277128" hosting pod "coredns-7db6d8ff4d-sjplt" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:06.692266   68864 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:06.699559   68864 pod_ready.go:97] node "embed-certs-277128" hosting pod "etcd-embed-certs-277128" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:06.699591   68864 pod_ready.go:81] duration metric: took 7.309622ms for pod "etcd-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	E0501 03:40:06.699602   68864 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-277128" hosting pod "etcd-embed-certs-277128" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:06.699613   68864 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:06.705459   68864 pod_ready.go:97] node "embed-certs-277128" hosting pod "kube-apiserver-embed-certs-277128" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:06.705485   68864 pod_ready.go:81] duration metric: took 5.86335ms for pod "kube-apiserver-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	E0501 03:40:06.705497   68864 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-277128" hosting pod "kube-apiserver-embed-certs-277128" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:06.705504   68864 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:06.786157   68864 pod_ready.go:97] node "embed-certs-277128" hosting pod "kube-controller-manager-embed-certs-277128" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:06.786186   68864 pod_ready.go:81] duration metric: took 80.673223ms for pod "kube-controller-manager-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	E0501 03:40:06.786198   68864 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-277128" hosting pod "kube-controller-manager-embed-certs-277128" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:06.786205   68864 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-phx7x" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:07.184262   68864 pod_ready.go:97] node "embed-certs-277128" hosting pod "kube-proxy-phx7x" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:07.184297   68864 pod_ready.go:81] duration metric: took 398.081204ms for pod "kube-proxy-phx7x" in "kube-system" namespace to be "Ready" ...
	E0501 03:40:07.184309   68864 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-277128" hosting pod "kube-proxy-phx7x" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:07.184319   68864 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:07.584569   68864 pod_ready.go:97] node "embed-certs-277128" hosting pod "kube-scheduler-embed-certs-277128" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:07.584607   68864 pod_ready.go:81] duration metric: took 400.279023ms for pod "kube-scheduler-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	E0501 03:40:07.584620   68864 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-277128" hosting pod "kube-scheduler-embed-certs-277128" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:07.584630   68864 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:07.984376   68864 pod_ready.go:97] node "embed-certs-277128" hosting pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:07.984408   68864 pod_ready.go:81] duration metric: took 399.766342ms for pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace to be "Ready" ...
	E0501 03:40:07.984419   68864 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-277128" hosting pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:07.984428   68864 pod_ready.go:38] duration metric: took 1.303821777s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 03:40:07.984448   68864 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0501 03:40:08.000370   68864 ops.go:34] apiserver oom_adj: -16
	I0501 03:40:08.000391   68864 kubeadm.go:591] duration metric: took 8.276531687s to restartPrimaryControlPlane
	I0501 03:40:08.000401   68864 kubeadm.go:393] duration metric: took 8.332343707s to StartCluster
	I0501 03:40:08.000416   68864 settings.go:142] acquiring lock: {Name:mkcfe08bcadd45d99c00ed1eaf4e9c226a489fe2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:40:08.000482   68864 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18779-13391/kubeconfig
	I0501 03:40:08.002013   68864 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/kubeconfig: {Name:mk5d1131557f885800877622151c0915337cb23a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:40:08.002343   68864 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.218 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0501 03:40:08.004301   68864 out.go:177] * Verifying Kubernetes components...
	I0501 03:40:08.002423   68864 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0501 03:40:08.002582   68864 config.go:182] Loaded profile config "embed-certs-277128": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 03:40:08.005608   68864 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-277128"
	I0501 03:40:08.005624   68864 addons.go:69] Setting metrics-server=true in profile "embed-certs-277128"
	I0501 03:40:08.005658   68864 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-277128"
	W0501 03:40:08.005670   68864 addons.go:243] addon storage-provisioner should already be in state true
	I0501 03:40:08.005609   68864 addons.go:69] Setting default-storageclass=true in profile "embed-certs-277128"
	I0501 03:40:08.005785   68864 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-277128"
	I0501 03:40:08.005659   68864 addons.go:234] Setting addon metrics-server=true in "embed-certs-277128"
	W0501 03:40:08.005819   68864 addons.go:243] addon metrics-server should already be in state true
	I0501 03:40:08.005851   68864 host.go:66] Checking if "embed-certs-277128" exists ...
	I0501 03:40:08.005613   68864 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:40:08.005695   68864 host.go:66] Checking if "embed-certs-277128" exists ...
	I0501 03:40:08.006230   68864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:40:08.006258   68864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:40:08.006291   68864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:40:08.006310   68864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:40:08.006326   68864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:40:08.006378   68864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:40:08.021231   68864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35583
	I0501 03:40:08.021276   68864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44971
	I0501 03:40:08.021621   68864 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:40:08.021673   68864 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:40:08.022126   68864 main.go:141] libmachine: Using API Version  1
	I0501 03:40:08.022146   68864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:40:08.022353   68864 main.go:141] libmachine: Using API Version  1
	I0501 03:40:08.022390   68864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:40:08.022537   68864 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:40:08.022730   68864 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:40:08.022904   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetState
	I0501 03:40:08.023118   68864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:40:08.023165   68864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:40:08.024792   68864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33047
	I0501 03:40:08.025226   68864 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:40:08.025734   68864 main.go:141] libmachine: Using API Version  1
	I0501 03:40:08.025761   68864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:40:08.026090   68864 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:40:08.026569   68864 addons.go:234] Setting addon default-storageclass=true in "embed-certs-277128"
	W0501 03:40:08.026593   68864 addons.go:243] addon default-storageclass should already be in state true
	I0501 03:40:08.026620   68864 host.go:66] Checking if "embed-certs-277128" exists ...
	I0501 03:40:08.026696   68864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:40:08.026730   68864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:40:08.026977   68864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:40:08.027033   68864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:40:08.039119   68864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42975
	I0501 03:40:08.039585   68864 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:40:08.040083   68864 main.go:141] libmachine: Using API Version  1
	I0501 03:40:08.040106   68864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:40:08.040419   68864 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:40:08.040599   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetState
	I0501 03:40:08.042228   68864 main.go:141] libmachine: (embed-certs-277128) Calling .DriverName
	I0501 03:40:08.044289   68864 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 03:40:08.045766   68864 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0501 03:40:08.045787   68864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0501 03:40:08.045804   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHHostname
	I0501 03:40:08.043677   68864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37141
	I0501 03:40:08.045633   68864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41653
	I0501 03:40:08.046247   68864 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:40:08.046326   68864 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:40:08.046989   68864 main.go:141] libmachine: Using API Version  1
	I0501 03:40:08.047012   68864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:40:08.047196   68864 main.go:141] libmachine: Using API Version  1
	I0501 03:40:08.047216   68864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:40:08.047279   68864 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:40:08.047403   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetState
	I0501 03:40:08.047515   68864 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:40:08.048047   68864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:40:08.048081   68864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:40:08.049225   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:40:08.049623   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:40:08.049649   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:40:08.049773   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHPort
	I0501 03:40:08.049915   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:40:08.050096   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHUsername
	I0501 03:40:08.050165   68864 main.go:141] libmachine: (embed-certs-277128) Calling .DriverName
	I0501 03:40:08.050297   68864 sshutil.go:53] new ssh client: &{IP:192.168.50.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/embed-certs-277128/id_rsa Username:docker}
	I0501 03:40:08.052006   68864 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0501 03:40:08.053365   68864 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0501 03:40:08.053380   68864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0501 03:40:08.053394   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHHostname
	I0501 03:40:08.056360   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:40:08.056752   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:40:08.056782   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:40:08.056892   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHPort
	I0501 03:40:08.057074   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:40:08.057215   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHUsername
	I0501 03:40:08.057334   68864 sshutil.go:53] new ssh client: &{IP:192.168.50.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/embed-certs-277128/id_rsa Username:docker}
	I0501 03:40:08.064476   68864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43029
	I0501 03:40:08.064882   68864 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:40:08.065323   68864 main.go:141] libmachine: Using API Version  1
	I0501 03:40:08.065352   68864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:40:08.065696   68864 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:40:08.065895   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetState
	I0501 03:40:08.067420   68864 main.go:141] libmachine: (embed-certs-277128) Calling .DriverName
	I0501 03:40:08.067740   68864 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0501 03:40:08.067762   68864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0501 03:40:08.067774   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHHostname
	I0501 03:40:08.070587   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:40:08.071043   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:40:08.071073   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:40:08.071225   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHPort
	I0501 03:40:08.071401   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:40:08.071554   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHUsername
	I0501 03:40:08.071688   68864 sshutil.go:53] new ssh client: &{IP:192.168.50.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/embed-certs-277128/id_rsa Username:docker}
	I0501 03:40:08.204158   68864 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 03:40:08.229990   68864 node_ready.go:35] waiting up to 6m0s for node "embed-certs-277128" to be "Ready" ...
	I0501 03:40:08.289511   68864 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0501 03:40:08.289535   68864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0501 03:40:08.301855   68864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0501 03:40:08.311966   68864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0501 03:40:08.330943   68864 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0501 03:40:08.330973   68864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0501 03:40:08.384842   68864 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0501 03:40:08.384867   68864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0501 03:40:08.445206   68864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0501 03:40:09.434390   68864 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.122391479s)
	I0501 03:40:09.434458   68864 main.go:141] libmachine: Making call to close driver server
	I0501 03:40:09.434471   68864 main.go:141] libmachine: (embed-certs-277128) Calling .Close
	I0501 03:40:09.434518   68864 main.go:141] libmachine: Making call to close driver server
	I0501 03:40:09.434541   68864 main.go:141] libmachine: (embed-certs-277128) Calling .Close
	I0501 03:40:09.434567   68864 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.132680542s)
	I0501 03:40:09.434595   68864 main.go:141] libmachine: Making call to close driver server
	I0501 03:40:09.434604   68864 main.go:141] libmachine: (embed-certs-277128) Calling .Close
	I0501 03:40:09.434833   68864 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:40:09.434859   68864 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:40:09.434870   68864 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:40:09.434872   68864 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:40:09.434881   68864 main.go:141] libmachine: Making call to close driver server
	I0501 03:40:09.434882   68864 main.go:141] libmachine: Making call to close driver server
	I0501 03:40:09.434889   68864 main.go:141] libmachine: (embed-certs-277128) Calling .Close
	I0501 03:40:09.434890   68864 main.go:141] libmachine: (embed-certs-277128) Calling .Close
	I0501 03:40:09.434936   68864 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:40:09.434949   68864 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:40:09.434967   68864 main.go:141] libmachine: Making call to close driver server
	I0501 03:40:09.434994   68864 main.go:141] libmachine: (embed-certs-277128) Calling .Close
	I0501 03:40:09.434832   68864 main.go:141] libmachine: (embed-certs-277128) DBG | Closing plugin on server side
	I0501 03:40:09.435072   68864 main.go:141] libmachine: (embed-certs-277128) DBG | Closing plugin on server side
	I0501 03:40:09.437116   68864 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:40:09.437138   68864 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:40:09.437146   68864 main.go:141] libmachine: (embed-certs-277128) DBG | Closing plugin on server side
	I0501 03:40:09.437179   68864 main.go:141] libmachine: (embed-certs-277128) DBG | Closing plugin on server side
	I0501 03:40:09.437194   68864 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:40:09.437215   68864 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:40:09.437297   68864 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:40:09.437342   68864 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:40:09.437359   68864 addons.go:470] Verifying addon metrics-server=true in "embed-certs-277128"
	I0501 03:40:09.445787   68864 main.go:141] libmachine: Making call to close driver server
	I0501 03:40:09.445817   68864 main.go:141] libmachine: (embed-certs-277128) Calling .Close
	I0501 03:40:09.446053   68864 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:40:09.446090   68864 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:40:09.446112   68864 main.go:141] libmachine: (embed-certs-277128) DBG | Closing plugin on server side
	I0501 03:40:09.448129   68864 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I0501 03:40:07.567551   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:07.567914   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | unable to find current IP address of domain default-k8s-diff-port-715118 in network mk-default-k8s-diff-port-715118
	I0501 03:40:07.567948   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | I0501 03:40:07.567860   70304 retry.go:31] will retry after 4.440791777s: waiting for machine to come up
	I0501 03:40:13.516002   69580 start.go:364] duration metric: took 3m31.9441828s to acquireMachinesLock for "old-k8s-version-503971"
	I0501 03:40:13.516087   69580 start.go:96] Skipping create...Using existing machine configuration
	I0501 03:40:13.516100   69580 fix.go:54] fixHost starting: 
	I0501 03:40:13.516559   69580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:40:13.516601   69580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:40:13.537158   69580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38535
	I0501 03:40:13.537631   69580 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:40:13.538169   69580 main.go:141] libmachine: Using API Version  1
	I0501 03:40:13.538197   69580 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:40:13.538570   69580 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:40:13.538769   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .DriverName
	I0501 03:40:13.538958   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetState
	I0501 03:40:13.540454   69580 fix.go:112] recreateIfNeeded on old-k8s-version-503971: state=Stopped err=<nil>
	I0501 03:40:13.540486   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .DriverName
	W0501 03:40:13.540787   69580 fix.go:138] unexpected machine state, will restart: <nil>
	I0501 03:40:13.542670   69580 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-503971" ...
	I0501 03:40:09.449483   68864 addons.go:505] duration metric: took 1.447068548s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass]
	I0501 03:40:10.233650   68864 node_ready.go:53] node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:12.234270   68864 node_ready.go:53] node "embed-certs-277128" has status "Ready":"False"
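The node_ready.go lines above show minikube re-reading the Node object until its Ready condition flips to True, within the 6m budget set when the wait started. A minimal client-go sketch of that style of check follows; the kubeconfig path and node name are taken from this log, but the code is an illustration under those assumptions, not minikube's implementation.

// nodeready.go - illustrative client-go poll for a Node's Ready condition,
// approximating the wait that node_ready.go performs. Not minikube code.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func nodeIsReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Kubeconfig path as updated earlier in this log.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18779-13391/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute) // same budget as "waiting up to 6m0s"
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "embed-certs-277128", metav1.GetOptions{})
		if err == nil && nodeIsReady(node) {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for node to become Ready")
}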
	I0501 03:40:12.011886   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.012305   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Found IP for machine: 192.168.72.158
	I0501 03:40:12.012335   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has current primary IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.012347   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Reserving static IP address...
	I0501 03:40:12.012759   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-715118", mac: "52:54:00:87:12:31", ip: "192.168.72.158"} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:12.012796   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | skip adding static IP to network mk-default-k8s-diff-port-715118 - found existing host DHCP lease matching {name: "default-k8s-diff-port-715118", mac: "52:54:00:87:12:31", ip: "192.168.72.158"}
	I0501 03:40:12.012809   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Reserved static IP address: 192.168.72.158
	I0501 03:40:12.012828   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for SSH to be available...
	I0501 03:40:12.012835   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | Getting to WaitForSSH function...
	I0501 03:40:12.014719   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.015044   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:12.015080   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.015193   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | Using SSH client type: external
	I0501 03:40:12.015220   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | Using SSH private key: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/default-k8s-diff-port-715118/id_rsa (-rw-------)
	I0501 03:40:12.015269   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.158 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18779-13391/.minikube/machines/default-k8s-diff-port-715118/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0501 03:40:12.015280   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | About to run SSH command:
	I0501 03:40:12.015289   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | exit 0
	I0501 03:40:12.138881   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | SSH cmd err, output: <nil>: 
	I0501 03:40:12.139286   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetConfigRaw
	I0501 03:40:12.140056   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetIP
	I0501 03:40:12.142869   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.143322   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:12.143353   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.143662   69237 profile.go:143] Saving config to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/default-k8s-diff-port-715118/config.json ...
	I0501 03:40:12.143858   69237 machine.go:94] provisionDockerMachine start ...
	I0501 03:40:12.143876   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .DriverName
	I0501 03:40:12.144117   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHHostname
	I0501 03:40:12.146145   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.146535   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:12.146563   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.146712   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHPort
	I0501 03:40:12.146889   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:40:12.147021   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:40:12.147130   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHUsername
	I0501 03:40:12.147310   69237 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:12.147558   69237 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.158 22 <nil> <nil>}
	I0501 03:40:12.147574   69237 main.go:141] libmachine: About to run SSH command:
	hostname
	I0501 03:40:12.251357   69237 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0501 03:40:12.251387   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetMachineName
	I0501 03:40:12.251629   69237 buildroot.go:166] provisioning hostname "default-k8s-diff-port-715118"
	I0501 03:40:12.251658   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetMachineName
	I0501 03:40:12.251862   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHHostname
	I0501 03:40:12.254582   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.254892   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:12.254924   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.255073   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHPort
	I0501 03:40:12.255276   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:40:12.255435   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:40:12.255575   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHUsername
	I0501 03:40:12.255744   69237 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:12.255905   69237 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.158 22 <nil> <nil>}
	I0501 03:40:12.255917   69237 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-715118 && echo "default-k8s-diff-port-715118" | sudo tee /etc/hostname
	I0501 03:40:12.377588   69237 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-715118
	
	I0501 03:40:12.377628   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHHostname
	I0501 03:40:12.380627   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.380927   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:12.380958   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.381155   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHPort
	I0501 03:40:12.381372   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:40:12.381550   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:40:12.381723   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHUsername
	I0501 03:40:12.381907   69237 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:12.382148   69237 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.158 22 <nil> <nil>}
	I0501 03:40:12.382170   69237 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-715118' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-715118/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-715118' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0501 03:40:12.494424   69237 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0501 03:40:12.494454   69237 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18779-13391/.minikube CaCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18779-13391/.minikube}
	I0501 03:40:12.494484   69237 buildroot.go:174] setting up certificates
	I0501 03:40:12.494493   69237 provision.go:84] configureAuth start
	I0501 03:40:12.494504   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetMachineName
	I0501 03:40:12.494786   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetIP
	I0501 03:40:12.497309   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.497584   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:12.497616   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.497746   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHHostname
	I0501 03:40:12.500010   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.500302   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:12.500322   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.500449   69237 provision.go:143] copyHostCerts
	I0501 03:40:12.500505   69237 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem, removing ...
	I0501 03:40:12.500524   69237 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem
	I0501 03:40:12.500598   69237 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem (1078 bytes)
	I0501 03:40:12.500759   69237 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem, removing ...
	I0501 03:40:12.500772   69237 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem
	I0501 03:40:12.500815   69237 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem (1123 bytes)
	I0501 03:40:12.500891   69237 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem, removing ...
	I0501 03:40:12.500900   69237 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem
	I0501 03:40:12.500925   69237 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem (1675 bytes)
	I0501 03:40:12.500991   69237 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-715118 san=[127.0.0.1 192.168.72.158 default-k8s-diff-port-715118 localhost minikube]
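The server certificate generated above carries the SANs listed in the log (127.0.0.1, 192.168.72.158, the profile hostname, localhost, minikube) and is issued against the minikube CA. The crypto/x509 sketch below shows only the SAN handling; it self-signs for brevity instead of signing with the ca.pem / ca-key.pem pair the log references, so treat it as an illustration under that simplification.

// servercert.go - illustrative generation of a server certificate carrying the
// SANs listed in the log. Self-signed here for brevity; minikube signs with its
// CA key pair, which this sketch does not reproduce.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-715118"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"default-k8s-diff-port-715118", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.158")},
	}
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	out, err := os.Create("server.pem")
	if err != nil {
		panic(err)
	}
	defer out.Close()
	if err := pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		panic(err)
	}
}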
	I0501 03:40:12.779037   69237 provision.go:177] copyRemoteCerts
	I0501 03:40:12.779104   69237 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0501 03:40:12.779139   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHHostname
	I0501 03:40:12.781800   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.782159   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:12.782195   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.782356   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHPort
	I0501 03:40:12.782655   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:40:12.782812   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHUsername
	I0501 03:40:12.782946   69237 sshutil.go:53] new ssh client: &{IP:192.168.72.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/default-k8s-diff-port-715118/id_rsa Username:docker}
	I0501 03:40:12.867622   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0501 03:40:12.897105   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0501 03:40:12.926675   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0501 03:40:12.955373   69237 provision.go:87] duration metric: took 460.865556ms to configureAuth
	I0501 03:40:12.955405   69237 buildroot.go:189] setting minikube options for container-runtime
	I0501 03:40:12.955606   69237 config.go:182] Loaded profile config "default-k8s-diff-port-715118": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 03:40:12.955700   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHHostname
	I0501 03:40:12.958286   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.958632   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:12.958670   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.958800   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHPort
	I0501 03:40:12.959007   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:40:12.959225   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:40:12.959374   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHUsername
	I0501 03:40:12.959554   69237 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:12.959729   69237 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.158 22 <nil> <nil>}
	I0501 03:40:12.959748   69237 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0501 03:40:13.253328   69237 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0501 03:40:13.253356   69237 machine.go:97] duration metric: took 1.109484866s to provisionDockerMachine
	I0501 03:40:13.253371   69237 start.go:293] postStartSetup for "default-k8s-diff-port-715118" (driver="kvm2")
	I0501 03:40:13.253385   69237 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0501 03:40:13.253405   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .DriverName
	I0501 03:40:13.253753   69237 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0501 03:40:13.253790   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHHostname
	I0501 03:40:13.256734   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:13.257187   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:13.257214   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:13.257345   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHPort
	I0501 03:40:13.257547   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:40:13.257708   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHUsername
	I0501 03:40:13.257856   69237 sshutil.go:53] new ssh client: &{IP:192.168.72.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/default-k8s-diff-port-715118/id_rsa Username:docker}
	I0501 03:40:13.353373   69237 ssh_runner.go:195] Run: cat /etc/os-release
	I0501 03:40:13.359653   69237 info.go:137] Remote host: Buildroot 2023.02.9
	I0501 03:40:13.359679   69237 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13391/.minikube/addons for local assets ...
	I0501 03:40:13.359747   69237 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13391/.minikube/files for local assets ...
	I0501 03:40:13.359854   69237 filesync.go:149] local asset: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem -> 207242.pem in /etc/ssl/certs
	I0501 03:40:13.359964   69237 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0501 03:40:13.370608   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem --> /etc/ssl/certs/207242.pem (1708 bytes)
	I0501 03:40:13.402903   69237 start.go:296] duration metric: took 149.518346ms for postStartSetup
	I0501 03:40:13.402946   69237 fix.go:56] duration metric: took 20.610871873s for fixHost
	I0501 03:40:13.402967   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHHostname
	I0501 03:40:13.406324   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:13.406762   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:13.406792   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:13.407028   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHPort
	I0501 03:40:13.407274   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:40:13.407505   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:40:13.407645   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHUsername
	I0501 03:40:13.407831   69237 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:13.408034   69237 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.158 22 <nil> <nil>}
	I0501 03:40:13.408045   69237 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0501 03:40:13.515775   69237 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714534813.490981768
	
	I0501 03:40:13.515814   69237 fix.go:216] guest clock: 1714534813.490981768
	I0501 03:40:13.515852   69237 fix.go:229] Guest: 2024-05-01 03:40:13.490981768 +0000 UTC Remote: 2024-05-01 03:40:13.402950224 +0000 UTC m=+262.796298359 (delta=88.031544ms)
	I0501 03:40:13.515884   69237 fix.go:200] guest clock delta is within tolerance: 88.031544ms
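
The fix.go lines above read the guest clock with "date +%s.%N" and compare it against the host's view of the time, accepting the 88ms delta as within tolerance. A minimal, self-contained Go sketch of that comparison follows; the parsing helper and the 2-second tolerance are illustrative assumptions, not minikube's actual implementation.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts the output of `date +%s.%N` (e.g. "1714534813.490981768")
// into a time.Time. Hypothetical helper for this sketch only.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		frac := parts[1]
		// pad the fractional part to nine digits so it reads as nanoseconds
		for len(frac) < 9 {
			frac += "0"
		}
		nsec, err = strconv.ParseInt(frac[:9], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1714534813.490981768")
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed threshold, for illustration only
	if delta <= tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, clock sync needed\n", delta)
	}
}
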
	I0501 03:40:13.515891   69237 start.go:83] releasing machines lock for "default-k8s-diff-port-715118", held for 20.723857967s
	I0501 03:40:13.515976   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .DriverName
	I0501 03:40:13.516272   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetIP
	I0501 03:40:13.519627   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:13.520098   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:13.520128   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:13.520304   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .DriverName
	I0501 03:40:13.520922   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .DriverName
	I0501 03:40:13.521122   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .DriverName
	I0501 03:40:13.521212   69237 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0501 03:40:13.521292   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHHostname
	I0501 03:40:13.521355   69237 ssh_runner.go:195] Run: cat /version.json
	I0501 03:40:13.521387   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHHostname
	I0501 03:40:13.524292   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:13.524328   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:13.524612   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:13.524672   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:13.524819   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:13.524948   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:13.524989   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHPort
	I0501 03:40:13.525033   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHPort
	I0501 03:40:13.525171   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:40:13.525196   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:40:13.525306   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHUsername
	I0501 03:40:13.525401   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHUsername
	I0501 03:40:13.525490   69237 sshutil.go:53] new ssh client: &{IP:192.168.72.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/default-k8s-diff-port-715118/id_rsa Username:docker}
	I0501 03:40:13.525553   69237 sshutil.go:53] new ssh client: &{IP:192.168.72.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/default-k8s-diff-port-715118/id_rsa Username:docker}
	I0501 03:40:13.628623   69237 ssh_runner.go:195] Run: systemctl --version
	I0501 03:40:13.636013   69237 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0501 03:40:13.787414   69237 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0501 03:40:13.795777   69237 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0501 03:40:13.795867   69237 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0501 03:40:13.822287   69237 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
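
The find/mv step above renames any bridge or podman CNI configs with a .mk_disabled suffix so CRI-O only sees the network config minikube manages. A rough local Go equivalent of that rename pass (minikube runs the real thing on the guest over SSH):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeCNIConfigs renames bridge/podman CNI config files so the
// container runtime ignores them, mirroring the find/mv one-liner in the log.
func disableBridgeCNIConfigs(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}

func main() {
	disabled, err := disableBridgeCNIConfigs("/etc/cni/net.d")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Printf("disabled %d bridge cni config(s): %v\n", len(disabled), disabled)
}
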
	I0501 03:40:13.822326   69237 start.go:494] detecting cgroup driver to use...
	I0501 03:40:13.822507   69237 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0501 03:40:13.841310   69237 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 03:40:13.857574   69237 docker.go:217] disabling cri-docker service (if available) ...
	I0501 03:40:13.857645   69237 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0501 03:40:13.872903   69237 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0501 03:40:13.889032   69237 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0501 03:40:14.020563   69237 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0501 03:40:14.222615   69237 docker.go:233] disabling docker service ...
	I0501 03:40:14.222691   69237 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0501 03:40:14.245841   69237 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0501 03:40:14.261001   69237 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0501 03:40:14.385943   69237 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0501 03:40:14.516899   69237 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
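
This block stops, disables and masks the docker and cri-docker units so that CRI-O is the only runtime left for the kubelet to talk to. The sketch below just assembles the same systemctl sequence; it prints the commands by default and only executes them when apply is flipped to true, since they are destructive on a real host.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	apply := false // set to true to actually run the commands (needs passwordless sudo)
	cmds := [][]string{
		{"systemctl", "stop", "-f", "cri-docker.socket"},
		{"systemctl", "stop", "-f", "cri-docker.service"},
		{"systemctl", "disable", "cri-docker.socket"},
		{"systemctl", "mask", "cri-docker.service"},
		{"systemctl", "stop", "-f", "docker.socket"},
		{"systemctl", "stop", "-f", "docker.service"},
		{"systemctl", "disable", "docker.socket"},
		{"systemctl", "mask", "docker.service"},
	}
	for _, c := range cmds {
		if !apply {
			fmt.Println("would run: sudo", strings.Join(c, " "))
			continue
		}
		if out, err := exec.Command("sudo", c...).CombinedOutput(); err != nil {
			fmt.Printf("%v failed: %v (%s)\n", c, err, out)
		}
	}
}
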
	I0501 03:40:14.545138   69237 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 03:40:14.570308   69237 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0501 03:40:14.570373   69237 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:14.586460   69237 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0501 03:40:14.586535   69237 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:14.598947   69237 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:14.617581   69237 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:14.630097   69237 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0501 03:40:14.642379   69237 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:14.653723   69237 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:14.674508   69237 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:14.685890   69237 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0501 03:40:14.696560   69237 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0501 03:40:14.696614   69237 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0501 03:40:14.713050   69237 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0501 03:40:14.723466   69237 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:40:14.884910   69237 ssh_runner.go:195] Run: sudo systemctl restart crio
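
The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place before crio is restarted: pin the pause image to registry.k8s.io/pause:3.9, switch cgroup_manager to cgroupfs, and adjust conmon_cgroup and default_sysctls. A simplified Go sketch of the first two rewrites, applied to an in-memory config string rather than the file on the guest:

package main

import (
	"fmt"
	"regexp"
)

// rewriteCrioConf performs the same kind of whole-line replacements the sed
// commands in the log apply to 02-crio.conf.
func rewriteCrioConf(conf, pauseImage, cgroupManager string) string {
	pauseRe := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	conf = pauseRe.ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))

	cgroupRe := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	conf = cgroupRe.ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q", cgroupManager))
	return conf
}

func main() {
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.8"

[crio.runtime]
cgroup_manager = "systemd"
`
	fmt.Println(rewriteCrioConf(conf, "registry.k8s.io/pause:3.9", "cgroupfs"))
}

Editing the drop-in file and restarting the service, rather than regenerating the whole config, keeps any distro defaults in 02-crio.conf intact.
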
	I0501 03:40:15.030618   69237 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0501 03:40:15.030689   69237 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0501 03:40:15.036403   69237 start.go:562] Will wait 60s for crictl version
	I0501 03:40:15.036470   69237 ssh_runner.go:195] Run: which crictl
	I0501 03:40:15.040924   69237 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0501 03:40:15.082944   69237 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0501 03:40:15.083037   69237 ssh_runner.go:195] Run: crio --version
	I0501 03:40:15.123492   69237 ssh_runner.go:195] Run: crio --version
	I0501 03:40:15.160739   69237 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0501 03:40:15.162026   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetIP
	I0501 03:40:15.164966   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:15.165378   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:15.165417   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:15.165621   69237 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0501 03:40:15.171717   69237 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
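
The one-liner above rebuilds /etc/hosts so it contains exactly one host.minikube.internal entry pointing at the gateway IP. An illustrative Go version of the same idea, written against a scratch file so it is safe to run:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any existing line for the given hostname from an
// /etc/hosts-style file and appends a fresh "IP<TAB>hostname" entry, the same
// shape as the grep/echo/cp pipeline in the log.
func ensureHostsEntry(path, ip, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line == "" || strings.HasSuffix(line, "\t"+hostname) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, hostname))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Uses a scratch path; the real step rewrites /etc/hosts on the guest under sudo.
	if err := ensureHostsEntry("/tmp/hosts.example", "192.168.72.1", "host.minikube.internal"); err != nil {
		fmt.Println("error:", err)
	}
}
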
	I0501 03:40:15.190203   69237 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-715118 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.0 ClusterName:default-k8s-diff-port-715118 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.158 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0501 03:40:15.190359   69237 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0501 03:40:15.190439   69237 ssh_runner.go:195] Run: sudo crictl images --output json
	I0501 03:40:15.240549   69237 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0501 03:40:15.240606   69237 ssh_runner.go:195] Run: which lz4
	I0501 03:40:15.246523   69237 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0501 03:40:15.253094   69237 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0501 03:40:15.253139   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
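
Because the stat probe for /preloaded.tar.lz4 failed, the ~395 MB preload tarball is copied over before images are imported. A small sketch of that existence check (a stand-in for the remote stat, not minikube's cache code):

package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

// needsPreloadCopy reports whether the preload tarball still has to be
// transferred: a plain existence check in place of the remote
// `stat -c "%s %y" /preloaded.tar.lz4` probe shown above.
func needsPreloadCopy(path string) (bool, error) {
	_, err := os.Stat(path)
	if errors.Is(err, fs.ErrNotExist) {
		return true, nil // not there yet, so the tarball gets scp'd over
	}
	return false, err
}

func main() {
	copyNeeded, err := needsPreloadCopy("/preloaded.tar.lz4")
	if err != nil {
		fmt.Println("stat error:", err)
		return
	}
	fmt.Println("preload copy needed:", copyNeeded)
}
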
	I0501 03:40:13.544100   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .Start
	I0501 03:40:13.544328   69580 main.go:141] libmachine: (old-k8s-version-503971) Ensuring networks are active...
	I0501 03:40:13.545238   69580 main.go:141] libmachine: (old-k8s-version-503971) Ensuring network default is active
	I0501 03:40:13.545621   69580 main.go:141] libmachine: (old-k8s-version-503971) Ensuring network mk-old-k8s-version-503971 is active
	I0501 03:40:13.546072   69580 main.go:141] libmachine: (old-k8s-version-503971) Getting domain xml...
	I0501 03:40:13.546928   69580 main.go:141] libmachine: (old-k8s-version-503971) Creating domain...
	I0501 03:40:14.858558   69580 main.go:141] libmachine: (old-k8s-version-503971) Waiting to get IP...
	I0501 03:40:14.859690   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:14.860108   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:40:14.860215   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:40:14.860103   70499 retry.go:31] will retry after 294.057322ms: waiting for machine to come up
	I0501 03:40:15.155490   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:15.155922   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:40:15.155954   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:40:15.155870   70499 retry.go:31] will retry after 281.238966ms: waiting for machine to come up
	I0501 03:40:15.439196   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:15.439735   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:40:15.439783   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:40:15.439697   70499 retry.go:31] will retry after 429.353689ms: waiting for machine to come up
	I0501 03:40:15.871266   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:15.871947   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:40:15.871970   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:40:15.871895   70499 retry.go:31] will retry after 478.685219ms: waiting for machine to come up
	I0501 03:40:16.352661   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:16.353125   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:40:16.353161   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:40:16.353087   70499 retry.go:31] will retry after 642.905156ms: waiting for machine to come up
	I0501 03:40:14.235378   68864 node_ready.go:53] node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:15.735465   68864 node_ready.go:49] node "embed-certs-277128" has status "Ready":"True"
	I0501 03:40:15.735494   68864 node_ready.go:38] duration metric: took 7.50546727s for node "embed-certs-277128" to be "Ready" ...
	I0501 03:40:15.735503   68864 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 03:40:15.743215   68864 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-sjplt" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:17.752821   68864 pod_ready.go:102] pod "coredns-7db6d8ff4d-sjplt" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:17.121023   69237 crio.go:462] duration metric: took 1.874524806s to copy over tarball
	I0501 03:40:17.121097   69237 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0501 03:40:19.792970   69237 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.671840765s)
	I0501 03:40:19.793004   69237 crio.go:469] duration metric: took 2.67194801s to extract the tarball
	I0501 03:40:19.793014   69237 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0501 03:40:19.834845   69237 ssh_runner.go:195] Run: sudo crictl images --output json
	I0501 03:40:19.896841   69237 crio.go:514] all images are preloaded for cri-o runtime.
	I0501 03:40:19.896881   69237 cache_images.go:84] Images are preloaded, skipping loading
	I0501 03:40:19.896892   69237 kubeadm.go:928] updating node { 192.168.72.158 8444 v1.30.0 crio true true} ...
	I0501 03:40:19.897027   69237 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-715118 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.158
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-715118 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0501 03:40:19.897113   69237 ssh_runner.go:195] Run: crio config
	I0501 03:40:19.953925   69237 cni.go:84] Creating CNI manager for ""
	I0501 03:40:19.953956   69237 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0501 03:40:19.953971   69237 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0501 03:40:19.953991   69237 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.158 APIServerPort:8444 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-715118 NodeName:default-k8s-diff-port-715118 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.158"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.158 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0501 03:40:19.954133   69237 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.158
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-715118"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.158
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.158"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
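
The kubelet unit drop-in and the kubeadm/kubelet/kube-proxy YAML above are generated from templates filled with per-node values such as the node name, IP, API server port and Kubernetes version. A toy text/template rendering of the kubelet drop-in; the struct fields here are made up for the example and are not minikube's bootstrapper types:

package main

import (
	"os"
	"text/template"
)

const kubeletUnitTmpl = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	params := struct {
		KubernetesVersion, NodeName, NodeIP string
	}{"v1.30.0", "default-k8s-diff-port-715118", "192.168.72.158"}

	// Render the drop-in to stdout; the real file lands in
	// /etc/systemd/system/kubelet.service.d/10-kubeadm.conf on the guest.
	t := template.Must(template.New("kubelet").Parse(kubeletUnitTmpl))
	if err := t.Execute(os.Stdout, params); err != nil {
		panic(err)
	}
}
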
	
	I0501 03:40:19.954198   69237 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0501 03:40:19.967632   69237 binaries.go:44] Found k8s binaries, skipping transfer
	I0501 03:40:19.967708   69237 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0501 03:40:19.984161   69237 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0501 03:40:20.006540   69237 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0501 03:40:20.029218   69237 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0501 03:40:20.051612   69237 ssh_runner.go:195] Run: grep 192.168.72.158	control-plane.minikube.internal$ /etc/hosts
	I0501 03:40:20.056502   69237 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.158	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0501 03:40:20.071665   69237 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:40:20.194289   69237 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 03:40:20.215402   69237 certs.go:68] Setting up /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/default-k8s-diff-port-715118 for IP: 192.168.72.158
	I0501 03:40:20.215440   69237 certs.go:194] generating shared ca certs ...
	I0501 03:40:20.215471   69237 certs.go:226] acquiring lock for ca certs: {Name:mk85a75d14c00780d4bf09cd9bdaf0f96f1b8fce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:40:20.215698   69237 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key
	I0501 03:40:20.215769   69237 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key
	I0501 03:40:20.215785   69237 certs.go:256] generating profile certs ...
	I0501 03:40:20.215922   69237 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/default-k8s-diff-port-715118/client.key
	I0501 03:40:20.216023   69237 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/default-k8s-diff-port-715118/apiserver.key.91bc3872
	I0501 03:40:20.216094   69237 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/default-k8s-diff-port-715118/proxy-client.key
	I0501 03:40:20.216275   69237 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem (1338 bytes)
	W0501 03:40:20.216321   69237 certs.go:480] ignoring /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724_empty.pem, impossibly tiny 0 bytes
	I0501 03:40:20.216337   69237 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem (1679 bytes)
	I0501 03:40:20.216375   69237 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem (1078 bytes)
	I0501 03:40:20.216439   69237 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem (1123 bytes)
	I0501 03:40:20.216472   69237 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem (1675 bytes)
	I0501 03:40:20.216560   69237 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem (1708 bytes)
	I0501 03:40:20.217306   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0501 03:40:20.256162   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0501 03:40:20.293643   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0501 03:40:20.329175   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0501 03:40:20.367715   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/default-k8s-diff-port-715118/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0501 03:40:20.400024   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/default-k8s-diff-port-715118/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0501 03:40:20.428636   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/default-k8s-diff-port-715118/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0501 03:40:20.458689   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/default-k8s-diff-port-715118/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0501 03:40:20.487619   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem --> /usr/share/ca-certificates/207242.pem (1708 bytes)
	I0501 03:40:20.518140   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0501 03:40:20.547794   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem --> /usr/share/ca-certificates/20724.pem (1338 bytes)
	I0501 03:40:20.580453   69237 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0501 03:40:20.605211   69237 ssh_runner.go:195] Run: openssl version
	I0501 03:40:20.612269   69237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/207242.pem && ln -fs /usr/share/ca-certificates/207242.pem /etc/ssl/certs/207242.pem"
	I0501 03:40:20.626575   69237 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/207242.pem
	I0501 03:40:20.632370   69237 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  1 02:20 /usr/share/ca-certificates/207242.pem
	I0501 03:40:20.632439   69237 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/207242.pem
	I0501 03:40:20.639563   69237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/207242.pem /etc/ssl/certs/3ec20f2e.0"
	I0501 03:40:16.997533   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:16.998034   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:40:16.998076   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:40:16.997984   70499 retry.go:31] will retry after 596.56948ms: waiting for machine to come up
	I0501 03:40:17.596671   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:17.597182   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:40:17.597207   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:40:17.597132   70499 retry.go:31] will retry after 770.742109ms: waiting for machine to come up
	I0501 03:40:18.369337   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:18.369833   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:40:18.369864   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:40:18.369780   70499 retry.go:31] will retry after 1.382502808s: waiting for machine to come up
	I0501 03:40:19.753936   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:19.754419   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:40:19.754458   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:40:19.754363   70499 retry.go:31] will retry after 1.344792989s: waiting for machine to come up
	I0501 03:40:21.101047   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:21.101474   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:40:21.101514   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:40:21.101442   70499 retry.go:31] will retry after 1.636964906s: waiting for machine to come up
	I0501 03:40:20.252239   68864 pod_ready.go:102] pod "coredns-7db6d8ff4d-sjplt" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:22.751407   68864 pod_ready.go:92] pod "coredns-7db6d8ff4d-sjplt" in "kube-system" namespace has status "Ready":"True"
	I0501 03:40:22.751431   68864 pod_ready.go:81] duration metric: took 7.008190087s for pod "coredns-7db6d8ff4d-sjplt" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:22.751442   68864 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:22.757104   68864 pod_ready.go:92] pod "etcd-embed-certs-277128" in "kube-system" namespace has status "Ready":"True"
	I0501 03:40:22.757124   68864 pod_ready.go:81] duration metric: took 5.677117ms for pod "etcd-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:22.757141   68864 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:22.763083   68864 pod_ready.go:92] pod "kube-apiserver-embed-certs-277128" in "kube-system" namespace has status "Ready":"True"
	I0501 03:40:22.763107   68864 pod_ready.go:81] duration metric: took 5.958961ms for pod "kube-apiserver-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:22.763119   68864 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:22.768163   68864 pod_ready.go:92] pod "kube-controller-manager-embed-certs-277128" in "kube-system" namespace has status "Ready":"True"
	I0501 03:40:22.768182   68864 pod_ready.go:81] duration metric: took 5.055934ms for pod "kube-controller-manager-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:22.768193   68864 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-phx7x" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:22.772478   68864 pod_ready.go:92] pod "kube-proxy-phx7x" in "kube-system" namespace has status "Ready":"True"
	I0501 03:40:22.772497   68864 pod_ready.go:81] duration metric: took 4.297358ms for pod "kube-proxy-phx7x" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:22.772505   68864 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:23.149692   68864 pod_ready.go:92] pod "kube-scheduler-embed-certs-277128" in "kube-system" namespace has status "Ready":"True"
	I0501 03:40:23.149726   68864 pod_ready.go:81] duration metric: took 377.213314ms for pod "kube-scheduler-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:23.149741   68864 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace to be "Ready" ...
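
pod_ready.go polls each system pod until its Ready condition is True, here waiting up to 6m0s for metrics-server. The sketch below shows the shape of such a wait loop; the isReady callback is a placeholder for a real client-go lookup of the pod's conditions.

package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// waitForPodReady polls an arbitrary readiness check until it reports true or
// the context expires.
func waitForPodReady(ctx context.Context, interval time.Duration, isReady func() (bool, error)) error {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		ready, err := isReady()
		if err != nil {
			return err
		}
		if ready {
			return nil
		}
		select {
		case <-ctx.Done():
			return errors.New("timed out waiting for pod to be Ready")
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()

	start := time.Now()
	err := waitForPodReady(ctx, 2*time.Second, func() (bool, error) {
		// Fake check for the example: "ready" after five seconds.
		return time.Since(start) > 5*time.Second, nil
	})
	fmt.Println("wait result:", err)
}
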
	I0501 03:40:20.653202   69237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0501 03:40:20.878582   69237 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:40:20.884671   69237 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  1 02:08 /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:40:20.884755   69237 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:40:20.891633   69237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0501 03:40:20.906032   69237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20724.pem && ln -fs /usr/share/ca-certificates/20724.pem /etc/ssl/certs/20724.pem"
	I0501 03:40:20.924491   69237 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20724.pem
	I0501 03:40:20.931346   69237 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  1 02:20 /usr/share/ca-certificates/20724.pem
	I0501 03:40:20.931421   69237 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20724.pem
	I0501 03:40:20.937830   69237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20724.pem /etc/ssl/certs/51391683.0"
	I0501 03:40:20.951239   69237 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0501 03:40:20.956883   69237 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0501 03:40:20.964048   69237 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0501 03:40:20.971156   69237 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0501 03:40:20.978243   69237 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0501 03:40:20.985183   69237 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0501 03:40:20.991709   69237 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
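
Each "openssl x509 -checkend 86400" call above verifies that a control-plane certificate will not expire within the next 24 hours. The same check expressed in Go with the standard crypto/x509 parser:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

// certExpiresWithin reports whether a PEM-encoded certificate expires within
// the given window (86400s = 24h in the log above).
func certExpiresWithin(pemPath string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	expiring, err := certExpiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("expires within 24h:", expiring)
}
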
	I0501 03:40:20.998390   69237 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-715118 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.0 ClusterName:default-k8s-diff-port-715118 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.158 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 03:40:20.998509   69237 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0501 03:40:20.998558   69237 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0501 03:40:21.051469   69237 cri.go:89] found id: ""
	I0501 03:40:21.051575   69237 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0501 03:40:21.063280   69237 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0501 03:40:21.063301   69237 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0501 03:40:21.063307   69237 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0501 03:40:21.063381   69237 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0501 03:40:21.077380   69237 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0501 03:40:21.078445   69237 kubeconfig.go:125] found "default-k8s-diff-port-715118" server: "https://192.168.72.158:8444"
	I0501 03:40:21.080872   69237 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0501 03:40:21.095004   69237 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.158
	I0501 03:40:21.095045   69237 kubeadm.go:1154] stopping kube-system containers ...
	I0501 03:40:21.095059   69237 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0501 03:40:21.095123   69237 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0501 03:40:21.151629   69237 cri.go:89] found id: ""
	I0501 03:40:21.151711   69237 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0501 03:40:21.177077   69237 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0501 03:40:21.192057   69237 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0501 03:40:21.192087   69237 kubeadm.go:156] found existing configuration files:
	
	I0501 03:40:21.192146   69237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0501 03:40:21.206784   69237 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0501 03:40:21.206870   69237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0501 03:40:21.221942   69237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0501 03:40:21.236442   69237 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0501 03:40:21.236516   69237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0501 03:40:21.251285   69237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0501 03:40:21.265997   69237 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0501 03:40:21.266049   69237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0501 03:40:21.281137   69237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0501 03:40:21.297713   69237 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0501 03:40:21.297783   69237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0501 03:40:21.314264   69237 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0501 03:40:21.328605   69237 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:21.478475   69237 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:22.161692   69237 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:22.432136   69237 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:22.514744   69237 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
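
Since valid configuration files were found, the restart path re-runs individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated kubeadm.yaml instead of doing a full init. A compact sketch of that sequence; the real commands run on the guest under sudo with the versioned binaries directory prepended to PATH, while here the kubeadm binary is simply addressed by full path.

package main

import (
	"fmt"
	"os/exec"
	"path/filepath"
)

// runInitPhases replays the per-phase kubeadm invocations shown in the log.
func runInitPhases(binDir, config string) error {
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	kubeadm := filepath.Join(binDir, "kubeadm")
	for _, phase := range phases {
		args := append([]string{"init", "phase"}, phase...)
		args = append(args, "--config", config)
		if out, err := exec.Command(kubeadm, args...).CombinedOutput(); err != nil {
			return fmt.Errorf("phase %v failed: %v: %s", phase, err, out)
		}
	}
	return nil
}

func main() {
	err := runInitPhases("/var/lib/minikube/binaries/v1.30.0", "/var/tmp/minikube/kubeadm.yaml")
	fmt.Println("init phases:", err)
}
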
	I0501 03:40:22.597689   69237 api_server.go:52] waiting for apiserver process to appear ...
	I0501 03:40:22.597770   69237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:23.098146   69237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:23.597831   69237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:23.629375   69237 api_server.go:72] duration metric: took 1.031684055s to wait for apiserver process to appear ...
	I0501 03:40:23.629462   69237 api_server.go:88] waiting for apiserver healthz status ...
	I0501 03:40:23.629500   69237 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8444/healthz ...
	I0501 03:40:23.630045   69237 api_server.go:269] stopped: https://192.168.72.158:8444/healthz: Get "https://192.168.72.158:8444/healthz": dial tcp 192.168.72.158:8444: connect: connection refused
	I0501 03:40:24.129831   69237 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8444/healthz ...
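
After the control-plane phases, api_server.go polls https://192.168.72.158:8444/healthz, tolerating "connection refused", 403 and 500 responses until the apiserver reports healthy. A stripped-down polling loop in the same spirit; TLS verification is skipped here only to keep the sketch short, whereas the real check authenticates with the cluster's client certificates.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// pollHealthz keeps probing the apiserver /healthz endpoint until it answers
// 200 OK or the deadline passes.
func pollHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %.60s...\n", resp.StatusCode, body)
		} else {
			fmt.Println("healthz not reachable yet:", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %v", timeout)
}

func main() {
	err := pollHealthz("https://192.168.72.158:8444/healthz", 4*time.Minute)
	fmt.Println("healthz wait:", err)
}

Pausing between probes is what produces the spaced-out 403 and 500 responses in the log below, as the [-]poststarthook checks clear one by one.
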
	I0501 03:40:22.740241   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:22.740692   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:40:22.740722   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:40:22.740656   70499 retry.go:31] will retry after 1.899831455s: waiting for machine to come up
	I0501 03:40:24.642609   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:24.643075   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:40:24.643104   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:40:24.643019   70499 retry.go:31] will retry after 3.503333894s: waiting for machine to come up
	I0501 03:40:25.157335   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:27.160083   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:27.091079   69237 api_server.go:279] https://192.168.72.158:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0501 03:40:27.091134   69237 api_server.go:103] status: https://192.168.72.158:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0501 03:40:27.091152   69237 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8444/healthz ...
	I0501 03:40:27.163481   69237 api_server.go:279] https://192.168.72.158:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:40:27.163509   69237 api_server.go:103] status: https://192.168.72.158:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:40:27.163522   69237 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8444/healthz ...
	I0501 03:40:27.175097   69237 api_server.go:279] https://192.168.72.158:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:40:27.175129   69237 api_server.go:103] status: https://192.168.72.158:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:40:27.629613   69237 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8444/healthz ...
	I0501 03:40:27.637166   69237 api_server.go:279] https://192.168.72.158:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:40:27.637202   69237 api_server.go:103] status: https://192.168.72.158:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:40:28.130467   69237 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8444/healthz ...
	I0501 03:40:28.148799   69237 api_server.go:279] https://192.168.72.158:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:40:28.148823   69237 api_server.go:103] status: https://192.168.72.158:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:40:28.630500   69237 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8444/healthz ...
	I0501 03:40:28.642856   69237 api_server.go:279] https://192.168.72.158:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:40:28.642890   69237 api_server.go:103] status: https://192.168.72.158:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:40:29.130453   69237 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8444/healthz ...
	I0501 03:40:29.137783   69237 api_server.go:279] https://192.168.72.158:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:40:29.137819   69237 api_server.go:103] status: https://192.168.72.158:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:40:29.630448   69237 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8444/healthz ...
	I0501 03:40:29.634736   69237 api_server.go:279] https://192.168.72.158:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:40:29.634764   69237 api_server.go:103] status: https://192.168.72.158:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:40:30.130371   69237 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8444/healthz ...
	I0501 03:40:30.134727   69237 api_server.go:279] https://192.168.72.158:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:40:30.134755   69237 api_server.go:103] status: https://192.168.72.158:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:40:30.630555   69237 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8444/healthz ...
	I0501 03:40:30.637025   69237 api_server.go:279] https://192.168.72.158:8444/healthz returned 200:
	ok
	I0501 03:40:30.644179   69237 api_server.go:141] control plane version: v1.30.0
	I0501 03:40:30.644209   69237 api_server.go:131] duration metric: took 7.014727807s to wait for apiserver health ...
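	The block above is the apiserver readiness wait: /healthz is polled roughly every 500ms, and the 403/500 bodies list which post-start hooks are still pending until the endpoint finally returns 200. A minimal sketch of that polling pattern in Go, assuming a self-signed apiserver certificate (hence the skipped TLS verification); this is illustrative, not minikube's actual api_server.go code:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns HTTP 200 or the deadline passes.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// The apiserver serves a self-signed cert during bring-up.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz returned "ok"
				}
				// Non-200 bodies enumerate the post-start hooks still failing.
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.72.158:8444/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}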
	I0501 03:40:30.644217   69237 cni.go:84] Creating CNI manager for ""
	I0501 03:40:30.644223   69237 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0501 03:40:30.646018   69237 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0501 03:40:30.647222   69237 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
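	After creating /etc/cni/net.d, the bridge CNI step drops a config file into that directory. A hedged sketch of what writing such a bridge conflist could look like; the file name, subnet, and plugin fields below are assumptions for illustration, not the template minikube actually renders:

	package main

	import "os"

	// Illustrative bridge CNI config; values are assumptions.
	const bridgeConf = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isGateway": true,
	      "ipMasq": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}`

	func main() {
		if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
			panic(err)
		}
		if err := os.WriteFile("/etc/cni/net.d/1-bridge.conflist", []byte(bridgeConf), 0o644); err != nil {
			panic(err)
		}
	}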
	I0501 03:40:28.148102   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:28.148506   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:40:28.148547   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:40:28.148463   70499 retry.go:31] will retry after 4.150508159s: waiting for machine to come up
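	The interleaved old-k8s-version-503971 lines show the kvm2 driver repeatedly looking for the domain's DHCP lease and sleeping a growing, jittered interval between attempts ("will retry after ..."). A minimal sketch of that retry-until-IP pattern; lookupIP is a hypothetical stand-in for the libvirt lease query, not the driver's real call:

	package main

	import (
		"errors"
		"fmt"
		"log"
		"math/rand"
		"time"
	)

	// lookupIP stands in for querying the hypervisor's DHCP leases by MAC address.
	func lookupIP(mac string) (string, error) {
		return "", errors.New("no lease yet")
	}

	func waitForIP(mac string, timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		wait := time.Second
		for time.Now().Before(deadline) {
			if ip, err := lookupIP(mac); err == nil {
				return ip, nil
			}
			// Grow the wait and add jitter, similar to the varying retry durations in the log.
			sleep := wait + time.Duration(rand.Int63n(int64(wait)))
			log.Printf("will retry after %s: waiting for machine to come up", sleep)
			time.Sleep(sleep)
			wait *= 2
		}
		return "", fmt.Errorf("machine %s did not get an IP within %s", mac, timeout)
	}

	func main() {
		if _, err := waitForIP("52:54:00:7d:68:83", 10*time.Second); err != nil {
			fmt.Println(err)
		}
	}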
	I0501 03:40:33.783990   68640 start.go:364] duration metric: took 56.072338201s to acquireMachinesLock for "no-preload-892672"
	I0501 03:40:33.784047   68640 start.go:96] Skipping create...Using existing machine configuration
	I0501 03:40:33.784056   68640 fix.go:54] fixHost starting: 
	I0501 03:40:33.784468   68640 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:40:33.784504   68640 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:40:33.801460   68640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37745
	I0501 03:40:33.802023   68640 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:40:33.802634   68640 main.go:141] libmachine: Using API Version  1
	I0501 03:40:33.802669   68640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:40:33.803062   68640 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:40:33.803262   68640 main.go:141] libmachine: (no-preload-892672) Calling .DriverName
	I0501 03:40:33.803379   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetState
	I0501 03:40:33.805241   68640 fix.go:112] recreateIfNeeded on no-preload-892672: state=Stopped err=<nil>
	I0501 03:40:33.805266   68640 main.go:141] libmachine: (no-preload-892672) Calling .DriverName
	W0501 03:40:33.805452   68640 fix.go:138] unexpected machine state, will restart: <nil>
	I0501 03:40:33.807020   68640 out.go:177] * Restarting existing kvm2 VM for "no-preload-892672" ...
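	The no-preload-892672 lines above are the "fixHost" path: the existing machine configuration is reused, the kvm2 plugin server is started on a local port, the VM state is read, and a stopped VM is restarted rather than recreated. A hedged sketch of that decision, using a simplified hypothetical Driver interface rather than libmachine's real types:

	package main

	import "fmt"

	// State and Driver are simplified stand-ins for libmachine's types.
	type State int

	const (
		Running State = iota
		Stopped
	)

	type Driver interface {
		GetState() (State, error)
		Start() error
	}

	// fixHost reuses the existing machine configuration and only restarts the VM
	// when it is stopped ("Skipping create...Using existing machine configuration",
	// then "Restarting existing kvm2 VM").
	func fixHost(d Driver, name string) error {
		st, err := d.GetState()
		if err != nil {
			return fmt.Errorf("getting state of %q: %w", name, err)
		}
		if st == Running {
			return nil // nothing to do
		}
		fmt.Printf("* Restarting existing kvm2 VM for %q ...\n", name)
		return d.Start()
	}

	type fakeDriver struct{ state State }

	func (f *fakeDriver) GetState() (State, error) { return f.state, nil }
	func (f *fakeDriver) Start() error             { f.state = Running; return nil }

	func main() {
		_ = fixHost(&fakeDriver{state: Stopped}, "no-preload-892672")
	}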
	I0501 03:40:29.656911   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:32.158119   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:32.303427   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.303804   69580 main.go:141] libmachine: (old-k8s-version-503971) Found IP for machine: 192.168.61.104
	I0501 03:40:32.303837   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has current primary IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.303851   69580 main.go:141] libmachine: (old-k8s-version-503971) Reserving static IP address...
	I0501 03:40:32.304254   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "old-k8s-version-503971", mac: "52:54:00:7d:68:83", ip: "192.168.61.104"} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:32.304286   69580 main.go:141] libmachine: (old-k8s-version-503971) Reserved static IP address: 192.168.61.104
	I0501 03:40:32.304305   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | skip adding static IP to network mk-old-k8s-version-503971 - found existing host DHCP lease matching {name: "old-k8s-version-503971", mac: "52:54:00:7d:68:83", ip: "192.168.61.104"}
	I0501 03:40:32.304323   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | Getting to WaitForSSH function...
	I0501 03:40:32.304337   69580 main.go:141] libmachine: (old-k8s-version-503971) Waiting for SSH to be available...
	I0501 03:40:32.306619   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.306972   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:32.307011   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.307114   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | Using SSH client type: external
	I0501 03:40:32.307138   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | Using SSH private key: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/old-k8s-version-503971/id_rsa (-rw-------)
	I0501 03:40:32.307174   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.104 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18779-13391/.minikube/machines/old-k8s-version-503971/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0501 03:40:32.307188   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | About to run SSH command:
	I0501 03:40:32.307224   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | exit 0
	I0501 03:40:32.438508   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | SSH cmd err, output: <nil>: 
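	WaitForSSH above succeeds by running a trivial command (exit 0) through the external ssh binary with host-key checking disabled, retrying until the connection works. A small sketch of that check, reusing the ssh options and key path shown in the log; the helper itself is illustrative, not minikube's sshutil code:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// sshReady reports whether "exit 0" succeeds over the external ssh client.
	func sshReady(ip, keyPath string) bool {
		cmd := exec.Command("ssh",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "ConnectTimeout=10",
			"-i", keyPath,
			"docker@"+ip, "exit 0")
		return cmd.Run() == nil
	}

	func main() {
		ip := "192.168.61.104"
		key := "/home/jenkins/minikube-integration/18779-13391/.minikube/machines/old-k8s-version-503971/id_rsa"
		for i := 0; i < 30; i++ {
			if sshReady(ip, key) {
				fmt.Println("SSH is available")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for SSH")
	}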
	I0501 03:40:32.438882   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetConfigRaw
	I0501 03:40:32.439452   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetIP
	I0501 03:40:32.441984   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.442342   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:32.442369   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.442668   69580 profile.go:143] Saving config to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/config.json ...
	I0501 03:40:32.442875   69580 machine.go:94] provisionDockerMachine start ...
	I0501 03:40:32.442897   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .DriverName
	I0501 03:40:32.443077   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHHostname
	I0501 03:40:32.445129   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.445442   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:32.445480   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.445628   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHPort
	I0501 03:40:32.445806   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:32.445974   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:32.446122   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHUsername
	I0501 03:40:32.446314   69580 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:32.446548   69580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.104 22 <nil> <nil>}
	I0501 03:40:32.446564   69580 main.go:141] libmachine: About to run SSH command:
	hostname
	I0501 03:40:32.559346   69580 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0501 03:40:32.559379   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetMachineName
	I0501 03:40:32.559630   69580 buildroot.go:166] provisioning hostname "old-k8s-version-503971"
	I0501 03:40:32.559654   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetMachineName
	I0501 03:40:32.559832   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHHostname
	I0501 03:40:32.562176   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.562547   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:32.562582   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.562716   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHPort
	I0501 03:40:32.562892   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:32.563019   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:32.563161   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHUsername
	I0501 03:40:32.563332   69580 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:32.563545   69580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.104 22 <nil> <nil>}
	I0501 03:40:32.563564   69580 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-503971 && echo "old-k8s-version-503971" | sudo tee /etc/hostname
	I0501 03:40:32.699918   69580 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-503971
	
	I0501 03:40:32.699961   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHHostname
	I0501 03:40:32.702721   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.703134   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:32.703158   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.703361   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHPort
	I0501 03:40:32.703547   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:32.703744   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:32.703881   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHUsername
	I0501 03:40:32.704037   69580 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:32.704199   69580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.104 22 <nil> <nil>}
	I0501 03:40:32.704215   69580 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-503971' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-503971/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-503971' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0501 03:40:32.830277   69580 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0501 03:40:32.830307   69580 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18779-13391/.minikube CaCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18779-13391/.minikube}
	I0501 03:40:32.830323   69580 buildroot.go:174] setting up certificates
	I0501 03:40:32.830331   69580 provision.go:84] configureAuth start
	I0501 03:40:32.830340   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetMachineName
	I0501 03:40:32.830629   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetIP
	I0501 03:40:32.833575   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.833887   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:32.833932   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.834070   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHHostname
	I0501 03:40:32.836309   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.836664   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:32.836691   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.836824   69580 provision.go:143] copyHostCerts
	I0501 03:40:32.836885   69580 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem, removing ...
	I0501 03:40:32.836895   69580 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem
	I0501 03:40:32.836945   69580 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem (1078 bytes)
	I0501 03:40:32.837046   69580 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem, removing ...
	I0501 03:40:32.837054   69580 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem
	I0501 03:40:32.837072   69580 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem (1123 bytes)
	I0501 03:40:32.837129   69580 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem, removing ...
	I0501 03:40:32.837136   69580 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem
	I0501 03:40:32.837152   69580 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem (1675 bytes)
	I0501 03:40:32.837202   69580 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-503971 san=[127.0.0.1 192.168.61.104 localhost minikube old-k8s-version-503971]
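	configureAuth regenerates the machine's server certificate, signed by the minikube CA and carrying the SANs listed above (127.0.0.1, the VM IP, localhost, minikube, and the profile name). A compact sketch of issuing a SAN-bearing certificate with Go's crypto/x509; the in-process throwaway CA and output paths are simplifications, since minikube instead loads ca.pem/ca-key.pem from its certs directory:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func check(err error) {
		if err != nil {
			panic(err)
		}
	}

	func main() {
		// Throwaway CA (minikube would load an existing one instead).
		caKey, err := rsa.GenerateKey(rand.Reader, 2048)
		check(err)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		check(err)
		caCert, err := x509.ParseCertificate(caDER)
		check(err)

		// Server certificate with the SANs from the provision.go line above.
		srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
		check(err)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-503971"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(1, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"localhost", "minikube", "old-k8s-version-503971"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.104")},
		}
		srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		check(err)

		check(os.WriteFile("server.pem",
			pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER}), 0o644))
		check(os.WriteFile("server-key.pem",
			pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(srvKey)}), 0o600))
	}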
	I0501 03:40:33.047948   69580 provision.go:177] copyRemoteCerts
	I0501 03:40:33.048004   69580 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0501 03:40:33.048030   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHHostname
	I0501 03:40:33.050591   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.050975   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:33.051012   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.051142   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHPort
	I0501 03:40:33.051310   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:33.051465   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHUsername
	I0501 03:40:33.051574   69580 sshutil.go:53] new ssh client: &{IP:192.168.61.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/old-k8s-version-503971/id_rsa Username:docker}
	I0501 03:40:33.143991   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0501 03:40:33.175494   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0501 03:40:33.204770   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0501 03:40:33.232728   69580 provision.go:87] duration metric: took 402.386279ms to configureAuth
	I0501 03:40:33.232756   69580 buildroot.go:189] setting minikube options for container-runtime
	I0501 03:40:33.232962   69580 config.go:182] Loaded profile config "old-k8s-version-503971": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0501 03:40:33.233051   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHHostname
	I0501 03:40:33.235656   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.236006   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:33.236038   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.236162   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHPort
	I0501 03:40:33.236339   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:33.236484   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:33.236633   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHUsername
	I0501 03:40:33.236817   69580 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:33.236980   69580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.104 22 <nil> <nil>}
	I0501 03:40:33.236997   69580 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0501 03:40:33.526370   69580 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0501 03:40:33.526419   69580 machine.go:97] duration metric: took 1.083510254s to provisionDockerMachine
	I0501 03:40:33.526432   69580 start.go:293] postStartSetup for "old-k8s-version-503971" (driver="kvm2")
	I0501 03:40:33.526443   69580 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0501 03:40:33.526470   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .DriverName
	I0501 03:40:33.526788   69580 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0501 03:40:33.526831   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHHostname
	I0501 03:40:33.529815   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.530209   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:33.530268   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.530364   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHPort
	I0501 03:40:33.530559   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:33.530741   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHUsername
	I0501 03:40:33.530909   69580 sshutil.go:53] new ssh client: &{IP:192.168.61.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/old-k8s-version-503971/id_rsa Username:docker}
	I0501 03:40:33.620224   69580 ssh_runner.go:195] Run: cat /etc/os-release
	I0501 03:40:33.625417   69580 info.go:137] Remote host: Buildroot 2023.02.9
	I0501 03:40:33.625447   69580 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13391/.minikube/addons for local assets ...
	I0501 03:40:33.625511   69580 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13391/.minikube/files for local assets ...
	I0501 03:40:33.625594   69580 filesync.go:149] local asset: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem -> 207242.pem in /etc/ssl/certs
	I0501 03:40:33.625691   69580 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0501 03:40:33.637311   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem --> /etc/ssl/certs/207242.pem (1708 bytes)
	I0501 03:40:33.666707   69580 start.go:296] duration metric: took 140.263297ms for postStartSetup
	I0501 03:40:33.666740   69580 fix.go:56] duration metric: took 20.150640355s for fixHost
	I0501 03:40:33.666758   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHHostname
	I0501 03:40:33.669394   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.669822   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:33.669852   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.669963   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHPort
	I0501 03:40:33.670213   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:33.670388   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:33.670589   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHUsername
	I0501 03:40:33.670794   69580 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:33.670972   69580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.104 22 <nil> <nil>}
	I0501 03:40:33.670984   69580 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0501 03:40:33.783810   69580 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714534833.728910946
	
	I0501 03:40:33.783839   69580 fix.go:216] guest clock: 1714534833.728910946
	I0501 03:40:33.783850   69580 fix.go:229] Guest: 2024-05-01 03:40:33.728910946 +0000 UTC Remote: 2024-05-01 03:40:33.666743363 +0000 UTC m=+232.246108464 (delta=62.167583ms)
	I0501 03:40:33.783893   69580 fix.go:200] guest clock delta is within tolerance: 62.167583ms
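	The clock check above runs date on the guest to get fractional Unix seconds, parses the result, and compares it with a host-side timestamp captured around the same moment; only a delta beyond some tolerance would trigger a resync. A small sketch of that comparison, reproducing the 62.167583ms delta from the log; the tolerance constant below is an assumption, not minikube's value:

	package main

	import (
		"fmt"
		"math"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock turns "seconds.nanoseconds" output (as from date +%s.%N,
	// assuming a full 9-digit nanosecond field) into a time.Time.
	func parseGuestClock(out string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			nsec, _ = strconv.ParseInt(parts[1], 10, 64)
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, _ := parseGuestClock("1714534833.728910946") // value from the log
		host := time.Unix(1714534833, 666743363)            // host-side reference time
		delta := guest.Sub(host)

		const tolerance = 2 * time.Second // assumed threshold
		if math.Abs(float64(delta)) <= float64(tolerance) {
			fmt.Printf("guest clock delta is within tolerance: %s\n", delta)
		} else {
			fmt.Printf("guest clock is off by %s, would resync\n", delta)
		}
	}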
	I0501 03:40:33.783903   69580 start.go:83] releasing machines lock for "old-k8s-version-503971", held for 20.267840723s
	I0501 03:40:33.783933   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .DriverName
	I0501 03:40:33.784203   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetIP
	I0501 03:40:33.786846   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.787202   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:33.787230   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.787385   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .DriverName
	I0501 03:40:33.787837   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .DriverName
	I0501 03:40:33.787997   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .DriverName
	I0501 03:40:33.788085   69580 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0501 03:40:33.788126   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHHostname
	I0501 03:40:33.788252   69580 ssh_runner.go:195] Run: cat /version.json
	I0501 03:40:33.788279   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHHostname
	I0501 03:40:33.790748   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.791086   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.791118   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:33.791142   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.791435   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHPort
	I0501 03:40:33.791491   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:33.791532   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.791618   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:33.791740   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHPort
	I0501 03:40:33.791815   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHUsername
	I0501 03:40:33.791937   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:33.792014   69580 sshutil.go:53] new ssh client: &{IP:192.168.61.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/old-k8s-version-503971/id_rsa Username:docker}
	I0501 03:40:33.792069   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHUsername
	I0501 03:40:33.792206   69580 sshutil.go:53] new ssh client: &{IP:192.168.61.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/old-k8s-version-503971/id_rsa Username:docker}
	I0501 03:40:33.876242   69580 ssh_runner.go:195] Run: systemctl --version
	I0501 03:40:33.901692   69580 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0501 03:40:34.056758   69580 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0501 03:40:34.065070   69580 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0501 03:40:34.065156   69580 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0501 03:40:34.085337   69580 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0501 03:40:34.085364   69580 start.go:494] detecting cgroup driver to use...
	I0501 03:40:34.085432   69580 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0501 03:40:34.102723   69580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 03:40:34.118792   69580 docker.go:217] disabling cri-docker service (if available) ...
	I0501 03:40:34.118847   69580 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0501 03:40:34.133978   69580 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0501 03:40:34.153890   69580 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0501 03:40:34.283815   69580 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0501 03:40:34.475851   69580 docker.go:233] disabling docker service ...
	I0501 03:40:34.475926   69580 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0501 03:40:34.500769   69580 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0501 03:40:34.517315   69580 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0501 03:40:34.674322   69580 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0501 03:40:34.833281   69580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0501 03:40:34.852610   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 03:40:34.879434   69580 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0501 03:40:34.879517   69580 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:34.892197   69580 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0501 03:40:34.892269   69580 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:34.904437   69580 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:34.919950   69580 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:34.933772   69580 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0501 03:40:34.947563   69580 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0501 03:40:34.965724   69580 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0501 03:40:34.965795   69580 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0501 03:40:34.984251   69580 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0501 03:40:34.997050   69580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:40:35.155852   69580 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0501 03:40:35.362090   69580 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0501 03:40:35.362164   69580 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0501 03:40:35.368621   69580 start.go:562] Will wait 60s for crictl version
	I0501 03:40:35.368701   69580 ssh_runner.go:195] Run: which crictl
	I0501 03:40:35.373792   69580 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0501 03:40:35.436905   69580 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0501 03:40:35.437018   69580 ssh_runner.go:195] Run: crio --version
	I0501 03:40:35.485130   69580 ssh_runner.go:195] Run: crio --version
	I0501 03:40:35.528700   69580 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0501 03:40:30.661395   69237 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0501 03:40:30.682810   69237 system_pods.go:43] waiting for kube-system pods to appear ...
	I0501 03:40:30.694277   69237 system_pods.go:59] 8 kube-system pods found
	I0501 03:40:30.694326   69237 system_pods.go:61] "coredns-7db6d8ff4d-9r7dt" [75d43a25-d309-427e-befc-7f1851b90d8c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0501 03:40:30.694343   69237 system_pods.go:61] "etcd-default-k8s-diff-port-715118" [21f6a4cd-f662-4865-9208-83959f0a6782] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0501 03:40:30.694354   69237 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-715118" [4dc3e45e-a5d8-480f-a8e8-763ecab0976b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0501 03:40:30.694369   69237 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-715118" [340580a3-040e-48fc-b89c-36a4f6fccfc2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0501 03:40:30.694376   69237 system_pods.go:61] "kube-proxy-vg7ts" [e55f3363-178c-427a-819d-0dc94c3116f3] Running
	I0501 03:40:30.694388   69237 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-715118" [b850fc4a-da6b-4714-98bb-e36e185880dd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0501 03:40:30.694417   69237 system_pods.go:61] "metrics-server-569cc877fc-2btjj" [9b8ff94d-9e59-46d4-ac6d-7accca8b3552] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0501 03:40:30.694427   69237 system_pods.go:61] "storage-provisioner" [d44a3cf1-c8a5-4a20-8dd6-b854680b33b9] Running
	I0501 03:40:30.694435   69237 system_pods.go:74] duration metric: took 11.599113ms to wait for pod list to return data ...
	I0501 03:40:30.694449   69237 node_conditions.go:102] verifying NodePressure condition ...
	I0501 03:40:30.697795   69237 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 03:40:30.697825   69237 node_conditions.go:123] node cpu capacity is 2
	I0501 03:40:30.697838   69237 node_conditions.go:105] duration metric: took 3.383507ms to run NodePressure ...
	I0501 03:40:30.697858   69237 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:30.978827   69237 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0501 03:40:30.984628   69237 kubeadm.go:733] kubelet initialised
	I0501 03:40:30.984650   69237 kubeadm.go:734] duration metric: took 5.799905ms waiting for restarted kubelet to initialise ...
	I0501 03:40:30.984656   69237 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 03:40:30.992354   69237 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-9r7dt" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:30.999663   69237 pod_ready.go:97] node "default-k8s-diff-port-715118" hosting pod "coredns-7db6d8ff4d-9r7dt" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-715118" has status "Ready":"False"
	I0501 03:40:30.999690   69237 pod_ready.go:81] duration metric: took 7.312969ms for pod "coredns-7db6d8ff4d-9r7dt" in "kube-system" namespace to be "Ready" ...
	E0501 03:40:30.999700   69237 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-715118" hosting pod "coredns-7db6d8ff4d-9r7dt" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-715118" has status "Ready":"False"
	I0501 03:40:30.999706   69237 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:31.006163   69237 pod_ready.go:97] node "default-k8s-diff-port-715118" hosting pod "etcd-default-k8s-diff-port-715118" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-715118" has status "Ready":"False"
	I0501 03:40:31.006187   69237 pod_ready.go:81] duration metric: took 6.471262ms for pod "etcd-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	E0501 03:40:31.006199   69237 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-715118" hosting pod "etcd-default-k8s-diff-port-715118" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-715118" has status "Ready":"False"
	I0501 03:40:31.006208   69237 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:31.011772   69237 pod_ready.go:97] node "default-k8s-diff-port-715118" hosting pod "kube-apiserver-default-k8s-diff-port-715118" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-715118" has status "Ready":"False"
	I0501 03:40:31.011793   69237 pod_ready.go:81] duration metric: took 5.576722ms for pod "kube-apiserver-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	E0501 03:40:31.011803   69237 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-715118" hosting pod "kube-apiserver-default-k8s-diff-port-715118" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-715118" has status "Ready":"False"
	I0501 03:40:31.011810   69237 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:31.086163   69237 pod_ready.go:97] node "default-k8s-diff-port-715118" hosting pod "kube-controller-manager-default-k8s-diff-port-715118" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-715118" has status "Ready":"False"
	I0501 03:40:31.086194   69237 pod_ready.go:81] duration metric: took 74.377197ms for pod "kube-controller-manager-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	E0501 03:40:31.086207   69237 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-715118" hosting pod "kube-controller-manager-default-k8s-diff-port-715118" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-715118" has status "Ready":"False"
	I0501 03:40:31.086214   69237 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-vg7ts" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:31.487056   69237 pod_ready.go:92] pod "kube-proxy-vg7ts" in "kube-system" namespace has status "Ready":"True"
	I0501 03:40:31.487078   69237 pod_ready.go:81] duration metric: took 400.857543ms for pod "kube-proxy-vg7ts" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:31.487088   69237 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:33.502448   69237 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-715118" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:35.530015   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetIP
	I0501 03:40:35.533706   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:35.534178   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:35.534254   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:35.534515   69580 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0501 03:40:35.541542   69580 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0501 03:40:35.563291   69580 kubeadm.go:877] updating cluster {Name:old-k8s-version-503971 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-503971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.104 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0501 03:40:35.563434   69580 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0501 03:40:35.563512   69580 ssh_runner.go:195] Run: sudo crictl images --output json
	I0501 03:40:35.646548   69580 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0501 03:40:35.646635   69580 ssh_runner.go:195] Run: which lz4
	I0501 03:40:35.652824   69580 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0501 03:40:35.660056   69580 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0501 03:40:35.660099   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0501 03:40:33.808828   68640 main.go:141] libmachine: (no-preload-892672) Calling .Start
	I0501 03:40:33.809083   68640 main.go:141] libmachine: (no-preload-892672) Ensuring networks are active...
	I0501 03:40:33.809829   68640 main.go:141] libmachine: (no-preload-892672) Ensuring network default is active
	I0501 03:40:33.810166   68640 main.go:141] libmachine: (no-preload-892672) Ensuring network mk-no-preload-892672 is active
	I0501 03:40:33.810632   68640 main.go:141] libmachine: (no-preload-892672) Getting domain xml...
	I0501 03:40:33.811386   68640 main.go:141] libmachine: (no-preload-892672) Creating domain...
	I0501 03:40:35.133886   68640 main.go:141] libmachine: (no-preload-892672) Waiting to get IP...
	I0501 03:40:35.134756   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:35.135216   68640 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:40:35.135280   68640 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:40:35.135178   70664 retry.go:31] will retry after 275.796908ms: waiting for machine to come up
	I0501 03:40:35.412670   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:35.413206   68640 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:40:35.413232   68640 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:40:35.413162   70664 retry.go:31] will retry after 326.173381ms: waiting for machine to come up
	I0501 03:40:35.740734   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:35.741314   68640 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:40:35.741342   68640 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:40:35.741260   70664 retry.go:31] will retry after 476.50915ms: waiting for machine to come up
	I0501 03:40:36.219908   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:36.220440   68640 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:40:36.220473   68640 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:40:36.220399   70664 retry.go:31] will retry after 377.277784ms: waiting for machine to come up
	I0501 03:40:36.598936   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:36.599391   68640 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:40:36.599417   68640 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:40:36.599348   70664 retry.go:31] will retry after 587.166276ms: waiting for machine to come up
	I0501 03:40:37.188757   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:37.189406   68640 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:40:37.189441   68640 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:40:37.189311   70664 retry.go:31] will retry after 801.958256ms: waiting for machine to come up
	I0501 03:40:34.658104   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:36.660517   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:35.998453   69237 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-715118" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:38.495088   69237 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-715118" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:39.004175   69237 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-715118" in "kube-system" namespace has status "Ready":"True"
	I0501 03:40:39.004198   69237 pod_ready.go:81] duration metric: took 7.517103824s for pod "kube-scheduler-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:39.004209   69237 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:37.870306   69580 crio.go:462] duration metric: took 2.217531377s to copy over tarball
	I0501 03:40:37.870393   69580 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0501 03:40:37.992669   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:37.993052   68640 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:40:37.993080   68640 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:40:37.993016   70664 retry.go:31] will retry after 1.085029482s: waiting for machine to come up
	I0501 03:40:39.079315   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:39.079739   68640 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:40:39.079779   68640 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:40:39.079682   70664 retry.go:31] will retry after 1.140448202s: waiting for machine to come up
	I0501 03:40:40.221645   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:40.222165   68640 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:40:40.222192   68640 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:40:40.222103   70664 retry.go:31] will retry after 1.434247869s: waiting for machine to come up
	I0501 03:40:41.658447   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:41.659034   68640 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:40:41.659072   68640 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:40:41.659003   70664 retry.go:31] will retry after 1.759453732s: waiting for machine to come up
	I0501 03:40:39.157834   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:41.164729   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:43.658248   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:41.014770   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:43.513038   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:45.516821   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:41.534681   69580 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.664236925s)
	I0501 03:40:41.599216   69580 crio.go:469] duration metric: took 3.72886857s to extract the tarball
	I0501 03:40:41.599238   69580 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0501 03:40:41.649221   69580 ssh_runner.go:195] Run: sudo crictl images --output json
	I0501 03:40:41.697169   69580 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0501 03:40:41.697198   69580 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0501 03:40:41.697302   69580 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0501 03:40:41.697346   69580 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0501 03:40:41.697367   69580 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0501 03:40:41.697352   69580 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0501 03:40:41.697375   69580 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0501 03:40:41.697329   69580 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0501 03:40:41.697275   69580 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 03:40:41.697329   69580 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0501 03:40:41.698950   69580 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0501 03:40:41.699010   69580 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0501 03:40:41.699114   69580 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0501 03:40:41.699251   69580 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 03:40:41.699292   69580 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0501 03:40:41.699020   69580 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0501 03:40:41.699550   69580 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0501 03:40:41.699715   69580 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0501 03:40:41.830042   69580 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0501 03:40:41.881770   69580 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0501 03:40:41.881834   69580 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0501 03:40:41.881896   69580 ssh_runner.go:195] Run: which crictl
	I0501 03:40:41.887083   69580 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0501 03:40:41.894597   69580 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0501 03:40:41.935993   69580 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0501 03:40:41.937339   69580 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0501 03:40:41.961728   69580 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0501 03:40:41.961778   69580 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0501 03:40:41.961827   69580 ssh_runner.go:195] Run: which crictl
	I0501 03:40:42.004327   69580 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0501 03:40:42.004395   69580 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0501 03:40:42.004435   69580 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0501 03:40:42.004493   69580 ssh_runner.go:195] Run: which crictl
	I0501 03:40:42.053743   69580 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0501 03:40:42.055914   69580 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0501 03:40:42.056267   69580 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0501 03:40:42.056610   69580 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0501 03:40:42.060229   69580 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0501 03:40:42.070489   69580 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0501 03:40:42.127829   69580 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0501 03:40:42.127880   69580 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0501 03:40:42.127927   69580 ssh_runner.go:195] Run: which crictl
	I0501 03:40:42.201731   69580 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0501 03:40:42.201783   69580 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0501 03:40:42.201814   69580 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0501 03:40:42.201842   69580 ssh_runner.go:195] Run: which crictl
	I0501 03:40:42.211112   69580 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0501 03:40:42.211163   69580 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0501 03:40:42.211227   69580 ssh_runner.go:195] Run: which crictl
	I0501 03:40:42.217794   69580 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0501 03:40:42.217835   69580 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0501 03:40:42.217873   69580 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0501 03:40:42.217917   69580 ssh_runner.go:195] Run: which crictl
	I0501 03:40:42.217873   69580 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0501 03:40:42.220250   69580 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0501 03:40:42.274880   69580 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0501 03:40:42.294354   69580 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0501 03:40:42.294436   69580 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0501 03:40:42.305191   69580 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0501 03:40:42.342502   69580 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0501 03:40:42.560474   69580 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 03:40:42.712970   69580 cache_images.go:92] duration metric: took 1.015752585s to LoadCachedImages
	W0501 03:40:42.713057   69580 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	I0501 03:40:42.713074   69580 kubeadm.go:928] updating node { 192.168.61.104 8443 v1.20.0 crio true true} ...
	I0501 03:40:42.713227   69580 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-503971 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.104
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-503971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0501 03:40:42.713323   69580 ssh_runner.go:195] Run: crio config
	I0501 03:40:42.771354   69580 cni.go:84] Creating CNI manager for ""
	I0501 03:40:42.771384   69580 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0501 03:40:42.771403   69580 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0501 03:40:42.771428   69580 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.104 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-503971 NodeName:old-k8s-version-503971 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.104"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.104 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0501 03:40:42.771644   69580 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.104
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-503971"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.104
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.104"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0501 03:40:42.771722   69580 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0501 03:40:42.784978   69580 binaries.go:44] Found k8s binaries, skipping transfer
	I0501 03:40:42.785057   69580 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0501 03:40:42.800945   69580 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0501 03:40:42.824293   69580 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0501 03:40:42.845949   69580 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0501 03:40:42.867390   69580 ssh_runner.go:195] Run: grep 192.168.61.104	control-plane.minikube.internal$ /etc/hosts
	I0501 03:40:42.872038   69580 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.104	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0501 03:40:42.890213   69580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:40:43.041533   69580 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 03:40:43.070048   69580 certs.go:68] Setting up /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971 for IP: 192.168.61.104
	I0501 03:40:43.070075   69580 certs.go:194] generating shared ca certs ...
	I0501 03:40:43.070097   69580 certs.go:226] acquiring lock for ca certs: {Name:mk85a75d14c00780d4bf09cd9bdaf0f96f1b8fce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:40:43.070315   69580 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key
	I0501 03:40:43.070388   69580 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key
	I0501 03:40:43.070419   69580 certs.go:256] generating profile certs ...
	I0501 03:40:43.070558   69580 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/client.key
	I0501 03:40:43.070631   69580 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/apiserver.key.760b883a
	I0501 03:40:43.070670   69580 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/proxy-client.key
	I0501 03:40:43.070804   69580 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem (1338 bytes)
	W0501 03:40:43.070852   69580 certs.go:480] ignoring /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724_empty.pem, impossibly tiny 0 bytes
	I0501 03:40:43.070865   69580 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem (1679 bytes)
	I0501 03:40:43.070914   69580 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem (1078 bytes)
	I0501 03:40:43.070955   69580 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem (1123 bytes)
	I0501 03:40:43.070985   69580 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem (1675 bytes)
	I0501 03:40:43.071044   69580 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem (1708 bytes)
	I0501 03:40:43.071869   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0501 03:40:43.110078   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0501 03:40:43.164382   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0501 03:40:43.197775   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0501 03:40:43.230575   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0501 03:40:43.260059   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0501 03:40:43.288704   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0501 03:40:43.315417   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0501 03:40:43.363440   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0501 03:40:43.396043   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem --> /usr/share/ca-certificates/20724.pem (1338 bytes)
	I0501 03:40:43.425997   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem --> /usr/share/ca-certificates/207242.pem (1708 bytes)
	I0501 03:40:43.456927   69580 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0501 03:40:43.478177   69580 ssh_runner.go:195] Run: openssl version
	I0501 03:40:43.484513   69580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0501 03:40:43.497230   69580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:40:43.504025   69580 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  1 02:08 /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:40:43.504112   69580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:40:43.513309   69580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0501 03:40:43.528592   69580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20724.pem && ln -fs /usr/share/ca-certificates/20724.pem /etc/ssl/certs/20724.pem"
	I0501 03:40:43.544560   69580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20724.pem
	I0501 03:40:43.550975   69580 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  1 02:20 /usr/share/ca-certificates/20724.pem
	I0501 03:40:43.551047   69580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20724.pem
	I0501 03:40:43.559214   69580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20724.pem /etc/ssl/certs/51391683.0"
	I0501 03:40:43.575362   69580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/207242.pem && ln -fs /usr/share/ca-certificates/207242.pem /etc/ssl/certs/207242.pem"
	I0501 03:40:43.587848   69580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/207242.pem
	I0501 03:40:43.593131   69580 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  1 02:20 /usr/share/ca-certificates/207242.pem
	I0501 03:40:43.593183   69580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/207242.pem
	I0501 03:40:43.600365   69580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/207242.pem /etc/ssl/certs/3ec20f2e.0"
	I0501 03:40:43.613912   69580 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0501 03:40:43.619576   69580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0501 03:40:43.628551   69580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0501 03:40:43.637418   69580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0501 03:40:43.645060   69580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0501 03:40:43.654105   69580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0501 03:40:43.663501   69580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0501 03:40:43.670855   69580 kubeadm.go:391] StartCluster: {Name:old-k8s-version-503971 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-503971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.104 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 03:40:43.670937   69580 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0501 03:40:43.670982   69580 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0501 03:40:43.720350   69580 cri.go:89] found id: ""
	I0501 03:40:43.720419   69580 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0501 03:40:43.732518   69580 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0501 03:40:43.732544   69580 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0501 03:40:43.732552   69580 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0501 03:40:43.732612   69580 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0501 03:40:43.743804   69580 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0501 03:40:43.745071   69580 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-503971" does not appear in /home/jenkins/minikube-integration/18779-13391/kubeconfig
	I0501 03:40:43.745785   69580 kubeconfig.go:62] /home/jenkins/minikube-integration/18779-13391/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-503971" cluster setting kubeconfig missing "old-k8s-version-503971" context setting]
	I0501 03:40:43.747054   69580 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/kubeconfig: {Name:mk5d1131557f885800877622151c0915337cb23a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:40:43.748989   69580 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0501 03:40:43.760349   69580 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.104
	I0501 03:40:43.760389   69580 kubeadm.go:1154] stopping kube-system containers ...
	I0501 03:40:43.760403   69580 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0501 03:40:43.760473   69580 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0501 03:40:43.804745   69580 cri.go:89] found id: ""
	I0501 03:40:43.804841   69580 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0501 03:40:43.825960   69580 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0501 03:40:43.838038   69580 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0501 03:40:43.838062   69580 kubeadm.go:156] found existing configuration files:
	
	I0501 03:40:43.838115   69580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0501 03:40:43.849075   69580 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0501 03:40:43.849164   69580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0501 03:40:43.860634   69580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0501 03:40:43.871244   69580 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0501 03:40:43.871313   69580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0501 03:40:43.882184   69580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0501 03:40:43.893193   69580 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0501 03:40:43.893254   69580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0501 03:40:43.904257   69580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0501 03:40:43.915414   69580 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0501 03:40:43.915492   69580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
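
The grep/rm pairs above implement a simple rule: any /etc/kubernetes/*.conf that does not mention the expected control-plane endpoint (or does not exist at all) is removed so the kubeadm phases that follow can regenerate it. A hedged Go sketch of that pattern, again shelling out locally instead of over SSH; it is only an illustration of the sequence visible in the log:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // cleanStaleKubeConfigs removes every kubeconfig that does not reference
    // the expected endpoint, mirroring the grep-then-rm steps above.
    func cleanStaleKubeConfigs(endpoint string) {
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            // grep exits non-zero when the endpoint is missing or the file
            // does not exist; either way the file is treated as stale.
            if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
                fmt.Printf("%s lacks %s, removing\n", f, endpoint)
                _ = exec.Command("sudo", "rm", "-f", f).Run()
            }
        }
    }

    func main() {
        cleanStaleKubeConfigs("https://control-plane.minikube.internal:8443")
    }
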
	I0501 03:40:43.927372   69580 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0501 03:40:43.939117   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:44.098502   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:45.150125   69580 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.051581029s)
	I0501 03:40:45.150161   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:45.443307   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:45.563369   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:45.678620   69580 api_server.go:52] waiting for apiserver process to appear ...
	I0501 03:40:45.678731   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:46.179675   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
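
From here the restart logic polls for a kube-apiserver process roughly every half second, which is why the same pgrep line repeats below. A minimal sketch of such a wait loop, assuming sudo and pgrep on the target; the 4-minute timeout is illustrative, not taken from the log:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServerProcess polls pgrep about every 500ms, matching the
    // cadence of the repeated "sudo pgrep -xnf kube-apiserver.*minikube.*"
    // lines, until the process appears or the deadline passes.
    func waitForAPIServerProcess(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if err := exec.Command("sudo", "pgrep", "-xnf",
                "kube-apiserver.*minikube.*").Run(); err == nil {
                return nil // pgrep exit 0: a matching process exists
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
    }

    func main() {
        if err := waitForAPIServerProcess(4 * time.Minute); err != nil {
            fmt.Println(err)
        }
    }
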
	I0501 03:40:43.419480   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:43.419952   68640 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:40:43.419980   68640 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:40:43.419907   70664 retry.go:31] will retry after 2.329320519s: waiting for machine to come up
	I0501 03:40:45.751405   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:45.751871   68640 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:40:45.751902   68640 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:40:45.751822   70664 retry.go:31] will retry after 3.262804058s: waiting for machine to come up
	I0501 03:40:45.659845   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:48.157145   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:48.013520   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:50.514729   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:46.679449   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:47.179179   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:47.678890   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:48.179190   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:48.679276   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:49.179698   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:49.679121   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:50.179723   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:50.679603   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:51.179094   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:49.016460   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:49.016856   68640 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:40:49.016878   68640 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:40:49.016826   70664 retry.go:31] will retry after 3.440852681s: waiting for machine to come up
	I0501 03:40:52.461349   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:52.461771   68640 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:40:52.461800   68640 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:40:52.461722   70664 retry.go:31] will retry after 4.871322728s: waiting for machine to come up
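
The retry.go lines show the KVM driver waiting for the guest to obtain a DHCP lease, with the pause growing between attempts (2.3s, 3.3s, 3.4s, 4.9s). A rough sketch of that shape of retry loop; lookupIP is a hypothetical stand-in for the libvirt query and the growth factor is illustrative, not the driver's actual backoff policy:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // errNoLease stands in for libvirt reporting no DHCP lease yet.
    var errNoLease = errors.New("no DHCP lease yet")

    // lookupIP is a placeholder for asking the hypervisor for the domain's IP.
    func lookupIP(attempt int) (string, error) {
        if attempt < 4 {
            return "", errNoLease
        }
        return "192.168.39.144", nil
    }

    // waitForIP retries with a delay that grows each attempt, similar in
    // spirit to the "will retry after ..." lines above.
    func waitForIP() (string, error) {
        delay := 2 * time.Second
        for attempt := 1; attempt <= 10; attempt++ {
            ip, err := lookupIP(attempt)
            if err == nil {
                return ip, nil
            }
            fmt.Printf("attempt %d: %v, retrying after %s\n", attempt, err, delay)
            time.Sleep(delay)
            delay += delay / 2 // grow the wait between attempts
        }
        return "", errors.New("machine never reported an IP")
    }

    func main() {
        fmt.Println(waitForIP())
    }
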
	I0501 03:40:50.157703   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:52.655677   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:53.011851   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:55.510458   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:51.679850   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:52.179568   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:52.679100   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:53.179470   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:53.679115   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:54.178815   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:54.679769   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:55.179576   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:55.678864   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:56.179617   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:57.335855   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.336228   68640 main.go:141] libmachine: (no-preload-892672) Found IP for machine: 192.168.39.144
	I0501 03:40:57.336263   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has current primary IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.336281   68640 main.go:141] libmachine: (no-preload-892672) Reserving static IP address...
	I0501 03:40:57.336629   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "no-preload-892672", mac: "52:54:00:c7:6d:9a", ip: "192.168.39.144"} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:40:57.336649   68640 main.go:141] libmachine: (no-preload-892672) DBG | skip adding static IP to network mk-no-preload-892672 - found existing host DHCP lease matching {name: "no-preload-892672", mac: "52:54:00:c7:6d:9a", ip: "192.168.39.144"}
	I0501 03:40:57.336661   68640 main.go:141] libmachine: (no-preload-892672) Reserved static IP address: 192.168.39.144
	I0501 03:40:57.336671   68640 main.go:141] libmachine: (no-preload-892672) Waiting for SSH to be available...
	I0501 03:40:57.336680   68640 main.go:141] libmachine: (no-preload-892672) DBG | Getting to WaitForSSH function...
	I0501 03:40:57.338862   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.339135   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:40:57.339163   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.339268   68640 main.go:141] libmachine: (no-preload-892672) DBG | Using SSH client type: external
	I0501 03:40:57.339296   68640 main.go:141] libmachine: (no-preload-892672) DBG | Using SSH private key: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/no-preload-892672/id_rsa (-rw-------)
	I0501 03:40:57.339328   68640 main.go:141] libmachine: (no-preload-892672) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.144 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18779-13391/.minikube/machines/no-preload-892672/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0501 03:40:57.339341   68640 main.go:141] libmachine: (no-preload-892672) DBG | About to run SSH command:
	I0501 03:40:57.339370   68640 main.go:141] libmachine: (no-preload-892672) DBG | exit 0
	I0501 03:40:57.466775   68640 main.go:141] libmachine: (no-preload-892672) DBG | SSH cmd err, output: <nil>: 
	I0501 03:40:57.467183   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetConfigRaw
	I0501 03:40:57.467890   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetIP
	I0501 03:40:57.470097   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.470527   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:40:57.470555   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.470767   68640 profile.go:143] Saving config to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/no-preload-892672/config.json ...
	I0501 03:40:57.470929   68640 machine.go:94] provisionDockerMachine start ...
	I0501 03:40:57.470950   68640 main.go:141] libmachine: (no-preload-892672) Calling .DriverName
	I0501 03:40:57.471177   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:40:57.473301   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.473599   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:40:57.473626   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.473724   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHPort
	I0501 03:40:57.473863   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:40:57.474032   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:40:57.474181   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHUsername
	I0501 03:40:57.474337   68640 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:57.474545   68640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I0501 03:40:57.474558   68640 main.go:141] libmachine: About to run SSH command:
	hostname
	I0501 03:40:57.591733   68640 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0501 03:40:57.591766   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetMachineName
	I0501 03:40:57.592016   68640 buildroot.go:166] provisioning hostname "no-preload-892672"
	I0501 03:40:57.592048   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetMachineName
	I0501 03:40:57.592308   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:40:57.595192   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.595593   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:40:57.595618   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.595697   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHPort
	I0501 03:40:57.595891   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:40:57.596041   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:40:57.596192   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHUsername
	I0501 03:40:57.596376   68640 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:57.596544   68640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I0501 03:40:57.596559   68640 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-892672 && echo "no-preload-892672" | sudo tee /etc/hostname
	I0501 03:40:57.727738   68640 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-892672
	
	I0501 03:40:57.727770   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:40:57.730673   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.731033   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:40:57.731066   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.731202   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHPort
	I0501 03:40:57.731383   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:40:57.731577   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:40:57.731744   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHUsername
	I0501 03:40:57.731936   68640 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:57.732155   68640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I0501 03:40:57.732173   68640 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-892672' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-892672/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-892672' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0501 03:40:57.857465   68640 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0501 03:40:57.857492   68640 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18779-13391/.minikube CaCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18779-13391/.minikube}
	I0501 03:40:57.857515   68640 buildroot.go:174] setting up certificates
	I0501 03:40:57.857524   68640 provision.go:84] configureAuth start
	I0501 03:40:57.857532   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetMachineName
	I0501 03:40:57.857791   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetIP
	I0501 03:40:57.860530   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.860881   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:40:57.860911   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.861035   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:40:57.863122   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.863445   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:40:57.863472   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.863565   68640 provision.go:143] copyHostCerts
	I0501 03:40:57.863614   68640 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem, removing ...
	I0501 03:40:57.863624   68640 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem
	I0501 03:40:57.863689   68640 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem (1078 bytes)
	I0501 03:40:57.863802   68640 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem, removing ...
	I0501 03:40:57.863814   68640 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem
	I0501 03:40:57.863843   68640 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem (1123 bytes)
	I0501 03:40:57.863928   68640 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem, removing ...
	I0501 03:40:57.863938   68640 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem
	I0501 03:40:57.863962   68640 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem (1675 bytes)
	I0501 03:40:57.864040   68640 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem org=jenkins.no-preload-892672 san=[127.0.0.1 192.168.39.144 localhost minikube no-preload-892672]
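
The provision step above generates a server certificate signed by the minikube CA with the SANs listed (127.0.0.1, 192.168.39.144, localhost, minikube, no-preload-892672). As a sketch only, the Go standard-library way to build such a certificate; the throwaway in-memory CA is an assumption for self-containment, since the real code reuses ca.pem/ca-key.pem from the .minikube directory and writes server.pem to disk:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func check(err error) {
        if err != nil {
            panic(err)
        }
    }

    func main() {
        // Throwaway CA standing in for the minikube CA on disk.
        caKey, err := rsa.GenerateKey(rand.Reader, 2048)
        check(err)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        check(err)
        caCert, err := x509.ParseCertificate(caDER)
        check(err)

        // Server certificate carrying the SANs from the log line above.
        srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
        check(err)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "no-preload-892672"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"localhost", "minikube", "no-preload-892672"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.144")},
        }
        srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        check(err)
        check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}))
    }
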
	I0501 03:40:54.658003   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:56.658041   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:58.125270   68640 provision.go:177] copyRemoteCerts
	I0501 03:40:58.125321   68640 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0501 03:40:58.125342   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:40:58.127890   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:58.128299   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:40:58.128330   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:58.128469   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHPort
	I0501 03:40:58.128645   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:40:58.128809   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHUsername
	I0501 03:40:58.128941   68640 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/no-preload-892672/id_rsa Username:docker}
	I0501 03:40:58.222112   68640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0501 03:40:58.249760   68640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0501 03:40:58.277574   68640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0501 03:40:58.304971   68640 provision.go:87] duration metric: took 447.420479ms to configureAuth
	I0501 03:40:58.305017   68640 buildroot.go:189] setting minikube options for container-runtime
	I0501 03:40:58.305270   68640 config.go:182] Loaded profile config "no-preload-892672": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 03:40:58.305434   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:40:58.308098   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:58.308487   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:40:58.308528   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:58.308658   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHPort
	I0501 03:40:58.308857   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:40:58.309025   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:40:58.309173   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHUsername
	I0501 03:40:58.309354   68640 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:58.309510   68640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I0501 03:40:58.309526   68640 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0501 03:40:58.609833   68640 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0501 03:40:58.609859   68640 machine.go:97] duration metric: took 1.138916322s to provisionDockerMachine
	I0501 03:40:58.609873   68640 start.go:293] postStartSetup for "no-preload-892672" (driver="kvm2")
	I0501 03:40:58.609885   68640 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0501 03:40:58.609905   68640 main.go:141] libmachine: (no-preload-892672) Calling .DriverName
	I0501 03:40:58.610271   68640 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0501 03:40:58.610307   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:40:58.612954   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:58.613308   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:40:58.613322   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:58.613485   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHPort
	I0501 03:40:58.613683   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:40:58.613871   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHUsername
	I0501 03:40:58.614005   68640 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/no-preload-892672/id_rsa Username:docker}
	I0501 03:40:58.702752   68640 ssh_runner.go:195] Run: cat /etc/os-release
	I0501 03:40:58.707441   68640 info.go:137] Remote host: Buildroot 2023.02.9
	I0501 03:40:58.707468   68640 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13391/.minikube/addons for local assets ...
	I0501 03:40:58.707577   68640 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13391/.minikube/files for local assets ...
	I0501 03:40:58.707646   68640 filesync.go:149] local asset: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem -> 207242.pem in /etc/ssl/certs
	I0501 03:40:58.707728   68640 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0501 03:40:58.718247   68640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem --> /etc/ssl/certs/207242.pem (1708 bytes)
	I0501 03:40:58.745184   68640 start.go:296] duration metric: took 135.29943ms for postStartSetup
	I0501 03:40:58.745218   68640 fix.go:56] duration metric: took 24.96116093s for fixHost
	I0501 03:40:58.745236   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:40:58.747809   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:58.748228   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:40:58.748261   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:58.748380   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHPort
	I0501 03:40:58.748591   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:40:58.748747   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:40:58.748870   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHUsername
	I0501 03:40:58.749049   68640 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:58.749262   68640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I0501 03:40:58.749275   68640 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0501 03:40:58.867651   68640 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714534858.808639015
	
	I0501 03:40:58.867676   68640 fix.go:216] guest clock: 1714534858.808639015
	I0501 03:40:58.867686   68640 fix.go:229] Guest: 2024-05-01 03:40:58.808639015 +0000 UTC Remote: 2024-05-01 03:40:58.745221709 +0000 UTC m=+370.854832040 (delta=63.417306ms)
	I0501 03:40:58.867735   68640 fix.go:200] guest clock delta is within tolerance: 63.417306ms
	I0501 03:40:58.867746   68640 start.go:83] releasing machines lock for "no-preload-892672", held for 25.083724737s
	I0501 03:40:58.867770   68640 main.go:141] libmachine: (no-preload-892672) Calling .DriverName
	I0501 03:40:58.868053   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetIP
	I0501 03:40:58.871193   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:58.871618   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:40:58.871664   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:58.871815   68640 main.go:141] libmachine: (no-preload-892672) Calling .DriverName
	I0501 03:40:58.872441   68640 main.go:141] libmachine: (no-preload-892672) Calling .DriverName
	I0501 03:40:58.872665   68640 main.go:141] libmachine: (no-preload-892672) Calling .DriverName
	I0501 03:40:58.872750   68640 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0501 03:40:58.872787   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:40:58.872918   68640 ssh_runner.go:195] Run: cat /version.json
	I0501 03:40:58.872946   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:40:58.875797   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:58.875976   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:58.876230   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:40:58.876341   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:58.876377   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHPort
	I0501 03:40:58.876502   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:40:58.876539   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:58.876587   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:40:58.876756   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHUsername
	I0501 03:40:58.876894   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHPort
	I0501 03:40:58.876969   68640 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/no-preload-892672/id_rsa Username:docker}
	I0501 03:40:58.877057   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:40:58.877246   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHUsername
	I0501 03:40:58.877424   68640 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/no-preload-892672/id_rsa Username:docker}
	I0501 03:40:58.983384   68640 ssh_runner.go:195] Run: systemctl --version
	I0501 03:40:58.991625   68640 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0501 03:40:59.143916   68640 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0501 03:40:59.151065   68640 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0501 03:40:59.151124   68640 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0501 03:40:59.168741   68640 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0501 03:40:59.168763   68640 start.go:494] detecting cgroup driver to use...
	I0501 03:40:59.168825   68640 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0501 03:40:59.188524   68640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 03:40:59.205602   68640 docker.go:217] disabling cri-docker service (if available) ...
	I0501 03:40:59.205668   68640 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0501 03:40:59.221173   68640 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0501 03:40:59.236546   68640 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0501 03:40:59.364199   68640 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0501 03:40:59.533188   68640 docker.go:233] disabling docker service ...
	I0501 03:40:59.533266   68640 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0501 03:40:59.549488   68640 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0501 03:40:59.562910   68640 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0501 03:40:59.705451   68640 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0501 03:40:59.843226   68640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0501 03:40:59.858878   68640 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 03:40:59.882729   68640 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0501 03:40:59.882808   68640 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:59.895678   68640 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0501 03:40:59.895763   68640 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:59.908439   68640 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:59.921319   68640 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:59.934643   68640 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0501 03:40:59.947416   68640 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:59.959887   68640 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:59.981849   68640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:59.994646   68640 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0501 03:41:00.006059   68640 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0501 03:41:00.006133   68640 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0501 03:41:00.024850   68640 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0501 03:41:00.036834   68640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:41:00.161283   68640 ssh_runner.go:195] Run: sudo systemctl restart crio
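
The steps from 03:40:59.88 through 03:41:00.16 rewrite /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager), load br_netfilter, enable IP forwarding, and finally restart cri-o. A compressed Go sketch of that sequence; it mirrors only the commands visible above and is not the full set of edits minikube applies:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // reconfigureCrio replays the shape of the steps above: point cri-o at
    // the pause image, switch to the cgroupfs manager, ensure br_netfilter
    // and IP forwarding are on, then restart the service.
    func reconfigureCrio() error {
        steps := [][]string{
            {"sudo", "sed", "-i", `s|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|`, "/etc/crio/crio.conf.d/02-crio.conf"},
            {"sudo", "sed", "-i", `s|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|`, "/etc/crio/crio.conf.d/02-crio.conf"},
            {"sudo", "modprobe", "br_netfilter"},
            {"sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward"},
            {"sudo", "systemctl", "daemon-reload"},
            {"sudo", "systemctl", "restart", "crio"},
        }
        for _, s := range steps {
            if out, err := exec.Command(s[0], s[1:]...).CombinedOutput(); err != nil {
                return fmt.Errorf("%v failed: %v\n%s", s, err, out)
            }
        }
        return nil
    }

    func main() {
        if err := reconfigureCrio(); err != nil {
            fmt.Println(err)
        }
    }
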
	I0501 03:41:00.312304   68640 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0501 03:41:00.312375   68640 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0501 03:41:00.317980   68640 start.go:562] Will wait 60s for crictl version
	I0501 03:41:00.318043   68640 ssh_runner.go:195] Run: which crictl
	I0501 03:41:00.322780   68640 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0501 03:41:00.362830   68640 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0501 03:41:00.362920   68640 ssh_runner.go:195] Run: crio --version
	I0501 03:41:00.399715   68640 ssh_runner.go:195] Run: crio --version
	I0501 03:41:00.432510   68640 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0501 03:40:57.511719   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:00.013693   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:56.679034   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:57.179062   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:57.679579   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:58.179221   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:58.679728   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:59.178851   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:59.679647   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:00.179397   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:00.678839   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:01.179679   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:00.433777   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetIP
	I0501 03:41:00.436557   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:41:00.436892   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:41:00.436920   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:41:00.437124   68640 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0501 03:41:00.441861   68640 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0501 03:41:00.455315   68640 kubeadm.go:877] updating cluster {Name:no-preload-892672 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:no-preload-892672 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.144 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0501 03:41:00.455417   68640 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0501 03:41:00.455462   68640 ssh_runner.go:195] Run: sudo crictl images --output json
	I0501 03:41:00.496394   68640 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0501 03:41:00.496422   68640 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.0 registry.k8s.io/kube-controller-manager:v1.30.0 registry.k8s.io/kube-scheduler:v1.30.0 registry.k8s.io/kube-proxy:v1.30.0 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0501 03:41:00.496508   68640 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 03:41:00.496532   68640 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0501 03:41:00.496551   68640 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0
	I0501 03:41:00.496581   68640 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0501 03:41:00.496679   68640 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0
	I0501 03:41:00.496701   68640 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0501 03:41:00.496736   68640 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0501 03:41:00.496529   68640 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0
	I0501 03:41:00.498207   68640 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0
	I0501 03:41:00.498227   68640 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0501 03:41:00.498246   68640 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0
	I0501 03:41:00.498250   68640 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0
	I0501 03:41:00.498270   68640 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0501 03:41:00.498254   68640 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0501 03:41:00.498298   68640 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0501 03:41:00.498477   68640 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 03:41:00.617430   68640 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.0
	I0501 03:41:00.621346   68640 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.0
	I0501 03:41:00.622759   68640 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0501 03:41:00.628313   68640 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.0
	I0501 03:41:00.629087   68640 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0501 03:41:00.633625   68640 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0501 03:41:00.652130   68640 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.0
	I0501 03:41:00.722500   68640 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.0" does not exist at hash "c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0" in container runtime
	I0501 03:41:00.722554   68640 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.0
	I0501 03:41:00.722623   68640 ssh_runner.go:195] Run: which crictl
	I0501 03:41:00.796476   68640 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.0" does not exist at hash "259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced" in container runtime
	I0501 03:41:00.796530   68640 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.0
	I0501 03:41:00.796580   68640 ssh_runner.go:195] Run: which crictl
	I0501 03:41:00.944235   68640 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.0" does not exist at hash "c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b" in container runtime
	I0501 03:41:00.944262   68640 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0501 03:41:00.944289   68640 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0501 03:41:00.944297   68640 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0501 03:41:00.944305   68640 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0501 03:41:00.944325   68640 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0501 03:41:00.944344   68640 ssh_runner.go:195] Run: which crictl
	I0501 03:41:00.944357   68640 ssh_runner.go:195] Run: which crictl
	I0501 03:41:00.944398   68640 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.0" needs transfer: "registry.k8s.io/kube-proxy:v1.30.0" does not exist at hash "a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b" in container runtime
	I0501 03:41:00.944348   68640 ssh_runner.go:195] Run: which crictl
	I0501 03:41:00.944434   68640 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.0
	I0501 03:41:00.944422   68640 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0
	I0501 03:41:00.944452   68640 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0
	I0501 03:41:00.944464   68640 ssh_runner.go:195] Run: which crictl
	I0501 03:41:00.998765   68640 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.0
	I0501 03:41:00.998791   68640 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0
	I0501 03:41:00.998846   68640 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0501 03:41:00.998891   68640 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0501 03:41:01.017469   68640 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.0
	I0501 03:41:01.017494   68640 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0
	I0501 03:41:01.017584   68640 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0501 03:41:01.018040   68640 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0501 03:41:01.105445   68640 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0501 03:41:01.105517   68640 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0
	I0501 03:41:01.105560   68640 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.12-0
	I0501 03:41:01.105583   68640 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.0 (exists)
	I0501 03:41:01.105595   68640 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0501 03:41:01.105635   68640 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0501 03:41:01.105645   68640 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0501 03:41:01.105734   68640 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.0 (exists)
	I0501 03:41:01.105814   68640 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0
	I0501 03:41:01.105888   68640 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.30.0
	I0501 03:41:01.120943   68640 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0501 03:41:01.121044   68640 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0501 03:41:01.127975   68640 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0501 03:41:01.359381   68640 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 03:40:59.156924   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:01.659307   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:03.661498   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:02.511652   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:05.011220   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:01.679527   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:02.179435   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:02.679626   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:03.179351   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:03.679618   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:04.179426   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:04.678853   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:05.179143   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:05.679065   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:06.179513   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
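Note: the repeated sudo pgrep -xnf kube-apiserver.*minikube.* probes above are how minikube polls for the apiserver process, retrying roughly every 500ms. A minimal annotated sketch of the same check; the exit-status handling is added here for illustration only:

	# -f matches against the full command line, -x requires the regex to match it exactly,
	# -n returns only the newest matching PID
	if sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
	  echo "kube-apiserver process is running"
	else
	  echo "kube-apiserver process not found yet"
	fi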
	I0501 03:41:04.315680   68640 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.30.0: (3.210016587s)
	I0501 03:41:04.315725   68640 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.0 (exists)
	I0501 03:41:04.315756   68640 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.30.0: (3.209843913s)
	I0501 03:41:04.315784   68640 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1: (3.194721173s)
	I0501 03:41:04.315799   68640 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0: (3.210139611s)
	I0501 03:41:04.315812   68640 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.0 (exists)
	I0501 03:41:04.315813   68640 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0501 03:41:04.315813   68640 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0 from cache
	I0501 03:41:04.315844   68640 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.956432506s)
	I0501 03:41:04.315859   68640 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0501 03:41:04.315902   68640 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0501 03:41:04.315905   68640 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0501 03:41:04.315927   68640 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 03:41:04.315962   68640 ssh_runner.go:195] Run: which crictl
	I0501 03:41:05.691351   68640 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0: (1.375419764s)
	I0501 03:41:05.691394   68640 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0 from cache
	I0501 03:41:05.691418   68640 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0501 03:41:05.691467   68640 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0501 03:41:05.691477   68640 ssh_runner.go:235] Completed: which crictl: (1.375499162s)
	I0501 03:41:05.691529   68640 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 03:41:06.159381   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:08.659756   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:07.012126   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:09.511459   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:06.679246   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:07.179629   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:07.679601   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:08.179634   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:08.679603   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:09.179675   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:09.678837   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:10.178860   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:10.679638   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:11.179802   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:09.757005   68640 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (4.065509843s)
	I0501 03:41:09.757044   68640 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0501 03:41:09.757079   68640 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0501 03:41:09.757093   68640 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (4.065539206s)
	I0501 03:41:09.757137   68640 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0501 03:41:09.757158   68640 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0501 03:41:09.757222   68640 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0501 03:41:12.125691   68640 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0: (2.368504788s)
	I0501 03:41:12.125729   68640 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0 from cache
	I0501 03:41:12.125726   68640 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.368475622s)
	I0501 03:41:12.125755   68640 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0501 03:41:12.125754   68640 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.0
	I0501 03:41:12.125817   68640 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0
	I0501 03:41:11.157019   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:13.157632   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:11.513027   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:14.013463   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:11.679355   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:12.178847   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:12.679660   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:13.179641   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:13.678808   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:14.178955   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:14.679651   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:15.179623   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:15.678862   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:16.179775   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:14.315765   68640 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0: (2.18991878s)
	I0501 03:41:14.315791   68640 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0 from cache
	I0501 03:41:14.315835   68640 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0501 03:41:14.315911   68640 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0501 03:41:16.401221   68640 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.085281928s)
	I0501 03:41:16.401261   68640 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0501 03:41:16.401291   68640 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0501 03:41:16.401335   68640 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0501 03:41:17.152926   68640 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0501 03:41:17.152969   68640 cache_images.go:123] Successfully loaded all cached images
	I0501 03:41:17.152976   68640 cache_images.go:92] duration metric: took 16.656540612s to LoadCachedImages
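Each of the images above is shipped to the node as a cached tarball and loaded into the CRI-O-backed store via podman before kubeadm runs. A hedged sketch of reproducing one load/verify cycle by hand (image tag and tarball path taken from this log; the ordering mirrors the rmi -> load -> inspect sequence shown above):

	# drop any stale copy of the image through the CRI client
	sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0
	# load the cached tarball into the node's container storage
	sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0
	# confirm the runtime now resolves the tag to an image ID
	sudo podman image inspect --format '{{.Id}}' registry.k8s.io/kube-apiserver:v1.30.0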
	I0501 03:41:17.152989   68640 kubeadm.go:928] updating node { 192.168.39.144 8443 v1.30.0 crio true true} ...
	I0501 03:41:17.153119   68640 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-892672 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.144
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:no-preload-892672 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0501 03:41:17.153241   68640 ssh_runner.go:195] Run: crio config
	I0501 03:41:17.207153   68640 cni.go:84] Creating CNI manager for ""
	I0501 03:41:17.207181   68640 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0501 03:41:17.207196   68640 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0501 03:41:17.207225   68640 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.144 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-892672 NodeName:no-preload-892672 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.144"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.144 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0501 03:41:17.207407   68640 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.144
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-892672"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.144
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.144"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
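The rendered kubeadm/kubelet/kube-proxy configuration above is staged as /var/tmp/minikube/kubeadm.yaml.new before being applied. A hedged sketch of sanity-checking such a file on the node (paths taken from this log; availability of the kubeadm "config validate" subcommand in the v1.30.0 binaries is an assumption):

	# compare the freshly rendered config against the one currently in place
	sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	# let kubeadm itself check that the new file parses and passes its validation rules
	sudo /var/lib/minikube/binaries/v1.30.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new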
	
	I0501 03:41:17.207488   68640 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0501 03:41:17.221033   68640 binaries.go:44] Found k8s binaries, skipping transfer
	I0501 03:41:17.221099   68640 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0501 03:41:17.232766   68640 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0501 03:41:17.252543   68640 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0501 03:41:17.272030   68640 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0501 03:41:17.291541   68640 ssh_runner.go:195] Run: grep 192.168.39.144	control-plane.minikube.internal$ /etc/hosts
	I0501 03:41:17.295801   68640 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.144	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0501 03:41:17.309880   68640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:41:17.432917   68640 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 03:41:17.452381   68640 certs.go:68] Setting up /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/no-preload-892672 for IP: 192.168.39.144
	I0501 03:41:17.452406   68640 certs.go:194] generating shared ca certs ...
	I0501 03:41:17.452425   68640 certs.go:226] acquiring lock for ca certs: {Name:mk85a75d14c00780d4bf09cd9bdaf0f96f1b8fce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:41:17.452606   68640 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key
	I0501 03:41:17.452655   68640 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key
	I0501 03:41:17.452669   68640 certs.go:256] generating profile certs ...
	I0501 03:41:17.452746   68640 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/no-preload-892672/client.key
	I0501 03:41:17.452809   68640 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/no-preload-892672/apiserver.key.3644a8af
	I0501 03:41:17.452848   68640 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/no-preload-892672/proxy-client.key
	I0501 03:41:17.452963   68640 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem (1338 bytes)
	W0501 03:41:17.453007   68640 certs.go:480] ignoring /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724_empty.pem, impossibly tiny 0 bytes
	I0501 03:41:17.453021   68640 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem (1679 bytes)
	I0501 03:41:17.453050   68640 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem (1078 bytes)
	I0501 03:41:17.453083   68640 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem (1123 bytes)
	I0501 03:41:17.453116   68640 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem (1675 bytes)
	I0501 03:41:17.453166   68640 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem (1708 bytes)
	I0501 03:41:17.453767   68640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0501 03:41:17.490616   68640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0501 03:41:17.545217   68640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0501 03:41:17.576908   68640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0501 03:41:17.607371   68640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/no-preload-892672/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0501 03:41:17.657675   68640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/no-preload-892672/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0501 03:41:17.684681   68640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/no-preload-892672/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0501 03:41:17.716319   68640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/no-preload-892672/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0501 03:41:17.745731   68640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0501 03:41:17.770939   68640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem --> /usr/share/ca-certificates/20724.pem (1338 bytes)
	I0501 03:41:17.796366   68640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem --> /usr/share/ca-certificates/207242.pem (1708 bytes)
	I0501 03:41:17.823301   68640 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0501 03:41:17.841496   68640 ssh_runner.go:195] Run: openssl version
	I0501 03:41:17.848026   68640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0501 03:41:17.860734   68640 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:41:17.865978   68640 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  1 02:08 /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:41:17.866037   68640 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:41:17.872644   68640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0501 03:41:17.886241   68640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20724.pem && ln -fs /usr/share/ca-certificates/20724.pem /etc/ssl/certs/20724.pem"
	I0501 03:41:17.899619   68640 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20724.pem
	I0501 03:41:17.904664   68640 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  1 02:20 /usr/share/ca-certificates/20724.pem
	I0501 03:41:17.904701   68640 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20724.pem
	I0501 03:41:17.910799   68640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20724.pem /etc/ssl/certs/51391683.0"
	I0501 03:41:17.923007   68640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/207242.pem && ln -fs /usr/share/ca-certificates/207242.pem /etc/ssl/certs/207242.pem"
	I0501 03:41:15.657403   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:18.156777   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:16.511834   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:18.512735   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:20.513144   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:16.679614   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:17.179604   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:17.679100   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:18.179166   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:18.679202   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:19.179631   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:19.679583   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:20.179584   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:20.679493   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:21.178945   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:17.935647   68640 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/207242.pem
	I0501 03:41:17.942147   68640 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  1 02:20 /usr/share/ca-certificates/207242.pem
	I0501 03:41:17.942187   68640 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/207242.pem
	I0501 03:41:17.948468   68640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/207242.pem /etc/ssl/certs/3ec20f2e.0"
	I0501 03:41:17.962737   68640 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0501 03:41:17.968953   68640 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0501 03:41:17.975849   68640 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0501 03:41:17.982324   68640 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0501 03:41:17.988930   68640 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0501 03:41:17.995221   68640 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0501 03:41:18.001868   68640 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
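The openssl -checkend 86400 probes above exit 0 only when the certificate will still be valid 24 hours (86400 seconds) from now, which is how minikube decides whether a certificate needs regeneration. A minimal sketch of the same check with explicit handling (certificate path copied from the log):

	# exit status 0: the cert does not expire within the next 86400 seconds
	if sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
	  echo "certificate is valid for at least another 24h"
	else
	  echo "certificate expires within 24h and would need to be regenerated"
	fi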
	I0501 03:41:18.008701   68640 kubeadm.go:391] StartCluster: {Name:no-preload-892672 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:no-preload-892672 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.144 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 03:41:18.008831   68640 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0501 03:41:18.008893   68640 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0501 03:41:18.056939   68640 cri.go:89] found id: ""
	I0501 03:41:18.057005   68640 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0501 03:41:18.070898   68640 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0501 03:41:18.070921   68640 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0501 03:41:18.070926   68640 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0501 03:41:18.070968   68640 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0501 03:41:18.083907   68640 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0501 03:41:18.085116   68640 kubeconfig.go:125] found "no-preload-892672" server: "https://192.168.39.144:8443"
	I0501 03:41:18.088582   68640 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0501 03:41:18.101426   68640 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.144
	I0501 03:41:18.101471   68640 kubeadm.go:1154] stopping kube-system containers ...
	I0501 03:41:18.101493   68640 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0501 03:41:18.101543   68640 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0501 03:41:18.153129   68640 cri.go:89] found id: ""
	I0501 03:41:18.153193   68640 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0501 03:41:18.173100   68640 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0501 03:41:18.188443   68640 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0501 03:41:18.188463   68640 kubeadm.go:156] found existing configuration files:
	
	I0501 03:41:18.188509   68640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0501 03:41:18.202153   68640 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0501 03:41:18.202204   68640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0501 03:41:18.215390   68640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0501 03:41:18.227339   68640 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0501 03:41:18.227404   68640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0501 03:41:18.239160   68640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0501 03:41:18.251992   68640 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0501 03:41:18.252053   68640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0501 03:41:18.265088   68640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0501 03:41:18.277922   68640 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0501 03:41:18.277983   68640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0501 03:41:18.291307   68640 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0501 03:41:18.304879   68640 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:41:18.417921   68640 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:41:19.350848   68640 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:41:19.586348   68640 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:41:19.761056   68640 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:41:19.867315   68640 api_server.go:52] waiting for apiserver process to appear ...
	I0501 03:41:19.867435   68640 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:20.368520   68640 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:20.868444   68640 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:20.913411   68640 api_server.go:72] duration metric: took 1.046095165s to wait for apiserver process to appear ...
	I0501 03:41:20.913444   68640 api_server.go:88] waiting for apiserver healthz status ...
	I0501 03:41:20.913469   68640 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0501 03:41:20.914000   68640 api_server.go:269] stopped: https://192.168.39.144:8443/healthz: Get "https://192.168.39.144:8443/healthz": dial tcp 192.168.39.144:8443: connect: connection refused
	I0501 03:41:21.414544   68640 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0501 03:41:20.658333   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:23.157298   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:23.011395   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:25.012164   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:21.678785   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:22.179435   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:22.679615   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:23.179610   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:23.679473   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:24.179613   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:24.679672   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:25.179400   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:25.679793   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:26.179809   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:24.166756   68640 api_server.go:279] https://192.168.39.144:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0501 03:41:24.166786   68640 api_server.go:103] status: https://192.168.39.144:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0501 03:41:24.166807   68640 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0501 03:41:24.205679   68640 api_server.go:279] https://192.168.39.144:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0501 03:41:24.205713   68640 api_server.go:103] status: https://192.168.39.144:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0501 03:41:24.414055   68640 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0501 03:41:24.420468   68640 api_server.go:279] https://192.168.39.144:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:41:24.420502   68640 api_server.go:103] status: https://192.168.39.144:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
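The [+]/[-] listing above is the apiserver's verbose /healthz breakdown, returned with HTTP 500 while post-start hooks such as rbac/bootstrap-roles are still completing. A hedged sketch of fetching the same breakdown manually once admin credentials exist on the node (the kubeconfig and kubectl paths are assumptions based on the standard kubeadm/minikube layout):

	# per-check health detail from the aggregated endpoint
	sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig /etc/kubernetes/admin.conf get --raw '/healthz?verbose'
	# or probe a single check, e.g. etcd
	sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig /etc/kubernetes/admin.conf get --raw '/healthz/etcd'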
	I0501 03:41:24.914021   68640 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0501 03:41:24.919717   68640 api_server.go:279] https://192.168.39.144:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:41:24.919754   68640 api_server.go:103] status: https://192.168.39.144:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:41:25.414015   68640 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0501 03:41:25.422149   68640 api_server.go:279] https://192.168.39.144:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:41:25.422180   68640 api_server.go:103] status: https://192.168.39.144:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:41:25.913751   68640 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0501 03:41:25.917839   68640 api_server.go:279] https://192.168.39.144:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:41:25.917865   68640 api_server.go:103] status: https://192.168.39.144:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:41:26.414458   68640 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0501 03:41:26.419346   68640 api_server.go:279] https://192.168.39.144:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:41:26.419367   68640 api_server.go:103] status: https://192.168.39.144:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:41:26.913912   68640 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0501 03:41:26.918504   68640 api_server.go:279] https://192.168.39.144:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:41:26.918537   68640 api_server.go:103] status: https://192.168.39.144:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:41:27.413693   68640 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0501 03:41:27.421752   68640 api_server.go:279] https://192.168.39.144:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:41:27.421776   68640 api_server.go:103] status: https://192.168.39.144:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:41:27.913582   68640 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0501 03:41:27.918116   68640 api_server.go:279] https://192.168.39.144:8443/healthz returned 200:
	ok
	I0501 03:41:27.927764   68640 api_server.go:141] control plane version: v1.30.0
	I0501 03:41:27.927790   68640 api_server.go:131] duration metric: took 7.014339409s to wait for apiserver health ...
	I0501 03:41:27.927799   68640 cni.go:84] Creating CNI manager for ""
	I0501 03:41:27.927805   68640 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0501 03:41:27.929889   68640 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0501 03:41:27.931210   68640 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0501 03:41:25.158177   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:27.656879   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:27.511692   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:30.010468   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:26.679430   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:27.179043   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:27.678801   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:28.179629   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:28.679111   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:29.179599   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:29.679624   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:30.179585   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:30.679442   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:31.179530   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:27.945852   68640 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0501 03:41:27.968311   68640 system_pods.go:43] waiting for kube-system pods to appear ...
	I0501 03:41:27.981571   68640 system_pods.go:59] 8 kube-system pods found
	I0501 03:41:27.981609   68640 system_pods.go:61] "coredns-7db6d8ff4d-v8bqq" [bf389521-9f19-4f2b-83a5-6d469c7ce0fe] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0501 03:41:27.981615   68640 system_pods.go:61] "etcd-no-preload-892672" [108fce6d-03f3-4bb9-a410-a58c58e8f186] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0501 03:41:27.981621   68640 system_pods.go:61] "kube-apiserver-no-preload-892672" [a18b7242-1865-4a67-aab6-c6cc19552326] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0501 03:41:27.981629   68640 system_pods.go:61] "kube-controller-manager-no-preload-892672" [318d39e1-5265-42e5-a3d5-4408b7b73542] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0501 03:41:27.981636   68640 system_pods.go:61] "kube-proxy-dwvdl" [f7a97598-aaa1-4df5-8d6a-8f6286568ad6] Running
	I0501 03:41:27.981642   68640 system_pods.go:61] "kube-scheduler-no-preload-892672" [cbf1c183-16df-42c8-b1c8-b9adf3c25a7a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0501 03:41:27.981647   68640 system_pods.go:61] "metrics-server-569cc877fc-k8jnl" [1dd0fb29-4d90-41c8-9de2-d163eeb0247b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0501 03:41:27.981651   68640 system_pods.go:61] "storage-provisioner" [fc703ab1-f14b-4766-8ee2-a43477d3df21] Running
	I0501 03:41:27.981657   68640 system_pods.go:74] duration metric: took 13.322893ms to wait for pod list to return data ...
	I0501 03:41:27.981667   68640 node_conditions.go:102] verifying NodePressure condition ...
	I0501 03:41:27.985896   68640 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 03:41:27.985931   68640 node_conditions.go:123] node cpu capacity is 2
	I0501 03:41:27.985944   68640 node_conditions.go:105] duration metric: took 4.271726ms to run NodePressure ...
	I0501 03:41:27.985966   68640 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:41:28.269675   68640 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0501 03:41:28.276487   68640 kubeadm.go:733] kubelet initialised
	I0501 03:41:28.276512   68640 kubeadm.go:734] duration metric: took 6.808875ms waiting for restarted kubelet to initialise ...
	I0501 03:41:28.276522   68640 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 03:41:28.287109   68640 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-v8bqq" in "kube-system" namespace to be "Ready" ...
	I0501 03:41:28.297143   68640 pod_ready.go:97] node "no-preload-892672" hosting pod "coredns-7db6d8ff4d-v8bqq" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-892672" has status "Ready":"False"
	I0501 03:41:28.297185   68640 pod_ready.go:81] duration metric: took 10.040841ms for pod "coredns-7db6d8ff4d-v8bqq" in "kube-system" namespace to be "Ready" ...
	E0501 03:41:28.297198   68640 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-892672" hosting pod "coredns-7db6d8ff4d-v8bqq" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-892672" has status "Ready":"False"
	I0501 03:41:28.297206   68640 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	I0501 03:41:28.307648   68640 pod_ready.go:97] node "no-preload-892672" hosting pod "etcd-no-preload-892672" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-892672" has status "Ready":"False"
	I0501 03:41:28.307682   68640 pod_ready.go:81] duration metric: took 10.464199ms for pod "etcd-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	E0501 03:41:28.307695   68640 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-892672" hosting pod "etcd-no-preload-892672" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-892672" has status "Ready":"False"
	I0501 03:41:28.307707   68640 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	I0501 03:41:30.319652   68640 pod_ready.go:102] pod "kube-apiserver-no-preload-892672" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:32.821375   68640 pod_ready.go:102] pod "kube-apiserver-no-preload-892672" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:29.657167   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:32.157549   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:32.012009   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:34.511543   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:31.679423   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:32.179628   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:32.679456   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:33.179336   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:33.679221   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:34.178900   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:34.679236   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:35.179595   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:35.679520   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:36.179639   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:35.317202   68640 pod_ready.go:102] pod "kube-apiserver-no-preload-892672" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:37.318125   68640 pod_ready.go:92] pod "kube-apiserver-no-preload-892672" in "kube-system" namespace has status "Ready":"True"
	I0501 03:41:37.318157   68640 pod_ready.go:81] duration metric: took 9.010440772s for pod "kube-apiserver-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	I0501 03:41:37.318170   68640 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	I0501 03:41:37.327390   68640 pod_ready.go:92] pod "kube-controller-manager-no-preload-892672" in "kube-system" namespace has status "Ready":"True"
	I0501 03:41:37.327412   68640 pod_ready.go:81] duration metric: took 9.233689ms for pod "kube-controller-manager-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	I0501 03:41:37.327425   68640 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-dwvdl" in "kube-system" namespace to be "Ready" ...
	I0501 03:41:37.333971   68640 pod_ready.go:92] pod "kube-proxy-dwvdl" in "kube-system" namespace has status "Ready":"True"
	I0501 03:41:37.333994   68640 pod_ready.go:81] duration metric: took 6.561014ms for pod "kube-proxy-dwvdl" in "kube-system" namespace to be "Ready" ...
	I0501 03:41:37.334006   68640 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	I0501 03:41:37.338637   68640 pod_ready.go:92] pod "kube-scheduler-no-preload-892672" in "kube-system" namespace has status "Ready":"True"
	I0501 03:41:37.338657   68640 pod_ready.go:81] duration metric: took 4.644395ms for pod "kube-scheduler-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	I0501 03:41:37.338665   68640 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace to be "Ready" ...
	I0501 03:41:34.657958   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:36.658191   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:36.512234   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:39.012636   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:36.678883   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:37.179198   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:37.679101   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:38.179088   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:38.679354   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:39.179163   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:39.678809   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:40.179768   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:40.679046   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:41.179618   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:39.346054   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:41.346434   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:39.157142   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:41.656902   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:41.510939   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:43.511571   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:45.511959   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:41.679751   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:42.178848   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:42.679525   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:43.179706   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:43.679665   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:44.179053   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:44.679615   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:45.178830   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:45.679547   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:41:45.679620   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:41:45.718568   69580 cri.go:89] found id: ""
	I0501 03:41:45.718597   69580 logs.go:276] 0 containers: []
	W0501 03:41:45.718611   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:41:45.718619   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:41:45.718678   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:41:45.755572   69580 cri.go:89] found id: ""
	I0501 03:41:45.755596   69580 logs.go:276] 0 containers: []
	W0501 03:41:45.755604   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:41:45.755609   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:41:45.755654   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:41:45.793411   69580 cri.go:89] found id: ""
	I0501 03:41:45.793440   69580 logs.go:276] 0 containers: []
	W0501 03:41:45.793450   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:41:45.793458   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:41:45.793526   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:41:45.834547   69580 cri.go:89] found id: ""
	I0501 03:41:45.834572   69580 logs.go:276] 0 containers: []
	W0501 03:41:45.834579   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:41:45.834585   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:41:45.834668   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:41:45.873293   69580 cri.go:89] found id: ""
	I0501 03:41:45.873321   69580 logs.go:276] 0 containers: []
	W0501 03:41:45.873332   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:41:45.873348   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:41:45.873411   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:41:45.911703   69580 cri.go:89] found id: ""
	I0501 03:41:45.911734   69580 logs.go:276] 0 containers: []
	W0501 03:41:45.911745   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:41:45.911766   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:41:45.911826   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:41:45.949577   69580 cri.go:89] found id: ""
	I0501 03:41:45.949602   69580 logs.go:276] 0 containers: []
	W0501 03:41:45.949610   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:41:45.949616   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:41:45.949666   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:41:45.986174   69580 cri.go:89] found id: ""
	I0501 03:41:45.986199   69580 logs.go:276] 0 containers: []
	W0501 03:41:45.986207   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:41:45.986216   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:41:45.986228   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:41:46.041028   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:41:46.041064   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:41:46.057097   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:41:46.057126   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:41:46.195021   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:41:46.195042   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:41:46.195055   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:41:46.261153   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:41:46.261197   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:41:43.845096   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:45.845950   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:47.849620   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:44.157041   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:46.158028   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:48.658062   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:48.011975   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:50.512345   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:48.809274   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:48.824295   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:41:48.824369   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:41:48.869945   69580 cri.go:89] found id: ""
	I0501 03:41:48.869975   69580 logs.go:276] 0 containers: []
	W0501 03:41:48.869985   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:41:48.869993   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:41:48.870053   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:41:48.918088   69580 cri.go:89] found id: ""
	I0501 03:41:48.918113   69580 logs.go:276] 0 containers: []
	W0501 03:41:48.918122   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:41:48.918131   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:41:48.918190   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:41:48.958102   69580 cri.go:89] found id: ""
	I0501 03:41:48.958132   69580 logs.go:276] 0 containers: []
	W0501 03:41:48.958143   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:41:48.958149   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:41:48.958207   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:41:48.997163   69580 cri.go:89] found id: ""
	I0501 03:41:48.997194   69580 logs.go:276] 0 containers: []
	W0501 03:41:48.997211   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:41:48.997218   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:41:48.997284   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:41:49.040132   69580 cri.go:89] found id: ""
	I0501 03:41:49.040156   69580 logs.go:276] 0 containers: []
	W0501 03:41:49.040164   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:41:49.040170   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:41:49.040228   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:41:49.079680   69580 cri.go:89] found id: ""
	I0501 03:41:49.079712   69580 logs.go:276] 0 containers: []
	W0501 03:41:49.079724   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:41:49.079732   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:41:49.079790   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:41:49.120577   69580 cri.go:89] found id: ""
	I0501 03:41:49.120610   69580 logs.go:276] 0 containers: []
	W0501 03:41:49.120623   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:41:49.120630   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:41:49.120700   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:41:49.167098   69580 cri.go:89] found id: ""
	I0501 03:41:49.167123   69580 logs.go:276] 0 containers: []
	W0501 03:41:49.167133   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:41:49.167141   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:41:49.167152   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:41:49.242834   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:41:49.242868   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:41:49.264011   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:41:49.264033   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:41:49.367711   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:41:49.367739   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:41:49.367764   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:41:49.441925   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:41:49.441964   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:41:50.346009   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:52.346333   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:51.156287   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:53.657588   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:53.010720   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:55.012329   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:51.986536   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:52.001651   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:41:52.001734   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:41:52.039550   69580 cri.go:89] found id: ""
	I0501 03:41:52.039571   69580 logs.go:276] 0 containers: []
	W0501 03:41:52.039579   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:41:52.039584   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:41:52.039636   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:41:52.082870   69580 cri.go:89] found id: ""
	I0501 03:41:52.082892   69580 logs.go:276] 0 containers: []
	W0501 03:41:52.082900   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:41:52.082905   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:41:52.082949   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:41:52.126970   69580 cri.go:89] found id: ""
	I0501 03:41:52.126996   69580 logs.go:276] 0 containers: []
	W0501 03:41:52.127009   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:41:52.127014   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:41:52.127076   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:41:52.169735   69580 cri.go:89] found id: ""
	I0501 03:41:52.169761   69580 logs.go:276] 0 containers: []
	W0501 03:41:52.169769   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:41:52.169774   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:41:52.169826   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:41:52.207356   69580 cri.go:89] found id: ""
	I0501 03:41:52.207392   69580 logs.go:276] 0 containers: []
	W0501 03:41:52.207404   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:41:52.207412   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:41:52.207472   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:41:52.250074   69580 cri.go:89] found id: ""
	I0501 03:41:52.250102   69580 logs.go:276] 0 containers: []
	W0501 03:41:52.250113   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:41:52.250121   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:41:52.250180   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:41:52.290525   69580 cri.go:89] found id: ""
	I0501 03:41:52.290550   69580 logs.go:276] 0 containers: []
	W0501 03:41:52.290558   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:41:52.290564   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:41:52.290610   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:41:52.336058   69580 cri.go:89] found id: ""
	I0501 03:41:52.336084   69580 logs.go:276] 0 containers: []
	W0501 03:41:52.336092   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:41:52.336103   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:41:52.336118   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:41:52.392738   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:41:52.392773   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:41:52.408475   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:41:52.408503   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:41:52.493567   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:41:52.493594   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:41:52.493608   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:41:52.566550   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:41:52.566583   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:41:55.117129   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:55.134840   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:41:55.134918   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:41:55.193990   69580 cri.go:89] found id: ""
	I0501 03:41:55.194019   69580 logs.go:276] 0 containers: []
	W0501 03:41:55.194029   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:41:55.194038   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:41:55.194100   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:41:55.261710   69580 cri.go:89] found id: ""
	I0501 03:41:55.261743   69580 logs.go:276] 0 containers: []
	W0501 03:41:55.261754   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:41:55.261761   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:41:55.261823   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:41:55.302432   69580 cri.go:89] found id: ""
	I0501 03:41:55.302468   69580 logs.go:276] 0 containers: []
	W0501 03:41:55.302480   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:41:55.302488   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:41:55.302550   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:41:55.346029   69580 cri.go:89] found id: ""
	I0501 03:41:55.346058   69580 logs.go:276] 0 containers: []
	W0501 03:41:55.346067   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:41:55.346073   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:41:55.346117   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:41:55.393206   69580 cri.go:89] found id: ""
	I0501 03:41:55.393229   69580 logs.go:276] 0 containers: []
	W0501 03:41:55.393236   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:41:55.393242   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:41:55.393295   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:41:55.437908   69580 cri.go:89] found id: ""
	I0501 03:41:55.437940   69580 logs.go:276] 0 containers: []
	W0501 03:41:55.437952   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:41:55.437960   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:41:55.438020   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:41:55.480439   69580 cri.go:89] found id: ""
	I0501 03:41:55.480468   69580 logs.go:276] 0 containers: []
	W0501 03:41:55.480480   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:41:55.480488   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:41:55.480589   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:41:55.524782   69580 cri.go:89] found id: ""
	I0501 03:41:55.524811   69580 logs.go:276] 0 containers: []
	W0501 03:41:55.524819   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:41:55.524828   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:41:55.524840   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:41:55.604337   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:41:55.604373   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:41:55.649427   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:41:55.649455   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:41:55.707928   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:41:55.707976   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:41:55.723289   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:41:55.723316   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:41:55.805146   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:41:54.347203   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:56.847806   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:55.658387   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:58.156886   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:57.511280   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:59.511460   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:58.306145   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:58.322207   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:41:58.322280   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:41:58.370291   69580 cri.go:89] found id: ""
	I0501 03:41:58.370319   69580 logs.go:276] 0 containers: []
	W0501 03:41:58.370331   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:41:58.370338   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:41:58.370417   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:41:58.421230   69580 cri.go:89] found id: ""
	I0501 03:41:58.421256   69580 logs.go:276] 0 containers: []
	W0501 03:41:58.421264   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:41:58.421270   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:41:58.421317   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:41:58.463694   69580 cri.go:89] found id: ""
	I0501 03:41:58.463724   69580 logs.go:276] 0 containers: []
	W0501 03:41:58.463735   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:41:58.463743   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:41:58.463797   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:41:58.507756   69580 cri.go:89] found id: ""
	I0501 03:41:58.507785   69580 logs.go:276] 0 containers: []
	W0501 03:41:58.507791   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:41:58.507797   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:41:58.507870   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:41:58.554852   69580 cri.go:89] found id: ""
	I0501 03:41:58.554884   69580 logs.go:276] 0 containers: []
	W0501 03:41:58.554895   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:41:58.554903   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:41:58.554969   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:41:58.602467   69580 cri.go:89] found id: ""
	I0501 03:41:58.602495   69580 logs.go:276] 0 containers: []
	W0501 03:41:58.602505   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:41:58.602511   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:41:58.602561   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:41:58.652718   69580 cri.go:89] found id: ""
	I0501 03:41:58.652749   69580 logs.go:276] 0 containers: []
	W0501 03:41:58.652759   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:41:58.652766   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:41:58.652837   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:41:58.694351   69580 cri.go:89] found id: ""
	I0501 03:41:58.694377   69580 logs.go:276] 0 containers: []
	W0501 03:41:58.694385   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:41:58.694393   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:41:58.694434   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:41:58.779878   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:41:58.779911   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:41:58.826733   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:41:58.826768   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:41:58.883808   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:41:58.883842   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:41:58.900463   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:41:58.900495   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:41:58.991346   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:41:59.345807   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:01.846099   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:00.157131   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:02.157204   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:01.511711   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:03.512536   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:01.492396   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:01.508620   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:01.508756   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:01.555669   69580 cri.go:89] found id: ""
	I0501 03:42:01.555696   69580 logs.go:276] 0 containers: []
	W0501 03:42:01.555712   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:01.555720   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:01.555782   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:01.597591   69580 cri.go:89] found id: ""
	I0501 03:42:01.597615   69580 logs.go:276] 0 containers: []
	W0501 03:42:01.597626   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:01.597635   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:01.597693   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:01.636259   69580 cri.go:89] found id: ""
	I0501 03:42:01.636286   69580 logs.go:276] 0 containers: []
	W0501 03:42:01.636297   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:01.636305   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:01.636361   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:01.684531   69580 cri.go:89] found id: ""
	I0501 03:42:01.684562   69580 logs.go:276] 0 containers: []
	W0501 03:42:01.684572   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:01.684579   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:01.684647   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:01.725591   69580 cri.go:89] found id: ""
	I0501 03:42:01.725621   69580 logs.go:276] 0 containers: []
	W0501 03:42:01.725628   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:01.725652   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:01.725718   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:01.767868   69580 cri.go:89] found id: ""
	I0501 03:42:01.767901   69580 logs.go:276] 0 containers: []
	W0501 03:42:01.767910   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:01.767917   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:01.767977   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:01.817590   69580 cri.go:89] found id: ""
	I0501 03:42:01.817618   69580 logs.go:276] 0 containers: []
	W0501 03:42:01.817629   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:01.817637   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:01.817697   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:01.863549   69580 cri.go:89] found id: ""
	I0501 03:42:01.863576   69580 logs.go:276] 0 containers: []
	W0501 03:42:01.863586   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:01.863595   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:01.863607   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:01.879134   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:01.879162   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:01.967015   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:01.967043   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:01.967059   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:02.051576   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:02.051614   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:02.095614   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:02.095644   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:04.652974   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:04.671018   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:04.671103   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:04.712392   69580 cri.go:89] found id: ""
	I0501 03:42:04.712425   69580 logs.go:276] 0 containers: []
	W0501 03:42:04.712435   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:04.712442   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:04.712503   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:04.756854   69580 cri.go:89] found id: ""
	I0501 03:42:04.756881   69580 logs.go:276] 0 containers: []
	W0501 03:42:04.756893   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:04.756900   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:04.756962   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:04.797665   69580 cri.go:89] found id: ""
	I0501 03:42:04.797694   69580 logs.go:276] 0 containers: []
	W0501 03:42:04.797703   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:04.797709   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:04.797756   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:04.838441   69580 cri.go:89] found id: ""
	I0501 03:42:04.838472   69580 logs.go:276] 0 containers: []
	W0501 03:42:04.838483   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:04.838491   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:04.838556   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:04.879905   69580 cri.go:89] found id: ""
	I0501 03:42:04.879935   69580 logs.go:276] 0 containers: []
	W0501 03:42:04.879945   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:04.879952   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:04.880012   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:04.924759   69580 cri.go:89] found id: ""
	I0501 03:42:04.924792   69580 logs.go:276] 0 containers: []
	W0501 03:42:04.924804   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:04.924813   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:04.924879   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:04.965638   69580 cri.go:89] found id: ""
	I0501 03:42:04.965663   69580 logs.go:276] 0 containers: []
	W0501 03:42:04.965670   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:04.965676   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:04.965721   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:05.013127   69580 cri.go:89] found id: ""
	I0501 03:42:05.013153   69580 logs.go:276] 0 containers: []
	W0501 03:42:05.013163   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:05.013173   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:05.013185   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:05.108388   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:05.108409   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:05.108422   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:05.198239   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:05.198281   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:05.241042   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:05.241076   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:05.299017   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:05.299069   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:04.345910   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:06.346830   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:04.657438   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:06.657707   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:06.011511   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:08.016548   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:10.510503   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:07.815458   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:07.832047   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:07.832125   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:07.882950   69580 cri.go:89] found id: ""
	I0501 03:42:07.882985   69580 logs.go:276] 0 containers: []
	W0501 03:42:07.882996   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:07.883002   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:07.883051   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:07.928086   69580 cri.go:89] found id: ""
	I0501 03:42:07.928111   69580 logs.go:276] 0 containers: []
	W0501 03:42:07.928119   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:07.928124   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:07.928177   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:07.976216   69580 cri.go:89] found id: ""
	I0501 03:42:07.976250   69580 logs.go:276] 0 containers: []
	W0501 03:42:07.976268   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:07.976274   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:07.976331   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:08.019903   69580 cri.go:89] found id: ""
	I0501 03:42:08.019932   69580 logs.go:276] 0 containers: []
	W0501 03:42:08.019943   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:08.019951   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:08.020009   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:08.075980   69580 cri.go:89] found id: ""
	I0501 03:42:08.076004   69580 logs.go:276] 0 containers: []
	W0501 03:42:08.076012   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:08.076018   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:08.076065   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:08.114849   69580 cri.go:89] found id: ""
	I0501 03:42:08.114881   69580 logs.go:276] 0 containers: []
	W0501 03:42:08.114891   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:08.114897   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:08.114955   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:08.159427   69580 cri.go:89] found id: ""
	I0501 03:42:08.159457   69580 logs.go:276] 0 containers: []
	W0501 03:42:08.159468   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:08.159476   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:08.159543   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:08.200117   69580 cri.go:89] found id: ""
	I0501 03:42:08.200151   69580 logs.go:276] 0 containers: []
	W0501 03:42:08.200163   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:08.200182   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:08.200197   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:08.281926   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:08.281972   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:08.331393   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:08.331429   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:08.386758   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:08.386793   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:08.402551   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:08.402581   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:08.489678   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:10.990653   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:11.007879   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:11.007958   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:11.049842   69580 cri.go:89] found id: ""
	I0501 03:42:11.049867   69580 logs.go:276] 0 containers: []
	W0501 03:42:11.049879   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:11.049885   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:11.049933   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:11.091946   69580 cri.go:89] found id: ""
	I0501 03:42:11.091980   69580 logs.go:276] 0 containers: []
	W0501 03:42:11.091992   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:11.092000   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:11.092079   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:11.140100   69580 cri.go:89] found id: ""
	I0501 03:42:11.140129   69580 logs.go:276] 0 containers: []
	W0501 03:42:11.140138   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:11.140144   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:11.140207   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:11.182796   69580 cri.go:89] found id: ""
	I0501 03:42:11.182821   69580 logs.go:276] 0 containers: []
	W0501 03:42:11.182832   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:11.182838   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:11.182896   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:11.222985   69580 cri.go:89] found id: ""
	I0501 03:42:11.223016   69580 logs.go:276] 0 containers: []
	W0501 03:42:11.223027   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:11.223033   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:11.223114   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:11.265793   69580 cri.go:89] found id: ""
	I0501 03:42:11.265818   69580 logs.go:276] 0 containers: []
	W0501 03:42:11.265830   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:11.265838   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:11.265913   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:11.309886   69580 cri.go:89] found id: ""
	I0501 03:42:11.309912   69580 logs.go:276] 0 containers: []
	W0501 03:42:11.309924   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:11.309931   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:11.309989   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:11.357757   69580 cri.go:89] found id: ""
	I0501 03:42:11.357791   69580 logs.go:276] 0 containers: []
	W0501 03:42:11.357803   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:11.357823   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:11.357839   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:11.412668   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:11.412704   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:11.428380   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:11.428422   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0501 03:42:08.347511   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:10.846691   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:09.156632   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:11.158047   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:13.657603   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:12.512713   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:15.011382   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	W0501 03:42:11.521898   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:11.521924   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:11.521940   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:11.607081   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:11.607116   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:14.153054   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:14.173046   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:14.173150   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:14.219583   69580 cri.go:89] found id: ""
	I0501 03:42:14.219605   69580 logs.go:276] 0 containers: []
	W0501 03:42:14.219613   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:14.219619   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:14.219664   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:14.260316   69580 cri.go:89] found id: ""
	I0501 03:42:14.260349   69580 logs.go:276] 0 containers: []
	W0501 03:42:14.260357   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:14.260366   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:14.260420   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:14.305049   69580 cri.go:89] found id: ""
	I0501 03:42:14.305085   69580 logs.go:276] 0 containers: []
	W0501 03:42:14.305109   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:14.305117   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:14.305198   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:14.359589   69580 cri.go:89] found id: ""
	I0501 03:42:14.359614   69580 logs.go:276] 0 containers: []
	W0501 03:42:14.359622   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:14.359628   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:14.359672   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:14.403867   69580 cri.go:89] found id: ""
	I0501 03:42:14.403895   69580 logs.go:276] 0 containers: []
	W0501 03:42:14.403904   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:14.403910   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:14.403987   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:14.446626   69580 cri.go:89] found id: ""
	I0501 03:42:14.446655   69580 logs.go:276] 0 containers: []
	W0501 03:42:14.446675   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:14.446683   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:14.446754   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:14.490983   69580 cri.go:89] found id: ""
	I0501 03:42:14.491016   69580 logs.go:276] 0 containers: []
	W0501 03:42:14.491028   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:14.491036   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:14.491117   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:14.534180   69580 cri.go:89] found id: ""
	I0501 03:42:14.534205   69580 logs.go:276] 0 containers: []
	W0501 03:42:14.534213   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:14.534221   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:14.534236   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:14.621433   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:14.621491   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:14.680265   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:14.680310   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:14.738943   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:14.738983   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:14.754145   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:14.754176   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:14.839974   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:13.347081   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:15.847072   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:17.847749   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:16.157433   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:18.158120   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:17.017276   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:19.514339   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:17.340948   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:17.360007   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:17.360068   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:17.403201   69580 cri.go:89] found id: ""
	I0501 03:42:17.403231   69580 logs.go:276] 0 containers: []
	W0501 03:42:17.403239   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:17.403245   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:17.403301   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:17.442940   69580 cri.go:89] found id: ""
	I0501 03:42:17.442966   69580 logs.go:276] 0 containers: []
	W0501 03:42:17.442975   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:17.442981   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:17.443038   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:17.487219   69580 cri.go:89] found id: ""
	I0501 03:42:17.487248   69580 logs.go:276] 0 containers: []
	W0501 03:42:17.487259   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:17.487267   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:17.487324   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:17.528551   69580 cri.go:89] found id: ""
	I0501 03:42:17.528583   69580 logs.go:276] 0 containers: []
	W0501 03:42:17.528593   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:17.528601   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:17.528668   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:17.577005   69580 cri.go:89] found id: ""
	I0501 03:42:17.577041   69580 logs.go:276] 0 containers: []
	W0501 03:42:17.577052   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:17.577061   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:17.577132   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:17.618924   69580 cri.go:89] found id: ""
	I0501 03:42:17.618949   69580 logs.go:276] 0 containers: []
	W0501 03:42:17.618957   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:17.618963   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:17.619022   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:17.660487   69580 cri.go:89] found id: ""
	I0501 03:42:17.660514   69580 logs.go:276] 0 containers: []
	W0501 03:42:17.660525   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:17.660532   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:17.660592   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:17.701342   69580 cri.go:89] found id: ""
	I0501 03:42:17.701370   69580 logs.go:276] 0 containers: []
	W0501 03:42:17.701378   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:17.701387   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:17.701400   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:17.757034   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:17.757069   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:17.772955   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:17.772984   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:17.888062   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:17.888088   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:17.888101   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:17.969274   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:17.969312   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:20.521053   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:20.536065   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:20.536141   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:20.577937   69580 cri.go:89] found id: ""
	I0501 03:42:20.577967   69580 logs.go:276] 0 containers: []
	W0501 03:42:20.577977   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:20.577986   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:20.578055   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:20.626690   69580 cri.go:89] found id: ""
	I0501 03:42:20.626714   69580 logs.go:276] 0 containers: []
	W0501 03:42:20.626722   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:20.626728   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:20.626809   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:20.670849   69580 cri.go:89] found id: ""
	I0501 03:42:20.670872   69580 logs.go:276] 0 containers: []
	W0501 03:42:20.670881   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:20.670886   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:20.670946   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:20.711481   69580 cri.go:89] found id: ""
	I0501 03:42:20.711511   69580 logs.go:276] 0 containers: []
	W0501 03:42:20.711522   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:20.711531   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:20.711596   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:20.753413   69580 cri.go:89] found id: ""
	I0501 03:42:20.753443   69580 logs.go:276] 0 containers: []
	W0501 03:42:20.753452   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:20.753459   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:20.753536   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:20.791424   69580 cri.go:89] found id: ""
	I0501 03:42:20.791452   69580 logs.go:276] 0 containers: []
	W0501 03:42:20.791461   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:20.791466   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:20.791526   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:20.833718   69580 cri.go:89] found id: ""
	I0501 03:42:20.833740   69580 logs.go:276] 0 containers: []
	W0501 03:42:20.833748   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:20.833752   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:20.833799   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:20.879788   69580 cri.go:89] found id: ""
	I0501 03:42:20.879818   69580 logs.go:276] 0 containers: []
	W0501 03:42:20.879828   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:20.879839   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:20.879855   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:20.895266   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:20.895304   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:20.976429   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:20.976452   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:20.976465   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:21.063573   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:21.063611   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:21.113510   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:21.113543   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:20.346735   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:22.347096   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:20.658642   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:22.659841   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:22.011045   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:24.012756   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:23.672203   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:23.687849   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:23.687946   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:23.731428   69580 cri.go:89] found id: ""
	I0501 03:42:23.731455   69580 logs.go:276] 0 containers: []
	W0501 03:42:23.731467   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:23.731473   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:23.731534   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:23.772219   69580 cri.go:89] found id: ""
	I0501 03:42:23.772248   69580 logs.go:276] 0 containers: []
	W0501 03:42:23.772259   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:23.772266   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:23.772369   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:23.837203   69580 cri.go:89] found id: ""
	I0501 03:42:23.837235   69580 logs.go:276] 0 containers: []
	W0501 03:42:23.837247   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:23.837255   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:23.837317   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:23.884681   69580 cri.go:89] found id: ""
	I0501 03:42:23.884709   69580 logs.go:276] 0 containers: []
	W0501 03:42:23.884716   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:23.884722   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:23.884783   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:23.927544   69580 cri.go:89] found id: ""
	I0501 03:42:23.927576   69580 logs.go:276] 0 containers: []
	W0501 03:42:23.927584   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:23.927590   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:23.927652   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:23.970428   69580 cri.go:89] found id: ""
	I0501 03:42:23.970457   69580 logs.go:276] 0 containers: []
	W0501 03:42:23.970467   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:23.970476   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:23.970541   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:24.010545   69580 cri.go:89] found id: ""
	I0501 03:42:24.010573   69580 logs.go:276] 0 containers: []
	W0501 03:42:24.010583   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:24.010593   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:24.010653   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:24.053547   69580 cri.go:89] found id: ""
	I0501 03:42:24.053574   69580 logs.go:276] 0 containers: []
	W0501 03:42:24.053582   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:24.053591   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:24.053602   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:24.108416   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:24.108452   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:24.124052   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:24.124083   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:24.209024   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:24.209048   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:24.209063   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:24.291644   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:24.291693   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:24.846439   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:26.846750   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:25.157009   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:27.657022   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:26.510679   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:28.511049   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:30.511542   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:26.840623   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:26.856231   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:26.856320   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:26.897988   69580 cri.go:89] found id: ""
	I0501 03:42:26.898022   69580 logs.go:276] 0 containers: []
	W0501 03:42:26.898033   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:26.898041   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:26.898109   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:26.937608   69580 cri.go:89] found id: ""
	I0501 03:42:26.937638   69580 logs.go:276] 0 containers: []
	W0501 03:42:26.937660   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:26.937668   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:26.937731   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:26.979799   69580 cri.go:89] found id: ""
	I0501 03:42:26.979836   69580 logs.go:276] 0 containers: []
	W0501 03:42:26.979847   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:26.979854   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:26.979922   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:27.018863   69580 cri.go:89] found id: ""
	I0501 03:42:27.018896   69580 logs.go:276] 0 containers: []
	W0501 03:42:27.018903   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:27.018909   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:27.018959   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:27.057864   69580 cri.go:89] found id: ""
	I0501 03:42:27.057893   69580 logs.go:276] 0 containers: []
	W0501 03:42:27.057904   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:27.057912   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:27.057982   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:27.102909   69580 cri.go:89] found id: ""
	I0501 03:42:27.102939   69580 logs.go:276] 0 containers: []
	W0501 03:42:27.102950   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:27.102958   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:27.103019   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:27.148292   69580 cri.go:89] found id: ""
	I0501 03:42:27.148326   69580 logs.go:276] 0 containers: []
	W0501 03:42:27.148336   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:27.148344   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:27.148407   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:27.197557   69580 cri.go:89] found id: ""
	I0501 03:42:27.197581   69580 logs.go:276] 0 containers: []
	W0501 03:42:27.197588   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:27.197596   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:27.197609   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:27.281768   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:27.281793   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:27.281806   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:27.361496   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:27.361528   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:27.407640   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:27.407675   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:27.472533   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:27.472576   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:29.987773   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:30.003511   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:30.003619   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:30.049330   69580 cri.go:89] found id: ""
	I0501 03:42:30.049363   69580 logs.go:276] 0 containers: []
	W0501 03:42:30.049377   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:30.049384   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:30.049439   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:30.088521   69580 cri.go:89] found id: ""
	I0501 03:42:30.088549   69580 logs.go:276] 0 containers: []
	W0501 03:42:30.088560   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:30.088568   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:30.088624   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:30.132731   69580 cri.go:89] found id: ""
	I0501 03:42:30.132765   69580 logs.go:276] 0 containers: []
	W0501 03:42:30.132777   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:30.132784   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:30.132847   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:30.178601   69580 cri.go:89] found id: ""
	I0501 03:42:30.178639   69580 logs.go:276] 0 containers: []
	W0501 03:42:30.178648   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:30.178656   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:30.178714   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:30.230523   69580 cri.go:89] found id: ""
	I0501 03:42:30.230549   69580 logs.go:276] 0 containers: []
	W0501 03:42:30.230561   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:30.230569   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:30.230632   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:30.289234   69580 cri.go:89] found id: ""
	I0501 03:42:30.289262   69580 logs.go:276] 0 containers: []
	W0501 03:42:30.289270   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:30.289277   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:30.289342   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:30.332596   69580 cri.go:89] found id: ""
	I0501 03:42:30.332627   69580 logs.go:276] 0 containers: []
	W0501 03:42:30.332637   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:30.332644   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:30.332710   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:30.383871   69580 cri.go:89] found id: ""
	I0501 03:42:30.383901   69580 logs.go:276] 0 containers: []
	W0501 03:42:30.383908   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:30.383917   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:30.383929   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:30.464382   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:30.464404   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:30.464417   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:30.550604   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:30.550637   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:30.594927   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:30.594959   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:30.648392   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:30.648426   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:28.847271   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:31.345865   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:29.657316   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:31.657435   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:32.511887   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:35.011677   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:33.167591   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:33.183804   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:33.183874   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:33.223501   69580 cri.go:89] found id: ""
	I0501 03:42:33.223525   69580 logs.go:276] 0 containers: []
	W0501 03:42:33.223532   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:33.223539   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:33.223600   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:33.268674   69580 cri.go:89] found id: ""
	I0501 03:42:33.268705   69580 logs.go:276] 0 containers: []
	W0501 03:42:33.268741   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:33.268749   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:33.268807   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:33.310613   69580 cri.go:89] found id: ""
	I0501 03:42:33.310655   69580 logs.go:276] 0 containers: []
	W0501 03:42:33.310666   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:33.310674   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:33.310737   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:33.353156   69580 cri.go:89] found id: ""
	I0501 03:42:33.353177   69580 logs.go:276] 0 containers: []
	W0501 03:42:33.353184   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:33.353189   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:33.353237   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:33.389702   69580 cri.go:89] found id: ""
	I0501 03:42:33.389730   69580 logs.go:276] 0 containers: []
	W0501 03:42:33.389743   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:33.389751   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:33.389817   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:33.431244   69580 cri.go:89] found id: ""
	I0501 03:42:33.431275   69580 logs.go:276] 0 containers: []
	W0501 03:42:33.431290   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:33.431298   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:33.431384   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:33.472382   69580 cri.go:89] found id: ""
	I0501 03:42:33.472412   69580 logs.go:276] 0 containers: []
	W0501 03:42:33.472423   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:33.472431   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:33.472519   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:33.517042   69580 cri.go:89] found id: ""
	I0501 03:42:33.517064   69580 logs.go:276] 0 containers: []
	W0501 03:42:33.517071   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:33.517079   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:33.517091   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:33.573343   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:33.573372   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:33.588932   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:33.588963   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:33.674060   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:33.674090   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:33.674106   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:33.756635   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:33.756684   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:36.300909   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:36.320407   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:36.320474   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:36.367236   69580 cri.go:89] found id: ""
	I0501 03:42:36.367261   69580 logs.go:276] 0 containers: []
	W0501 03:42:36.367269   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:36.367274   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:36.367335   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:36.406440   69580 cri.go:89] found id: ""
	I0501 03:42:36.406471   69580 logs.go:276] 0 containers: []
	W0501 03:42:36.406482   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:36.406489   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:36.406552   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:36.443931   69580 cri.go:89] found id: ""
	I0501 03:42:36.443957   69580 logs.go:276] 0 containers: []
	W0501 03:42:36.443964   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:36.443969   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:36.444024   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:33.844832   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:35.845476   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:37.846291   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:34.156976   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:36.657001   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:38.657056   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:37.510534   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:39.511335   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:36.486169   69580 cri.go:89] found id: ""
	I0501 03:42:36.486200   69580 logs.go:276] 0 containers: []
	W0501 03:42:36.486213   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:36.486220   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:36.486276   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:36.532211   69580 cri.go:89] found id: ""
	I0501 03:42:36.532237   69580 logs.go:276] 0 containers: []
	W0501 03:42:36.532246   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:36.532251   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:36.532311   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:36.571889   69580 cri.go:89] found id: ""
	I0501 03:42:36.571921   69580 logs.go:276] 0 containers: []
	W0501 03:42:36.571933   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:36.571940   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:36.572000   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:36.612126   69580 cri.go:89] found id: ""
	I0501 03:42:36.612159   69580 logs.go:276] 0 containers: []
	W0501 03:42:36.612170   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:36.612177   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:36.612238   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:36.654067   69580 cri.go:89] found id: ""
	I0501 03:42:36.654096   69580 logs.go:276] 0 containers: []
	W0501 03:42:36.654106   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:36.654117   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:36.654129   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:36.740205   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:36.740226   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:36.740237   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:36.821403   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:36.821437   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:36.874829   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:36.874867   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:36.928312   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:36.928342   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:39.444598   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:39.460086   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:39.460151   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:39.500833   69580 cri.go:89] found id: ""
	I0501 03:42:39.500859   69580 logs.go:276] 0 containers: []
	W0501 03:42:39.500870   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:39.500879   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:39.500936   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:39.544212   69580 cri.go:89] found id: ""
	I0501 03:42:39.544238   69580 logs.go:276] 0 containers: []
	W0501 03:42:39.544248   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:39.544260   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:39.544326   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:39.582167   69580 cri.go:89] found id: ""
	I0501 03:42:39.582200   69580 logs.go:276] 0 containers: []
	W0501 03:42:39.582218   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:39.582231   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:39.582296   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:39.624811   69580 cri.go:89] found id: ""
	I0501 03:42:39.624837   69580 logs.go:276] 0 containers: []
	W0501 03:42:39.624848   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:39.624855   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:39.624913   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:39.666001   69580 cri.go:89] found id: ""
	I0501 03:42:39.666030   69580 logs.go:276] 0 containers: []
	W0501 03:42:39.666041   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:39.666048   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:39.666111   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:39.708790   69580 cri.go:89] found id: ""
	I0501 03:42:39.708820   69580 logs.go:276] 0 containers: []
	W0501 03:42:39.708831   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:39.708839   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:39.708896   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:39.750585   69580 cri.go:89] found id: ""
	I0501 03:42:39.750609   69580 logs.go:276] 0 containers: []
	W0501 03:42:39.750617   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:39.750622   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:39.750670   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:39.798576   69580 cri.go:89] found id: ""
	I0501 03:42:39.798612   69580 logs.go:276] 0 containers: []
	W0501 03:42:39.798624   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:39.798636   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:39.798651   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:39.891759   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:39.891782   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:39.891797   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:39.974419   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:39.974462   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:40.020700   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:40.020728   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:40.073946   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:40.073980   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:40.345975   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:42.350579   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:40.657403   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:42.658271   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:41.511780   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:43.512428   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:42.590933   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:42.606044   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:42.606120   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:42.653074   69580 cri.go:89] found id: ""
	I0501 03:42:42.653104   69580 logs.go:276] 0 containers: []
	W0501 03:42:42.653115   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:42.653123   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:42.653195   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:42.693770   69580 cri.go:89] found id: ""
	I0501 03:42:42.693809   69580 logs.go:276] 0 containers: []
	W0501 03:42:42.693821   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:42.693829   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:42.693885   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:42.739087   69580 cri.go:89] found id: ""
	I0501 03:42:42.739115   69580 logs.go:276] 0 containers: []
	W0501 03:42:42.739125   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:42.739133   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:42.739196   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:42.779831   69580 cri.go:89] found id: ""
	I0501 03:42:42.779863   69580 logs.go:276] 0 containers: []
	W0501 03:42:42.779876   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:42.779885   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:42.779950   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:42.826759   69580 cri.go:89] found id: ""
	I0501 03:42:42.826791   69580 logs.go:276] 0 containers: []
	W0501 03:42:42.826799   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:42.826804   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:42.826854   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:42.872602   69580 cri.go:89] found id: ""
	I0501 03:42:42.872629   69580 logs.go:276] 0 containers: []
	W0501 03:42:42.872640   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:42.872648   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:42.872707   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:42.913833   69580 cri.go:89] found id: ""
	I0501 03:42:42.913862   69580 logs.go:276] 0 containers: []
	W0501 03:42:42.913872   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:42.913879   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:42.913936   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:42.953629   69580 cri.go:89] found id: ""
	I0501 03:42:42.953657   69580 logs.go:276] 0 containers: []
	W0501 03:42:42.953667   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:42.953679   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:42.953695   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:42.968420   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:42.968447   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:43.046840   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:43.046874   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:43.046898   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:43.135453   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:43.135492   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:43.184103   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:43.184141   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:45.738246   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:45.753193   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:45.753258   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:45.791191   69580 cri.go:89] found id: ""
	I0501 03:42:45.791216   69580 logs.go:276] 0 containers: []
	W0501 03:42:45.791224   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:45.791236   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:45.791285   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:45.831935   69580 cri.go:89] found id: ""
	I0501 03:42:45.831967   69580 logs.go:276] 0 containers: []
	W0501 03:42:45.831978   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:45.831986   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:45.832041   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:45.869492   69580 cri.go:89] found id: ""
	I0501 03:42:45.869517   69580 logs.go:276] 0 containers: []
	W0501 03:42:45.869529   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:45.869536   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:45.869593   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:45.910642   69580 cri.go:89] found id: ""
	I0501 03:42:45.910672   69580 logs.go:276] 0 containers: []
	W0501 03:42:45.910682   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:45.910691   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:45.910754   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:45.951489   69580 cri.go:89] found id: ""
	I0501 03:42:45.951518   69580 logs.go:276] 0 containers: []
	W0501 03:42:45.951528   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:45.951535   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:45.951582   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:45.991388   69580 cri.go:89] found id: ""
	I0501 03:42:45.991410   69580 logs.go:276] 0 containers: []
	W0501 03:42:45.991418   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:45.991423   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:45.991467   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:46.036524   69580 cri.go:89] found id: ""
	I0501 03:42:46.036546   69580 logs.go:276] 0 containers: []
	W0501 03:42:46.036553   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:46.036560   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:46.036622   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:46.087472   69580 cri.go:89] found id: ""
	I0501 03:42:46.087495   69580 logs.go:276] 0 containers: []
	W0501 03:42:46.087504   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:46.087513   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:46.087526   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:46.101283   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:46.101314   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:46.176459   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:46.176491   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:46.176506   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:46.261921   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:46.261956   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:46.309879   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:46.309910   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:44.846042   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:47.349023   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:44.658318   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:47.155780   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:46.011347   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:48.511156   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:50.512175   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:48.867064   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:48.884082   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:48.884192   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:48.929681   69580 cri.go:89] found id: ""
	I0501 03:42:48.929708   69580 logs.go:276] 0 containers: []
	W0501 03:42:48.929716   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:48.929722   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:48.929789   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:48.977850   69580 cri.go:89] found id: ""
	I0501 03:42:48.977882   69580 logs.go:276] 0 containers: []
	W0501 03:42:48.977894   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:48.977901   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:48.977962   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:49.022590   69580 cri.go:89] found id: ""
	I0501 03:42:49.022619   69580 logs.go:276] 0 containers: []
	W0501 03:42:49.022629   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:49.022637   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:49.022706   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:49.064092   69580 cri.go:89] found id: ""
	I0501 03:42:49.064122   69580 logs.go:276] 0 containers: []
	W0501 03:42:49.064143   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:49.064152   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:49.064220   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:49.103962   69580 cri.go:89] found id: ""
	I0501 03:42:49.103990   69580 logs.go:276] 0 containers: []
	W0501 03:42:49.104002   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:49.104009   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:49.104070   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:49.144566   69580 cri.go:89] found id: ""
	I0501 03:42:49.144596   69580 logs.go:276] 0 containers: []
	W0501 03:42:49.144604   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:49.144610   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:49.144669   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:49.183110   69580 cri.go:89] found id: ""
	I0501 03:42:49.183141   69580 logs.go:276] 0 containers: []
	W0501 03:42:49.183161   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:49.183166   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:49.183239   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:49.225865   69580 cri.go:89] found id: ""
	I0501 03:42:49.225890   69580 logs.go:276] 0 containers: []
	W0501 03:42:49.225902   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:49.225912   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:49.225926   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:49.312967   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:49.313005   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:49.361171   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:49.361206   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:49.418731   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:49.418780   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:49.436976   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:49.437007   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:49.517994   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:49.848517   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:52.346908   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:49.160713   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:51.656444   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:53.659040   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:53.011092   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:55.011811   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:52.018675   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:52.033946   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:52.034022   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:52.081433   69580 cri.go:89] found id: ""
	I0501 03:42:52.081465   69580 logs.go:276] 0 containers: []
	W0501 03:42:52.081477   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:52.081485   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:52.081544   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:52.123914   69580 cri.go:89] found id: ""
	I0501 03:42:52.123947   69580 logs.go:276] 0 containers: []
	W0501 03:42:52.123958   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:52.123966   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:52.124023   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:52.164000   69580 cri.go:89] found id: ""
	I0501 03:42:52.164020   69580 logs.go:276] 0 containers: []
	W0501 03:42:52.164027   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:52.164033   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:52.164086   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:52.205984   69580 cri.go:89] found id: ""
	I0501 03:42:52.206011   69580 logs.go:276] 0 containers: []
	W0501 03:42:52.206023   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:52.206031   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:52.206096   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:52.252743   69580 cri.go:89] found id: ""
	I0501 03:42:52.252766   69580 logs.go:276] 0 containers: []
	W0501 03:42:52.252774   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:52.252779   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:52.252839   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:52.296814   69580 cri.go:89] found id: ""
	I0501 03:42:52.296838   69580 logs.go:276] 0 containers: []
	W0501 03:42:52.296856   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:52.296864   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:52.296928   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:52.335996   69580 cri.go:89] found id: ""
	I0501 03:42:52.336023   69580 logs.go:276] 0 containers: []
	W0501 03:42:52.336034   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:52.336042   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:52.336105   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:52.377470   69580 cri.go:89] found id: ""
	I0501 03:42:52.377498   69580 logs.go:276] 0 containers: []
	W0501 03:42:52.377513   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:52.377524   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:52.377540   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:52.432644   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:52.432680   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:52.447518   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:52.447552   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:52.530967   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:52.530992   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:52.531005   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:52.612280   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:52.612327   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:55.170134   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:55.185252   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:55.185328   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:55.227741   69580 cri.go:89] found id: ""
	I0501 03:42:55.227764   69580 logs.go:276] 0 containers: []
	W0501 03:42:55.227771   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:55.227777   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:55.227820   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:55.270796   69580 cri.go:89] found id: ""
	I0501 03:42:55.270823   69580 logs.go:276] 0 containers: []
	W0501 03:42:55.270834   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:55.270840   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:55.270898   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:55.312146   69580 cri.go:89] found id: ""
	I0501 03:42:55.312171   69580 logs.go:276] 0 containers: []
	W0501 03:42:55.312180   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:55.312190   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:55.312236   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:55.354410   69580 cri.go:89] found id: ""
	I0501 03:42:55.354436   69580 logs.go:276] 0 containers: []
	W0501 03:42:55.354445   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:55.354450   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:55.354509   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:55.393550   69580 cri.go:89] found id: ""
	I0501 03:42:55.393580   69580 logs.go:276] 0 containers: []
	W0501 03:42:55.393589   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:55.393594   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:55.393651   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:55.431468   69580 cri.go:89] found id: ""
	I0501 03:42:55.431497   69580 logs.go:276] 0 containers: []
	W0501 03:42:55.431507   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:55.431514   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:55.431566   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:55.470491   69580 cri.go:89] found id: ""
	I0501 03:42:55.470513   69580 logs.go:276] 0 containers: []
	W0501 03:42:55.470520   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:55.470526   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:55.470571   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:55.509849   69580 cri.go:89] found id: ""
	I0501 03:42:55.509875   69580 logs.go:276] 0 containers: []
	W0501 03:42:55.509885   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:55.509894   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:55.509909   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:55.566680   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:55.566762   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:55.584392   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:55.584423   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:55.663090   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:55.663116   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:55.663131   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:55.741459   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:55.741494   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:54.846549   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:56.848989   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:56.156918   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:58.157016   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:57.012980   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:59.513719   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:58.294435   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:58.310204   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:58.310267   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:58.350292   69580 cri.go:89] found id: ""
	I0501 03:42:58.350322   69580 logs.go:276] 0 containers: []
	W0501 03:42:58.350334   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:58.350343   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:58.350431   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:58.395998   69580 cri.go:89] found id: ""
	I0501 03:42:58.396029   69580 logs.go:276] 0 containers: []
	W0501 03:42:58.396041   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:58.396049   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:58.396131   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:58.434371   69580 cri.go:89] found id: ""
	I0501 03:42:58.434414   69580 logs.go:276] 0 containers: []
	W0501 03:42:58.434427   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:58.434434   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:58.434493   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:58.473457   69580 cri.go:89] found id: ""
	I0501 03:42:58.473489   69580 logs.go:276] 0 containers: []
	W0501 03:42:58.473499   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:58.473507   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:58.473572   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:58.515172   69580 cri.go:89] found id: ""
	I0501 03:42:58.515201   69580 logs.go:276] 0 containers: []
	W0501 03:42:58.515212   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:58.515221   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:58.515291   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:58.560305   69580 cri.go:89] found id: ""
	I0501 03:42:58.560333   69580 logs.go:276] 0 containers: []
	W0501 03:42:58.560341   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:58.560348   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:58.560407   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:58.617980   69580 cri.go:89] found id: ""
	I0501 03:42:58.618005   69580 logs.go:276] 0 containers: []
	W0501 03:42:58.618013   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:58.618019   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:58.618080   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:58.659800   69580 cri.go:89] found id: ""
	I0501 03:42:58.659827   69580 logs.go:276] 0 containers: []
	W0501 03:42:58.659838   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:58.659848   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:58.659862   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:58.718134   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:58.718169   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:58.733972   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:58.734001   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:58.813055   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:58.813082   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:58.813099   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:58.897293   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:58.897331   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:01.442980   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:01.459602   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:01.459687   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:58.849599   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:01.346264   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:00.157322   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:02.657002   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:02.012753   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:04.510896   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:01.502817   69580 cri.go:89] found id: ""
	I0501 03:43:01.502848   69580 logs.go:276] 0 containers: []
	W0501 03:43:01.502857   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:01.502863   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:01.502924   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:01.547251   69580 cri.go:89] found id: ""
	I0501 03:43:01.547289   69580 logs.go:276] 0 containers: []
	W0501 03:43:01.547301   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:01.547308   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:01.547376   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:01.590179   69580 cri.go:89] found id: ""
	I0501 03:43:01.590211   69580 logs.go:276] 0 containers: []
	W0501 03:43:01.590221   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:01.590228   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:01.590296   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:01.628772   69580 cri.go:89] found id: ""
	I0501 03:43:01.628814   69580 logs.go:276] 0 containers: []
	W0501 03:43:01.628826   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:01.628834   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:01.628893   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:01.677414   69580 cri.go:89] found id: ""
	I0501 03:43:01.677440   69580 logs.go:276] 0 containers: []
	W0501 03:43:01.677448   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:01.677453   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:01.677500   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:01.723107   69580 cri.go:89] found id: ""
	I0501 03:43:01.723139   69580 logs.go:276] 0 containers: []
	W0501 03:43:01.723152   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:01.723160   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:01.723225   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:01.771846   69580 cri.go:89] found id: ""
	I0501 03:43:01.771873   69580 logs.go:276] 0 containers: []
	W0501 03:43:01.771883   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:01.771890   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:01.771952   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:01.818145   69580 cri.go:89] found id: ""
	I0501 03:43:01.818179   69580 logs.go:276] 0 containers: []
	W0501 03:43:01.818191   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:01.818202   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:01.818218   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:01.881502   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:01.881546   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:01.897580   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:01.897614   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:01.981959   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:01.981980   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:01.981996   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:02.066228   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:02.066269   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:04.609855   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:04.626885   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:04.626962   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:04.668248   69580 cri.go:89] found id: ""
	I0501 03:43:04.668277   69580 logs.go:276] 0 containers: []
	W0501 03:43:04.668290   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:04.668298   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:04.668364   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:04.711032   69580 cri.go:89] found id: ""
	I0501 03:43:04.711057   69580 logs.go:276] 0 containers: []
	W0501 03:43:04.711068   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:04.711076   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:04.711136   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:04.754197   69580 cri.go:89] found id: ""
	I0501 03:43:04.754232   69580 logs.go:276] 0 containers: []
	W0501 03:43:04.754241   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:04.754248   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:04.754317   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:04.801062   69580 cri.go:89] found id: ""
	I0501 03:43:04.801089   69580 logs.go:276] 0 containers: []
	W0501 03:43:04.801097   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:04.801103   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:04.801163   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:04.849425   69580 cri.go:89] found id: ""
	I0501 03:43:04.849454   69580 logs.go:276] 0 containers: []
	W0501 03:43:04.849465   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:04.849473   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:04.849536   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:04.892555   69580 cri.go:89] found id: ""
	I0501 03:43:04.892589   69580 logs.go:276] 0 containers: []
	W0501 03:43:04.892597   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:04.892603   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:04.892661   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:04.934101   69580 cri.go:89] found id: ""
	I0501 03:43:04.934129   69580 logs.go:276] 0 containers: []
	W0501 03:43:04.934137   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:04.934142   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:04.934191   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:04.985720   69580 cri.go:89] found id: ""
	I0501 03:43:04.985747   69580 logs.go:276] 0 containers: []
	W0501 03:43:04.985760   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:04.985773   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:04.985789   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:05.060634   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:05.060692   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:05.082007   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:05.082036   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:05.164613   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:05.164636   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:05.164652   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:05.244064   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:05.244103   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:03.845495   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:06.346757   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:05.157929   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:07.657094   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:06.511168   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:08.511512   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:10.511984   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:07.793867   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:07.811161   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:07.811236   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:07.850738   69580 cri.go:89] found id: ""
	I0501 03:43:07.850765   69580 logs.go:276] 0 containers: []
	W0501 03:43:07.850775   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:07.850782   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:07.850841   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:07.892434   69580 cri.go:89] found id: ""
	I0501 03:43:07.892466   69580 logs.go:276] 0 containers: []
	W0501 03:43:07.892476   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:07.892483   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:07.892543   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:07.934093   69580 cri.go:89] found id: ""
	I0501 03:43:07.934122   69580 logs.go:276] 0 containers: []
	W0501 03:43:07.934133   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:07.934141   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:07.934200   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:07.976165   69580 cri.go:89] found id: ""
	I0501 03:43:07.976196   69580 logs.go:276] 0 containers: []
	W0501 03:43:07.976205   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:07.976216   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:07.976278   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:08.016925   69580 cri.go:89] found id: ""
	I0501 03:43:08.016956   69580 logs.go:276] 0 containers: []
	W0501 03:43:08.016968   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:08.016975   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:08.017038   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:08.063385   69580 cri.go:89] found id: ""
	I0501 03:43:08.063438   69580 logs.go:276] 0 containers: []
	W0501 03:43:08.063454   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:08.063465   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:08.063551   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:08.103586   69580 cri.go:89] found id: ""
	I0501 03:43:08.103610   69580 logs.go:276] 0 containers: []
	W0501 03:43:08.103618   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:08.103628   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:08.103672   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:08.142564   69580 cri.go:89] found id: ""
	I0501 03:43:08.142594   69580 logs.go:276] 0 containers: []
	W0501 03:43:08.142605   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:08.142617   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:08.142635   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:08.231532   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:08.231556   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:08.231571   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:08.311009   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:08.311053   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:08.357841   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:08.357877   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:08.409577   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:08.409610   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:10.924898   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:10.941525   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:10.941591   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:11.009214   69580 cri.go:89] found id: ""
	I0501 03:43:11.009238   69580 logs.go:276] 0 containers: []
	W0501 03:43:11.009247   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:11.009255   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:11.009316   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:11.072233   69580 cri.go:89] found id: ""
	I0501 03:43:11.072259   69580 logs.go:276] 0 containers: []
	W0501 03:43:11.072267   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:11.072273   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:11.072327   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:11.111662   69580 cri.go:89] found id: ""
	I0501 03:43:11.111691   69580 logs.go:276] 0 containers: []
	W0501 03:43:11.111701   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:11.111708   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:11.111765   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:11.151540   69580 cri.go:89] found id: ""
	I0501 03:43:11.151570   69580 logs.go:276] 0 containers: []
	W0501 03:43:11.151580   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:11.151594   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:11.151656   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:11.194030   69580 cri.go:89] found id: ""
	I0501 03:43:11.194064   69580 logs.go:276] 0 containers: []
	W0501 03:43:11.194076   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:11.194083   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:11.194146   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:11.233010   69580 cri.go:89] found id: ""
	I0501 03:43:11.233045   69580 logs.go:276] 0 containers: []
	W0501 03:43:11.233056   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:11.233063   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:11.233117   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:11.270979   69580 cri.go:89] found id: ""
	I0501 03:43:11.271009   69580 logs.go:276] 0 containers: []
	W0501 03:43:11.271019   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:11.271026   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:11.271088   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:11.312338   69580 cri.go:89] found id: ""
	I0501 03:43:11.312369   69580 logs.go:276] 0 containers: []
	W0501 03:43:11.312381   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:11.312393   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:11.312408   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:11.364273   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:11.364307   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:11.418603   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:11.418634   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:11.433409   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:11.433438   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0501 03:43:08.349537   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:10.845566   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:12.846699   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:10.157910   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:12.657859   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:12.512669   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:15.013314   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	W0501 03:43:11.511243   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:11.511265   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:11.511280   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:14.089834   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:14.104337   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:14.104419   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:14.148799   69580 cri.go:89] found id: ""
	I0501 03:43:14.148826   69580 logs.go:276] 0 containers: []
	W0501 03:43:14.148833   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:14.148839   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:14.148904   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:14.191330   69580 cri.go:89] found id: ""
	I0501 03:43:14.191366   69580 logs.go:276] 0 containers: []
	W0501 03:43:14.191378   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:14.191386   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:14.191448   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:14.245978   69580 cri.go:89] found id: ""
	I0501 03:43:14.246010   69580 logs.go:276] 0 containers: []
	W0501 03:43:14.246018   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:14.246024   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:14.246093   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:14.287188   69580 cri.go:89] found id: ""
	I0501 03:43:14.287215   69580 logs.go:276] 0 containers: []
	W0501 03:43:14.287223   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:14.287228   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:14.287276   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:14.328060   69580 cri.go:89] found id: ""
	I0501 03:43:14.328093   69580 logs.go:276] 0 containers: []
	W0501 03:43:14.328104   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:14.328113   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:14.328179   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:14.370734   69580 cri.go:89] found id: ""
	I0501 03:43:14.370765   69580 logs.go:276] 0 containers: []
	W0501 03:43:14.370776   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:14.370783   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:14.370837   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:14.414690   69580 cri.go:89] found id: ""
	I0501 03:43:14.414713   69580 logs.go:276] 0 containers: []
	W0501 03:43:14.414721   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:14.414726   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:14.414790   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:14.459030   69580 cri.go:89] found id: ""
	I0501 03:43:14.459060   69580 logs.go:276] 0 containers: []
	W0501 03:43:14.459072   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:14.459083   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:14.459098   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:14.519728   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:14.519761   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:14.535841   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:14.535871   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:14.615203   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:14.615231   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:14.615249   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:14.707677   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:14.707725   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:15.345927   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:17.846732   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:14.657956   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:17.156935   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:17.512424   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:20.012471   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:17.254918   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:17.270643   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:17.270698   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:17.310692   69580 cri.go:89] found id: ""
	I0501 03:43:17.310724   69580 logs.go:276] 0 containers: []
	W0501 03:43:17.310732   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:17.310739   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:17.310806   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:17.349932   69580 cri.go:89] found id: ""
	I0501 03:43:17.349959   69580 logs.go:276] 0 containers: []
	W0501 03:43:17.349969   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:17.349976   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:17.350040   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:17.393073   69580 cri.go:89] found id: ""
	I0501 03:43:17.393099   69580 logs.go:276] 0 containers: []
	W0501 03:43:17.393109   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:17.393116   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:17.393176   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:17.429736   69580 cri.go:89] found id: ""
	I0501 03:43:17.429763   69580 logs.go:276] 0 containers: []
	W0501 03:43:17.429773   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:17.429787   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:17.429858   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:17.473052   69580 cri.go:89] found id: ""
	I0501 03:43:17.473085   69580 logs.go:276] 0 containers: []
	W0501 03:43:17.473097   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:17.473105   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:17.473168   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:17.514035   69580 cri.go:89] found id: ""
	I0501 03:43:17.514062   69580 logs.go:276] 0 containers: []
	W0501 03:43:17.514071   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:17.514078   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:17.514126   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:17.553197   69580 cri.go:89] found id: ""
	I0501 03:43:17.553225   69580 logs.go:276] 0 containers: []
	W0501 03:43:17.553234   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:17.553240   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:17.553300   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:17.592170   69580 cri.go:89] found id: ""
	I0501 03:43:17.592192   69580 logs.go:276] 0 containers: []
	W0501 03:43:17.592199   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:17.592208   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:17.592220   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:17.647549   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:17.647584   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:17.663084   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:17.663114   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:17.748357   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:17.748385   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:17.748401   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:17.832453   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:17.832491   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:20.375927   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:20.391840   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:20.391918   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:20.434158   69580 cri.go:89] found id: ""
	I0501 03:43:20.434185   69580 logs.go:276] 0 containers: []
	W0501 03:43:20.434193   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:20.434198   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:20.434254   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:20.477209   69580 cri.go:89] found id: ""
	I0501 03:43:20.477237   69580 logs.go:276] 0 containers: []
	W0501 03:43:20.477253   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:20.477259   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:20.477309   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:20.517227   69580 cri.go:89] found id: ""
	I0501 03:43:20.517260   69580 logs.go:276] 0 containers: []
	W0501 03:43:20.517270   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:20.517282   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:20.517340   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:20.555771   69580 cri.go:89] found id: ""
	I0501 03:43:20.555802   69580 logs.go:276] 0 containers: []
	W0501 03:43:20.555812   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:20.555820   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:20.555866   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:20.598177   69580 cri.go:89] found id: ""
	I0501 03:43:20.598200   69580 logs.go:276] 0 containers: []
	W0501 03:43:20.598213   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:20.598218   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:20.598326   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:20.637336   69580 cri.go:89] found id: ""
	I0501 03:43:20.637364   69580 logs.go:276] 0 containers: []
	W0501 03:43:20.637373   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:20.637378   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:20.637435   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:20.687736   69580 cri.go:89] found id: ""
	I0501 03:43:20.687761   69580 logs.go:276] 0 containers: []
	W0501 03:43:20.687768   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:20.687782   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:20.687840   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:20.726102   69580 cri.go:89] found id: ""
	I0501 03:43:20.726135   69580 logs.go:276] 0 containers: []
	W0501 03:43:20.726143   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:20.726154   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:20.726169   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:20.780874   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:20.780905   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:20.795798   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:20.795836   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:20.882337   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:20.882367   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:20.882381   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:20.962138   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:20.962188   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:20.345887   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:22.346061   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:19.157165   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:21.657358   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:22.015676   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:24.511682   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:23.512174   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:23.528344   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:23.528417   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:23.567182   69580 cri.go:89] found id: ""
	I0501 03:43:23.567212   69580 logs.go:276] 0 containers: []
	W0501 03:43:23.567222   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:23.567230   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:23.567291   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:23.607522   69580 cri.go:89] found id: ""
	I0501 03:43:23.607556   69580 logs.go:276] 0 containers: []
	W0501 03:43:23.607567   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:23.607574   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:23.607637   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:23.650932   69580 cri.go:89] found id: ""
	I0501 03:43:23.650959   69580 logs.go:276] 0 containers: []
	W0501 03:43:23.650970   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:23.650976   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:23.651035   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:23.695392   69580 cri.go:89] found id: ""
	I0501 03:43:23.695419   69580 logs.go:276] 0 containers: []
	W0501 03:43:23.695428   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:23.695436   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:23.695514   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:23.736577   69580 cri.go:89] found id: ""
	I0501 03:43:23.736607   69580 logs.go:276] 0 containers: []
	W0501 03:43:23.736619   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:23.736627   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:23.736685   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:23.776047   69580 cri.go:89] found id: ""
	I0501 03:43:23.776070   69580 logs.go:276] 0 containers: []
	W0501 03:43:23.776077   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:23.776082   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:23.776134   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:23.813896   69580 cri.go:89] found id: ""
	I0501 03:43:23.813934   69580 logs.go:276] 0 containers: []
	W0501 03:43:23.813943   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:23.813949   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:23.813997   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:23.858898   69580 cri.go:89] found id: ""
	I0501 03:43:23.858925   69580 logs.go:276] 0 containers: []
	W0501 03:43:23.858936   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:23.858947   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:23.858964   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:23.901796   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:23.901850   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:23.957009   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:23.957040   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:23.972811   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:23.972839   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:24.055535   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:24.055557   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:24.055576   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:24.845310   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:26.847397   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:24.157453   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:26.661073   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:27.012181   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:29.511387   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:26.640114   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:26.657217   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:26.657285   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:26.701191   69580 cri.go:89] found id: ""
	I0501 03:43:26.701218   69580 logs.go:276] 0 containers: []
	W0501 03:43:26.701227   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:26.701232   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:26.701287   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:26.740710   69580 cri.go:89] found id: ""
	I0501 03:43:26.740737   69580 logs.go:276] 0 containers: []
	W0501 03:43:26.740745   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:26.740750   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:26.740808   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:26.778682   69580 cri.go:89] found id: ""
	I0501 03:43:26.778710   69580 logs.go:276] 0 containers: []
	W0501 03:43:26.778724   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:26.778730   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:26.778789   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:26.822143   69580 cri.go:89] found id: ""
	I0501 03:43:26.822190   69580 logs.go:276] 0 containers: []
	W0501 03:43:26.822201   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:26.822209   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:26.822270   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:26.865938   69580 cri.go:89] found id: ""
	I0501 03:43:26.865976   69580 logs.go:276] 0 containers: []
	W0501 03:43:26.865988   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:26.865996   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:26.866058   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:26.914939   69580 cri.go:89] found id: ""
	I0501 03:43:26.914969   69580 logs.go:276] 0 containers: []
	W0501 03:43:26.914979   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:26.914986   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:26.915043   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:26.961822   69580 cri.go:89] found id: ""
	I0501 03:43:26.961850   69580 logs.go:276] 0 containers: []
	W0501 03:43:26.961860   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:26.961867   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:26.961920   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:27.005985   69580 cri.go:89] found id: ""
	I0501 03:43:27.006012   69580 logs.go:276] 0 containers: []
	W0501 03:43:27.006021   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:27.006032   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:27.006046   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:27.058265   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:27.058303   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:27.076270   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:27.076308   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:27.152627   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:27.152706   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:27.152728   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:27.229638   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:27.229678   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:29.775960   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:29.792849   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:29.792925   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:29.832508   69580 cri.go:89] found id: ""
	I0501 03:43:29.832537   69580 logs.go:276] 0 containers: []
	W0501 03:43:29.832551   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:29.832559   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:29.832617   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:29.873160   69580 cri.go:89] found id: ""
	I0501 03:43:29.873188   69580 logs.go:276] 0 containers: []
	W0501 03:43:29.873199   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:29.873207   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:29.873271   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:29.919431   69580 cri.go:89] found id: ""
	I0501 03:43:29.919459   69580 logs.go:276] 0 containers: []
	W0501 03:43:29.919468   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:29.919474   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:29.919533   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:29.967944   69580 cri.go:89] found id: ""
	I0501 03:43:29.967976   69580 logs.go:276] 0 containers: []
	W0501 03:43:29.967987   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:29.967995   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:29.968060   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:30.011626   69580 cri.go:89] found id: ""
	I0501 03:43:30.011657   69580 logs.go:276] 0 containers: []
	W0501 03:43:30.011669   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:30.011678   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:30.011743   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:30.051998   69580 cri.go:89] found id: ""
	I0501 03:43:30.052020   69580 logs.go:276] 0 containers: []
	W0501 03:43:30.052028   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:30.052034   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:30.052095   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:30.094140   69580 cri.go:89] found id: ""
	I0501 03:43:30.094164   69580 logs.go:276] 0 containers: []
	W0501 03:43:30.094172   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:30.094179   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:30.094253   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:30.132363   69580 cri.go:89] found id: ""
	I0501 03:43:30.132391   69580 logs.go:276] 0 containers: []
	W0501 03:43:30.132399   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:30.132411   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:30.132422   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:30.221368   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:30.221410   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:30.271279   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:30.271317   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:30.325549   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:30.325586   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:30.345337   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:30.345376   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:30.427552   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:29.347108   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:31.846435   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:29.156483   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:31.156871   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:33.157355   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:32.015498   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:34.511190   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:32.928667   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:32.945489   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:32.945557   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:32.989604   69580 cri.go:89] found id: ""
	I0501 03:43:32.989628   69580 logs.go:276] 0 containers: []
	W0501 03:43:32.989636   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:32.989642   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:32.989701   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:33.030862   69580 cri.go:89] found id: ""
	I0501 03:43:33.030892   69580 logs.go:276] 0 containers: []
	W0501 03:43:33.030903   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:33.030912   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:33.030977   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:33.079795   69580 cri.go:89] found id: ""
	I0501 03:43:33.079827   69580 logs.go:276] 0 containers: []
	W0501 03:43:33.079835   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:33.079841   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:33.079898   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:33.120612   69580 cri.go:89] found id: ""
	I0501 03:43:33.120636   69580 logs.go:276] 0 containers: []
	W0501 03:43:33.120644   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:33.120649   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:33.120694   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:33.161824   69580 cri.go:89] found id: ""
	I0501 03:43:33.161851   69580 logs.go:276] 0 containers: []
	W0501 03:43:33.161861   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:33.161868   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:33.161924   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:33.200068   69580 cri.go:89] found id: ""
	I0501 03:43:33.200098   69580 logs.go:276] 0 containers: []
	W0501 03:43:33.200107   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:33.200113   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:33.200175   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:33.239314   69580 cri.go:89] found id: ""
	I0501 03:43:33.239341   69580 logs.go:276] 0 containers: []
	W0501 03:43:33.239351   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:33.239359   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:33.239427   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:33.281381   69580 cri.go:89] found id: ""
	I0501 03:43:33.281408   69580 logs.go:276] 0 containers: []
	W0501 03:43:33.281419   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:33.281431   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:33.281447   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:33.297992   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:33.298047   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:33.383273   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:33.383292   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:33.383303   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:33.465256   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:33.465289   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:33.509593   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:33.509621   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:36.065074   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:36.081361   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:36.081429   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:36.130394   69580 cri.go:89] found id: ""
	I0501 03:43:36.130436   69580 logs.go:276] 0 containers: []
	W0501 03:43:36.130448   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:36.130456   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:36.130524   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:36.171013   69580 cri.go:89] found id: ""
	I0501 03:43:36.171038   69580 logs.go:276] 0 containers: []
	W0501 03:43:36.171046   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:36.171052   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:36.171099   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:36.215372   69580 cri.go:89] found id: ""
	I0501 03:43:36.215411   69580 logs.go:276] 0 containers: []
	W0501 03:43:36.215424   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:36.215431   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:36.215493   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:36.257177   69580 cri.go:89] found id: ""
	I0501 03:43:36.257204   69580 logs.go:276] 0 containers: []
	W0501 03:43:36.257216   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:36.257223   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:36.257293   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:36.299035   69580 cri.go:89] found id: ""
	I0501 03:43:36.299066   69580 logs.go:276] 0 containers: []
	W0501 03:43:36.299085   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:36.299094   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:36.299166   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:36.339060   69580 cri.go:89] found id: ""
	I0501 03:43:36.339087   69580 logs.go:276] 0 containers: []
	W0501 03:43:36.339097   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:36.339105   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:36.339163   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:36.379982   69580 cri.go:89] found id: ""
	I0501 03:43:36.380016   69580 logs.go:276] 0 containers: []
	W0501 03:43:36.380028   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:36.380037   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:36.380100   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:36.419702   69580 cri.go:89] found id: ""
	I0501 03:43:36.419734   69580 logs.go:276] 0 containers: []
	W0501 03:43:36.419746   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:36.419758   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:36.419780   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:33.846499   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:35.846579   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:37.852802   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:35.159724   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:37.657040   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:36.516601   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:39.012001   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:36.472553   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:36.472774   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:36.488402   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:36.488439   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:36.566390   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:36.566433   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:36.566446   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:36.643493   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:36.643527   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:39.199060   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:39.216612   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:39.216695   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:39.262557   69580 cri.go:89] found id: ""
	I0501 03:43:39.262581   69580 logs.go:276] 0 containers: []
	W0501 03:43:39.262589   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:39.262595   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:39.262642   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:39.331051   69580 cri.go:89] found id: ""
	I0501 03:43:39.331076   69580 logs.go:276] 0 containers: []
	W0501 03:43:39.331093   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:39.331098   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:39.331162   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:39.382033   69580 cri.go:89] found id: ""
	I0501 03:43:39.382058   69580 logs.go:276] 0 containers: []
	W0501 03:43:39.382066   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:39.382071   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:39.382122   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:39.424019   69580 cri.go:89] found id: ""
	I0501 03:43:39.424049   69580 logs.go:276] 0 containers: []
	W0501 03:43:39.424058   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:39.424064   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:39.424120   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:39.465787   69580 cri.go:89] found id: ""
	I0501 03:43:39.465833   69580 logs.go:276] 0 containers: []
	W0501 03:43:39.465846   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:39.465855   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:39.465916   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:39.507746   69580 cri.go:89] found id: ""
	I0501 03:43:39.507781   69580 logs.go:276] 0 containers: []
	W0501 03:43:39.507791   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:39.507798   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:39.507861   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:39.550737   69580 cri.go:89] found id: ""
	I0501 03:43:39.550768   69580 logs.go:276] 0 containers: []
	W0501 03:43:39.550775   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:39.550781   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:39.550831   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:39.592279   69580 cri.go:89] found id: ""
	I0501 03:43:39.592329   69580 logs.go:276] 0 containers: []
	W0501 03:43:39.592343   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:39.592356   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:39.592373   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:39.648858   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:39.648896   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:39.665316   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:39.665343   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:39.743611   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:39.743632   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:39.743646   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:39.829285   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:39.829322   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:40.347121   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:42.845466   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:39.657888   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:41.657976   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:41.512061   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:44.017693   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:42.374457   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:42.389944   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:42.390002   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:42.431270   69580 cri.go:89] found id: ""
	I0501 03:43:42.431294   69580 logs.go:276] 0 containers: []
	W0501 03:43:42.431302   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:42.431308   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:42.431366   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:42.470515   69580 cri.go:89] found id: ""
	I0501 03:43:42.470546   69580 logs.go:276] 0 containers: []
	W0501 03:43:42.470558   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:42.470566   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:42.470619   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:42.518472   69580 cri.go:89] found id: ""
	I0501 03:43:42.518494   69580 logs.go:276] 0 containers: []
	W0501 03:43:42.518501   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:42.518506   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:42.518555   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:42.562192   69580 cri.go:89] found id: ""
	I0501 03:43:42.562220   69580 logs.go:276] 0 containers: []
	W0501 03:43:42.562231   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:42.562239   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:42.562300   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:42.599372   69580 cri.go:89] found id: ""
	I0501 03:43:42.599403   69580 logs.go:276] 0 containers: []
	W0501 03:43:42.599414   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:42.599422   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:42.599483   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:42.636738   69580 cri.go:89] found id: ""
	I0501 03:43:42.636766   69580 logs.go:276] 0 containers: []
	W0501 03:43:42.636777   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:42.636786   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:42.636845   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:42.682087   69580 cri.go:89] found id: ""
	I0501 03:43:42.682115   69580 logs.go:276] 0 containers: []
	W0501 03:43:42.682125   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:42.682133   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:42.682198   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:42.724280   69580 cri.go:89] found id: ""
	I0501 03:43:42.724316   69580 logs.go:276] 0 containers: []
	W0501 03:43:42.724328   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:42.724340   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:42.724354   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:42.771667   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:42.771702   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:42.827390   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:42.827428   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:42.843452   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:42.843480   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:42.925544   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:42.925563   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:42.925577   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:45.515104   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:45.529545   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:45.529619   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:45.573451   69580 cri.go:89] found id: ""
	I0501 03:43:45.573475   69580 logs.go:276] 0 containers: []
	W0501 03:43:45.573483   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:45.573489   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:45.573536   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:45.613873   69580 cri.go:89] found id: ""
	I0501 03:43:45.613897   69580 logs.go:276] 0 containers: []
	W0501 03:43:45.613905   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:45.613910   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:45.613954   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:45.660195   69580 cri.go:89] found id: ""
	I0501 03:43:45.660215   69580 logs.go:276] 0 containers: []
	W0501 03:43:45.660221   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:45.660226   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:45.660284   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:45.703539   69580 cri.go:89] found id: ""
	I0501 03:43:45.703566   69580 logs.go:276] 0 containers: []
	W0501 03:43:45.703574   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:45.703580   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:45.703637   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:45.754635   69580 cri.go:89] found id: ""
	I0501 03:43:45.754659   69580 logs.go:276] 0 containers: []
	W0501 03:43:45.754668   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:45.754675   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:45.754738   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:45.800836   69580 cri.go:89] found id: ""
	I0501 03:43:45.800866   69580 logs.go:276] 0 containers: []
	W0501 03:43:45.800884   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:45.800892   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:45.800955   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:45.859057   69580 cri.go:89] found id: ""
	I0501 03:43:45.859084   69580 logs.go:276] 0 containers: []
	W0501 03:43:45.859092   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:45.859098   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:45.859145   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:45.913173   69580 cri.go:89] found id: ""
	I0501 03:43:45.913204   69580 logs.go:276] 0 containers: []
	W0501 03:43:45.913216   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:45.913227   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:45.913243   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:45.930050   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:45.930087   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:46.006047   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:46.006081   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:46.006097   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:46.086630   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:46.086666   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:46.134635   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:46.134660   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:45.347071   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:47.845983   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:44.157143   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:46.157880   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:48.656747   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:46.510981   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:48.512854   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:48.690330   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:48.705024   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:48.705093   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:48.750244   69580 cri.go:89] found id: ""
	I0501 03:43:48.750278   69580 logs.go:276] 0 containers: []
	W0501 03:43:48.750299   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:48.750307   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:48.750377   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:48.791231   69580 cri.go:89] found id: ""
	I0501 03:43:48.791264   69580 logs.go:276] 0 containers: []
	W0501 03:43:48.791276   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:48.791283   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:48.791348   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:48.834692   69580 cri.go:89] found id: ""
	I0501 03:43:48.834720   69580 logs.go:276] 0 containers: []
	W0501 03:43:48.834731   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:48.834739   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:48.834809   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:48.877383   69580 cri.go:89] found id: ""
	I0501 03:43:48.877415   69580 logs.go:276] 0 containers: []
	W0501 03:43:48.877424   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:48.877430   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:48.877479   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:48.919728   69580 cri.go:89] found id: ""
	I0501 03:43:48.919756   69580 logs.go:276] 0 containers: []
	W0501 03:43:48.919767   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:48.919775   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:48.919836   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:48.962090   69580 cri.go:89] found id: ""
	I0501 03:43:48.962122   69580 logs.go:276] 0 containers: []
	W0501 03:43:48.962137   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:48.962144   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:48.962205   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:48.998456   69580 cri.go:89] found id: ""
	I0501 03:43:48.998487   69580 logs.go:276] 0 containers: []
	W0501 03:43:48.998498   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:48.998506   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:48.998566   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:49.042591   69580 cri.go:89] found id: ""
	I0501 03:43:49.042623   69580 logs.go:276] 0 containers: []
	W0501 03:43:49.042633   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:49.042645   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:49.042661   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:49.088533   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:49.088571   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:49.145252   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:49.145288   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:49.163093   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:49.163120   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:49.240805   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:49.240831   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:49.240844   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:49.848864   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:52.347128   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:50.656790   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:52.658130   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:51.011713   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:53.510598   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:55.512900   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:51.825530   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:51.839596   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:51.839669   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:51.879493   69580 cri.go:89] found id: ""
	I0501 03:43:51.879516   69580 logs.go:276] 0 containers: []
	W0501 03:43:51.879524   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:51.879530   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:51.879585   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:51.921577   69580 cri.go:89] found id: ""
	I0501 03:43:51.921608   69580 logs.go:276] 0 containers: []
	W0501 03:43:51.921620   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:51.921627   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:51.921693   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:51.961000   69580 cri.go:89] found id: ""
	I0501 03:43:51.961028   69580 logs.go:276] 0 containers: []
	W0501 03:43:51.961037   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:51.961043   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:51.961103   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:52.006087   69580 cri.go:89] found id: ""
	I0501 03:43:52.006118   69580 logs.go:276] 0 containers: []
	W0501 03:43:52.006129   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:52.006137   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:52.006201   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:52.047196   69580 cri.go:89] found id: ""
	I0501 03:43:52.047228   69580 logs.go:276] 0 containers: []
	W0501 03:43:52.047239   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:52.047250   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:52.047319   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:52.086380   69580 cri.go:89] found id: ""
	I0501 03:43:52.086423   69580 logs.go:276] 0 containers: []
	W0501 03:43:52.086434   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:52.086442   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:52.086499   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:52.128824   69580 cri.go:89] found id: ""
	I0501 03:43:52.128851   69580 logs.go:276] 0 containers: []
	W0501 03:43:52.128861   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:52.128868   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:52.128933   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:52.168743   69580 cri.go:89] found id: ""
	I0501 03:43:52.168769   69580 logs.go:276] 0 containers: []
	W0501 03:43:52.168776   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:52.168788   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:52.168802   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:52.184391   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:52.184419   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:52.268330   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:52.268368   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:52.268386   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:52.350556   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:52.350586   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:52.395930   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:52.395967   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:54.952879   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:54.968440   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:54.968517   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:55.008027   69580 cri.go:89] found id: ""
	I0501 03:43:55.008056   69580 logs.go:276] 0 containers: []
	W0501 03:43:55.008067   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:55.008074   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:55.008137   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:55.048848   69580 cri.go:89] found id: ""
	I0501 03:43:55.048869   69580 logs.go:276] 0 containers: []
	W0501 03:43:55.048877   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:55.048882   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:55.048931   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:55.085886   69580 cri.go:89] found id: ""
	I0501 03:43:55.085910   69580 logs.go:276] 0 containers: []
	W0501 03:43:55.085919   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:55.085924   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:55.085971   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:55.119542   69580 cri.go:89] found id: ""
	I0501 03:43:55.119567   69580 logs.go:276] 0 containers: []
	W0501 03:43:55.119574   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:55.119580   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:55.119636   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:55.158327   69580 cri.go:89] found id: ""
	I0501 03:43:55.158357   69580 logs.go:276] 0 containers: []
	W0501 03:43:55.158367   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:55.158374   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:55.158449   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:55.200061   69580 cri.go:89] found id: ""
	I0501 03:43:55.200085   69580 logs.go:276] 0 containers: []
	W0501 03:43:55.200093   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:55.200100   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:55.200146   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:55.239446   69580 cri.go:89] found id: ""
	I0501 03:43:55.239476   69580 logs.go:276] 0 containers: []
	W0501 03:43:55.239487   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:55.239493   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:55.239557   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:55.275593   69580 cri.go:89] found id: ""
	I0501 03:43:55.275623   69580 logs.go:276] 0 containers: []
	W0501 03:43:55.275635   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:55.275646   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:55.275662   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:55.356701   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:55.356724   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:55.356740   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:55.437445   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:55.437483   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:55.489024   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:55.489051   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:55.548083   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:55.548114   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:54.845529   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:57.348771   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:55.158591   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:57.657361   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:58.010099   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:00.010511   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:58.067063   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:58.080485   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:58.080539   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:58.121459   69580 cri.go:89] found id: ""
	I0501 03:43:58.121488   69580 logs.go:276] 0 containers: []
	W0501 03:43:58.121498   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:58.121505   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:58.121562   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:58.161445   69580 cri.go:89] found id: ""
	I0501 03:43:58.161479   69580 logs.go:276] 0 containers: []
	W0501 03:43:58.161489   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:58.161499   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:58.161560   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:58.203216   69580 cri.go:89] found id: ""
	I0501 03:43:58.203238   69580 logs.go:276] 0 containers: []
	W0501 03:43:58.203246   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:58.203251   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:58.203297   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:58.239496   69580 cri.go:89] found id: ""
	I0501 03:43:58.239526   69580 logs.go:276] 0 containers: []
	W0501 03:43:58.239538   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:58.239546   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:58.239605   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:58.280331   69580 cri.go:89] found id: ""
	I0501 03:43:58.280359   69580 logs.go:276] 0 containers: []
	W0501 03:43:58.280370   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:58.280378   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:58.280438   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:58.318604   69580 cri.go:89] found id: ""
	I0501 03:43:58.318634   69580 logs.go:276] 0 containers: []
	W0501 03:43:58.318646   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:58.318653   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:58.318712   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:58.359360   69580 cri.go:89] found id: ""
	I0501 03:43:58.359383   69580 logs.go:276] 0 containers: []
	W0501 03:43:58.359392   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:58.359398   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:58.359446   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:58.401172   69580 cri.go:89] found id: ""
	I0501 03:43:58.401202   69580 logs.go:276] 0 containers: []
	W0501 03:43:58.401211   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:58.401220   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:58.401232   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:58.416877   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:58.416907   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:58.489812   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:58.489835   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:58.489849   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:58.574971   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:58.575004   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:58.619526   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:58.619557   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:01.173759   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:01.187838   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:01.187922   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:01.227322   69580 cri.go:89] found id: ""
	I0501 03:44:01.227355   69580 logs.go:276] 0 containers: []
	W0501 03:44:01.227366   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:01.227372   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:01.227432   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:01.268418   69580 cri.go:89] found id: ""
	I0501 03:44:01.268453   69580 logs.go:276] 0 containers: []
	W0501 03:44:01.268465   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:01.268472   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:01.268530   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:01.314641   69580 cri.go:89] found id: ""
	I0501 03:44:01.314667   69580 logs.go:276] 0 containers: []
	W0501 03:44:01.314675   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:01.314681   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:01.314739   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:01.361237   69580 cri.go:89] found id: ""
	I0501 03:44:01.361272   69580 logs.go:276] 0 containers: []
	W0501 03:44:01.361288   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:01.361294   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:01.361348   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:01.400650   69580 cri.go:89] found id: ""
	I0501 03:44:01.400676   69580 logs.go:276] 0 containers: []
	W0501 03:44:01.400684   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:01.400690   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:01.400739   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:01.447998   69580 cri.go:89] found id: ""
	I0501 03:44:01.448023   69580 logs.go:276] 0 containers: []
	W0501 03:44:01.448032   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:01.448040   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:01.448101   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:59.845726   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:02.345826   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:00.155851   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:02.155998   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:02.010828   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:04.014801   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:01.492172   69580 cri.go:89] found id: ""
	I0501 03:44:01.492199   69580 logs.go:276] 0 containers: []
	W0501 03:44:01.492207   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:01.492213   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:01.492265   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:01.538589   69580 cri.go:89] found id: ""
	I0501 03:44:01.538617   69580 logs.go:276] 0 containers: []
	W0501 03:44:01.538628   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:01.538638   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:01.538653   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:01.592914   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:01.592952   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:01.611706   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:01.611754   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:01.693469   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:01.693488   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:01.693501   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:01.774433   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:01.774470   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:04.321593   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:04.335428   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:04.335497   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:04.378479   69580 cri.go:89] found id: ""
	I0501 03:44:04.378505   69580 logs.go:276] 0 containers: []
	W0501 03:44:04.378516   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:04.378525   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:04.378585   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:04.420025   69580 cri.go:89] found id: ""
	I0501 03:44:04.420050   69580 logs.go:276] 0 containers: []
	W0501 03:44:04.420059   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:04.420065   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:04.420113   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:04.464009   69580 cri.go:89] found id: ""
	I0501 03:44:04.464039   69580 logs.go:276] 0 containers: []
	W0501 03:44:04.464047   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:04.464052   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:04.464113   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:04.502039   69580 cri.go:89] found id: ""
	I0501 03:44:04.502069   69580 logs.go:276] 0 containers: []
	W0501 03:44:04.502081   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:04.502088   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:04.502150   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:04.544566   69580 cri.go:89] found id: ""
	I0501 03:44:04.544593   69580 logs.go:276] 0 containers: []
	W0501 03:44:04.544605   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:04.544614   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:04.544672   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:04.584067   69580 cri.go:89] found id: ""
	I0501 03:44:04.584095   69580 logs.go:276] 0 containers: []
	W0501 03:44:04.584104   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:04.584112   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:04.584174   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:04.625165   69580 cri.go:89] found id: ""
	I0501 03:44:04.625197   69580 logs.go:276] 0 containers: []
	W0501 03:44:04.625210   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:04.625219   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:04.625292   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:04.667796   69580 cri.go:89] found id: ""
	I0501 03:44:04.667830   69580 logs.go:276] 0 containers: []
	W0501 03:44:04.667839   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:04.667850   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:04.667868   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:04.722269   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:04.722303   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:04.738232   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:04.738265   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:04.821551   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:04.821578   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:04.821595   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:04.902575   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:04.902618   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:04.346197   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:06.845552   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:04.157333   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:06.157366   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:08.656837   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:06.513484   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:09.012004   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:07.449793   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:07.466348   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:07.466450   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:07.510325   69580 cri.go:89] found id: ""
	I0501 03:44:07.510352   69580 logs.go:276] 0 containers: []
	W0501 03:44:07.510363   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:07.510371   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:07.510450   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:07.550722   69580 cri.go:89] found id: ""
	I0501 03:44:07.550748   69580 logs.go:276] 0 containers: []
	W0501 03:44:07.550756   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:07.550762   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:07.550810   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:07.589592   69580 cri.go:89] found id: ""
	I0501 03:44:07.589617   69580 logs.go:276] 0 containers: []
	W0501 03:44:07.589625   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:07.589630   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:07.589678   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:07.631628   69580 cri.go:89] found id: ""
	I0501 03:44:07.631655   69580 logs.go:276] 0 containers: []
	W0501 03:44:07.631662   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:07.631668   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:07.631726   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:07.674709   69580 cri.go:89] found id: ""
	I0501 03:44:07.674743   69580 logs.go:276] 0 containers: []
	W0501 03:44:07.674753   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:07.674760   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:07.674811   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:07.714700   69580 cri.go:89] found id: ""
	I0501 03:44:07.714767   69580 logs.go:276] 0 containers: []
	W0501 03:44:07.714788   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:07.714797   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:07.714856   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:07.753440   69580 cri.go:89] found id: ""
	I0501 03:44:07.753467   69580 logs.go:276] 0 containers: []
	W0501 03:44:07.753478   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:07.753485   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:07.753549   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:07.791579   69580 cri.go:89] found id: ""
	I0501 03:44:07.791606   69580 logs.go:276] 0 containers: []
	W0501 03:44:07.791617   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:07.791628   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:07.791644   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:07.845568   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:07.845606   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:07.861861   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:07.861885   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:07.941719   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:07.941743   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:07.941757   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:08.022684   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:08.022720   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:10.575417   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:10.593408   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:10.593468   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:10.641322   69580 cri.go:89] found id: ""
	I0501 03:44:10.641357   69580 logs.go:276] 0 containers: []
	W0501 03:44:10.641370   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:10.641378   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:10.641442   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:10.686330   69580 cri.go:89] found id: ""
	I0501 03:44:10.686358   69580 logs.go:276] 0 containers: []
	W0501 03:44:10.686368   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:10.686377   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:10.686458   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:10.734414   69580 cri.go:89] found id: ""
	I0501 03:44:10.734444   69580 logs.go:276] 0 containers: []
	W0501 03:44:10.734456   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:10.734463   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:10.734527   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:10.776063   69580 cri.go:89] found id: ""
	I0501 03:44:10.776095   69580 logs.go:276] 0 containers: []
	W0501 03:44:10.776106   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:10.776113   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:10.776176   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:10.819035   69580 cri.go:89] found id: ""
	I0501 03:44:10.819065   69580 logs.go:276] 0 containers: []
	W0501 03:44:10.819076   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:10.819084   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:10.819150   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:10.868912   69580 cri.go:89] found id: ""
	I0501 03:44:10.868938   69580 logs.go:276] 0 containers: []
	W0501 03:44:10.868946   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:10.868952   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:10.869000   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:10.910517   69580 cri.go:89] found id: ""
	I0501 03:44:10.910549   69580 logs.go:276] 0 containers: []
	W0501 03:44:10.910572   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:10.910581   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:10.910678   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:10.949267   69580 cri.go:89] found id: ""
	I0501 03:44:10.949297   69580 logs.go:276] 0 containers: []
	W0501 03:44:10.949306   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:10.949314   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:10.949327   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:11.004731   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:11.004779   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:11.022146   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:11.022174   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:11.108992   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:11.109020   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:11.109035   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:11.192571   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:11.192605   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
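Each gathering pass above pulls the last 400 lines of the kubelet and CRI-O journald units plus a level-filtered dmesg before retrying `kubectl describe nodes`. A small sketch of the same host-log collection, assuming journalctl and dmesg exist on the node and the commands run under sudo exactly as logged:

```go
package main

import (
	"fmt"
	"os/exec"
)

// gather runs the same host-side log commands the report shows (journalctl
// for the kubelet and crio units, plus a filtered dmesg) and reports how much
// output each produced. The command lines are copied from the log above.
func gather() {
	cmds := map[string][]string{
		"kubelet": {"sudo", "journalctl", "-u", "kubelet", "-n", "400"},
		"CRI-O":   {"sudo", "journalctl", "-u", "crio", "-n", "400"},
		"dmesg":   {"/bin/bash", "-c", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
	}
	for name, args := range cmds {
		out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
		if err != nil {
			fmt.Printf("== %s: error: %v ==\n", name, err)
		}
		fmt.Printf("== %s (%d bytes collected) ==\n", name, len(out))
	}
}

func main() { gather() }
```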
	I0501 03:44:08.846431   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:11.346295   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:10.657938   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:13.156112   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:11.012040   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:13.512166   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:15.512232   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:13.739336   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:13.758622   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:13.758721   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:13.805395   69580 cri.go:89] found id: ""
	I0501 03:44:13.805423   69580 logs.go:276] 0 containers: []
	W0501 03:44:13.805434   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:13.805442   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:13.805523   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:13.847372   69580 cri.go:89] found id: ""
	I0501 03:44:13.847400   69580 logs.go:276] 0 containers: []
	W0501 03:44:13.847409   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:13.847417   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:13.847474   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:13.891842   69580 cri.go:89] found id: ""
	I0501 03:44:13.891867   69580 logs.go:276] 0 containers: []
	W0501 03:44:13.891874   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:13.891880   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:13.891935   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:13.933382   69580 cri.go:89] found id: ""
	I0501 03:44:13.933411   69580 logs.go:276] 0 containers: []
	W0501 03:44:13.933422   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:13.933430   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:13.933490   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:13.973955   69580 cri.go:89] found id: ""
	I0501 03:44:13.973980   69580 logs.go:276] 0 containers: []
	W0501 03:44:13.973991   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:13.974000   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:13.974053   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:14.015202   69580 cri.go:89] found id: ""
	I0501 03:44:14.015226   69580 logs.go:276] 0 containers: []
	W0501 03:44:14.015234   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:14.015240   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:14.015287   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:14.057441   69580 cri.go:89] found id: ""
	I0501 03:44:14.057471   69580 logs.go:276] 0 containers: []
	W0501 03:44:14.057483   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:14.057491   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:14.057551   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:14.099932   69580 cri.go:89] found id: ""
	I0501 03:44:14.099961   69580 logs.go:276] 0 containers: []
	W0501 03:44:14.099972   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:14.099983   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:14.099996   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:14.160386   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:14.160418   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:14.176880   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:14.176908   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:14.272137   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:14.272155   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:14.272168   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:14.366523   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:14.366571   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:13.349770   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:15.351345   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:17.845182   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:15.156569   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:17.157994   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:17.512836   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:20.012034   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:16.914394   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:16.930976   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:16.931038   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:16.977265   69580 cri.go:89] found id: ""
	I0501 03:44:16.977294   69580 logs.go:276] 0 containers: []
	W0501 03:44:16.977303   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:16.977309   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:16.977363   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:17.015656   69580 cri.go:89] found id: ""
	I0501 03:44:17.015686   69580 logs.go:276] 0 containers: []
	W0501 03:44:17.015694   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:17.015700   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:17.015768   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:17.056079   69580 cri.go:89] found id: ""
	I0501 03:44:17.056111   69580 logs.go:276] 0 containers: []
	W0501 03:44:17.056121   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:17.056129   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:17.056188   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:17.099504   69580 cri.go:89] found id: ""
	I0501 03:44:17.099528   69580 logs.go:276] 0 containers: []
	W0501 03:44:17.099536   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:17.099542   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:17.099606   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:17.141371   69580 cri.go:89] found id: ""
	I0501 03:44:17.141401   69580 logs.go:276] 0 containers: []
	W0501 03:44:17.141410   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:17.141417   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:17.141484   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:17.184143   69580 cri.go:89] found id: ""
	I0501 03:44:17.184167   69580 logs.go:276] 0 containers: []
	W0501 03:44:17.184179   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:17.184193   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:17.184246   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:17.224012   69580 cri.go:89] found id: ""
	I0501 03:44:17.224049   69580 logs.go:276] 0 containers: []
	W0501 03:44:17.224061   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:17.224069   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:17.224136   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:17.268185   69580 cri.go:89] found id: ""
	I0501 03:44:17.268216   69580 logs.go:276] 0 containers: []
	W0501 03:44:17.268224   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:17.268233   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:17.268248   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:17.351342   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:17.351392   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:17.398658   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:17.398689   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:17.452476   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:17.452517   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:17.468734   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:17.468771   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:17.558971   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
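The repeated "connection to the server localhost:8443 was refused" errors follow directly from the empty crictl listings: with no kube-apiserver container running, nothing listens on the apiserver port, so every `kubectl describe nodes` attempt fails the same way. A quick, hedged way to confirm that from the node is a plain TCP dial against the address taken from the error text:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// A TCP dial against the apiserver address quoted in the errors above.
// "connection refused" here corresponds to the repeated kubectl failures:
// nothing is listening because no kube-apiserver container is running.
func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 3*time.Second)
	if err != nil {
		fmt.Println("apiserver port closed:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8443")
}
```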
	I0501 03:44:20.059342   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:20.075707   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:20.075791   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:20.114436   69580 cri.go:89] found id: ""
	I0501 03:44:20.114472   69580 logs.go:276] 0 containers: []
	W0501 03:44:20.114486   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:20.114495   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:20.114562   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:20.155607   69580 cri.go:89] found id: ""
	I0501 03:44:20.155638   69580 logs.go:276] 0 containers: []
	W0501 03:44:20.155649   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:20.155657   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:20.155715   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:20.198188   69580 cri.go:89] found id: ""
	I0501 03:44:20.198218   69580 logs.go:276] 0 containers: []
	W0501 03:44:20.198227   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:20.198234   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:20.198291   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:20.237183   69580 cri.go:89] found id: ""
	I0501 03:44:20.237213   69580 logs.go:276] 0 containers: []
	W0501 03:44:20.237223   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:20.237232   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:20.237286   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:20.279289   69580 cri.go:89] found id: ""
	I0501 03:44:20.279320   69580 logs.go:276] 0 containers: []
	W0501 03:44:20.279332   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:20.279341   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:20.279409   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:20.334066   69580 cri.go:89] found id: ""
	I0501 03:44:20.334091   69580 logs.go:276] 0 containers: []
	W0501 03:44:20.334112   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:20.334121   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:20.334181   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:20.385740   69580 cri.go:89] found id: ""
	I0501 03:44:20.385775   69580 logs.go:276] 0 containers: []
	W0501 03:44:20.385785   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:20.385796   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:20.385860   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:20.425151   69580 cri.go:89] found id: ""
	I0501 03:44:20.425176   69580 logs.go:276] 0 containers: []
	W0501 03:44:20.425183   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:20.425193   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:20.425214   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:20.472563   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:20.472605   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:20.526589   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:20.526626   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:20.541978   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:20.542013   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:20.619513   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:20.619540   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:20.619555   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:19.846208   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:22.345166   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:19.658986   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:22.156821   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:23.159267   68864 pod_ready.go:81] duration metric: took 4m0.009511824s for pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace to be "Ready" ...
	E0501 03:44:23.159296   68864 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0501 03:44:23.159308   68864 pod_ready.go:38] duration metric: took 4m7.423794373s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
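The 4m0s duration and the "context deadline exceeded" error show that the extra wait for system-critical pods is bounded by a context deadline: once it fires, the runner stops polling and moves on to the apiserver checks that follow. A minimal sketch of that bounded-wait pattern, with a deliberately short demo timeout; this is the generic Go idiom, not minikube's exact implementation:

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// waitFor polls cond until it holds or the context's deadline expires, in
// which case ctx.Err() is context.DeadlineExceeded, i.e. the
// "context deadline exceeded" text seen in the log above.
func waitFor(cond func() bool, timeout time.Duration) error {
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()
	ticker := time.NewTicker(2 * time.Second)
	defer ticker.Stop()
	for {
		if cond() {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
		}
	}
}

func main() {
	// A condition that never holds, with a short timeout so the demo returns quickly
	// (the log's wait used a 4-minute budget).
	err := waitFor(func() bool { return false }, 5*time.Second)
	if errors.Is(err, context.DeadlineExceeded) {
		fmt.Println("waitPodCondition:", err)
	}
}
```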
	I0501 03:44:23.159327   68864 api_server.go:52] waiting for apiserver process to appear ...
	I0501 03:44:23.159362   68864 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:23.159422   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:23.225563   68864 cri.go:89] found id: "a96815c49ac45b34c359d7171b30be5c25cee80fae08d947fc004d479693df00"
	I0501 03:44:23.225590   68864 cri.go:89] found id: ""
	I0501 03:44:23.225607   68864 logs.go:276] 1 containers: [a96815c49ac45b34c359d7171b30be5c25cee80fae08d947fc004d479693df00]
	I0501 03:44:23.225663   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:23.231542   68864 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:23.231598   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:23.290847   68864 cri.go:89] found id: "d109948ffbbddd2e38484d83d2260690afce3c51e16abff0fe8713e844bfdcb3"
	I0501 03:44:23.290871   68864 cri.go:89] found id: ""
	I0501 03:44:23.290878   68864 logs.go:276] 1 containers: [d109948ffbbddd2e38484d83d2260690afce3c51e16abff0fe8713e844bfdcb3]
	I0501 03:44:23.290926   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:23.295697   68864 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:23.295755   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:23.348625   68864 cri.go:89] found id: "e3c74de489af3d78a4b0cf620ed16e1fba7c9ce1c7021a15c5b4bd241b7a934a"
	I0501 03:44:23.348652   68864 cri.go:89] found id: ""
	I0501 03:44:23.348661   68864 logs.go:276] 1 containers: [e3c74de489af3d78a4b0cf620ed16e1fba7c9ce1c7021a15c5b4bd241b7a934a]
	I0501 03:44:23.348717   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:23.355801   68864 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:23.355896   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:23.409428   68864 cri.go:89] found id: "1813f35574f4f0398e7d482fe77c5d7f8490726666a277035bda0a5cff80af6e"
	I0501 03:44:23.409461   68864 cri.go:89] found id: ""
	I0501 03:44:23.409471   68864 logs.go:276] 1 containers: [1813f35574f4f0398e7d482fe77c5d7f8490726666a277035bda0a5cff80af6e]
	I0501 03:44:23.409530   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:23.416480   68864 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:23.416560   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:23.466642   68864 cri.go:89] found id: "94afdb03c382269209154b2e73edbf200662e89960c248ba775a0ef2b8fbe6b1"
	I0501 03:44:23.466672   68864 cri.go:89] found id: ""
	I0501 03:44:23.466681   68864 logs.go:276] 1 containers: [94afdb03c382269209154b2e73edbf200662e89960c248ba775a0ef2b8fbe6b1]
	I0501 03:44:23.466739   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:23.472831   68864 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:23.472906   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:23.524815   68864 cri.go:89] found id: "7e7158f7ff3922e97ba5ee97108acc1d0ba0350dff2f9aa56c2f71519108791c"
	I0501 03:44:23.524841   68864 cri.go:89] found id: ""
	I0501 03:44:23.524850   68864 logs.go:276] 1 containers: [7e7158f7ff3922e97ba5ee97108acc1d0ba0350dff2f9aa56c2f71519108791c]
	I0501 03:44:23.524902   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:23.532092   68864 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:23.532161   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:23.577262   68864 cri.go:89] found id: ""
	I0501 03:44:23.577292   68864 logs.go:276] 0 containers: []
	W0501 03:44:23.577305   68864 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:23.577312   68864 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0501 03:44:23.577374   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0501 03:44:23.623597   68864 cri.go:89] found id: "f9a8d2f0f9453969b410fb7f880f6cf4c7e6e2f3c17a687583906efe4181dd17"
	I0501 03:44:23.623626   68864 cri.go:89] found id: "aaae36261c5ce1accf3bfb5993be5a256a037a640d5f6f30de8384900dda06b2"
	I0501 03:44:23.623632   68864 cri.go:89] found id: ""
	I0501 03:44:23.623640   68864 logs.go:276] 2 containers: [f9a8d2f0f9453969b410fb7f880f6cf4c7e6e2f3c17a687583906efe4181dd17 aaae36261c5ce1accf3bfb5993be5a256a037a640d5f6f30de8384900dda06b2]
	I0501 03:44:23.623702   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:23.630189   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:23.635673   68864 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:23.635694   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:22.012084   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:24.511736   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:23.203031   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:23.219964   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:23.220043   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:23.264287   69580 cri.go:89] found id: ""
	I0501 03:44:23.264315   69580 logs.go:276] 0 containers: []
	W0501 03:44:23.264323   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:23.264328   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:23.264395   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:23.310337   69580 cri.go:89] found id: ""
	I0501 03:44:23.310366   69580 logs.go:276] 0 containers: []
	W0501 03:44:23.310375   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:23.310383   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:23.310461   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:23.364550   69580 cri.go:89] found id: ""
	I0501 03:44:23.364577   69580 logs.go:276] 0 containers: []
	W0501 03:44:23.364588   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:23.364596   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:23.364676   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:23.412620   69580 cri.go:89] found id: ""
	I0501 03:44:23.412647   69580 logs.go:276] 0 containers: []
	W0501 03:44:23.412657   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:23.412665   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:23.412726   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:23.461447   69580 cri.go:89] found id: ""
	I0501 03:44:23.461477   69580 logs.go:276] 0 containers: []
	W0501 03:44:23.461488   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:23.461496   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:23.461558   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:23.514868   69580 cri.go:89] found id: ""
	I0501 03:44:23.514896   69580 logs.go:276] 0 containers: []
	W0501 03:44:23.514915   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:23.514924   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:23.514984   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:23.559171   69580 cri.go:89] found id: ""
	I0501 03:44:23.559200   69580 logs.go:276] 0 containers: []
	W0501 03:44:23.559210   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:23.559218   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:23.559284   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:23.601713   69580 cri.go:89] found id: ""
	I0501 03:44:23.601740   69580 logs.go:276] 0 containers: []
	W0501 03:44:23.601749   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:23.601760   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:23.601772   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:23.656147   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:23.656187   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:23.673507   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:23.673545   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:23.771824   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:23.771846   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:23.771861   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:23.861128   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:23.861161   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:26.406507   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:26.421836   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:26.421894   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:26.462758   69580 cri.go:89] found id: ""
	I0501 03:44:26.462785   69580 logs.go:276] 0 containers: []
	W0501 03:44:26.462796   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:26.462804   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:26.462860   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:24.346534   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:26.847370   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:24.220047   68864 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:24.220087   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:24.279596   68864 logs.go:123] Gathering logs for kube-apiserver [a96815c49ac45b34c359d7171b30be5c25cee80fae08d947fc004d479693df00] ...
	I0501 03:44:24.279633   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a96815c49ac45b34c359d7171b30be5c25cee80fae08d947fc004d479693df00"
	I0501 03:44:24.336092   68864 logs.go:123] Gathering logs for etcd [d109948ffbbddd2e38484d83d2260690afce3c51e16abff0fe8713e844bfdcb3] ...
	I0501 03:44:24.336128   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d109948ffbbddd2e38484d83d2260690afce3c51e16abff0fe8713e844bfdcb3"
	I0501 03:44:24.396117   68864 logs.go:123] Gathering logs for storage-provisioner [f9a8d2f0f9453969b410fb7f880f6cf4c7e6e2f3c17a687583906efe4181dd17] ...
	I0501 03:44:24.396145   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9a8d2f0f9453969b410fb7f880f6cf4c7e6e2f3c17a687583906efe4181dd17"
	I0501 03:44:24.443608   68864 logs.go:123] Gathering logs for storage-provisioner [aaae36261c5ce1accf3bfb5993be5a256a037a640d5f6f30de8384900dda06b2] ...
	I0501 03:44:24.443644   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aaae36261c5ce1accf3bfb5993be5a256a037a640d5f6f30de8384900dda06b2"
	I0501 03:44:24.499533   68864 logs.go:123] Gathering logs for kube-controller-manager [7e7158f7ff3922e97ba5ee97108acc1d0ba0350dff2f9aa56c2f71519108791c] ...
	I0501 03:44:24.499560   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e7158f7ff3922e97ba5ee97108acc1d0ba0350dff2f9aa56c2f71519108791c"
	I0501 03:44:24.562990   68864 logs.go:123] Gathering logs for container status ...
	I0501 03:44:24.563028   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:24.622630   68864 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:24.622671   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:24.641106   68864 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:24.641145   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0501 03:44:24.781170   68864 logs.go:123] Gathering logs for coredns [e3c74de489af3d78a4b0cf620ed16e1fba7c9ce1c7021a15c5b4bd241b7a934a] ...
	I0501 03:44:24.781203   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3c74de489af3d78a4b0cf620ed16e1fba7c9ce1c7021a15c5b4bd241b7a934a"
	I0501 03:44:24.824616   68864 logs.go:123] Gathering logs for kube-scheduler [1813f35574f4f0398e7d482fe77c5d7f8490726666a277035bda0a5cff80af6e] ...
	I0501 03:44:24.824643   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1813f35574f4f0398e7d482fe77c5d7f8490726666a277035bda0a5cff80af6e"
	I0501 03:44:24.871956   68864 logs.go:123] Gathering logs for kube-proxy [94afdb03c382269209154b2e73edbf200662e89960c248ba775a0ef2b8fbe6b1] ...
	I0501 03:44:24.871992   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94afdb03c382269209154b2e73edbf200662e89960c248ba775a0ef2b8fbe6b1"
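For this profile (process 68864) the control-plane containers do exist, so after resolving each component's container ID with crictl the runner tails its last 400 log lines. A hedged sketch of the same two-step lookup by component name, using the crictl path shown in the log:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// tailComponentLogs resolves a component's container ID with crictl and then
// tails its last 400 log lines, the same two-step pattern the log shows.
// The crictl path and sudo usage match the commands logged above.
func tailComponentLogs(name string) (string, error) {
	idOut, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return "", err
	}
	id := strings.TrimSpace(string(idOut))
	if id == "" {
		return "", fmt.Errorf("no container was found matching %q", name)
	}
	// Several IDs may come back (e.g. two storage-provisioner containers); take the first.
	id = strings.Fields(id)[0]
	logs, err := exec.Command("/bin/bash", "-c",
		fmt.Sprintf("sudo /usr/bin/crictl logs --tail 400 %s", id)).CombinedOutput()
	return string(logs), err
}

func main() {
	out, err := tailComponentLogs("kube-apiserver")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Print(out)
}
```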
	I0501 03:44:27.424582   68864 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:27.447490   68864 api_server.go:72] duration metric: took 4m19.445111196s to wait for apiserver process to appear ...
	I0501 03:44:27.447522   68864 api_server.go:88] waiting for apiserver healthz status ...
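Once the kube-apiserver process is found (after 4m19s here), the wait switches from the process itself to its healthz endpoint. A hedged sketch of such a health probe; the address is a placeholder reusing the localhost:8443 port from the errors earlier in the log (this profile's actual apiserver address is not shown in this excerpt), and skipping TLS verification is an illustration-only shortcut, not minikube's client setup:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// checkHealthz performs a GET against the apiserver's /healthz endpoint and
// treats HTTP 200 as healthy. The URL is a placeholder; InsecureSkipVerify is
// used only to keep the sketch self-contained without client certificates.
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %s", resp.Status)
	}
	return nil
}

func main() {
	if err := checkHealthz("https://localhost:8443/healthz"); err != nil {
		fmt.Println("apiserver not healthy yet:", err)
		return
	}
	fmt.Println("apiserver healthz OK")
}
```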
	I0501 03:44:27.447555   68864 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:27.447601   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:27.494412   68864 cri.go:89] found id: "a96815c49ac45b34c359d7171b30be5c25cee80fae08d947fc004d479693df00"
	I0501 03:44:27.494437   68864 cri.go:89] found id: ""
	I0501 03:44:27.494445   68864 logs.go:276] 1 containers: [a96815c49ac45b34c359d7171b30be5c25cee80fae08d947fc004d479693df00]
	I0501 03:44:27.494490   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:27.503782   68864 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:27.503853   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:27.550991   68864 cri.go:89] found id: "d109948ffbbddd2e38484d83d2260690afce3c51e16abff0fe8713e844bfdcb3"
	I0501 03:44:27.551018   68864 cri.go:89] found id: ""
	I0501 03:44:27.551026   68864 logs.go:276] 1 containers: [d109948ffbbddd2e38484d83d2260690afce3c51e16abff0fe8713e844bfdcb3]
	I0501 03:44:27.551073   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:27.556919   68864 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:27.556983   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:27.606005   68864 cri.go:89] found id: "e3c74de489af3d78a4b0cf620ed16e1fba7c9ce1c7021a15c5b4bd241b7a934a"
	I0501 03:44:27.606033   68864 cri.go:89] found id: ""
	I0501 03:44:27.606042   68864 logs.go:276] 1 containers: [e3c74de489af3d78a4b0cf620ed16e1fba7c9ce1c7021a15c5b4bd241b7a934a]
	I0501 03:44:27.606100   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:27.611639   68864 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:27.611706   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:27.661151   68864 cri.go:89] found id: "1813f35574f4f0398e7d482fe77c5d7f8490726666a277035bda0a5cff80af6e"
	I0501 03:44:27.661172   68864 cri.go:89] found id: ""
	I0501 03:44:27.661179   68864 logs.go:276] 1 containers: [1813f35574f4f0398e7d482fe77c5d7f8490726666a277035bda0a5cff80af6e]
	I0501 03:44:27.661278   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:27.666443   68864 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:27.666514   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:27.712387   68864 cri.go:89] found id: "94afdb03c382269209154b2e73edbf200662e89960c248ba775a0ef2b8fbe6b1"
	I0501 03:44:27.712416   68864 cri.go:89] found id: ""
	I0501 03:44:27.712424   68864 logs.go:276] 1 containers: [94afdb03c382269209154b2e73edbf200662e89960c248ba775a0ef2b8fbe6b1]
	I0501 03:44:27.712480   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:27.717280   68864 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:27.717342   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:27.767124   68864 cri.go:89] found id: "7e7158f7ff3922e97ba5ee97108acc1d0ba0350dff2f9aa56c2f71519108791c"
	I0501 03:44:27.767154   68864 cri.go:89] found id: ""
	I0501 03:44:27.767163   68864 logs.go:276] 1 containers: [7e7158f7ff3922e97ba5ee97108acc1d0ba0350dff2f9aa56c2f71519108791c]
	I0501 03:44:27.767215   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:27.773112   68864 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:27.773183   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:27.829966   68864 cri.go:89] found id: ""
	I0501 03:44:27.829991   68864 logs.go:276] 0 containers: []
	W0501 03:44:27.829999   68864 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:27.830005   68864 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0501 03:44:27.830056   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0501 03:44:27.873391   68864 cri.go:89] found id: "f9a8d2f0f9453969b410fb7f880f6cf4c7e6e2f3c17a687583906efe4181dd17"
	I0501 03:44:27.873415   68864 cri.go:89] found id: "aaae36261c5ce1accf3bfb5993be5a256a037a640d5f6f30de8384900dda06b2"
	I0501 03:44:27.873419   68864 cri.go:89] found id: ""
	I0501 03:44:27.873426   68864 logs.go:276] 2 containers: [f9a8d2f0f9453969b410fb7f880f6cf4c7e6e2f3c17a687583906efe4181dd17 aaae36261c5ce1accf3bfb5993be5a256a037a640d5f6f30de8384900dda06b2]
	I0501 03:44:27.873473   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:27.878537   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:27.883518   68864 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:27.883543   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0501 03:44:28.012337   68864 logs.go:123] Gathering logs for kube-apiserver [a96815c49ac45b34c359d7171b30be5c25cee80fae08d947fc004d479693df00] ...
	I0501 03:44:28.012377   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a96815c49ac45b34c359d7171b30be5c25cee80fae08d947fc004d479693df00"
	I0501 03:44:28.063686   68864 logs.go:123] Gathering logs for etcd [d109948ffbbddd2e38484d83d2260690afce3c51e16abff0fe8713e844bfdcb3] ...
	I0501 03:44:28.063715   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d109948ffbbddd2e38484d83d2260690afce3c51e16abff0fe8713e844bfdcb3"
	I0501 03:44:28.116507   68864 logs.go:123] Gathering logs for storage-provisioner [aaae36261c5ce1accf3bfb5993be5a256a037a640d5f6f30de8384900dda06b2] ...
	I0501 03:44:28.116535   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aaae36261c5ce1accf3bfb5993be5a256a037a640d5f6f30de8384900dda06b2"
	I0501 03:44:28.165593   68864 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:28.165636   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:28.595278   68864 logs.go:123] Gathering logs for container status ...
	I0501 03:44:28.595333   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:28.645790   68864 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:28.645836   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:28.662952   68864 logs.go:123] Gathering logs for coredns [e3c74de489af3d78a4b0cf620ed16e1fba7c9ce1c7021a15c5b4bd241b7a934a] ...
	I0501 03:44:28.662984   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3c74de489af3d78a4b0cf620ed16e1fba7c9ce1c7021a15c5b4bd241b7a934a"
	I0501 03:44:28.710273   68864 logs.go:123] Gathering logs for kube-scheduler [1813f35574f4f0398e7d482fe77c5d7f8490726666a277035bda0a5cff80af6e] ...
	I0501 03:44:28.710302   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1813f35574f4f0398e7d482fe77c5d7f8490726666a277035bda0a5cff80af6e"
	I0501 03:44:28.761838   68864 logs.go:123] Gathering logs for kube-proxy [94afdb03c382269209154b2e73edbf200662e89960c248ba775a0ef2b8fbe6b1] ...
	I0501 03:44:28.761872   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94afdb03c382269209154b2e73edbf200662e89960c248ba775a0ef2b8fbe6b1"
	I0501 03:44:28.810775   68864 logs.go:123] Gathering logs for kube-controller-manager [7e7158f7ff3922e97ba5ee97108acc1d0ba0350dff2f9aa56c2f71519108791c] ...
	I0501 03:44:28.810808   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e7158f7ff3922e97ba5ee97108acc1d0ba0350dff2f9aa56c2f71519108791c"
	I0501 03:44:27.012119   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:29.510651   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:26.505067   69580 cri.go:89] found id: ""
	I0501 03:44:26.505098   69580 logs.go:276] 0 containers: []
	W0501 03:44:26.505110   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:26.505121   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:26.505182   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:26.544672   69580 cri.go:89] found id: ""
	I0501 03:44:26.544699   69580 logs.go:276] 0 containers: []
	W0501 03:44:26.544711   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:26.544717   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:26.544764   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:26.590579   69580 cri.go:89] found id: ""
	I0501 03:44:26.590605   69580 logs.go:276] 0 containers: []
	W0501 03:44:26.590614   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:26.590620   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:26.590670   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:26.637887   69580 cri.go:89] found id: ""
	I0501 03:44:26.637920   69580 logs.go:276] 0 containers: []
	W0501 03:44:26.637930   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:26.637939   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:26.637998   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:26.686778   69580 cri.go:89] found id: ""
	I0501 03:44:26.686807   69580 logs.go:276] 0 containers: []
	W0501 03:44:26.686815   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:26.686821   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:26.686882   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:26.729020   69580 cri.go:89] found id: ""
	I0501 03:44:26.729045   69580 logs.go:276] 0 containers: []
	W0501 03:44:26.729054   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:26.729060   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:26.729124   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:26.769022   69580 cri.go:89] found id: ""
	I0501 03:44:26.769043   69580 logs.go:276] 0 containers: []
	W0501 03:44:26.769051   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:26.769059   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:26.769073   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:26.854985   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:26.855011   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:26.855024   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:26.937031   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:26.937063   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:27.006267   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:27.006301   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:27.080503   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:27.080545   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:29.598176   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:29.614465   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:29.614523   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:29.662384   69580 cri.go:89] found id: ""
	I0501 03:44:29.662421   69580 logs.go:276] 0 containers: []
	W0501 03:44:29.662433   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:29.662439   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:29.662483   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:29.705262   69580 cri.go:89] found id: ""
	I0501 03:44:29.705286   69580 logs.go:276] 0 containers: []
	W0501 03:44:29.705295   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:29.705300   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:29.705345   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:29.752308   69580 cri.go:89] found id: ""
	I0501 03:44:29.752335   69580 logs.go:276] 0 containers: []
	W0501 03:44:29.752343   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:29.752349   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:29.752403   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:29.802702   69580 cri.go:89] found id: ""
	I0501 03:44:29.802729   69580 logs.go:276] 0 containers: []
	W0501 03:44:29.802741   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:29.802749   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:29.802814   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:29.854112   69580 cri.go:89] found id: ""
	I0501 03:44:29.854138   69580 logs.go:276] 0 containers: []
	W0501 03:44:29.854149   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:29.854157   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:29.854217   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:29.898447   69580 cri.go:89] found id: ""
	I0501 03:44:29.898470   69580 logs.go:276] 0 containers: []
	W0501 03:44:29.898480   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:29.898486   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:29.898545   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:29.938832   69580 cri.go:89] found id: ""
	I0501 03:44:29.938862   69580 logs.go:276] 0 containers: []
	W0501 03:44:29.938873   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:29.938881   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:29.938948   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:29.987697   69580 cri.go:89] found id: ""
	I0501 03:44:29.987721   69580 logs.go:276] 0 containers: []
	W0501 03:44:29.987730   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:29.987738   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:29.987753   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:30.042446   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:30.042473   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:30.095358   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:30.095389   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:30.110745   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:30.110782   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:30.190923   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:30.190951   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:30.190965   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:29.346013   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:31.347513   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:28.868838   68864 logs.go:123] Gathering logs for storage-provisioner [f9a8d2f0f9453969b410fb7f880f6cf4c7e6e2f3c17a687583906efe4181dd17] ...
	I0501 03:44:28.868876   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9a8d2f0f9453969b410fb7f880f6cf4c7e6e2f3c17a687583906efe4181dd17"
	I0501 03:44:28.912436   68864 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:28.912474   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:31.469456   68864 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I0501 03:44:31.478498   68864 api_server.go:279] https://192.168.50.218:8443/healthz returned 200:
	ok
	I0501 03:44:31.479838   68864 api_server.go:141] control plane version: v1.30.0
	I0501 03:44:31.479861   68864 api_server.go:131] duration metric: took 4.032331979s to wait for apiserver health ...
	I0501 03:44:31.479869   68864 system_pods.go:43] waiting for kube-system pods to appear ...
	I0501 03:44:31.479889   68864 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:31.479930   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:31.531068   68864 cri.go:89] found id: "a96815c49ac45b34c359d7171b30be5c25cee80fae08d947fc004d479693df00"
	I0501 03:44:31.531088   68864 cri.go:89] found id: ""
	I0501 03:44:31.531095   68864 logs.go:276] 1 containers: [a96815c49ac45b34c359d7171b30be5c25cee80fae08d947fc004d479693df00]
	I0501 03:44:31.531137   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:31.536216   68864 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:31.536292   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:31.584155   68864 cri.go:89] found id: "d109948ffbbddd2e38484d83d2260690afce3c51e16abff0fe8713e844bfdcb3"
	I0501 03:44:31.584183   68864 cri.go:89] found id: ""
	I0501 03:44:31.584194   68864 logs.go:276] 1 containers: [d109948ffbbddd2e38484d83d2260690afce3c51e16abff0fe8713e844bfdcb3]
	I0501 03:44:31.584250   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:31.589466   68864 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:31.589528   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:31.639449   68864 cri.go:89] found id: "e3c74de489af3d78a4b0cf620ed16e1fba7c9ce1c7021a15c5b4bd241b7a934a"
	I0501 03:44:31.639476   68864 cri.go:89] found id: ""
	I0501 03:44:31.639484   68864 logs.go:276] 1 containers: [e3c74de489af3d78a4b0cf620ed16e1fba7c9ce1c7021a15c5b4bd241b7a934a]
	I0501 03:44:31.639535   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:31.644684   68864 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:31.644750   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:31.702095   68864 cri.go:89] found id: "1813f35574f4f0398e7d482fe77c5d7f8490726666a277035bda0a5cff80af6e"
	I0501 03:44:31.702119   68864 cri.go:89] found id: ""
	I0501 03:44:31.702125   68864 logs.go:276] 1 containers: [1813f35574f4f0398e7d482fe77c5d7f8490726666a277035bda0a5cff80af6e]
	I0501 03:44:31.702173   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:31.707443   68864 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:31.707508   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:31.758582   68864 cri.go:89] found id: "94afdb03c382269209154b2e73edbf200662e89960c248ba775a0ef2b8fbe6b1"
	I0501 03:44:31.758603   68864 cri.go:89] found id: ""
	I0501 03:44:31.758610   68864 logs.go:276] 1 containers: [94afdb03c382269209154b2e73edbf200662e89960c248ba775a0ef2b8fbe6b1]
	I0501 03:44:31.758656   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:31.764261   68864 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:31.764325   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:31.813385   68864 cri.go:89] found id: "7e7158f7ff3922e97ba5ee97108acc1d0ba0350dff2f9aa56c2f71519108791c"
	I0501 03:44:31.813407   68864 cri.go:89] found id: ""
	I0501 03:44:31.813414   68864 logs.go:276] 1 containers: [7e7158f7ff3922e97ba5ee97108acc1d0ba0350dff2f9aa56c2f71519108791c]
	I0501 03:44:31.813457   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:31.818289   68864 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:31.818348   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:31.862788   68864 cri.go:89] found id: ""
	I0501 03:44:31.862814   68864 logs.go:276] 0 containers: []
	W0501 03:44:31.862824   68864 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:31.862832   68864 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0501 03:44:31.862890   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0501 03:44:31.912261   68864 cri.go:89] found id: "f9a8d2f0f9453969b410fb7f880f6cf4c7e6e2f3c17a687583906efe4181dd17"
	I0501 03:44:31.912284   68864 cri.go:89] found id: "aaae36261c5ce1accf3bfb5993be5a256a037a640d5f6f30de8384900dda06b2"
	I0501 03:44:31.912298   68864 cri.go:89] found id: ""
	I0501 03:44:31.912312   68864 logs.go:276] 2 containers: [f9a8d2f0f9453969b410fb7f880f6cf4c7e6e2f3c17a687583906efe4181dd17 aaae36261c5ce1accf3bfb5993be5a256a037a640d5f6f30de8384900dda06b2]
	I0501 03:44:31.912367   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:31.917696   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:31.922432   68864 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:31.922450   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:32.332797   68864 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:32.332836   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:32.396177   68864 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:32.396214   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0501 03:44:32.511915   68864 logs.go:123] Gathering logs for etcd [d109948ffbbddd2e38484d83d2260690afce3c51e16abff0fe8713e844bfdcb3] ...
	I0501 03:44:32.511953   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d109948ffbbddd2e38484d83d2260690afce3c51e16abff0fe8713e844bfdcb3"
	I0501 03:44:32.564447   68864 logs.go:123] Gathering logs for kube-proxy [94afdb03c382269209154b2e73edbf200662e89960c248ba775a0ef2b8fbe6b1] ...
	I0501 03:44:32.564475   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94afdb03c382269209154b2e73edbf200662e89960c248ba775a0ef2b8fbe6b1"
	I0501 03:44:32.610196   68864 logs.go:123] Gathering logs for kube-controller-manager [7e7158f7ff3922e97ba5ee97108acc1d0ba0350dff2f9aa56c2f71519108791c] ...
	I0501 03:44:32.610235   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e7158f7ff3922e97ba5ee97108acc1d0ba0350dff2f9aa56c2f71519108791c"
	I0501 03:44:32.665262   68864 logs.go:123] Gathering logs for storage-provisioner [aaae36261c5ce1accf3bfb5993be5a256a037a640d5f6f30de8384900dda06b2] ...
	I0501 03:44:32.665314   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aaae36261c5ce1accf3bfb5993be5a256a037a640d5f6f30de8384900dda06b2"
	I0501 03:44:32.707346   68864 logs.go:123] Gathering logs for container status ...
	I0501 03:44:32.707377   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:32.757693   68864 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:32.757726   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:32.775720   68864 logs.go:123] Gathering logs for kube-apiserver [a96815c49ac45b34c359d7171b30be5c25cee80fae08d947fc004d479693df00] ...
	I0501 03:44:32.775759   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a96815c49ac45b34c359d7171b30be5c25cee80fae08d947fc004d479693df00"
	I0501 03:44:32.831002   68864 logs.go:123] Gathering logs for coredns [e3c74de489af3d78a4b0cf620ed16e1fba7c9ce1c7021a15c5b4bd241b7a934a] ...
	I0501 03:44:32.831039   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3c74de489af3d78a4b0cf620ed16e1fba7c9ce1c7021a15c5b4bd241b7a934a"
	I0501 03:44:32.878365   68864 logs.go:123] Gathering logs for kube-scheduler [1813f35574f4f0398e7d482fe77c5d7f8490726666a277035bda0a5cff80af6e] ...
	I0501 03:44:32.878416   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1813f35574f4f0398e7d482fe77c5d7f8490726666a277035bda0a5cff80af6e"
	I0501 03:44:32.935752   68864 logs.go:123] Gathering logs for storage-provisioner [f9a8d2f0f9453969b410fb7f880f6cf4c7e6e2f3c17a687583906efe4181dd17] ...
	I0501 03:44:32.935791   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9a8d2f0f9453969b410fb7f880f6cf4c7e6e2f3c17a687583906efe4181dd17"
	I0501 03:44:35.492575   68864 system_pods.go:59] 8 kube-system pods found
	I0501 03:44:35.492603   68864 system_pods.go:61] "coredns-7db6d8ff4d-sjplt" [6701ee8e-0630-4332-b01c-26741ed3a7b7] Running
	I0501 03:44:35.492607   68864 system_pods.go:61] "etcd-embed-certs-277128" [744d7481-dd80-4435-90da-685fc16a76a4] Running
	I0501 03:44:35.492612   68864 system_pods.go:61] "kube-apiserver-embed-certs-277128" [2f1d0f09-b270-4cdc-af3c-e17e3a244bbf] Running
	I0501 03:44:35.492616   68864 system_pods.go:61] "kube-controller-manager-embed-certs-277128" [729aa138-dc0d-4616-97d4-1592453971b8] Running
	I0501 03:44:35.492619   68864 system_pods.go:61] "kube-proxy-phx7x" [56c0381e-c140-4f69-bbe4-09d393db8b23] Running
	I0501 03:44:35.492621   68864 system_pods.go:61] "kube-scheduler-embed-certs-277128" [cc56e9bc-7f21-4f57-a65a-73a10ffd7145] Running
	I0501 03:44:35.492627   68864 system_pods.go:61] "metrics-server-569cc877fc-p8j59" [f8ad6c24-dd5d-4515-9052-c9aca7412b55] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0501 03:44:35.492631   68864 system_pods.go:61] "storage-provisioner" [785be666-58d5-4b9d-92fd-bcacdbdebeb2] Running
	I0501 03:44:35.492638   68864 system_pods.go:74] duration metric: took 4.012764043s to wait for pod list to return data ...
	I0501 03:44:35.492644   68864 default_sa.go:34] waiting for default service account to be created ...
	I0501 03:44:35.494580   68864 default_sa.go:45] found service account: "default"
	I0501 03:44:35.494599   68864 default_sa.go:55] duration metric: took 1.949121ms for default service account to be created ...
	I0501 03:44:35.494606   68864 system_pods.go:116] waiting for k8s-apps to be running ...
	I0501 03:44:35.499484   68864 system_pods.go:86] 8 kube-system pods found
	I0501 03:44:35.499507   68864 system_pods.go:89] "coredns-7db6d8ff4d-sjplt" [6701ee8e-0630-4332-b01c-26741ed3a7b7] Running
	I0501 03:44:35.499514   68864 system_pods.go:89] "etcd-embed-certs-277128" [744d7481-dd80-4435-90da-685fc16a76a4] Running
	I0501 03:44:35.499519   68864 system_pods.go:89] "kube-apiserver-embed-certs-277128" [2f1d0f09-b270-4cdc-af3c-e17e3a244bbf] Running
	I0501 03:44:35.499523   68864 system_pods.go:89] "kube-controller-manager-embed-certs-277128" [729aa138-dc0d-4616-97d4-1592453971b8] Running
	I0501 03:44:35.499526   68864 system_pods.go:89] "kube-proxy-phx7x" [56c0381e-c140-4f69-bbe4-09d393db8b23] Running
	I0501 03:44:35.499531   68864 system_pods.go:89] "kube-scheduler-embed-certs-277128" [cc56e9bc-7f21-4f57-a65a-73a10ffd7145] Running
	I0501 03:44:35.499537   68864 system_pods.go:89] "metrics-server-569cc877fc-p8j59" [f8ad6c24-dd5d-4515-9052-c9aca7412b55] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0501 03:44:35.499544   68864 system_pods.go:89] "storage-provisioner" [785be666-58d5-4b9d-92fd-bcacdbdebeb2] Running
	I0501 03:44:35.499550   68864 system_pods.go:126] duration metric: took 4.939659ms to wait for k8s-apps to be running ...
	I0501 03:44:35.499559   68864 system_svc.go:44] waiting for kubelet service to be running ....
	I0501 03:44:35.499599   68864 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 03:44:35.518471   68864 system_svc.go:56] duration metric: took 18.902776ms WaitForService to wait for kubelet
	I0501 03:44:35.518498   68864 kubeadm.go:576] duration metric: took 4m27.516125606s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0501 03:44:35.518521   68864 node_conditions.go:102] verifying NodePressure condition ...
	I0501 03:44:35.521936   68864 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 03:44:35.521956   68864 node_conditions.go:123] node cpu capacity is 2
	I0501 03:44:35.521966   68864 node_conditions.go:105] duration metric: took 3.439997ms to run NodePressure ...
	I0501 03:44:35.521976   68864 start.go:240] waiting for startup goroutines ...
	I0501 03:44:35.521983   68864 start.go:245] waiting for cluster config update ...
	I0501 03:44:35.521994   68864 start.go:254] writing updated cluster config ...
	I0501 03:44:35.522311   68864 ssh_runner.go:195] Run: rm -f paused
	I0501 03:44:35.572130   68864 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0501 03:44:35.573709   68864 out.go:177] * Done! kubectl is now configured to use "embed-certs-277128" cluster and "default" namespace by default
	I0501 03:44:31.512755   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:34.011892   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:32.772208   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:32.791063   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:32.791145   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:32.856883   69580 cri.go:89] found id: ""
	I0501 03:44:32.856909   69580 logs.go:276] 0 containers: []
	W0501 03:44:32.856920   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:32.856927   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:32.856988   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:32.928590   69580 cri.go:89] found id: ""
	I0501 03:44:32.928625   69580 logs.go:276] 0 containers: []
	W0501 03:44:32.928637   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:32.928644   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:32.928707   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:32.978068   69580 cri.go:89] found id: ""
	I0501 03:44:32.978100   69580 logs.go:276] 0 containers: []
	W0501 03:44:32.978113   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:32.978120   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:32.978184   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:33.018873   69580 cri.go:89] found id: ""
	I0501 03:44:33.018897   69580 logs.go:276] 0 containers: []
	W0501 03:44:33.018905   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:33.018911   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:33.018970   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:33.060633   69580 cri.go:89] found id: ""
	I0501 03:44:33.060661   69580 logs.go:276] 0 containers: []
	W0501 03:44:33.060673   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:33.060681   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:33.060735   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:33.099862   69580 cri.go:89] found id: ""
	I0501 03:44:33.099891   69580 logs.go:276] 0 containers: []
	W0501 03:44:33.099900   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:33.099906   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:33.099953   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:33.139137   69580 cri.go:89] found id: ""
	I0501 03:44:33.139163   69580 logs.go:276] 0 containers: []
	W0501 03:44:33.139171   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:33.139177   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:33.139224   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:33.178800   69580 cri.go:89] found id: ""
	I0501 03:44:33.178826   69580 logs.go:276] 0 containers: []
	W0501 03:44:33.178834   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:33.178842   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:33.178856   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:33.233811   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:33.233842   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:33.248931   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:33.248958   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:33.325530   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:33.325551   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:33.325563   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:33.412071   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:33.412103   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:35.954706   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:35.970256   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:35.970333   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:36.010417   69580 cri.go:89] found id: ""
	I0501 03:44:36.010443   69580 logs.go:276] 0 containers: []
	W0501 03:44:36.010452   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:36.010459   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:36.010524   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:36.051571   69580 cri.go:89] found id: ""
	I0501 03:44:36.051600   69580 logs.go:276] 0 containers: []
	W0501 03:44:36.051611   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:36.051619   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:36.051683   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:36.092148   69580 cri.go:89] found id: ""
	I0501 03:44:36.092176   69580 logs.go:276] 0 containers: []
	W0501 03:44:36.092185   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:36.092190   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:36.092247   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:36.136243   69580 cri.go:89] found id: ""
	I0501 03:44:36.136282   69580 logs.go:276] 0 containers: []
	W0501 03:44:36.136290   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:36.136296   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:36.136342   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:36.178154   69580 cri.go:89] found id: ""
	I0501 03:44:36.178183   69580 logs.go:276] 0 containers: []
	W0501 03:44:36.178193   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:36.178200   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:36.178264   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:36.217050   69580 cri.go:89] found id: ""
	I0501 03:44:36.217077   69580 logs.go:276] 0 containers: []
	W0501 03:44:36.217089   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:36.217096   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:36.217172   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:36.260438   69580 cri.go:89] found id: ""
	I0501 03:44:36.260470   69580 logs.go:276] 0 containers: []
	W0501 03:44:36.260481   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:36.260488   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:36.260546   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:36.303410   69580 cri.go:89] found id: ""
	I0501 03:44:36.303436   69580 logs.go:276] 0 containers: []
	W0501 03:44:36.303448   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:36.303459   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:36.303475   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:36.390427   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:36.390468   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:36.433631   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:36.433663   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:33.845863   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:35.847896   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:36.012448   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:38.510722   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:39.005005   69237 pod_ready.go:81] duration metric: took 4m0.000783466s for pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace to be "Ready" ...
	E0501 03:44:39.005036   69237 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace to be "Ready" (will not retry!)
	I0501 03:44:39.005057   69237 pod_ready.go:38] duration metric: took 4m8.020392425s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 03:44:39.005089   69237 kubeadm.go:591] duration metric: took 4m17.941775807s to restartPrimaryControlPlane
	W0501 03:44:39.005175   69237 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0501 03:44:39.005208   69237 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0501 03:44:36.486334   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:36.486365   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:36.502145   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:36.502175   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:36.586733   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:39.087607   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:39.102475   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:39.102552   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:39.141916   69580 cri.go:89] found id: ""
	I0501 03:44:39.141947   69580 logs.go:276] 0 containers: []
	W0501 03:44:39.141958   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:39.141964   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:39.142012   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:39.188472   69580 cri.go:89] found id: ""
	I0501 03:44:39.188501   69580 logs.go:276] 0 containers: []
	W0501 03:44:39.188512   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:39.188520   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:39.188582   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:39.243282   69580 cri.go:89] found id: ""
	I0501 03:44:39.243306   69580 logs.go:276] 0 containers: []
	W0501 03:44:39.243313   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:39.243318   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:39.243377   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:39.288254   69580 cri.go:89] found id: ""
	I0501 03:44:39.288284   69580 logs.go:276] 0 containers: []
	W0501 03:44:39.288296   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:39.288304   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:39.288379   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:39.330846   69580 cri.go:89] found id: ""
	I0501 03:44:39.330879   69580 logs.go:276] 0 containers: []
	W0501 03:44:39.330892   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:39.330901   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:39.330969   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:39.377603   69580 cri.go:89] found id: ""
	I0501 03:44:39.377632   69580 logs.go:276] 0 containers: []
	W0501 03:44:39.377642   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:39.377649   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:39.377710   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:39.421545   69580 cri.go:89] found id: ""
	I0501 03:44:39.421574   69580 logs.go:276] 0 containers: []
	W0501 03:44:39.421585   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:39.421594   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:39.421653   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:39.463394   69580 cri.go:89] found id: ""
	I0501 03:44:39.463424   69580 logs.go:276] 0 containers: []
	W0501 03:44:39.463435   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:39.463447   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:39.463464   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:39.552196   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:39.552218   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:39.552229   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:39.648509   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:39.648549   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:39.702829   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:39.702866   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:39.757712   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:39.757746   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:38.347120   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:40.355310   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:42.847346   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:42.273443   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:42.289788   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:42.289856   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:42.336802   69580 cri.go:89] found id: ""
	I0501 03:44:42.336833   69580 logs.go:276] 0 containers: []
	W0501 03:44:42.336846   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:42.336854   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:42.336919   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:42.387973   69580 cri.go:89] found id: ""
	I0501 03:44:42.388017   69580 logs.go:276] 0 containers: []
	W0501 03:44:42.388028   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:42.388036   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:42.388103   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:42.444866   69580 cri.go:89] found id: ""
	I0501 03:44:42.444895   69580 logs.go:276] 0 containers: []
	W0501 03:44:42.444906   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:42.444914   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:42.444987   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:42.493647   69580 cri.go:89] found id: ""
	I0501 03:44:42.493676   69580 logs.go:276] 0 containers: []
	W0501 03:44:42.493686   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:42.493692   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:42.493748   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:42.535046   69580 cri.go:89] found id: ""
	I0501 03:44:42.535075   69580 logs.go:276] 0 containers: []
	W0501 03:44:42.535086   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:42.535093   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:42.535161   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:42.579453   69580 cri.go:89] found id: ""
	I0501 03:44:42.579486   69580 logs.go:276] 0 containers: []
	W0501 03:44:42.579499   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:42.579507   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:42.579568   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:42.621903   69580 cri.go:89] found id: ""
	I0501 03:44:42.621931   69580 logs.go:276] 0 containers: []
	W0501 03:44:42.621942   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:42.621950   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:42.622009   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:42.666202   69580 cri.go:89] found id: ""
	I0501 03:44:42.666232   69580 logs.go:276] 0 containers: []
	W0501 03:44:42.666243   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:42.666257   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:42.666272   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:42.736032   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:42.736078   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:42.750773   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:42.750799   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:42.836942   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:42.836975   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:42.836997   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:42.930660   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:42.930695   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:45.479619   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:45.495112   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:45.495174   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:45.536693   69580 cri.go:89] found id: ""
	I0501 03:44:45.536722   69580 logs.go:276] 0 containers: []
	W0501 03:44:45.536730   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:45.536737   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:45.536785   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:45.577838   69580 cri.go:89] found id: ""
	I0501 03:44:45.577866   69580 logs.go:276] 0 containers: []
	W0501 03:44:45.577876   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:45.577894   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:45.577958   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:45.615842   69580 cri.go:89] found id: ""
	I0501 03:44:45.615868   69580 logs.go:276] 0 containers: []
	W0501 03:44:45.615879   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:45.615892   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:45.615953   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:45.654948   69580 cri.go:89] found id: ""
	I0501 03:44:45.654972   69580 logs.go:276] 0 containers: []
	W0501 03:44:45.654980   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:45.654986   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:45.655042   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:45.695104   69580 cri.go:89] found id: ""
	I0501 03:44:45.695129   69580 logs.go:276] 0 containers: []
	W0501 03:44:45.695138   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:45.695145   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:45.695212   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:45.737609   69580 cri.go:89] found id: ""
	I0501 03:44:45.737633   69580 logs.go:276] 0 containers: []
	W0501 03:44:45.737641   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:45.737647   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:45.737693   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:45.778655   69580 cri.go:89] found id: ""
	I0501 03:44:45.778685   69580 logs.go:276] 0 containers: []
	W0501 03:44:45.778696   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:45.778702   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:45.778781   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:45.819430   69580 cri.go:89] found id: ""
	I0501 03:44:45.819452   69580 logs.go:276] 0 containers: []
	W0501 03:44:45.819460   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:45.819469   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:45.819485   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:45.875879   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:45.875911   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:45.892035   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:45.892062   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:45.975803   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:45.975836   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:45.975853   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:46.058183   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:46.058222   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:45.345465   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:47.346947   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:48.604991   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:48.621226   69580 kubeadm.go:591] duration metric: took 4m4.888665162s to restartPrimaryControlPlane
	W0501 03:44:48.621351   69580 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0501 03:44:48.621407   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0501 03:44:49.654748   69580 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.033320548s)
	I0501 03:44:49.654838   69580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 03:44:49.671511   69580 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0501 03:44:49.684266   69580 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0501 03:44:49.697079   69580 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0501 03:44:49.697101   69580 kubeadm.go:156] found existing configuration files:
	
	I0501 03:44:49.697159   69580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0501 03:44:49.710609   69580 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0501 03:44:49.710692   69580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0501 03:44:49.723647   69580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0501 03:44:49.736855   69580 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0501 03:44:49.737023   69580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0501 03:44:49.748842   69580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0501 03:44:49.760856   69580 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0501 03:44:49.760923   69580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0501 03:44:49.772685   69580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0501 03:44:49.784035   69580 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0501 03:44:49.784114   69580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0501 03:44:49.795699   69580 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0501 03:44:49.869387   69580 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0501 03:44:49.869481   69580 kubeadm.go:309] [preflight] Running pre-flight checks
	I0501 03:44:50.028858   69580 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0501 03:44:50.028999   69580 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0501 03:44:50.029182   69580 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0501 03:44:50.242773   69580 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0501 03:44:50.244816   69580 out.go:204]   - Generating certificates and keys ...
	I0501 03:44:50.244918   69580 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0501 03:44:50.245008   69580 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0501 03:44:50.245111   69580 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0501 03:44:50.245216   69580 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0501 03:44:50.245331   69580 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0501 03:44:50.245424   69580 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0501 03:44:50.245490   69580 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0501 03:44:50.245556   69580 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0501 03:44:50.245629   69580 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0501 03:44:50.245724   69580 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0501 03:44:50.245784   69580 kubeadm.go:309] [certs] Using the existing "sa" key
	I0501 03:44:50.245877   69580 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0501 03:44:50.501955   69580 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0501 03:44:50.683749   69580 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0501 03:44:50.905745   69580 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0501 03:44:51.005912   69580 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0501 03:44:51.025470   69580 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0501 03:44:51.029411   69580 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0501 03:44:51.029859   69580 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0501 03:44:51.181498   69580 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0501 03:44:51.183222   69580 out.go:204]   - Booting up control plane ...
	I0501 03:44:51.183334   69580 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0501 03:44:51.200394   69580 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0501 03:44:51.201612   69580 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0501 03:44:51.202445   69580 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0501 03:44:51.204681   69580 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0501 03:44:49.847629   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:52.345383   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:54.346479   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:56.348560   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:58.846207   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:01.345790   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:03.847746   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:06.346172   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:08.346693   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:10.846797   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
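The repeated pod_ready.go lines above are minikube polling a pod until its Ready condition turns True (here the metrics-server pod, which never becomes Ready and eventually times out). A minimal client-go sketch of that kind of poll follows; the kubeconfig path and the k8s-app=metrics-server label selector are illustrative assumptions, not minikube's actual helper code.

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's Ready condition is True.
    func podReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	// Assumed kubeconfig path; minikube uses its own profile kubeconfig.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}

    	// Poll every 2s, give up after 4m, mirroring the timeout seen in the log.
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
    			metav1.ListOptions{LabelSelector: "k8s-app=metrics-server"})
    		if err == nil {
    			for i := range pods.Items {
    				if podReady(&pods.Items[i]) {
    					fmt.Println("metrics-server is Ready")
    					return
    				}
    			}
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("timed out waiting for metrics-server to become Ready")
    }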
	I0501 03:45:11.778923   69237 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.773690939s)
	I0501 03:45:11.778992   69237 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 03:45:11.796337   69237 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0501 03:45:11.810167   69237 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0501 03:45:11.822425   69237 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0501 03:45:11.822457   69237 kubeadm.go:156] found existing configuration files:
	
	I0501 03:45:11.822514   69237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0501 03:45:11.834539   69237 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0501 03:45:11.834596   69237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0501 03:45:11.848336   69237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0501 03:45:11.860459   69237 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0501 03:45:11.860535   69237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0501 03:45:11.873903   69237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0501 03:45:11.887353   69237 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0501 03:45:11.887427   69237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0501 03:45:11.900805   69237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0501 03:45:11.912512   69237 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0501 03:45:11.912572   69237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
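The grep/rm sequence above is minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is checked for the expected API server endpoint and deleted when it is missing or points elsewhere, so the following kubeadm init regenerates it. A local Go approximation of that check (the real thing runs these commands over SSH inside the VM):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // cleanStaleKubeconfig removes conf files that do not reference the expected
    // API server endpoint, approximating the grep/rm sequence in the log.
    func cleanStaleKubeconfig(endpoint string, paths []string) {
    	for _, p := range paths {
    		data, err := os.ReadFile(p)
    		if err != nil || !strings.Contains(string(data), endpoint) {
    			// Missing or stale: remove it so kubeadm writes a fresh one.
    			os.Remove(p)
    			fmt.Printf("removed stale config %s\n", p)
    		}
    	}
    }

    func main() {
    	cleanStaleKubeconfig("https://control-plane.minikube.internal:8444", []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	})
    }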
	I0501 03:45:11.924870   69237 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0501 03:45:12.149168   69237 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0501 03:45:13.348651   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:15.847148   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:20.882309   69237 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0501 03:45:20.882382   69237 kubeadm.go:309] [preflight] Running pre-flight checks
	I0501 03:45:20.882472   69237 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0501 03:45:20.882602   69237 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0501 03:45:20.882741   69237 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0501 03:45:20.882836   69237 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0501 03:45:20.884733   69237 out.go:204]   - Generating certificates and keys ...
	I0501 03:45:20.884837   69237 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0501 03:45:20.884894   69237 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0501 03:45:20.884996   69237 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0501 03:45:20.885106   69237 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0501 03:45:20.885209   69237 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0501 03:45:20.885316   69237 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0501 03:45:20.885400   69237 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0501 03:45:20.885483   69237 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0501 03:45:20.885590   69237 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0501 03:45:20.885702   69237 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0501 03:45:20.885759   69237 kubeadm.go:309] [certs] Using the existing "sa" key
	I0501 03:45:20.885838   69237 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0501 03:45:20.885915   69237 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0501 03:45:20.885996   69237 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0501 03:45:20.886074   69237 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0501 03:45:20.886164   69237 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0501 03:45:20.886233   69237 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0501 03:45:20.886362   69237 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0501 03:45:20.886492   69237 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0501 03:45:20.888113   69237 out.go:204]   - Booting up control plane ...
	I0501 03:45:20.888194   69237 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0501 03:45:20.888264   69237 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0501 03:45:20.888329   69237 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0501 03:45:20.888458   69237 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0501 03:45:20.888570   69237 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0501 03:45:20.888627   69237 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0501 03:45:20.888777   69237 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0501 03:45:20.888863   69237 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0501 03:45:20.888964   69237 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 502.867448ms
	I0501 03:45:20.889080   69237 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0501 03:45:20.889177   69237 kubeadm.go:309] [api-check] The API server is healthy after 5.503139909s
	I0501 03:45:20.889335   69237 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0501 03:45:20.889506   69237 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0501 03:45:20.889579   69237 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0501 03:45:20.889817   69237 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-715118 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0501 03:45:20.889868   69237 kubeadm.go:309] [bootstrap-token] Using token: 2vhvw6.gdesonhc2twrukzt
	I0501 03:45:20.892253   69237 out.go:204]   - Configuring RBAC rules ...
	I0501 03:45:20.892395   69237 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0501 03:45:20.892475   69237 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0501 03:45:20.892652   69237 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0501 03:45:20.892812   69237 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0501 03:45:20.892931   69237 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0501 03:45:20.893040   69237 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0501 03:45:20.893201   69237 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0501 03:45:20.893264   69237 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0501 03:45:20.893309   69237 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0501 03:45:20.893319   69237 kubeadm.go:309] 
	I0501 03:45:20.893367   69237 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0501 03:45:20.893373   69237 kubeadm.go:309] 
	I0501 03:45:20.893450   69237 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0501 03:45:20.893458   69237 kubeadm.go:309] 
	I0501 03:45:20.893481   69237 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0501 03:45:20.893544   69237 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0501 03:45:20.893591   69237 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0501 03:45:20.893597   69237 kubeadm.go:309] 
	I0501 03:45:20.893643   69237 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0501 03:45:20.893650   69237 kubeadm.go:309] 
	I0501 03:45:20.893685   69237 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0501 03:45:20.893690   69237 kubeadm.go:309] 
	I0501 03:45:20.893741   69237 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0501 03:45:20.893805   69237 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0501 03:45:20.893858   69237 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0501 03:45:20.893863   69237 kubeadm.go:309] 
	I0501 03:45:20.893946   69237 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0501 03:45:20.894035   69237 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0501 03:45:20.894045   69237 kubeadm.go:309] 
	I0501 03:45:20.894139   69237 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token 2vhvw6.gdesonhc2twrukzt \
	I0501 03:45:20.894267   69237 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:bd94cc6acfb95fe36f70b12c6a1f2980601b8235e7c4dea65cebdf35eb514754 \
	I0501 03:45:20.894294   69237 kubeadm.go:309] 	--control-plane 
	I0501 03:45:20.894301   69237 kubeadm.go:309] 
	I0501 03:45:20.894368   69237 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0501 03:45:20.894375   69237 kubeadm.go:309] 
	I0501 03:45:20.894498   69237 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token 2vhvw6.gdesonhc2twrukzt \
	I0501 03:45:20.894605   69237 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:bd94cc6acfb95fe36f70b12c6a1f2980601b8235e7c4dea65cebdf35eb514754 
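The --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 of the cluster CA certificate's DER-encoded Subject Public Key Info. A short sketch that recomputes it from the ca.crt under the certificate directory named earlier in the log:

    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func main() {
    	// Certificate directory taken from the log: /var/lib/minikube/certs.
    	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(pemBytes)
    	if block == nil {
    		panic("no PEM block found in ca.crt")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// kubeadm's hash is sha256 over the DER-encoded SubjectPublicKeyInfo.
    	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
    	fmt.Printf("sha256:%x\n", sum)
    }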
	I0501 03:45:20.894616   69237 cni.go:84] Creating CNI manager for ""
	I0501 03:45:20.894623   69237 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0501 03:45:20.896151   69237 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0501 03:45:18.346276   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:20.846958   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:20.897443   69237 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0501 03:45:20.911935   69237 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
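Configuring the bridge CNI amounts to writing a conflist into /etc/cni/net.d (the log scps a 496-byte 1-k8s.conflist). The exact template is not shown in the log; the sketch below writes a generic bridge+portmap conflist of the kind the CNI bridge plugin accepts, with the 10.244.0.0/16 subnet as an illustrative assumption rather than minikube's actual contents.

    package main

    import (
    	"fmt"
    	"os"
    )

    func main() {
    	// A generic bridge+portmap conflist; minikube's real template may differ.
    	conflist := `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }`
    	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644); err != nil {
    		panic(err)
    	}
    	fmt.Println("wrote bridge CNI conflist")
    }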
	I0501 03:45:20.941109   69237 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0501 03:45:20.941193   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:20.941249   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-715118 minikube.k8s.io/updated_at=2024_05_01T03_45_20_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e minikube.k8s.io/name=default-k8s-diff-port-715118 minikube.k8s.io/primary=true
	I0501 03:45:20.971300   69237 ops.go:34] apiserver oom_adj: -16
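The minikube-rbac binding created above grants cluster-admin to the kube-system:default service account. The log does this by shelling out to kubectl; an equivalent client-go sketch, with the kubeconfig path taken from the log, looks roughly like this:

    package main

    import (
    	"context"

    	rbacv1 "k8s.io/api/rbac/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// In-VM kubeconfig path as shown in the log.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}

    	crb := &rbacv1.ClusterRoleBinding{
    		ObjectMeta: metav1.ObjectMeta{Name: "minikube-rbac"},
    		RoleRef: rbacv1.RoleRef{
    			APIGroup: "rbac.authorization.k8s.io",
    			Kind:     "ClusterRole",
    			Name:     "cluster-admin",
    		},
    		Subjects: []rbacv1.Subject{{
    			Kind:      "ServiceAccount",
    			Name:      "default",
    			Namespace: "kube-system",
    		}},
    	}
    	// Equivalent to: kubectl create clusterrolebinding minikube-rbac
    	//   --clusterrole=cluster-admin --serviceaccount=kube-system:default
    	if _, err := cs.RbacV1().ClusterRoleBindings().Create(context.TODO(), crb, metav1.CreateOptions{}); err != nil {
    		panic(err)
    	}
    }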
	I0501 03:45:21.143744   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:21.643800   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:22.144096   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:22.643852   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:23.144726   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:23.644174   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:24.143735   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:24.643947   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:25.143871   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:25.644557   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:23.345774   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:25.346189   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:27.348026   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:26.144443   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:26.643761   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:27.144691   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:27.644445   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:28.144006   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:28.643904   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:29.144077   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:29.644690   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:30.144692   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:30.644604   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:31.207553   69580 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0501 03:45:31.208328   69580 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0501 03:45:31.208516   69580 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
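The kubelet-check above is kubeadm probing http://localhost:10248/healthz until the kubelet answers, with an initial 40s timeout before it starts warning. A plain-Go equivalent of that probe:

    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{Timeout: 2 * time.Second}
    	deadline := time.Now().Add(40 * time.Second) // kubeadm's initial kubelet-check timeout
    	for time.Now().Before(deadline) {
    		resp, err := client.Get("http://localhost:10248/healthz")
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Println("kubelet is healthy")
    				return
    			}
    		}
    		time.Sleep(time.Second)
    	}
    	fmt.Println("kubelet isn't running or healthy (connection refused)")
    }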
	I0501 03:45:29.845785   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:32.348020   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:31.144517   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:31.644673   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:32.143793   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:32.644380   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:33.144729   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:33.644415   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:33.752056   69237 kubeadm.go:1107] duration metric: took 12.810918189s to wait for elevateKubeSystemPrivileges
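The half-second "kubectl get sa default" loop above is minikube waiting for the default service account to exist before elevating kube-system privileges. An equivalent client-go poll, using the in-VM kubeconfig path from the log:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}

    	// Retry every 500ms until the "default" service account appears.
    	for {
    		_, err := cs.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{})
    		if err == nil {
    			fmt.Println("default service account exists")
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }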
	W0501 03:45:33.752096   69237 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0501 03:45:33.752105   69237 kubeadm.go:393] duration metric: took 5m12.753721662s to StartCluster
	I0501 03:45:33.752124   69237 settings.go:142] acquiring lock: {Name:mkcfe08bcadd45d99c00ed1eaf4e9c226a489fe2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:45:33.752219   69237 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18779-13391/kubeconfig
	I0501 03:45:33.753829   69237 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/kubeconfig: {Name:mk5d1131557f885800877622151c0915337cb23a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:45:33.754094   69237 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.158 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0501 03:45:33.755764   69237 out.go:177] * Verifying Kubernetes components...
	I0501 03:45:33.754178   69237 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0501 03:45:33.754310   69237 config.go:182] Loaded profile config "default-k8s-diff-port-715118": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 03:45:33.757144   69237 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:45:33.757151   69237 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-715118"
	I0501 03:45:33.757172   69237 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-715118"
	I0501 03:45:33.757189   69237 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-715118"
	I0501 03:45:33.757213   69237 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-715118"
	I0501 03:45:33.757221   69237 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-715118"
	W0501 03:45:33.757230   69237 addons.go:243] addon metrics-server should already be in state true
	I0501 03:45:33.757264   69237 host.go:66] Checking if "default-k8s-diff-port-715118" exists ...
	I0501 03:45:33.757180   69237 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-715118"
	W0501 03:45:33.757295   69237 addons.go:243] addon storage-provisioner should already be in state true
	I0501 03:45:33.757355   69237 host.go:66] Checking if "default-k8s-diff-port-715118" exists ...
	I0501 03:45:33.757596   69237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:45:33.757624   69237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:45:33.757630   69237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:45:33.757762   69237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:45:33.757808   69237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:45:33.757662   69237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:45:33.773846   69237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44313
	I0501 03:45:33.774442   69237 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:45:33.775002   69237 main.go:141] libmachine: Using API Version  1
	I0501 03:45:33.775024   69237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:45:33.775438   69237 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:45:33.776086   69237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:45:33.776117   69237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:45:33.777715   69237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37079
	I0501 03:45:33.777835   69237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38097
	I0501 03:45:33.778170   69237 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:45:33.778240   69237 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:45:33.778701   69237 main.go:141] libmachine: Using API Version  1
	I0501 03:45:33.778734   69237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:45:33.778778   69237 main.go:141] libmachine: Using API Version  1
	I0501 03:45:33.778795   69237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:45:33.779142   69237 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:45:33.779150   69237 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:45:33.779427   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetState
	I0501 03:45:33.779721   69237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:45:33.779769   69237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:45:33.783493   69237 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-715118"
	W0501 03:45:33.783519   69237 addons.go:243] addon default-storageclass should already be in state true
	I0501 03:45:33.783551   69237 host.go:66] Checking if "default-k8s-diff-port-715118" exists ...
	I0501 03:45:33.783922   69237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:45:33.783965   69237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:45:33.795373   69237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43491
	I0501 03:45:33.795988   69237 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:45:33.796557   69237 main.go:141] libmachine: Using API Version  1
	I0501 03:45:33.796579   69237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:45:33.796931   69237 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:45:33.797093   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetState
	I0501 03:45:33.797155   69237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46755
	I0501 03:45:33.797806   69237 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:45:33.798383   69237 main.go:141] libmachine: Using API Version  1
	I0501 03:45:33.798442   69237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:45:33.798848   69237 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:45:33.799052   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetState
	I0501 03:45:33.799105   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .DriverName
	I0501 03:45:33.801809   69237 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0501 03:45:33.800600   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .DriverName
	I0501 03:45:33.803752   69237 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0501 03:45:33.803779   69237 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0501 03:45:33.803800   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHHostname
	I0501 03:45:33.805235   69237 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 03:45:33.804172   69237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35709
	I0501 03:45:33.806635   69237 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0501 03:45:33.806651   69237 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0501 03:45:33.806670   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHHostname
	I0501 03:45:33.806889   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:45:33.806967   69237 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:45:33.807292   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:45:33.807426   69237 main.go:141] libmachine: Using API Version  1
	I0501 03:45:33.807428   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHPort
	I0501 03:45:33.807437   69237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:45:33.807449   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:45:33.807578   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:45:33.807680   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHUsername
	I0501 03:45:33.807799   69237 sshutil.go:53] new ssh client: &{IP:192.168.72.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/default-k8s-diff-port-715118/id_rsa Username:docker}
	I0501 03:45:33.808171   69237 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:45:33.808625   69237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:45:33.808660   69237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:45:33.810668   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:45:33.811266   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:45:33.811297   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:45:33.811595   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHPort
	I0501 03:45:33.811794   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:45:33.811983   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHUsername
	I0501 03:45:33.812124   69237 sshutil.go:53] new ssh client: &{IP:192.168.72.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/default-k8s-diff-port-715118/id_rsa Username:docker}
	I0501 03:45:33.825315   69237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35461
	I0501 03:45:33.825891   69237 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:45:33.826334   69237 main.go:141] libmachine: Using API Version  1
	I0501 03:45:33.826351   69237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:45:33.826679   69237 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:45:33.826912   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetState
	I0501 03:45:33.828659   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .DriverName
	I0501 03:45:33.828931   69237 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0501 03:45:33.828946   69237 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0501 03:45:33.828963   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHHostname
	I0501 03:45:33.832151   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:45:33.832632   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:45:33.832656   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:45:33.832863   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHPort
	I0501 03:45:33.833010   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:45:33.833146   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHUsername
	I0501 03:45:33.833302   69237 sshutil.go:53] new ssh client: &{IP:192.168.72.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/default-k8s-diff-port-715118/id_rsa Username:docker}
	I0501 03:45:34.014287   69237 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 03:45:34.047199   69237 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-715118" to be "Ready" ...
	I0501 03:45:34.069000   69237 node_ready.go:49] node "default-k8s-diff-port-715118" has status "Ready":"True"
	I0501 03:45:34.069023   69237 node_ready.go:38] duration metric: took 21.790599ms for node "default-k8s-diff-port-715118" to be "Ready" ...
	I0501 03:45:34.069033   69237 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 03:45:34.077182   69237 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-bg755" in "kube-system" namespace to be "Ready" ...
	I0501 03:45:34.151001   69237 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0501 03:45:34.166362   69237 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0501 03:45:34.166385   69237 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0501 03:45:34.214624   69237 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0501 03:45:34.329110   69237 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0501 03:45:34.329133   69237 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0501 03:45:34.436779   69237 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0501 03:45:34.436804   69237 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0501 03:45:34.611410   69237 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
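Each addon is installed by copying its YAML under /etc/kubernetes/addons and then applying all of the files with the in-VM kubectl and kubeconfig, exactly as the Run line above shows. A hedged os/exec sketch of that apply step:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	// Paths mirror the log; adjust for a real environment.
    	manifests := []string{
    		"/etc/kubernetes/addons/metrics-apiservice.yaml",
    		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
    		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
    		"/etc/kubernetes/addons/metrics-server-service.yaml",
    	}
    	args := []string{"apply"}
    	for _, m := range manifests {
    		args = append(args, "-f", m)
    	}
    	cmd := exec.Command("/var/lib/minikube/binaries/v1.30.0/kubectl", args...)
    	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
    	out, err := cmd.CombinedOutput()
    	fmt.Print(string(out))
    	if err != nil {
    		panic(err)
    	}
    }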
	I0501 03:45:34.698997   69237 main.go:141] libmachine: Making call to close driver server
	I0501 03:45:34.699026   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .Close
	I0501 03:45:34.699321   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | Closing plugin on server side
	I0501 03:45:34.699389   69237 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:45:34.699408   69237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:45:34.699423   69237 main.go:141] libmachine: Making call to close driver server
	I0501 03:45:34.699437   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .Close
	I0501 03:45:34.699684   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | Closing plugin on server side
	I0501 03:45:34.699726   69237 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:45:34.699734   69237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:45:34.708143   69237 main.go:141] libmachine: Making call to close driver server
	I0501 03:45:34.708171   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .Close
	I0501 03:45:34.708438   69237 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:45:34.708457   69237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:45:34.708463   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | Closing plugin on server side
	I0501 03:45:35.510225   69237 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.295555956s)
	I0501 03:45:35.510274   69237 main.go:141] libmachine: Making call to close driver server
	I0501 03:45:35.510286   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .Close
	I0501 03:45:35.510700   69237 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:45:35.510721   69237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:45:35.510732   69237 main.go:141] libmachine: Making call to close driver server
	I0501 03:45:35.510728   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | Closing plugin on server side
	I0501 03:45:35.510740   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .Close
	I0501 03:45:35.510961   69237 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:45:35.510979   69237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:45:35.510983   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | Closing plugin on server side
	I0501 03:45:35.845633   69237 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.234178466s)
	I0501 03:45:35.845691   69237 main.go:141] libmachine: Making call to close driver server
	I0501 03:45:35.845708   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .Close
	I0501 03:45:35.845997   69237 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:45:35.846017   69237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:45:35.846027   69237 main.go:141] libmachine: Making call to close driver server
	I0501 03:45:35.846026   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | Closing plugin on server side
	I0501 03:45:35.846036   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .Close
	I0501 03:45:35.847736   69237 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:45:35.847767   69237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:45:35.847781   69237 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-715118"
	I0501 03:45:35.847786   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | Closing plugin on server side
	I0501 03:45:35.849438   69237 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0501 03:45:36.209029   69580 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0501 03:45:36.209300   69580 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0501 03:45:34.848699   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:37.338985   68640 pod_ready.go:81] duration metric: took 4m0.000306796s for pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace to be "Ready" ...
	E0501 03:45:37.339010   68640 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace to be "Ready" (will not retry!)
	I0501 03:45:37.339029   68640 pod_ready.go:38] duration metric: took 4m9.062496127s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 03:45:37.339089   68640 kubeadm.go:591] duration metric: took 4m19.268153875s to restartPrimaryControlPlane
	W0501 03:45:37.339148   68640 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0501 03:45:37.339176   68640 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0501 03:45:35.851156   69237 addons.go:505] duration metric: took 2.096980743s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0501 03:45:36.085176   69237 pod_ready.go:102] pod "coredns-7db6d8ff4d-bg755" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:36.585390   69237 pod_ready.go:92] pod "coredns-7db6d8ff4d-bg755" in "kube-system" namespace has status "Ready":"True"
	I0501 03:45:36.585415   69237 pod_ready.go:81] duration metric: took 2.508204204s for pod "coredns-7db6d8ff4d-bg755" in "kube-system" namespace to be "Ready" ...
	I0501 03:45:36.585428   69237 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-mp6f5" in "kube-system" namespace to be "Ready" ...
	I0501 03:45:36.594575   69237 pod_ready.go:92] pod "coredns-7db6d8ff4d-mp6f5" in "kube-system" namespace has status "Ready":"True"
	I0501 03:45:36.594600   69237 pod_ready.go:81] duration metric: took 9.163923ms for pod "coredns-7db6d8ff4d-mp6f5" in "kube-system" namespace to be "Ready" ...
	I0501 03:45:36.594613   69237 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	I0501 03:45:36.606784   69237 pod_ready.go:92] pod "etcd-default-k8s-diff-port-715118" in "kube-system" namespace has status "Ready":"True"
	I0501 03:45:36.606807   69237 pod_ready.go:81] duration metric: took 12.186129ms for pod "etcd-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	I0501 03:45:36.606819   69237 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	I0501 03:45:36.617373   69237 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-715118" in "kube-system" namespace has status "Ready":"True"
	I0501 03:45:36.617394   69237 pod_ready.go:81] duration metric: took 10.566278ms for pod "kube-apiserver-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	I0501 03:45:36.617404   69237 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	I0501 03:45:36.622441   69237 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-715118" in "kube-system" namespace has status "Ready":"True"
	I0501 03:45:36.622460   69237 pod_ready.go:81] duration metric: took 5.049948ms for pod "kube-controller-manager-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	I0501 03:45:36.622469   69237 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2knrp" in "kube-system" namespace to be "Ready" ...
	I0501 03:45:36.981490   69237 pod_ready.go:92] pod "kube-proxy-2knrp" in "kube-system" namespace has status "Ready":"True"
	I0501 03:45:36.981513   69237 pod_ready.go:81] duration metric: took 359.038927ms for pod "kube-proxy-2knrp" in "kube-system" namespace to be "Ready" ...
	I0501 03:45:36.981523   69237 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	I0501 03:45:37.381970   69237 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-715118" in "kube-system" namespace has status "Ready":"True"
	I0501 03:45:37.381999   69237 pod_ready.go:81] duration metric: took 400.468372ms for pod "kube-scheduler-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	I0501 03:45:37.382011   69237 pod_ready.go:38] duration metric: took 3.312967983s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 03:45:37.382028   69237 api_server.go:52] waiting for apiserver process to appear ...
	I0501 03:45:37.382091   69237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:45:37.401961   69237 api_server.go:72] duration metric: took 3.647829991s to wait for apiserver process to appear ...
	I0501 03:45:37.401992   69237 api_server.go:88] waiting for apiserver healthz status ...
	I0501 03:45:37.402016   69237 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8444/healthz ...
	I0501 03:45:37.407177   69237 api_server.go:279] https://192.168.72.158:8444/healthz returned 200:
	ok
	I0501 03:45:37.408020   69237 api_server.go:141] control plane version: v1.30.0
	I0501 03:45:37.408037   69237 api_server.go:131] duration metric: took 6.036621ms to wait for apiserver health ...
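The api_server health check above is a direct GET of https://192.168.72.158:8444/healthz expecting a 200 with body "ok". A sketch of the same check that trusts the cluster CA from the certificate directory named earlier in the log (and assumes anonymous access to /healthz, which Kubernetes allows by default):

    package main

    import (
    	"crypto/tls"
    	"crypto/x509"
    	"fmt"
    	"io"
    	"net/http"
    	"os"
    	"time"
    )

    func main() {
    	caPEM, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
    	if err != nil {
    		panic(err)
    	}
    	pool := x509.NewCertPool()
    	if !pool.AppendCertsFromPEM(caPEM) {
    		panic("could not parse CA certificate")
    	}
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{RootCAs: pool}},
    	}
    	resp, err := client.Get("https://192.168.72.158:8444/healthz")
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Printf("%d %s\n", resp.StatusCode, string(body)) // expect: 200 ok
    }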
	I0501 03:45:37.408046   69237 system_pods.go:43] waiting for kube-system pods to appear ...
	I0501 03:45:37.586052   69237 system_pods.go:59] 9 kube-system pods found
	I0501 03:45:37.586081   69237 system_pods.go:61] "coredns-7db6d8ff4d-bg755" [884d489a-bc1e-442c-8e00-4616f983d3e9] Running
	I0501 03:45:37.586085   69237 system_pods.go:61] "coredns-7db6d8ff4d-mp6f5" [4c8550d0-0029-48f1-a892-1800f6639c75] Running
	I0501 03:45:37.586090   69237 system_pods.go:61] "etcd-default-k8s-diff-port-715118" [12be9bec-1d84-49ee-898c-499ff75a8026] Running
	I0501 03:45:37.586094   69237 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-715118" [ae9a476b-03cf-4d4d-9990-5e760db82e60] Running
	I0501 03:45:37.586098   69237 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-715118" [542bbe50-58b6-40fb-b81b-0cc2444a3401] Running
	I0501 03:45:37.586101   69237 system_pods.go:61] "kube-proxy-2knrp" [cf1406ff-8a6e-49bb-b180-1e72f4b54fbf] Running
	I0501 03:45:37.586104   69237 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-715118" [d24f02a2-67a9-4f28-9acc-445e0e74a68d] Running
	I0501 03:45:37.586109   69237 system_pods.go:61] "metrics-server-569cc877fc-xwxx9" [a66f5df4-355c-47f0-8b6e-da29e1c4394e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0501 03:45:37.586113   69237 system_pods.go:61] "storage-provisioner" [debb3a59-143a-46d3-87da-c2403e264861] Running
	I0501 03:45:37.586123   69237 system_pods.go:74] duration metric: took 178.07045ms to wait for pod list to return data ...
	I0501 03:45:37.586132   69237 default_sa.go:34] waiting for default service account to be created ...
	I0501 03:45:37.780696   69237 default_sa.go:45] found service account: "default"
	I0501 03:45:37.780720   69237 default_sa.go:55] duration metric: took 194.582743ms for default service account to be created ...
	I0501 03:45:37.780728   69237 system_pods.go:116] waiting for k8s-apps to be running ...
	I0501 03:45:37.985342   69237 system_pods.go:86] 9 kube-system pods found
	I0501 03:45:37.985368   69237 system_pods.go:89] "coredns-7db6d8ff4d-bg755" [884d489a-bc1e-442c-8e00-4616f983d3e9] Running
	I0501 03:45:37.985374   69237 system_pods.go:89] "coredns-7db6d8ff4d-mp6f5" [4c8550d0-0029-48f1-a892-1800f6639c75] Running
	I0501 03:45:37.985378   69237 system_pods.go:89] "etcd-default-k8s-diff-port-715118" [12be9bec-1d84-49ee-898c-499ff75a8026] Running
	I0501 03:45:37.985383   69237 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-715118" [ae9a476b-03cf-4d4d-9990-5e760db82e60] Running
	I0501 03:45:37.985387   69237 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-715118" [542bbe50-58b6-40fb-b81b-0cc2444a3401] Running
	I0501 03:45:37.985391   69237 system_pods.go:89] "kube-proxy-2knrp" [cf1406ff-8a6e-49bb-b180-1e72f4b54fbf] Running
	I0501 03:45:37.985395   69237 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-715118" [d24f02a2-67a9-4f28-9acc-445e0e74a68d] Running
	I0501 03:45:37.985401   69237 system_pods.go:89] "metrics-server-569cc877fc-xwxx9" [a66f5df4-355c-47f0-8b6e-da29e1c4394e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0501 03:45:37.985405   69237 system_pods.go:89] "storage-provisioner" [debb3a59-143a-46d3-87da-c2403e264861] Running
	I0501 03:45:37.985412   69237 system_pods.go:126] duration metric: took 204.679545ms to wait for k8s-apps to be running ...
	I0501 03:45:37.985418   69237 system_svc.go:44] waiting for kubelet service to be running ....
	I0501 03:45:37.985463   69237 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 03:45:38.002421   69237 system_svc.go:56] duration metric: took 16.992346ms WaitForService to wait for kubelet
	I0501 03:45:38.002458   69237 kubeadm.go:576] duration metric: took 4.248332952s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0501 03:45:38.002477   69237 node_conditions.go:102] verifying NodePressure condition ...
	I0501 03:45:38.181465   69237 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 03:45:38.181496   69237 node_conditions.go:123] node cpu capacity is 2
	I0501 03:45:38.181510   69237 node_conditions.go:105] duration metric: took 179.027834ms to run NodePressure ...
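The node_conditions lines above read the node's reported capacity (ephemeral storage, CPU) and verify that no pressure conditions are set. A client-go sketch of the same inspection; the kubeconfig path is an assumption, the node name is taken from the log:

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // assumed path
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "default-k8s-diff-port-715118", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}

    	storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
    	cpu := node.Status.Capacity[corev1.ResourceCPU]
    	fmt.Printf("ephemeral storage: %s, cpu: %s\n", storage.String(), cpu.String())

    	// Flag the node if the kubelet reports memory, disk, or PID pressure.
    	for _, c := range node.Status.Conditions {
    		switch c.Type {
    		case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
    			if c.Status == corev1.ConditionTrue {
    				fmt.Printf("node has %s\n", c.Type)
    			}
    		}
    	}
    }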
	I0501 03:45:38.181524   69237 start.go:240] waiting for startup goroutines ...
	I0501 03:45:38.181534   69237 start.go:245] waiting for cluster config update ...
	I0501 03:45:38.181547   69237 start.go:254] writing updated cluster config ...
	I0501 03:45:38.181810   69237 ssh_runner.go:195] Run: rm -f paused
	I0501 03:45:38.244075   69237 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0501 03:45:38.246261   69237 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-715118" cluster and "default" namespace by default
	I0501 03:45:46.209837   69580 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0501 03:45:46.210120   69580 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0501 03:46:06.211471   69580 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0501 03:46:06.211673   69580 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0501 03:46:09.967666   68640 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.628454657s)
	I0501 03:46:09.967737   68640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 03:46:09.985802   68640 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0501 03:46:09.996494   68640 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0501 03:46:10.006956   68640 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0501 03:46:10.006979   68640 kubeadm.go:156] found existing configuration files:
	
	I0501 03:46:10.007025   68640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0501 03:46:10.017112   68640 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0501 03:46:10.017174   68640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0501 03:46:10.027747   68640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0501 03:46:10.037853   68640 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0501 03:46:10.037910   68640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0501 03:46:10.048023   68640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0501 03:46:10.057354   68640 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0501 03:46:10.057408   68640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0501 03:46:10.067352   68640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0501 03:46:10.076696   68640 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0501 03:46:10.076741   68640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
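	The grep/rm sequence above is minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, otherwise it is removed before the next kubeadm init. A rough shell equivalent of that loop, using the same endpoint string shown in the log:

	    endpoint="https://control-plane.minikube.internal:8443"
	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      # Remove the file when it is missing or points at a different endpoint.
	      if ! sudo grep -q "$endpoint" "/etc/kubernetes/$f" 2>/dev/null; then
	        sudo rm -f "/etc/kubernetes/$f"
	      fi
	    done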
	I0501 03:46:10.086799   68640 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0501 03:46:10.150816   68640 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0501 03:46:10.150871   68640 kubeadm.go:309] [preflight] Running pre-flight checks
	I0501 03:46:10.325430   68640 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0501 03:46:10.325546   68640 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0501 03:46:10.325669   68640 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0501 03:46:10.581934   68640 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0501 03:46:10.585119   68640 out.go:204]   - Generating certificates and keys ...
	I0501 03:46:10.585221   68640 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0501 03:46:10.585319   68640 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0501 03:46:10.585416   68640 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0501 03:46:10.585522   68640 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0501 03:46:10.585620   68640 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0501 03:46:10.585695   68640 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0501 03:46:10.585781   68640 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0501 03:46:10.585861   68640 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0501 03:46:10.585959   68640 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0501 03:46:10.586064   68640 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0501 03:46:10.586116   68640 kubeadm.go:309] [certs] Using the existing "sa" key
	I0501 03:46:10.586208   68640 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0501 03:46:10.789482   68640 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0501 03:46:10.991219   68640 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0501 03:46:11.194897   68640 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0501 03:46:11.411926   68640 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0501 03:46:11.994791   68640 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0501 03:46:11.995468   68640 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0501 03:46:11.998463   68640 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0501 03:46:12.000394   68640 out.go:204]   - Booting up control plane ...
	I0501 03:46:12.000521   68640 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0501 03:46:12.000632   68640 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0501 03:46:12.000721   68640 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0501 03:46:12.022371   68640 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0501 03:46:12.023628   68640 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0501 03:46:12.023709   68640 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0501 03:46:12.178475   68640 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0501 03:46:12.178615   68640 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0501 03:46:12.680307   68640 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 502.179909ms
	I0501 03:46:12.680409   68640 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0501 03:46:18.182830   68640 kubeadm.go:309] [api-check] The API server is healthy after 5.502227274s
	I0501 03:46:18.197822   68640 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0501 03:46:18.217282   68640 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0501 03:46:18.247591   68640 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0501 03:46:18.247833   68640 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-892672 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0501 03:46:18.259687   68640 kubeadm.go:309] [bootstrap-token] Using token: 8rc6kt.ele1oeavg6hezahw
	I0501 03:46:18.261204   68640 out.go:204]   - Configuring RBAC rules ...
	I0501 03:46:18.261333   68640 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0501 03:46:18.272461   68640 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0501 03:46:18.284615   68640 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0501 03:46:18.288686   68640 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0501 03:46:18.292005   68640 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0501 03:46:18.295772   68640 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0501 03:46:18.591035   68640 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0501 03:46:19.028299   68640 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0501 03:46:19.598192   68640 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0501 03:46:19.598219   68640 kubeadm.go:309] 
	I0501 03:46:19.598323   68640 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0501 03:46:19.598337   68640 kubeadm.go:309] 
	I0501 03:46:19.598490   68640 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0501 03:46:19.598514   68640 kubeadm.go:309] 
	I0501 03:46:19.598542   68640 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0501 03:46:19.598604   68640 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0501 03:46:19.598648   68640 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0501 03:46:19.598673   68640 kubeadm.go:309] 
	I0501 03:46:19.598771   68640 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0501 03:46:19.598784   68640 kubeadm.go:309] 
	I0501 03:46:19.598850   68640 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0501 03:46:19.598860   68640 kubeadm.go:309] 
	I0501 03:46:19.598963   68640 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0501 03:46:19.599069   68640 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0501 03:46:19.599167   68640 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0501 03:46:19.599183   68640 kubeadm.go:309] 
	I0501 03:46:19.599283   68640 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0501 03:46:19.599389   68640 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0501 03:46:19.599400   68640 kubeadm.go:309] 
	I0501 03:46:19.599500   68640 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 8rc6kt.ele1oeavg6hezahw \
	I0501 03:46:19.599626   68640 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:bd94cc6acfb95fe36f70b12c6a1f2980601b8235e7c4dea65cebdf35eb514754 \
	I0501 03:46:19.599666   68640 kubeadm.go:309] 	--control-plane 
	I0501 03:46:19.599676   68640 kubeadm.go:309] 
	I0501 03:46:19.599779   68640 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0501 03:46:19.599807   68640 kubeadm.go:309] 
	I0501 03:46:19.599931   68640 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 8rc6kt.ele1oeavg6hezahw \
	I0501 03:46:19.600079   68640 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:bd94cc6acfb95fe36f70b12c6a1f2980601b8235e7c4dea65cebdf35eb514754 
	I0501 03:46:19.600763   68640 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
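	The join commands printed by kubeadm above carry a bootstrap token and discovery hash that expire (24 hours by default). If a node needs to join later, a fresh join command can be generated on this control plane; these are standard kubeadm subcommands rather than anything the test itself runs:

	    # Print a complete, current join command (new token plus CA cert hash).
	    sudo kubeadm token create --print-join-command

	    # List existing bootstrap tokens and their remaining TTL.
	    sudo kubeadm token list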
	I0501 03:46:19.600786   68640 cni.go:84] Creating CNI manager for ""
	I0501 03:46:19.600792   68640 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0501 03:46:19.602473   68640 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0501 03:46:19.603816   68640 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0501 03:46:19.621706   68640 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
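	The 496-byte bridge conflist copied to /etc/cni/net.d/1-k8s.conflist is not shown in the log. Purely for illustration, a bridge CNI config of this shape might look roughly like the following; the plugin list, subnet, and file contents here are assumptions, not the bytes minikube actually wrote:

	    cat <<'EOF' | sudo tee /etc/cni/net.d/1-k8s.conflist
	    {
	      "cniVersion": "0.4.0",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isDefaultGateway": true,
	          "ipMasq": true,
	          "hairpinMode": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }
	    EOF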
	I0501 03:46:19.649643   68640 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0501 03:46:19.649762   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:19.649787   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-892672 minikube.k8s.io/updated_at=2024_05_01T03_46_19_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e minikube.k8s.io/name=no-preload-892672 minikube.k8s.io/primary=true
	I0501 03:46:19.892482   68640 ops.go:34] apiserver oom_adj: -16
	I0501 03:46:19.892631   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:20.393436   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:20.893412   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:21.393634   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:21.893273   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:22.393031   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:22.893498   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:23.393599   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:23.893024   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:24.393544   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:24.893431   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:25.393290   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:25.892718   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:26.392928   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:26.893101   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:27.393045   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:27.892722   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:28.393102   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:28.892871   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:29.392650   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:29.893034   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:30.393561   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:30.893661   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:31.393235   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:31.892889   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:32.393263   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:32.893427   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:33.046965   68640 kubeadm.go:1107] duration metric: took 13.397277159s to wait for elevateKubeSystemPrivileges
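	The run of 'kubectl get sa default' calls above is a simple poll: minikube repeats the command until the default service account exists, which accounts for most of the 13.4s elevateKubeSystemPrivileges duration reported here. A hedged equivalent of that wait from any machine with kubectl pointed at the new cluster:

	    # Poll every 0.5s until the default ServiceAccount appears.
	    until kubectl get serviceaccount default -n default >/dev/null 2>&1; do
	      sleep 0.5
	    done
	    echo "default service account is present"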
	W0501 03:46:33.047010   68640 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0501 03:46:33.047020   68640 kubeadm.go:393] duration metric: took 5m15.038324633s to StartCluster
	I0501 03:46:33.047042   68640 settings.go:142] acquiring lock: {Name:mkcfe08bcadd45d99c00ed1eaf4e9c226a489fe2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:46:33.047126   68640 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18779-13391/kubeconfig
	I0501 03:46:33.048731   68640 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/kubeconfig: {Name:mk5d1131557f885800877622151c0915337cb23a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:46:33.048988   68640 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.144 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0501 03:46:33.050376   68640 out.go:177] * Verifying Kubernetes components...
	I0501 03:46:33.049030   68640 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0501 03:46:33.049253   68640 config.go:182] Loaded profile config "no-preload-892672": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 03:46:33.051595   68640 addons.go:69] Setting storage-provisioner=true in profile "no-preload-892672"
	I0501 03:46:33.051616   68640 addons.go:69] Setting metrics-server=true in profile "no-preload-892672"
	I0501 03:46:33.051639   68640 addons.go:234] Setting addon storage-provisioner=true in "no-preload-892672"
	I0501 03:46:33.051644   68640 addons.go:234] Setting addon metrics-server=true in "no-preload-892672"
	W0501 03:46:33.051649   68640 addons.go:243] addon storage-provisioner should already be in state true
	W0501 03:46:33.051653   68640 addons.go:243] addon metrics-server should already be in state true
	I0501 03:46:33.051675   68640 host.go:66] Checking if "no-preload-892672" exists ...
	I0501 03:46:33.051680   68640 host.go:66] Checking if "no-preload-892672" exists ...
	I0501 03:46:33.051599   68640 addons.go:69] Setting default-storageclass=true in profile "no-preload-892672"
	I0501 03:46:33.051760   68640 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-892672"
	I0501 03:46:33.051600   68640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:46:33.052016   68640 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:46:33.052047   68640 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:46:33.052064   68640 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:46:33.052095   68640 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:46:33.052110   68640 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:46:33.052135   68640 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:46:33.068515   68640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46407
	I0501 03:46:33.069115   68640 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:46:33.069702   68640 main.go:141] libmachine: Using API Version  1
	I0501 03:46:33.069728   68640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:46:33.070085   68640 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:46:33.070731   68640 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:46:33.070763   68640 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:46:33.072166   68640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46609
	I0501 03:46:33.072179   68640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46109
	I0501 03:46:33.072632   68640 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:46:33.072770   68640 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:46:33.073161   68640 main.go:141] libmachine: Using API Version  1
	I0501 03:46:33.073180   68640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:46:33.073318   68640 main.go:141] libmachine: Using API Version  1
	I0501 03:46:33.073333   68640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:46:33.073467   68640 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:46:33.073893   68640 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:46:33.074056   68640 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:46:33.074065   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetState
	I0501 03:46:33.074092   68640 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:46:33.077976   68640 addons.go:234] Setting addon default-storageclass=true in "no-preload-892672"
	W0501 03:46:33.077997   68640 addons.go:243] addon default-storageclass should already be in state true
	I0501 03:46:33.078110   68640 host.go:66] Checking if "no-preload-892672" exists ...
	I0501 03:46:33.078535   68640 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:46:33.078566   68640 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:46:33.092605   68640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39707
	I0501 03:46:33.092996   68640 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:46:33.093578   68640 main.go:141] libmachine: Using API Version  1
	I0501 03:46:33.093597   68640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:46:33.093602   68640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36671
	I0501 03:46:33.093778   68640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34727
	I0501 03:46:33.093893   68640 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:46:33.094117   68640 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:46:33.094169   68640 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:46:33.094250   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetState
	I0501 03:46:33.094577   68640 main.go:141] libmachine: Using API Version  1
	I0501 03:46:33.094602   68640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:46:33.094986   68640 main.go:141] libmachine: Using API Version  1
	I0501 03:46:33.095004   68640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:46:33.095062   68640 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:46:33.095389   68640 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:46:33.096401   68640 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:46:33.096423   68640 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:46:33.096665   68640 main.go:141] libmachine: (no-preload-892672) Calling .DriverName
	I0501 03:46:33.096678   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetState
	I0501 03:46:33.098465   68640 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 03:46:33.099842   68640 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0501 03:46:33.099861   68640 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0501 03:46:33.099879   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:46:33.098734   68640 main.go:141] libmachine: (no-preload-892672) Calling .DriverName
	I0501 03:46:33.101305   68640 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0501 03:46:33.102491   68640 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0501 03:46:33.102512   68640 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0501 03:46:33.102531   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:46:33.103006   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:46:33.103617   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:46:33.103641   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:46:33.103799   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHPort
	I0501 03:46:33.103977   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:46:33.104143   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHUsername
	I0501 03:46:33.104272   68640 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/no-preload-892672/id_rsa Username:docker}
	I0501 03:46:33.105452   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:46:33.105795   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:46:33.105821   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:46:33.106142   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHPort
	I0501 03:46:33.106290   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:46:33.106392   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHUsername
	I0501 03:46:33.106511   68640 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/no-preload-892672/id_rsa Username:docker}
	I0501 03:46:33.113012   68640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42615
	I0501 03:46:33.113365   68640 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:46:33.113813   68640 main.go:141] libmachine: Using API Version  1
	I0501 03:46:33.113834   68640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:46:33.114127   68640 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:46:33.114304   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetState
	I0501 03:46:33.115731   68640 main.go:141] libmachine: (no-preload-892672) Calling .DriverName
	I0501 03:46:33.115997   68640 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0501 03:46:33.116010   68640 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0501 03:46:33.116023   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:46:33.119272   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:46:33.119644   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:46:33.119661   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:46:33.119845   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHPort
	I0501 03:46:33.120223   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:46:33.120358   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHUsername
	I0501 03:46:33.120449   68640 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/no-preload-892672/id_rsa Username:docker}
	I0501 03:46:33.296711   68640 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 03:46:33.342215   68640 node_ready.go:35] waiting up to 6m0s for node "no-preload-892672" to be "Ready" ...
	I0501 03:46:33.355677   68640 node_ready.go:49] node "no-preload-892672" has status "Ready":"True"
	I0501 03:46:33.355707   68640 node_ready.go:38] duration metric: took 13.392381ms for node "no-preload-892672" to be "Ready" ...
	I0501 03:46:33.355718   68640 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 03:46:33.367706   68640 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-57k52" in "kube-system" namespace to be "Ready" ...
	I0501 03:46:33.413444   68640 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0501 03:46:33.418869   68640 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0501 03:46:33.439284   68640 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0501 03:46:33.439318   68640 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0501 03:46:33.512744   68640 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0501 03:46:33.512768   68640 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0501 03:46:33.594777   68640 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0501 03:46:33.594798   68640 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0501 03:46:33.658506   68640 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
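	After the metrics-server manifests above are applied, minikube reports "Verifying addon metrics-server" further down. The same thing can be checked by hand with standard kubectl commands; the deployment name matches the metrics-server pod prefix seen later in this log, and v1beta1.metrics.k8s.io is the conventional APIService such manifests register. (In this particular run the addon uses a fake echoserver image, so the pod intentionally stays unready.)

	    # Wait for the metrics-server deployment and check its API registration.
	    kubectl -n kube-system rollout status deployment/metrics-server --timeout=120s
	    kubectl get apiservice v1beta1.metrics.k8s.io

	    # Once metrics are flowing, per-node usage should be returned.
	    kubectl top nodes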
	I0501 03:46:34.013890   68640 main.go:141] libmachine: Making call to close driver server
	I0501 03:46:34.013919   68640 main.go:141] libmachine: (no-preload-892672) Calling .Close
	I0501 03:46:34.014023   68640 main.go:141] libmachine: Making call to close driver server
	I0501 03:46:34.014056   68640 main.go:141] libmachine: (no-preload-892672) Calling .Close
	I0501 03:46:34.014250   68640 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:46:34.014284   68640 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:46:34.014297   68640 main.go:141] libmachine: Making call to close driver server
	I0501 03:46:34.014306   68640 main.go:141] libmachine: (no-preload-892672) Calling .Close
	I0501 03:46:34.014353   68640 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:46:34.014370   68640 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:46:34.014383   68640 main.go:141] libmachine: Making call to close driver server
	I0501 03:46:34.014393   68640 main.go:141] libmachine: (no-preload-892672) Calling .Close
	I0501 03:46:34.014642   68640 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:46:34.014664   68640 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:46:34.016263   68640 main.go:141] libmachine: (no-preload-892672) DBG | Closing plugin on server side
	I0501 03:46:34.016263   68640 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:46:34.016288   68640 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:46:34.031961   68640 main.go:141] libmachine: Making call to close driver server
	I0501 03:46:34.031996   68640 main.go:141] libmachine: (no-preload-892672) Calling .Close
	I0501 03:46:34.032303   68640 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:46:34.032324   68640 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:46:34.260154   68640 main.go:141] libmachine: Making call to close driver server
	I0501 03:46:34.260180   68640 main.go:141] libmachine: (no-preload-892672) Calling .Close
	I0501 03:46:34.260600   68640 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:46:34.260629   68640 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:46:34.260641   68640 main.go:141] libmachine: Making call to close driver server
	I0501 03:46:34.260650   68640 main.go:141] libmachine: (no-preload-892672) Calling .Close
	I0501 03:46:34.260876   68640 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:46:34.260888   68640 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:46:34.260899   68640 addons.go:470] Verifying addon metrics-server=true in "no-preload-892672"
	I0501 03:46:34.262520   68640 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0501 03:46:34.264176   68640 addons.go:505] duration metric: took 1.215147486s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0501 03:46:35.384910   68640 pod_ready.go:102] pod "coredns-7db6d8ff4d-57k52" in "kube-system" namespace has status "Ready":"False"
	I0501 03:46:36.377298   68640 pod_ready.go:92] pod "coredns-7db6d8ff4d-57k52" in "kube-system" namespace has status "Ready":"True"
	I0501 03:46:36.377321   68640 pod_ready.go:81] duration metric: took 3.009581117s for pod "coredns-7db6d8ff4d-57k52" in "kube-system" namespace to be "Ready" ...
	I0501 03:46:36.377331   68640 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-c6lnj" in "kube-system" namespace to be "Ready" ...
	I0501 03:46:36.383022   68640 pod_ready.go:92] pod "coredns-7db6d8ff4d-c6lnj" in "kube-system" namespace has status "Ready":"True"
	I0501 03:46:36.383042   68640 pod_ready.go:81] duration metric: took 5.704691ms for pod "coredns-7db6d8ff4d-c6lnj" in "kube-system" namespace to be "Ready" ...
	I0501 03:46:36.383051   68640 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	I0501 03:46:36.387456   68640 pod_ready.go:92] pod "etcd-no-preload-892672" in "kube-system" namespace has status "Ready":"True"
	I0501 03:46:36.387476   68640 pod_ready.go:81] duration metric: took 4.418883ms for pod "etcd-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	I0501 03:46:36.387485   68640 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	I0501 03:46:36.392348   68640 pod_ready.go:92] pod "kube-apiserver-no-preload-892672" in "kube-system" namespace has status "Ready":"True"
	I0501 03:46:36.392366   68640 pod_ready.go:81] duration metric: took 4.874928ms for pod "kube-apiserver-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	I0501 03:46:36.392375   68640 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	I0501 03:46:36.397155   68640 pod_ready.go:92] pod "kube-controller-manager-no-preload-892672" in "kube-system" namespace has status "Ready":"True"
	I0501 03:46:36.397175   68640 pod_ready.go:81] duration metric: took 4.794583ms for pod "kube-controller-manager-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	I0501 03:46:36.397185   68640 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-czsqz" in "kube-system" namespace to be "Ready" ...
	I0501 03:46:36.774003   68640 pod_ready.go:92] pod "kube-proxy-czsqz" in "kube-system" namespace has status "Ready":"True"
	I0501 03:46:36.774025   68640 pod_ready.go:81] duration metric: took 376.83321ms for pod "kube-proxy-czsqz" in "kube-system" namespace to be "Ready" ...
	I0501 03:46:36.774036   68640 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	I0501 03:46:37.171504   68640 pod_ready.go:92] pod "kube-scheduler-no-preload-892672" in "kube-system" namespace has status "Ready":"True"
	I0501 03:46:37.171526   68640 pod_ready.go:81] duration metric: took 397.484706ms for pod "kube-scheduler-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	I0501 03:46:37.171535   68640 pod_ready.go:38] duration metric: took 3.815806043s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 03:46:37.171549   68640 api_server.go:52] waiting for apiserver process to appear ...
	I0501 03:46:37.171609   68640 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:46:37.189446   68640 api_server.go:72] duration metric: took 4.140414812s to wait for apiserver process to appear ...
	I0501 03:46:37.189473   68640 api_server.go:88] waiting for apiserver healthz status ...
	I0501 03:46:37.189494   68640 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0501 03:46:37.195052   68640 api_server.go:279] https://192.168.39.144:8443/healthz returned 200:
	ok
	I0501 03:46:37.196163   68640 api_server.go:141] control plane version: v1.30.0
	I0501 03:46:37.196183   68640 api_server.go:131] duration metric: took 6.703804ms to wait for apiserver health ...
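	The healthz check above can be reproduced from outside the VM; the API server presents a certificate signed by minikube's own CA, so curl needs that CA (kept under the profile's .minikube directory, a custom MINIKUBE_HOME in this CI run) or -k. A minimal sketch with the IP and port from the log:

	    # Quick check, skipping certificate verification.
	    curl -k https://192.168.39.144:8443/healthz; echo

	    # Or verify against minikube's CA (default path; adjust for a custom MINIKUBE_HOME).
	    curl --cacert "$HOME/.minikube/ca.crt" https://192.168.39.144:8443/healthz; echo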
	I0501 03:46:37.196191   68640 system_pods.go:43] waiting for kube-system pods to appear ...
	I0501 03:46:37.375742   68640 system_pods.go:59] 9 kube-system pods found
	I0501 03:46:37.375775   68640 system_pods.go:61] "coredns-7db6d8ff4d-57k52" [f98cb358-71ba-49c5-8213-0f3160c6e38b] Running
	I0501 03:46:37.375784   68640 system_pods.go:61] "coredns-7db6d8ff4d-c6lnj" [f8b8c1f1-7696-43f2-98be-339f99963e7c] Running
	I0501 03:46:37.375789   68640 system_pods.go:61] "etcd-no-preload-892672" [5f92eb1b-6611-4663-95f0-8c071a3a37c9] Running
	I0501 03:46:37.375796   68640 system_pods.go:61] "kube-apiserver-no-preload-892672" [90bcaa82-61b0-49d5-b50c-76288b099683] Running
	I0501 03:46:37.375804   68640 system_pods.go:61] "kube-controller-manager-no-preload-892672" [f80af654-aa81-4cd2-b5ce-4f31f6e49e9f] Running
	I0501 03:46:37.375809   68640 system_pods.go:61] "kube-proxy-czsqz" [4254b019-b6c8-4ff9-a361-c96eaf20dc65] Running
	I0501 03:46:37.375813   68640 system_pods.go:61] "kube-scheduler-no-preload-892672" [6753a5df-86d1-47bf-9514-6b8352acf969] Running
	I0501 03:46:37.375824   68640 system_pods.go:61] "metrics-server-569cc877fc-5m5qf" [a1ec3e6c-fe90-4168-b0ec-54f82f17b46d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0501 03:46:37.375830   68640 system_pods.go:61] "storage-provisioner" [b55b7e8b-4de0-40f8-96ff-bf0b550699d1] Running
	I0501 03:46:37.375841   68640 system_pods.go:74] duration metric: took 179.642731ms to wait for pod list to return data ...
	I0501 03:46:37.375857   68640 default_sa.go:34] waiting for default service account to be created ...
	I0501 03:46:37.572501   68640 default_sa.go:45] found service account: "default"
	I0501 03:46:37.572530   68640 default_sa.go:55] duration metric: took 196.664812ms for default service account to be created ...
	I0501 03:46:37.572542   68640 system_pods.go:116] waiting for k8s-apps to be running ...
	I0501 03:46:37.778012   68640 system_pods.go:86] 9 kube-system pods found
	I0501 03:46:37.778053   68640 system_pods.go:89] "coredns-7db6d8ff4d-57k52" [f98cb358-71ba-49c5-8213-0f3160c6e38b] Running
	I0501 03:46:37.778062   68640 system_pods.go:89] "coredns-7db6d8ff4d-c6lnj" [f8b8c1f1-7696-43f2-98be-339f99963e7c] Running
	I0501 03:46:37.778068   68640 system_pods.go:89] "etcd-no-preload-892672" [5f92eb1b-6611-4663-95f0-8c071a3a37c9] Running
	I0501 03:46:37.778075   68640 system_pods.go:89] "kube-apiserver-no-preload-892672" [90bcaa82-61b0-49d5-b50c-76288b099683] Running
	I0501 03:46:37.778082   68640 system_pods.go:89] "kube-controller-manager-no-preload-892672" [f80af654-aa81-4cd2-b5ce-4f31f6e49e9f] Running
	I0501 03:46:37.778088   68640 system_pods.go:89] "kube-proxy-czsqz" [4254b019-b6c8-4ff9-a361-c96eaf20dc65] Running
	I0501 03:46:37.778094   68640 system_pods.go:89] "kube-scheduler-no-preload-892672" [6753a5df-86d1-47bf-9514-6b8352acf969] Running
	I0501 03:46:37.778104   68640 system_pods.go:89] "metrics-server-569cc877fc-5m5qf" [a1ec3e6c-fe90-4168-b0ec-54f82f17b46d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0501 03:46:37.778112   68640 system_pods.go:89] "storage-provisioner" [b55b7e8b-4de0-40f8-96ff-bf0b550699d1] Running
	I0501 03:46:37.778127   68640 system_pods.go:126] duration metric: took 205.578312ms to wait for k8s-apps to be running ...
	I0501 03:46:37.778148   68640 system_svc.go:44] waiting for kubelet service to be running ....
	I0501 03:46:37.778215   68640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 03:46:37.794660   68640 system_svc.go:56] duration metric: took 16.509214ms WaitForService to wait for kubelet
	I0501 03:46:37.794694   68640 kubeadm.go:576] duration metric: took 4.745668881s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0501 03:46:37.794721   68640 node_conditions.go:102] verifying NodePressure condition ...
	I0501 03:46:37.972621   68640 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 03:46:37.972647   68640 node_conditions.go:123] node cpu capacity is 2
	I0501 03:46:37.972660   68640 node_conditions.go:105] duration metric: took 177.933367ms to run NodePressure ...
	I0501 03:46:37.972676   68640 start.go:240] waiting for startup goroutines ...
	I0501 03:46:37.972684   68640 start.go:245] waiting for cluster config update ...
	I0501 03:46:37.972699   68640 start.go:254] writing updated cluster config ...
	I0501 03:46:37.972951   68640 ssh_runner.go:195] Run: rm -f paused
	I0501 03:46:38.023054   68640 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0501 03:46:38.025098   68640 out.go:177] * Done! kubectl is now configured to use "no-preload-892672" cluster and "default" namespace by default
	I0501 03:46:46.214470   69580 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0501 03:46:46.214695   69580 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0501 03:46:46.214721   69580 kubeadm.go:309] 
	I0501 03:46:46.214770   69580 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0501 03:46:46.214837   69580 kubeadm.go:309] 		timed out waiting for the condition
	I0501 03:46:46.214875   69580 kubeadm.go:309] 
	I0501 03:46:46.214936   69580 kubeadm.go:309] 	This error is likely caused by:
	I0501 03:46:46.214983   69580 kubeadm.go:309] 		- The kubelet is not running
	I0501 03:46:46.215076   69580 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0501 03:46:46.215084   69580 kubeadm.go:309] 
	I0501 03:46:46.215169   69580 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0501 03:46:46.215201   69580 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0501 03:46:46.215233   69580 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0501 03:46:46.215239   69580 kubeadm.go:309] 
	I0501 03:46:46.215380   69580 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0501 03:46:46.215489   69580 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0501 03:46:46.215505   69580 kubeadm.go:309] 
	I0501 03:46:46.215657   69580 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0501 03:46:46.215782   69580 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0501 03:46:46.215882   69580 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0501 03:46:46.215972   69580 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0501 03:46:46.215984   69580 kubeadm.go:309] 
	I0501 03:46:46.217243   69580 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0501 03:46:46.217352   69580 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0501 03:46:46.217426   69580 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0501 03:46:46.217550   69580 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
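	This second kubeadm init (v1.20.0) fails with the same kubelet health timeout, and the kubeadm output itself lists the triage steps. Collected into one runnable sketch on the node, using the same crio socket path the log shows:

	    # Kubelet status and recent logs.
	    sudo systemctl status kubelet --no-pager
	    sudo journalctl -xeu kubelet --no-pager | tail -n 100

	    # Control-plane containers as CRI-O sees them, excluding pause containers.
	    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

	    # Then inspect a failing container (CONTAINERID is a placeholder).
	    # sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID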
	
	I0501 03:46:46.217611   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0501 03:46:47.375634   69580 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.157990231s)
	I0501 03:46:47.375723   69580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 03:46:47.392333   69580 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0501 03:46:47.404983   69580 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0501 03:46:47.405007   69580 kubeadm.go:156] found existing configuration files:
	
	I0501 03:46:47.405054   69580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0501 03:46:47.417437   69580 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0501 03:46:47.417501   69580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0501 03:46:47.429929   69580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0501 03:46:47.441141   69580 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0501 03:46:47.441215   69580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0501 03:46:47.453012   69580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0501 03:46:47.463702   69580 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0501 03:46:47.463759   69580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0501 03:46:47.474783   69580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0501 03:46:47.485793   69580 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0501 03:46:47.485853   69580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0501 03:46:47.497706   69580 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0501 03:46:47.588221   69580 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0501 03:46:47.588340   69580 kubeadm.go:309] [preflight] Running pre-flight checks
	I0501 03:46:47.759631   69580 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0501 03:46:47.759801   69580 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0501 03:46:47.759949   69580 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0501 03:46:47.978077   69580 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0501 03:46:47.980130   69580 out.go:204]   - Generating certificates and keys ...
	I0501 03:46:47.980240   69580 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0501 03:46:47.980323   69580 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0501 03:46:47.980455   69580 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0501 03:46:47.980579   69580 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0501 03:46:47.980679   69580 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0501 03:46:47.980771   69580 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0501 03:46:47.980864   69580 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0501 03:46:47.981256   69580 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0501 03:46:47.981616   69580 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0501 03:46:47.981858   69580 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0501 03:46:47.981907   69580 kubeadm.go:309] [certs] Using the existing "sa" key
	I0501 03:46:47.981991   69580 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0501 03:46:48.100377   69580 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0501 03:46:48.463892   69580 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0501 03:46:48.521991   69580 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0501 03:46:48.735222   69580 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0501 03:46:48.753098   69580 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0501 03:46:48.756950   69580 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0501 03:46:48.757379   69580 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0501 03:46:48.937039   69580 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0501 03:46:48.939065   69580 out.go:204]   - Booting up control plane ...
	I0501 03:46:48.939183   69580 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0501 03:46:48.961380   69580 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0501 03:46:48.962890   69580 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0501 03:46:48.963978   69580 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0501 03:46:48.971754   69580 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0501 03:47:28.974873   69580 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0501 03:47:28.975296   69580 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0501 03:47:28.975545   69580 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0501 03:47:33.976469   69580 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0501 03:47:33.976699   69580 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0501 03:47:43.977443   69580 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0501 03:47:43.977663   69580 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0501 03:48:03.979113   69580 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0501 03:48:03.979409   69580 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0501 03:48:43.982479   69580 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0501 03:48:43.982781   69580 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0501 03:48:43.983363   69580 kubeadm.go:309] 
	I0501 03:48:43.983427   69580 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0501 03:48:43.983484   69580 kubeadm.go:309] 		timed out waiting for the condition
	I0501 03:48:43.983490   69580 kubeadm.go:309] 
	I0501 03:48:43.983520   69580 kubeadm.go:309] 	This error is likely caused by:
	I0501 03:48:43.983547   69580 kubeadm.go:309] 		- The kubelet is not running
	I0501 03:48:43.983633   69580 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0501 03:48:43.983637   69580 kubeadm.go:309] 
	I0501 03:48:43.983721   69580 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0501 03:48:43.983748   69580 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0501 03:48:43.983774   69580 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0501 03:48:43.983778   69580 kubeadm.go:309] 
	I0501 03:48:43.983861   69580 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0501 03:48:43.983928   69580 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0501 03:48:43.983932   69580 kubeadm.go:309] 
	I0501 03:48:43.984023   69580 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0501 03:48:43.984094   69580 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0501 03:48:43.984155   69580 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0501 03:48:43.984212   69580 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0501 03:48:43.984216   69580 kubeadm.go:309] 
	I0501 03:48:43.985577   69580 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0501 03:48:43.985777   69580 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0501 03:48:43.985875   69580 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
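The [kubelet-check] lines above record kubeadm repeatedly probing the kubelet's local health endpoint and getting connection refused until the 4m0s wait-control-plane deadline expires. A minimal sketch of such a probe, assuming only the http://localhost:10248/healthz URL quoted in the log; this is illustrative, not kubeadm's implementation:

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    // probeKubeletHealthz performs the same kind of check the [kubelet-check]
    // lines describe: GET http://localhost:10248/healthz. When the kubelet is
    // not running, this returns a "connection refused" error like the one logged.
    func probeKubeletHealthz() error {
        client := &http.Client{Timeout: 5 * time.Second}
        resp, err := client.Get("http://localhost:10248/healthz")
        if err != nil {
            return fmt.Errorf("kubelet not responding: %w", err)
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("kubelet unhealthy: HTTP %d", resp.StatusCode)
        }
        return nil
    }

    func main() {
        if err := probeKubeletHealthz(); err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("kubelet healthz OK")
    }
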
	I0501 03:48:43.985971   69580 kubeadm.go:393] duration metric: took 8m0.315126498s to StartCluster
	I0501 03:48:43.986025   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:48:43.986092   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:48:44.038296   69580 cri.go:89] found id: ""
	I0501 03:48:44.038328   69580 logs.go:276] 0 containers: []
	W0501 03:48:44.038339   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:48:44.038346   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:48:44.038426   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:48:44.081855   69580 cri.go:89] found id: ""
	I0501 03:48:44.081891   69580 logs.go:276] 0 containers: []
	W0501 03:48:44.081904   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:48:44.081913   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:48:44.081996   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:48:44.131400   69580 cri.go:89] found id: ""
	I0501 03:48:44.131435   69580 logs.go:276] 0 containers: []
	W0501 03:48:44.131445   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:48:44.131451   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:48:44.131519   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:48:44.178274   69580 cri.go:89] found id: ""
	I0501 03:48:44.178302   69580 logs.go:276] 0 containers: []
	W0501 03:48:44.178310   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:48:44.178316   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:48:44.178376   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:48:44.223087   69580 cri.go:89] found id: ""
	I0501 03:48:44.223115   69580 logs.go:276] 0 containers: []
	W0501 03:48:44.223125   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:48:44.223133   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:48:44.223196   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:48:44.266093   69580 cri.go:89] found id: ""
	I0501 03:48:44.266122   69580 logs.go:276] 0 containers: []
	W0501 03:48:44.266135   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:48:44.266143   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:48:44.266204   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:48:44.307766   69580 cri.go:89] found id: ""
	I0501 03:48:44.307795   69580 logs.go:276] 0 containers: []
	W0501 03:48:44.307806   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:48:44.307813   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:48:44.307876   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:48:44.348548   69580 cri.go:89] found id: ""
	I0501 03:48:44.348576   69580 logs.go:276] 0 containers: []
	W0501 03:48:44.348585   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:48:44.348594   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:48:44.348614   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:48:44.394160   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:48:44.394209   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:48:44.449845   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:48:44.449879   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:48:44.467663   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:48:44.467694   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:48:44.556150   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
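The describe-nodes step fails because nothing is listening on the API server port that kubectl reports as refused. A small illustrative reachability check, assuming only the localhost:8443 address from the stderr above:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // checkAPIServer dials the address kubectl reported as refused. With no
    // kube-apiserver container running, the dial fails with "connection refused",
    // which is why "describe nodes" could not be gathered here.
    func checkAPIServer(addr string) error {
        conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
        if err != nil {
            return err
        }
        conn.Close()
        return nil
    }

    func main() {
        if err := checkAPIServer("localhost:8443"); err != nil {
            fmt.Println("apiserver unreachable:", err)
            return
        }
        fmt.Println("apiserver port is accepting connections")
    }
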
	I0501 03:48:44.556183   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:48:44.556199   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0501 03:48:44.661110   69580 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0501 03:48:44.661169   69580 out.go:239] * 
	W0501 03:48:44.661226   69580 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0501 03:48:44.661246   69580 out.go:239] * 
	W0501 03:48:44.662064   69580 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0501 03:48:44.665608   69580 out.go:177] 
	W0501 03:48:44.666799   69580 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0501 03:48:44.666851   69580 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0501 03:48:44.666870   69580 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0501 03:48:44.668487   69580 out.go:177] 
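The kubeadm advice above amounts to two manual checks: inspect the kubelet unit ('systemctl status kubelet', 'journalctl -xeu kubelet') and list Kubernetes containers through the CRI-O socket. A small sketch that shells out to the crictl command quoted in the error text; the command and socket path come from the log, the wrapper itself is only illustrative:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // listKubeContainers runs the crictl invocation suggested in the kubeadm
    // error message to see whether any control-plane containers were created.
    // The log's version additionally pipes through "grep kube | grep -v pause".
    func listKubeContainers() (string, error) {
        out, err := exec.Command("sudo", "crictl",
            "--runtime-endpoint", "/var/run/crio/crio.sock",
            "ps", "-a").CombinedOutput()
        return string(out), err
    }

    func main() {
        out, err := listKubeContainers()
        if err != nil {
            fmt.Println("crictl failed:", err)
        }
        fmt.Print(out)
    }
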
	
	
	==> CRI-O <==
	May 01 03:53:37 embed-certs-277128 crio[724]: time="2024-05-01 03:53:37.705755488Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714535617705738112,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133261,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=98c42c85-bd26-4bf6-9792-15c03f3513dc name=/runtime.v1.ImageService/ImageFsInfo
	May 01 03:53:37 embed-certs-277128 crio[724]: time="2024-05-01 03:53:37.706624160Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e0e62d74-e564-4957-9105-706723168113 name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:53:37 embed-certs-277128 crio[724]: time="2024-05-01 03:53:37.706922265Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e0e62d74-e564-4957-9105-706723168113 name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:53:37 embed-certs-277128 crio[724]: time="2024-05-01 03:53:37.707099640Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f9a8d2f0f9453969b410fb7f880f6cf4c7e6e2f3c17a687583906efe4181dd17,PodSandboxId:547fe01dd31038fc019087a6db1fa7e7b44ea15a157584ffbd0aa6cdd3cf08b4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714534836531788505,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 785be666-58d5-4b9d-92fd-bcacdbdebeb2,},Annotations:map[string]string{io.kubernetes.container.hash: c88b4da4,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3c74de489af3d78a4b0cf620ed16e1fba7c9ce1c7021a15c5b4bd241b7a934a,PodSandboxId:aa9d355e603c7861a2f071569dfb4a7cb20ec2430f8bdd0246d00adc0e5ec201,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714534822085371460,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sjplt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6701ee8e-0630-4332-b01c-26741ed3a7b7,},Annotations:map[string]string{io.kubernetes.container.hash: c52dc745,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eaaa2457f3825d23c9124baf727b248c8ae44a540669b26c888b887edb6e6096,PodSandboxId:b316d2fa718c57ee546cd0e7c6676cf7f048c4b01def7a73cbb35a78db72fc65,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1714534816519344878,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kuberne
tes.pod.uid: ff71ad09-63da-4dd0-99c7-9ffb0f4eeae3,},Annotations:map[string]string{io.kubernetes.container.hash: 85b1f6f5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94afdb03c382269209154b2e73edbf200662e89960c248ba775a0ef2b8fbe6b1,PodSandboxId:19d7e38955886efeca25c599f334336ad453e231add7410a16e538399ce6da41,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714534805698551271,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-phx7x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56c0381e-c140-4f69-b
be4-09d393db8b23,},Annotations:map[string]string{io.kubernetes.container.hash: e5799dc5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aaae36261c5ce1accf3bfb5993be5a256a037a640d5f6f30de8384900dda06b2,PodSandboxId:547fe01dd31038fc019087a6db1fa7e7b44ea15a157584ffbd0aa6cdd3cf08b4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714534805694050491,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 785be666-58d5-4b9d-92fd-bcacdbdeb
eb2,},Annotations:map[string]string{io.kubernetes.container.hash: c88b4da4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1813f35574f4f0398e7d482fe77c5d7f8490726666a277035bda0a5cff80af6e,PodSandboxId:7212f5087ba09b79f58f15756887b1d9e38cf5501f38802286314c3be8daf914,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714534801956494338,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-277128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d93b953fdcb3197a925f72d5f1925818,},Annota
tions:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e7158f7ff3922e97ba5ee97108acc1d0ba0350dff2f9aa56c2f71519108791c,PodSandboxId:cd226d9eea9632ec815202404544eb5687a36a3097cab2af50e23979f4fc5026,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714534801936348492,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-277128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e21a7c0a2e06e5de26960a82e6
6d8e6d,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a96815c49ac45b34c359d7171b30be5c25cee80fae08d947fc004d479693df00,PodSandboxId:464a0acb133488889f9601dcdece2117c4eb53e229a62c35b942da265898373e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714534801919027276,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-277128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6dee1fba7311ab90adf2d7b6467002b,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7e88cfe1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d109948ffbbddd2e38484d83d2260690afce3c51e16abff0fe8713e844bfdcb3,PodSandboxId:f6338b841057be5ce903e4539b40e972adb0d1a022af422482cc77db570d5486,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714534801887944145,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-277128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68d692155d566ac180b3b7676623c918,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 5b59e402,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e0e62d74-e564-4957-9105-706723168113 name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:53:37 embed-certs-277128 crio[724]: time="2024-05-01 03:53:37.751970323Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a4d2ca69-c81e-4462-89bb-caf791646b28 name=/runtime.v1.RuntimeService/Version
	May 01 03:53:37 embed-certs-277128 crio[724]: time="2024-05-01 03:53:37.752042023Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a4d2ca69-c81e-4462-89bb-caf791646b28 name=/runtime.v1.RuntimeService/Version
	May 01 03:53:37 embed-certs-277128 crio[724]: time="2024-05-01 03:53:37.753218243Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9b742cc6-b464-4096-bbc8-0a1619f194e7 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 03:53:37 embed-certs-277128 crio[724]: time="2024-05-01 03:53:37.754342482Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714535617754316626,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133261,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9b742cc6-b464-4096-bbc8-0a1619f194e7 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 03:53:37 embed-certs-277128 crio[724]: time="2024-05-01 03:53:37.755320285Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=23bfd8b0-f8f4-40b6-8f4b-d16466743b3b name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:53:37 embed-certs-277128 crio[724]: time="2024-05-01 03:53:37.755371495Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=23bfd8b0-f8f4-40b6-8f4b-d16466743b3b name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:53:37 embed-certs-277128 crio[724]: time="2024-05-01 03:53:37.755621803Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f9a8d2f0f9453969b410fb7f880f6cf4c7e6e2f3c17a687583906efe4181dd17,PodSandboxId:547fe01dd31038fc019087a6db1fa7e7b44ea15a157584ffbd0aa6cdd3cf08b4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714534836531788505,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 785be666-58d5-4b9d-92fd-bcacdbdebeb2,},Annotations:map[string]string{io.kubernetes.container.hash: c88b4da4,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3c74de489af3d78a4b0cf620ed16e1fba7c9ce1c7021a15c5b4bd241b7a934a,PodSandboxId:aa9d355e603c7861a2f071569dfb4a7cb20ec2430f8bdd0246d00adc0e5ec201,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714534822085371460,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sjplt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6701ee8e-0630-4332-b01c-26741ed3a7b7,},Annotations:map[string]string{io.kubernetes.container.hash: c52dc745,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eaaa2457f3825d23c9124baf727b248c8ae44a540669b26c888b887edb6e6096,PodSandboxId:b316d2fa718c57ee546cd0e7c6676cf7f048c4b01def7a73cbb35a78db72fc65,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1714534816519344878,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kuberne
tes.pod.uid: ff71ad09-63da-4dd0-99c7-9ffb0f4eeae3,},Annotations:map[string]string{io.kubernetes.container.hash: 85b1f6f5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94afdb03c382269209154b2e73edbf200662e89960c248ba775a0ef2b8fbe6b1,PodSandboxId:19d7e38955886efeca25c599f334336ad453e231add7410a16e538399ce6da41,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714534805698551271,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-phx7x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56c0381e-c140-4f69-b
be4-09d393db8b23,},Annotations:map[string]string{io.kubernetes.container.hash: e5799dc5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aaae36261c5ce1accf3bfb5993be5a256a037a640d5f6f30de8384900dda06b2,PodSandboxId:547fe01dd31038fc019087a6db1fa7e7b44ea15a157584ffbd0aa6cdd3cf08b4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714534805694050491,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 785be666-58d5-4b9d-92fd-bcacdbdeb
eb2,},Annotations:map[string]string{io.kubernetes.container.hash: c88b4da4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1813f35574f4f0398e7d482fe77c5d7f8490726666a277035bda0a5cff80af6e,PodSandboxId:7212f5087ba09b79f58f15756887b1d9e38cf5501f38802286314c3be8daf914,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714534801956494338,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-277128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d93b953fdcb3197a925f72d5f1925818,},Annota
tions:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e7158f7ff3922e97ba5ee97108acc1d0ba0350dff2f9aa56c2f71519108791c,PodSandboxId:cd226d9eea9632ec815202404544eb5687a36a3097cab2af50e23979f4fc5026,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714534801936348492,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-277128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e21a7c0a2e06e5de26960a82e6
6d8e6d,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a96815c49ac45b34c359d7171b30be5c25cee80fae08d947fc004d479693df00,PodSandboxId:464a0acb133488889f9601dcdece2117c4eb53e229a62c35b942da265898373e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714534801919027276,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-277128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6dee1fba7311ab90adf2d7b6467002b,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7e88cfe1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d109948ffbbddd2e38484d83d2260690afce3c51e16abff0fe8713e844bfdcb3,PodSandboxId:f6338b841057be5ce903e4539b40e972adb0d1a022af422482cc77db570d5486,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714534801887944145,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-277128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68d692155d566ac180b3b7676623c918,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 5b59e402,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=23bfd8b0-f8f4-40b6-8f4b-d16466743b3b name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:53:37 embed-certs-277128 crio[724]: time="2024-05-01 03:53:37.798050670Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=66ae42e7-297b-4094-8faa-63f1d7281d13 name=/runtime.v1.RuntimeService/Version
	May 01 03:53:37 embed-certs-277128 crio[724]: time="2024-05-01 03:53:37.798205213Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=66ae42e7-297b-4094-8faa-63f1d7281d13 name=/runtime.v1.RuntimeService/Version
	May 01 03:53:37 embed-certs-277128 crio[724]: time="2024-05-01 03:53:37.800666075Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=14370df3-b43f-4f0b-9f11-72b3909986d4 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 03:53:37 embed-certs-277128 crio[724]: time="2024-05-01 03:53:37.802593946Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714535617802567326,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133261,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=14370df3-b43f-4f0b-9f11-72b3909986d4 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 03:53:37 embed-certs-277128 crio[724]: time="2024-05-01 03:53:37.803672506Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4b4ab38b-1884-4cd1-87d3-dd61c7948184 name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:53:37 embed-certs-277128 crio[724]: time="2024-05-01 03:53:37.803840163Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4b4ab38b-1884-4cd1-87d3-dd61c7948184 name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:53:37 embed-certs-277128 crio[724]: time="2024-05-01 03:53:37.804215567Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f9a8d2f0f9453969b410fb7f880f6cf4c7e6e2f3c17a687583906efe4181dd17,PodSandboxId:547fe01dd31038fc019087a6db1fa7e7b44ea15a157584ffbd0aa6cdd3cf08b4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714534836531788505,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 785be666-58d5-4b9d-92fd-bcacdbdebeb2,},Annotations:map[string]string{io.kubernetes.container.hash: c88b4da4,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3c74de489af3d78a4b0cf620ed16e1fba7c9ce1c7021a15c5b4bd241b7a934a,PodSandboxId:aa9d355e603c7861a2f071569dfb4a7cb20ec2430f8bdd0246d00adc0e5ec201,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714534822085371460,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sjplt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6701ee8e-0630-4332-b01c-26741ed3a7b7,},Annotations:map[string]string{io.kubernetes.container.hash: c52dc745,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eaaa2457f3825d23c9124baf727b248c8ae44a540669b26c888b887edb6e6096,PodSandboxId:b316d2fa718c57ee546cd0e7c6676cf7f048c4b01def7a73cbb35a78db72fc65,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1714534816519344878,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kuberne
tes.pod.uid: ff71ad09-63da-4dd0-99c7-9ffb0f4eeae3,},Annotations:map[string]string{io.kubernetes.container.hash: 85b1f6f5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94afdb03c382269209154b2e73edbf200662e89960c248ba775a0ef2b8fbe6b1,PodSandboxId:19d7e38955886efeca25c599f334336ad453e231add7410a16e538399ce6da41,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714534805698551271,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-phx7x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56c0381e-c140-4f69-b
be4-09d393db8b23,},Annotations:map[string]string{io.kubernetes.container.hash: e5799dc5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aaae36261c5ce1accf3bfb5993be5a256a037a640d5f6f30de8384900dda06b2,PodSandboxId:547fe01dd31038fc019087a6db1fa7e7b44ea15a157584ffbd0aa6cdd3cf08b4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714534805694050491,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 785be666-58d5-4b9d-92fd-bcacdbdeb
eb2,},Annotations:map[string]string{io.kubernetes.container.hash: c88b4da4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1813f35574f4f0398e7d482fe77c5d7f8490726666a277035bda0a5cff80af6e,PodSandboxId:7212f5087ba09b79f58f15756887b1d9e38cf5501f38802286314c3be8daf914,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714534801956494338,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-277128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d93b953fdcb3197a925f72d5f1925818,},Annota
tions:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e7158f7ff3922e97ba5ee97108acc1d0ba0350dff2f9aa56c2f71519108791c,PodSandboxId:cd226d9eea9632ec815202404544eb5687a36a3097cab2af50e23979f4fc5026,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714534801936348492,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-277128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e21a7c0a2e06e5de26960a82e6
6d8e6d,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a96815c49ac45b34c359d7171b30be5c25cee80fae08d947fc004d479693df00,PodSandboxId:464a0acb133488889f9601dcdece2117c4eb53e229a62c35b942da265898373e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714534801919027276,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-277128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6dee1fba7311ab90adf2d7b6467002b,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7e88cfe1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d109948ffbbddd2e38484d83d2260690afce3c51e16abff0fe8713e844bfdcb3,PodSandboxId:f6338b841057be5ce903e4539b40e972adb0d1a022af422482cc77db570d5486,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714534801887944145,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-277128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68d692155d566ac180b3b7676623c918,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 5b59e402,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4b4ab38b-1884-4cd1-87d3-dd61c7948184 name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:53:37 embed-certs-277128 crio[724]: time="2024-05-01 03:53:37.846101246Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=74dfa158-3df0-4224-adab-08dbf79b8f47 name=/runtime.v1.RuntimeService/Version
	May 01 03:53:37 embed-certs-277128 crio[724]: time="2024-05-01 03:53:37.846299818Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=74dfa158-3df0-4224-adab-08dbf79b8f47 name=/runtime.v1.RuntimeService/Version
	May 01 03:53:37 embed-certs-277128 crio[724]: time="2024-05-01 03:53:37.850039332Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7428005b-7c1a-4d11-8b06-08bf214fdc9a name=/runtime.v1.ImageService/ImageFsInfo
	May 01 03:53:37 embed-certs-277128 crio[724]: time="2024-05-01 03:53:37.850506322Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714535617850481395,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133261,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7428005b-7c1a-4d11-8b06-08bf214fdc9a name=/runtime.v1.ImageService/ImageFsInfo
	May 01 03:53:37 embed-certs-277128 crio[724]: time="2024-05-01 03:53:37.851935065Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5c84e1d0-8bf0-4d13-87f1-be8564b2b331 name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:53:37 embed-certs-277128 crio[724]: time="2024-05-01 03:53:37.851991845Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5c84e1d0-8bf0-4d13-87f1-be8564b2b331 name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:53:37 embed-certs-277128 crio[724]: time="2024-05-01 03:53:37.852506683Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f9a8d2f0f9453969b410fb7f880f6cf4c7e6e2f3c17a687583906efe4181dd17,PodSandboxId:547fe01dd31038fc019087a6db1fa7e7b44ea15a157584ffbd0aa6cdd3cf08b4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714534836531788505,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 785be666-58d5-4b9d-92fd-bcacdbdebeb2,},Annotations:map[string]string{io.kubernetes.container.hash: c88b4da4,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3c74de489af3d78a4b0cf620ed16e1fba7c9ce1c7021a15c5b4bd241b7a934a,PodSandboxId:aa9d355e603c7861a2f071569dfb4a7cb20ec2430f8bdd0246d00adc0e5ec201,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714534822085371460,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sjplt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6701ee8e-0630-4332-b01c-26741ed3a7b7,},Annotations:map[string]string{io.kubernetes.container.hash: c52dc745,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eaaa2457f3825d23c9124baf727b248c8ae44a540669b26c888b887edb6e6096,PodSandboxId:b316d2fa718c57ee546cd0e7c6676cf7f048c4b01def7a73cbb35a78db72fc65,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1714534816519344878,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kuberne
tes.pod.uid: ff71ad09-63da-4dd0-99c7-9ffb0f4eeae3,},Annotations:map[string]string{io.kubernetes.container.hash: 85b1f6f5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94afdb03c382269209154b2e73edbf200662e89960c248ba775a0ef2b8fbe6b1,PodSandboxId:19d7e38955886efeca25c599f334336ad453e231add7410a16e538399ce6da41,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714534805698551271,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-phx7x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56c0381e-c140-4f69-b
be4-09d393db8b23,},Annotations:map[string]string{io.kubernetes.container.hash: e5799dc5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aaae36261c5ce1accf3bfb5993be5a256a037a640d5f6f30de8384900dda06b2,PodSandboxId:547fe01dd31038fc019087a6db1fa7e7b44ea15a157584ffbd0aa6cdd3cf08b4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714534805694050491,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 785be666-58d5-4b9d-92fd-bcacdbdeb
eb2,},Annotations:map[string]string{io.kubernetes.container.hash: c88b4da4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1813f35574f4f0398e7d482fe77c5d7f8490726666a277035bda0a5cff80af6e,PodSandboxId:7212f5087ba09b79f58f15756887b1d9e38cf5501f38802286314c3be8daf914,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714534801956494338,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-277128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d93b953fdcb3197a925f72d5f1925818,},Annota
tions:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e7158f7ff3922e97ba5ee97108acc1d0ba0350dff2f9aa56c2f71519108791c,PodSandboxId:cd226d9eea9632ec815202404544eb5687a36a3097cab2af50e23979f4fc5026,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714534801936348492,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-277128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e21a7c0a2e06e5de26960a82e6
6d8e6d,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a96815c49ac45b34c359d7171b30be5c25cee80fae08d947fc004d479693df00,PodSandboxId:464a0acb133488889f9601dcdece2117c4eb53e229a62c35b942da265898373e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714534801919027276,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-277128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6dee1fba7311ab90adf2d7b6467002b,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7e88cfe1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d109948ffbbddd2e38484d83d2260690afce3c51e16abff0fe8713e844bfdcb3,PodSandboxId:f6338b841057be5ce903e4539b40e972adb0d1a022af422482cc77db570d5486,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714534801887944145,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-277128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68d692155d566ac180b3b7676623c918,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 5b59e402,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5c84e1d0-8bf0-4d13-87f1-be8564b2b331 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f9a8d2f0f9453       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Running             storage-provisioner       2                   547fe01dd3103       storage-provisioner
	e3c74de489af3       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   1                   aa9d355e603c7       coredns-7db6d8ff4d-sjplt
	eaaa2457f3825       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   b316d2fa718c5       busybox
	94afdb03c3822       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      13 minutes ago      Running             kube-proxy                1                   19d7e38955886       kube-proxy-phx7x
	aaae36261c5ce       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   547fe01dd3103       storage-provisioner
	1813f35574f4f       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      13 minutes ago      Running             kube-scheduler            1                   7212f5087ba09       kube-scheduler-embed-certs-277128
	7e7158f7ff392       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      13 minutes ago      Running             kube-controller-manager   1                   cd226d9eea963       kube-controller-manager-embed-certs-277128
	a96815c49ac45       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      13 minutes ago      Running             kube-apiserver            1                   464a0acb13348       kube-apiserver-embed-certs-277128
	d109948ffbbdd       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      13 minutes ago      Running             etcd                      1                   f6338b841057b       etcd-embed-certs-277128
	
	
	==> coredns [e3c74de489af3d78a4b0cf620ed16e1fba7c9ce1c7021a15c5b4bd241b7a934a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:60045 - 41276 "HINFO IN 8860685169335691977.5956143156893298464. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015607388s
	
	
	==> describe nodes <==
	Name:               embed-certs-277128
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-277128
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e
	                    minikube.k8s.io/name=embed-certs-277128
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_01T03_31_53_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 May 2024 03:31:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-277128
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 May 2024 03:53:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 May 2024 03:50:49 +0000   Wed, 01 May 2024 03:31:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 May 2024 03:50:49 +0000   Wed, 01 May 2024 03:31:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 May 2024 03:50:49 +0000   Wed, 01 May 2024 03:31:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 May 2024 03:50:49 +0000   Wed, 01 May 2024 03:40:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.218
	  Hostname:    embed-certs-277128
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ad39f4832e9c4708b4e1c4cd2dd491e3
	  System UUID:                ad39f483-2e9c-4708-b4e1-c4cd2dd491e3
	  Boot ID:                    84ceacf6-d21b-4d8e-bbd3-e4c7ef6a03f1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 coredns-7db6d8ff4d-sjplt                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-embed-certs-277128                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kube-apiserver-embed-certs-277128             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-embed-certs-277128    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-phx7x                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-embed-certs-277128             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-569cc877fc-p8j59               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         21m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     21m                kubelet          Node embed-certs-277128 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  21m                kubelet          Node embed-certs-277128 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m                kubelet          Node embed-certs-277128 status is now: NodeHasNoDiskPressure
	  Normal  NodeReady                21m                kubelet          Node embed-certs-277128 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           21m                node-controller  Node embed-certs-277128 event: Registered Node embed-certs-277128 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node embed-certs-277128 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node embed-certs-277128 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node embed-certs-277128 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node embed-certs-277128 event: Registered Node embed-certs-277128 in Controller
	
	
	==> dmesg <==
	[May 1 03:39] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052425] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.044249] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.622609] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.580831] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.514186] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.078345] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.056973] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.072552] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +0.211948] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +0.139896] systemd-fstab-generator[679]: Ignoring "noauto" option for root device
	[  +0.358137] systemd-fstab-generator[709]: Ignoring "noauto" option for root device
	[  +4.954162] systemd-fstab-generator[806]: Ignoring "noauto" option for root device
	[  +0.059456] kauditd_printk_skb: 130 callbacks suppressed
	[May 1 03:40] systemd-fstab-generator[930]: Ignoring "noauto" option for root device
	[  +4.594794] kauditd_printk_skb: 97 callbacks suppressed
	[  +2.497015] systemd-fstab-generator[1546]: Ignoring "noauto" option for root device
	[  +5.102827] kauditd_printk_skb: 62 callbacks suppressed
	[  +5.083710] kauditd_printk_skb: 26 callbacks suppressed
	[ +18.323503] kauditd_printk_skb: 15 callbacks suppressed
	
	
	==> etcd [d109948ffbbddd2e38484d83d2260690afce3c51e16abff0fe8713e844bfdcb3] <==
	{"level":"info","ts":"2024-05-01T03:40:03.535657Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d4bfeef2bb38c2b5 elected leader d4bfeef2bb38c2b5 at term 3"}
	{"level":"info","ts":"2024-05-01T03:40:03.540481Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"d4bfeef2bb38c2b5","local-member-attributes":"{Name:embed-certs-277128 ClientURLs:[https://192.168.50.218:2379]}","request-path":"/0/members/d4bfeef2bb38c2b5/attributes","cluster-id":"db562ccfd877cf13","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-01T03:40:03.540509Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-01T03:40:03.540528Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-01T03:40:03.541059Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-01T03:40:03.541226Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-01T03:40:03.543593Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.218:2379"}
	{"level":"info","ts":"2024-05-01T03:40:03.543593Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-05-01T03:40:22.08801Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"349.079844ms","expected-duration":"100ms","prefix":"","request":"header:<ID:14030277660075336088 > lease_revoke:<id:42b58f323ce3a071>","response":"size:29"}
	{"level":"info","ts":"2024-05-01T03:40:22.0883Z","caller":"traceutil/trace.go:171","msg":"trace[1584344752] linearizableReadLoop","detail":"{readStateIndex:594; appliedIndex:593; }","duration":"354.900577ms","start":"2024-05-01T03:40:21.733358Z","end":"2024-05-01T03:40:22.088259Z","steps":["trace[1584344752] 'read index received'  (duration: 5.278129ms)","trace[1584344752] 'applied index is now lower than readState.Index'  (duration: 349.621321ms)"],"step_count":2}
	{"level":"warn","ts":"2024-05-01T03:40:22.088482Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"355.084306ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-7db6d8ff4d-sjplt\" ","response":"range_response_count:1 size:4822"}
	{"level":"info","ts":"2024-05-01T03:40:22.088506Z","caller":"traceutil/trace.go:171","msg":"trace[390937185] range","detail":"{range_begin:/registry/pods/kube-system/coredns-7db6d8ff4d-sjplt; range_end:; response_count:1; response_revision:553; }","duration":"355.161958ms","start":"2024-05-01T03:40:21.733334Z","end":"2024-05-01T03:40:22.088496Z","steps":["trace[390937185] 'agreement among raft nodes before linearized reading'  (duration: 355.007274ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-01T03:40:22.088542Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-01T03:40:21.733316Z","time spent":"355.214129ms","remote":"127.0.0.1:58990","response type":"/etcdserverpb.KV/Range","request count":0,"request size":53,"response count":1,"response size":4846,"request content":"key:\"/registry/pods/kube-system/coredns-7db6d8ff4d-sjplt\" "}
	{"level":"warn","ts":"2024-05-01T03:40:41.815299Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"132.795428ms","expected-duration":"100ms","prefix":"","request":"header:<ID:14030277660075336256 > lease_revoke:<id:42b58f323ce3a1cd>","response":"size:29"}
	{"level":"info","ts":"2024-05-01T03:40:41.815398Z","caller":"traceutil/trace.go:171","msg":"trace[788791491] linearizableReadLoop","detail":"{readStateIndex:625; appliedIndex:624; }","duration":"176.609199ms","start":"2024-05-01T03:40:41.638777Z","end":"2024-05-01T03:40:41.815386Z","steps":["trace[788791491] 'read index received'  (duration: 43.673822ms)","trace[788791491] 'applied index is now lower than readState.Index'  (duration: 132.934532ms)"],"step_count":2}
	{"level":"warn","ts":"2024-05-01T03:40:41.815521Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"176.729563ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-569cc877fc-p8j59\" ","response":"range_response_count:1 size:4239"}
	{"level":"info","ts":"2024-05-01T03:40:41.815549Z","caller":"traceutil/trace.go:171","msg":"trace[1709848795] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-569cc877fc-p8j59; range_end:; response_count:1; response_revision:580; }","duration":"176.783608ms","start":"2024-05-01T03:40:41.638753Z","end":"2024-05-01T03:40:41.815537Z","steps":["trace[1709848795] 'agreement among raft nodes before linearized reading'  (duration: 176.663763ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-01T03:41:09.494558Z","caller":"traceutil/trace.go:171","msg":"trace[799975887] transaction","detail":"{read_only:false; response_revision:606; number_of_response:1; }","duration":"222.980256ms","start":"2024-05-01T03:41:09.271563Z","end":"2024-05-01T03:41:09.494544Z","steps":["trace[799975887] 'process raft request'  (duration: 221.005416ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-01T03:41:09.497836Z","caller":"traceutil/trace.go:171","msg":"trace[1236022152] linearizableReadLoop","detail":"{readStateIndex:658; appliedIndex:656; }","duration":"113.979463ms","start":"2024-05-01T03:41:09.38384Z","end":"2024-05-01T03:41:09.49782Z","steps":["trace[1236022152] 'read index received'  (duration: 108.804286ms)","trace[1236022152] 'applied index is now lower than readState.Index'  (duration: 5.174581ms)"],"step_count":2}
	{"level":"info","ts":"2024-05-01T03:41:09.499999Z","caller":"traceutil/trace.go:171","msg":"trace[460494178] transaction","detail":"{read_only:false; response_revision:607; number_of_response:1; }","duration":"146.621872ms","start":"2024-05-01T03:41:09.35335Z","end":"2024-05-01T03:41:09.499972Z","steps":["trace[460494178] 'process raft request'  (duration: 144.372229ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-01T03:41:09.501508Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"117.673586ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/flowschemas/\" range_end:\"/registry/flowschemas0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-05-01T03:41:09.501654Z","caller":"traceutil/trace.go:171","msg":"trace[1194557245] range","detail":"{range_begin:/registry/flowschemas/; range_end:/registry/flowschemas0; response_count:0; response_revision:607; }","duration":"117.845466ms","start":"2024-05-01T03:41:09.383784Z","end":"2024-05-01T03:41:09.50163Z","steps":["trace[1194557245] 'agreement among raft nodes before linearized reading'  (duration: 114.132729ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-01T03:50:03.582928Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":806}
	{"level":"info","ts":"2024-05-01T03:50:03.592675Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":806,"took":"9.375644ms","hash":416217357,"current-db-size-bytes":2666496,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":2666496,"current-db-size-in-use":"2.7 MB"}
	{"level":"info","ts":"2024-05-01T03:50:03.592746Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":416217357,"revision":806,"compact-revision":-1}
	
	
	==> kernel <==
	 03:53:38 up 14 min,  0 users,  load average: 0.14, 0.12, 0.09
	Linux embed-certs-277128 5.10.207 #1 SMP Tue Apr 30 22:38:43 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [a96815c49ac45b34c359d7171b30be5c25cee80fae08d947fc004d479693df00] <==
	I0501 03:48:05.909843       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0501 03:50:04.911559       1 handler_proxy.go:93] no RequestInfo found in the context
	E0501 03:50:04.911680       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0501 03:50:05.912537       1 handler_proxy.go:93] no RequestInfo found in the context
	E0501 03:50:05.912653       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0501 03:50:05.912681       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0501 03:50:05.912537       1 handler_proxy.go:93] no RequestInfo found in the context
	E0501 03:50:05.912775       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0501 03:50:05.913739       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0501 03:51:05.913022       1 handler_proxy.go:93] no RequestInfo found in the context
	E0501 03:51:05.913235       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0501 03:51:05.913252       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0501 03:51:05.914395       1 handler_proxy.go:93] no RequestInfo found in the context
	E0501 03:51:05.914463       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0501 03:51:05.914475       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0501 03:53:05.913447       1 handler_proxy.go:93] no RequestInfo found in the context
	E0501 03:53:05.913729       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0501 03:53:05.913764       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0501 03:53:05.915690       1 handler_proxy.go:93] no RequestInfo found in the context
	E0501 03:53:05.915781       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0501 03:53:05.915792       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [7e7158f7ff3922e97ba5ee97108acc1d0ba0350dff2f9aa56c2f71519108791c] <==
	I0501 03:47:48.975337       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0501 03:48:18.434943       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0501 03:48:18.984489       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0501 03:48:48.440453       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0501 03:48:48.994538       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0501 03:49:18.445868       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0501 03:49:19.002637       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0501 03:49:48.451001       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0501 03:49:49.010661       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0501 03:50:18.458266       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0501 03:50:19.024626       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0501 03:50:48.462968       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0501 03:50:49.033714       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0501 03:51:14.275691       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="287.668µs"
	E0501 03:51:18.469057       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0501 03:51:19.042891       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0501 03:51:27.273040       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="105.502µs"
	E0501 03:51:48.474186       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0501 03:51:49.052543       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0501 03:52:18.482479       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0501 03:52:19.062967       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0501 03:52:48.487761       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0501 03:52:49.072414       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0501 03:53:18.494945       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0501 03:53:19.081331       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [94afdb03c382269209154b2e73edbf200662e89960c248ba775a0ef2b8fbe6b1] <==
	I0501 03:40:05.952100       1 server_linux.go:69] "Using iptables proxy"
	I0501 03:40:05.974921       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.218"]
	I0501 03:40:06.079352       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0501 03:40:06.080620       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0501 03:40:06.080724       1 server_linux.go:165] "Using iptables Proxier"
	I0501 03:40:06.093010       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0501 03:40:06.093276       1 server.go:872] "Version info" version="v1.30.0"
	I0501 03:40:06.093854       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 03:40:06.095284       1 config.go:192] "Starting service config controller"
	I0501 03:40:06.095346       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0501 03:40:06.095387       1 config.go:101] "Starting endpoint slice config controller"
	I0501 03:40:06.095404       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0501 03:40:06.098349       1 config.go:319] "Starting node config controller"
	I0501 03:40:06.098431       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0501 03:40:06.196450       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0501 03:40:06.196500       1 shared_informer.go:320] Caches are synced for service config
	I0501 03:40:06.198572       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [1813f35574f4f0398e7d482fe77c5d7f8490726666a277035bda0a5cff80af6e] <==
	I0501 03:40:03.016766       1 serving.go:380] Generated self-signed cert in-memory
	W0501 03:40:04.842816       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0501 03:40:04.842942       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0501 03:40:04.843080       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0501 03:40:04.843114       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0501 03:40:04.907947       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0501 03:40:04.908035       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 03:40:04.909802       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0501 03:40:04.909874       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0501 03:40:04.909992       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0501 03:40:04.910064       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0501 03:40:05.011267       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 01 03:51:01 embed-certs-277128 kubelet[937]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 01 03:51:01 embed-certs-277128 kubelet[937]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 01 03:51:01 embed-certs-277128 kubelet[937]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 01 03:51:01 embed-certs-277128 kubelet[937]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 01 03:51:14 embed-certs-277128 kubelet[937]: E0501 03:51:14.257093     937 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-p8j59" podUID="f8ad6c24-dd5d-4515-9052-c9aca7412b55"
	May 01 03:51:27 embed-certs-277128 kubelet[937]: E0501 03:51:27.256976     937 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-p8j59" podUID="f8ad6c24-dd5d-4515-9052-c9aca7412b55"
	May 01 03:51:42 embed-certs-277128 kubelet[937]: E0501 03:51:42.257842     937 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-p8j59" podUID="f8ad6c24-dd5d-4515-9052-c9aca7412b55"
	May 01 03:51:57 embed-certs-277128 kubelet[937]: E0501 03:51:57.257293     937 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-p8j59" podUID="f8ad6c24-dd5d-4515-9052-c9aca7412b55"
	May 01 03:52:01 embed-certs-277128 kubelet[937]: E0501 03:52:01.308801     937 iptables.go:577] "Could not set up iptables canary" err=<
	May 01 03:52:01 embed-certs-277128 kubelet[937]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 01 03:52:01 embed-certs-277128 kubelet[937]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 01 03:52:01 embed-certs-277128 kubelet[937]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 01 03:52:01 embed-certs-277128 kubelet[937]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 01 03:52:09 embed-certs-277128 kubelet[937]: E0501 03:52:09.259759     937 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-p8j59" podUID="f8ad6c24-dd5d-4515-9052-c9aca7412b55"
	May 01 03:52:22 embed-certs-277128 kubelet[937]: E0501 03:52:22.258238     937 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-p8j59" podUID="f8ad6c24-dd5d-4515-9052-c9aca7412b55"
	May 01 03:52:35 embed-certs-277128 kubelet[937]: E0501 03:52:35.258858     937 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-p8j59" podUID="f8ad6c24-dd5d-4515-9052-c9aca7412b55"
	May 01 03:52:50 embed-certs-277128 kubelet[937]: E0501 03:52:50.257523     937 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-p8j59" podUID="f8ad6c24-dd5d-4515-9052-c9aca7412b55"
	May 01 03:53:01 embed-certs-277128 kubelet[937]: E0501 03:53:01.308650     937 iptables.go:577] "Could not set up iptables canary" err=<
	May 01 03:53:01 embed-certs-277128 kubelet[937]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 01 03:53:01 embed-certs-277128 kubelet[937]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 01 03:53:01 embed-certs-277128 kubelet[937]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 01 03:53:01 embed-certs-277128 kubelet[937]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 01 03:53:02 embed-certs-277128 kubelet[937]: E0501 03:53:02.257742     937 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-p8j59" podUID="f8ad6c24-dd5d-4515-9052-c9aca7412b55"
	May 01 03:53:16 embed-certs-277128 kubelet[937]: E0501 03:53:16.258531     937 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-p8j59" podUID="f8ad6c24-dd5d-4515-9052-c9aca7412b55"
	May 01 03:53:27 embed-certs-277128 kubelet[937]: E0501 03:53:27.259909     937 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-p8j59" podUID="f8ad6c24-dd5d-4515-9052-c9aca7412b55"
	
	
	==> storage-provisioner [aaae36261c5ce1accf3bfb5993be5a256a037a640d5f6f30de8384900dda06b2] <==
	I0501 03:40:05.904319       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0501 03:40:35.908215       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [f9a8d2f0f9453969b410fb7f880f6cf4c7e6e2f3c17a687583906efe4181dd17] <==
	I0501 03:40:36.685379       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0501 03:40:36.700336       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0501 03:40:36.700528       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0501 03:40:54.110988       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0501 03:40:54.111690       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9e13eae9-b179-487f-bd34-653ce075558a", APIVersion:"v1", ResourceVersion:"587", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-277128_2eb83f7d-a184-4cb0-9be5-8cfdad84d7a9 became leader
	I0501 03:40:54.111951       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-277128_2eb83f7d-a184-4cb0-9be5-8cfdad84d7a9!
	I0501 03:40:54.213181       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-277128_2eb83f7d-a184-4cb0-9be5-8cfdad84d7a9!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-277128 -n embed-certs-277128
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-277128 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-p8j59
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-277128 describe pod metrics-server-569cc877fc-p8j59
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-277128 describe pod metrics-server-569cc877fc-p8j59: exit status 1 (65.147198ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-p8j59" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-277128 describe pod metrics-server-569cc877fc-p8j59: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.40s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.41s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0501 03:46:24.420065   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-715118 -n default-k8s-diff-port-715118
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-05-01 03:54:38.874833825 +0000 UTC m=+6453.980750124
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-715118 -n default-k8s-diff-port-715118
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-715118 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-715118 logs -n 25: (2.214954667s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p cert-options-582976                                 | cert-options-582976          | jenkins | v1.33.0 | 01 May 24 03:30 UTC | 01 May 24 03:30 UTC |
	| delete  | -p pause-542495                                        | pause-542495                 | jenkins | v1.33.0 | 01 May 24 03:30 UTC | 01 May 24 03:30 UTC |
	| start   | -p no-preload-892672                                   | no-preload-892672            | jenkins | v1.33.0 | 01 May 24 03:30 UTC | 01 May 24 03:32 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-277128                                  | embed-certs-277128           | jenkins | v1.33.0 | 01 May 24 03:30 UTC | 01 May 24 03:32 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-046243                           | kubernetes-upgrade-046243    | jenkins | v1.33.0 | 01 May 24 03:30 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-046243                           | kubernetes-upgrade-046243    | jenkins | v1.33.0 | 01 May 24 03:30 UTC | 01 May 24 03:32 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-046243                           | kubernetes-upgrade-046243    | jenkins | v1.33.0 | 01 May 24 03:32 UTC | 01 May 24 03:32 UTC |
	| delete  | -p                                                     | disable-driver-mounts-483221 | jenkins | v1.33.0 | 01 May 24 03:32 UTC | 01 May 24 03:32 UTC |
	|         | disable-driver-mounts-483221                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-715118 | jenkins | v1.33.0 | 01 May 24 03:32 UTC | 01 May 24 03:33 UTC |
	|         | default-k8s-diff-port-715118                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-892672             | no-preload-892672            | jenkins | v1.33.0 | 01 May 24 03:32 UTC | 01 May 24 03:32 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-892672                                   | no-preload-892672            | jenkins | v1.33.0 | 01 May 24 03:32 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-277128            | embed-certs-277128           | jenkins | v1.33.0 | 01 May 24 03:32 UTC | 01 May 24 03:32 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-277128                                  | embed-certs-277128           | jenkins | v1.33.0 | 01 May 24 03:32 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-715118  | default-k8s-diff-port-715118 | jenkins | v1.33.0 | 01 May 24 03:33 UTC | 01 May 24 03:33 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-715118 | jenkins | v1.33.0 | 01 May 24 03:33 UTC |                     |
	|         | default-k8s-diff-port-715118                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-892672                  | no-preload-892672            | jenkins | v1.33.0 | 01 May 24 03:34 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-277128                 | embed-certs-277128           | jenkins | v1.33.0 | 01 May 24 03:34 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-892672                                   | no-preload-892672            | jenkins | v1.33.0 | 01 May 24 03:34 UTC | 01 May 24 03:46 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-503971        | old-k8s-version-503971       | jenkins | v1.33.0 | 01 May 24 03:34 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-277128                                  | embed-certs-277128           | jenkins | v1.33.0 | 01 May 24 03:34 UTC | 01 May 24 03:44 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-715118       | default-k8s-diff-port-715118 | jenkins | v1.33.0 | 01 May 24 03:35 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-715118 | jenkins | v1.33.0 | 01 May 24 03:35 UTC | 01 May 24 03:45 UTC |
	|         | default-k8s-diff-port-715118                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-503971                              | old-k8s-version-503971       | jenkins | v1.33.0 | 01 May 24 03:36 UTC | 01 May 24 03:36 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-503971             | old-k8s-version-503971       | jenkins | v1.33.0 | 01 May 24 03:36 UTC | 01 May 24 03:36 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-503971                              | old-k8s-version-503971       | jenkins | v1.33.0 | 01 May 24 03:36 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/01 03:36:41
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0501 03:36:41.470152   69580 out.go:291] Setting OutFile to fd 1 ...
	I0501 03:36:41.470256   69580 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 03:36:41.470264   69580 out.go:304] Setting ErrFile to fd 2...
	I0501 03:36:41.470268   69580 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 03:36:41.470484   69580 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18779-13391/.minikube/bin
	I0501 03:36:41.470989   69580 out.go:298] Setting JSON to false
	I0501 03:36:41.471856   69580 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8345,"bootTime":1714526257,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0501 03:36:41.471911   69580 start.go:139] virtualization: kvm guest
	I0501 03:36:41.473901   69580 out.go:177] * [old-k8s-version-503971] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0501 03:36:41.474994   69580 out.go:177]   - MINIKUBE_LOCATION=18779
	I0501 03:36:41.475003   69580 notify.go:220] Checking for updates...
	I0501 03:36:41.477150   69580 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0501 03:36:41.478394   69580 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18779-13391/kubeconfig
	I0501 03:36:41.479462   69580 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18779-13391/.minikube
	I0501 03:36:41.480507   69580 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0501 03:36:41.481543   69580 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0501 03:36:41.482907   69580 config.go:182] Loaded profile config "old-k8s-version-503971": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0501 03:36:41.483279   69580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:36:41.483311   69580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:36:41.497758   69580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40255
	I0501 03:36:41.498090   69580 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:36:41.498591   69580 main.go:141] libmachine: Using API Version  1
	I0501 03:36:41.498616   69580 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:36:41.498891   69580 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:36:41.499055   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .DriverName
	I0501 03:36:41.500675   69580 out.go:177] * Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	I0501 03:36:41.501716   69580 driver.go:392] Setting default libvirt URI to qemu:///system
	I0501 03:36:41.501974   69580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:36:41.502024   69580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:36:41.515991   69580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40591
	I0501 03:36:41.516392   69580 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:36:41.516826   69580 main.go:141] libmachine: Using API Version  1
	I0501 03:36:41.516846   69580 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:36:41.517120   69580 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:36:41.517281   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .DriverName
	I0501 03:36:41.551130   69580 out.go:177] * Using the kvm2 driver based on existing profile
	I0501 03:36:41.552244   69580 start.go:297] selected driver: kvm2
	I0501 03:36:41.552253   69580 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-503971 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-503971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.104 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 03:36:41.552369   69580 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0501 03:36:41.553004   69580 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0501 03:36:41.553071   69580 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18779-13391/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0501 03:36:41.567362   69580 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0501 03:36:41.567736   69580 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0501 03:36:41.567815   69580 cni.go:84] Creating CNI manager for ""
	I0501 03:36:41.567832   69580 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0501 03:36:41.567881   69580 start.go:340] cluster config:
	{Name:old-k8s-version-503971 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-503971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.104 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 03:36:41.568012   69580 iso.go:125] acquiring lock: {Name:mkcd2496aadb29931e179193d707c731bfda98b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0501 03:36:41.569791   69580 out.go:177] * Starting "old-k8s-version-503971" primary control-plane node in "old-k8s-version-503971" cluster
	I0501 03:36:38.886755   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:36:41.571352   69580 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0501 03:36:41.571389   69580 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0501 03:36:41.571408   69580 cache.go:56] Caching tarball of preloaded images
	I0501 03:36:41.571478   69580 preload.go:173] Found /home/jenkins/minikube-integration/18779-13391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0501 03:36:41.571490   69580 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0501 03:36:41.571588   69580 profile.go:143] Saving config to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/config.json ...
	I0501 03:36:41.571775   69580 start.go:360] acquireMachinesLock for old-k8s-version-503971: {Name:mkc4548e5afe1d4b0a833f6c522103562b0cefff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0501 03:36:44.966689   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:36:48.038769   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:36:54.118675   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:36:57.190716   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:37:03.270653   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:37:06.342693   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:37:12.422726   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:37:15.494702   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:37:21.574646   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:37:24.646711   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:37:30.726724   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:37:33.798628   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:37:39.878657   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:37:42.950647   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:37:49.030699   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:37:52.102665   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:37:58.182647   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:38:01.254620   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:38:07.334707   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:38:10.406670   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:38:16.486684   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:38:19.558714   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:38:25.638642   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:38:28.710687   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:38:34.790659   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:38:37.862651   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:38:43.942639   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:38:47.014729   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:38:53.094674   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:38:56.166684   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:39:02.246662   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:39:05.318633   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:39:11.398705   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:39:14.470640   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:39:20.550642   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:39:23.622701   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:39:32.707273   68864 start.go:364] duration metric: took 4m38.787656406s to acquireMachinesLock for "embed-certs-277128"
	I0501 03:39:32.707327   68864 start.go:96] Skipping create...Using existing machine configuration
	I0501 03:39:32.707336   68864 fix.go:54] fixHost starting: 
	I0501 03:39:32.707655   68864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:39:32.707697   68864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:39:32.722689   68864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35015
	I0501 03:39:32.723061   68864 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:39:32.723536   68864 main.go:141] libmachine: Using API Version  1
	I0501 03:39:32.723557   68864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:39:32.723848   68864 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:39:32.724041   68864 main.go:141] libmachine: (embed-certs-277128) Calling .DriverName
	I0501 03:39:32.724164   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetState
	I0501 03:39:32.725542   68864 fix.go:112] recreateIfNeeded on embed-certs-277128: state=Stopped err=<nil>
	I0501 03:39:32.725569   68864 main.go:141] libmachine: (embed-certs-277128) Calling .DriverName
	W0501 03:39:32.725714   68864 fix.go:138] unexpected machine state, will restart: <nil>
	I0501 03:39:32.727403   68864 out.go:177] * Restarting existing kvm2 VM for "embed-certs-277128" ...
	I0501 03:39:29.702654   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:39:32.704906   68640 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0501 03:39:32.704940   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetMachineName
	I0501 03:39:32.705254   68640 buildroot.go:166] provisioning hostname "no-preload-892672"
	I0501 03:39:32.705278   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetMachineName
	I0501 03:39:32.705499   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:39:32.707128   68640 machine.go:97] duration metric: took 4m44.649178925s to provisionDockerMachine
	I0501 03:39:32.707171   68640 fix.go:56] duration metric: took 4m44.67002247s for fixHost
	I0501 03:39:32.707178   68640 start.go:83] releasing machines lock for "no-preload-892672", held for 4m44.670048235s
	W0501 03:39:32.707201   68640 start.go:713] error starting host: provision: host is not running
	W0501 03:39:32.707293   68640 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0501 03:39:32.707305   68640 start.go:728] Will try again in 5 seconds ...
	I0501 03:39:32.728616   68864 main.go:141] libmachine: (embed-certs-277128) Calling .Start
	I0501 03:39:32.728768   68864 main.go:141] libmachine: (embed-certs-277128) Ensuring networks are active...
	I0501 03:39:32.729434   68864 main.go:141] libmachine: (embed-certs-277128) Ensuring network default is active
	I0501 03:39:32.729789   68864 main.go:141] libmachine: (embed-certs-277128) Ensuring network mk-embed-certs-277128 is active
	I0501 03:39:32.730218   68864 main.go:141] libmachine: (embed-certs-277128) Getting domain xml...
	I0501 03:39:32.730972   68864 main.go:141] libmachine: (embed-certs-277128) Creating domain...
	I0501 03:39:37.711605   68640 start.go:360] acquireMachinesLock for no-preload-892672: {Name:mkc4548e5afe1d4b0a833f6c522103562b0cefff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0501 03:39:33.914124   68864 main.go:141] libmachine: (embed-certs-277128) Waiting to get IP...
	I0501 03:39:33.915022   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:33.915411   68864 main.go:141] libmachine: (embed-certs-277128) DBG | unable to find current IP address of domain embed-certs-277128 in network mk-embed-certs-277128
	I0501 03:39:33.915473   68864 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:39:33.915391   70171 retry.go:31] will retry after 278.418743ms: waiting for machine to come up
	I0501 03:39:34.195933   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:34.196470   68864 main.go:141] libmachine: (embed-certs-277128) DBG | unable to find current IP address of domain embed-certs-277128 in network mk-embed-certs-277128
	I0501 03:39:34.196497   68864 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:39:34.196417   70171 retry.go:31] will retry after 375.593174ms: waiting for machine to come up
	I0501 03:39:34.574178   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:34.574666   68864 main.go:141] libmachine: (embed-certs-277128) DBG | unable to find current IP address of domain embed-certs-277128 in network mk-embed-certs-277128
	I0501 03:39:34.574689   68864 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:39:34.574617   70171 retry.go:31] will retry after 377.853045ms: waiting for machine to come up
	I0501 03:39:34.954022   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:34.954436   68864 main.go:141] libmachine: (embed-certs-277128) DBG | unable to find current IP address of domain embed-certs-277128 in network mk-embed-certs-277128
	I0501 03:39:34.954465   68864 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:39:34.954375   70171 retry.go:31] will retry after 374.024178ms: waiting for machine to come up
	I0501 03:39:35.330087   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:35.330514   68864 main.go:141] libmachine: (embed-certs-277128) DBG | unable to find current IP address of domain embed-certs-277128 in network mk-embed-certs-277128
	I0501 03:39:35.330545   68864 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:39:35.330478   70171 retry.go:31] will retry after 488.296666ms: waiting for machine to come up
	I0501 03:39:35.820177   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:35.820664   68864 main.go:141] libmachine: (embed-certs-277128) DBG | unable to find current IP address of domain embed-certs-277128 in network mk-embed-certs-277128
	I0501 03:39:35.820692   68864 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:39:35.820629   70171 retry.go:31] will retry after 665.825717ms: waiting for machine to come up
	I0501 03:39:36.488492   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:36.488910   68864 main.go:141] libmachine: (embed-certs-277128) DBG | unable to find current IP address of domain embed-certs-277128 in network mk-embed-certs-277128
	I0501 03:39:36.488941   68864 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:39:36.488860   70171 retry.go:31] will retry after 1.04269192s: waiting for machine to come up
	I0501 03:39:37.532622   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:37.533006   68864 main.go:141] libmachine: (embed-certs-277128) DBG | unable to find current IP address of domain embed-certs-277128 in network mk-embed-certs-277128
	I0501 03:39:37.533032   68864 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:39:37.532968   70171 retry.go:31] will retry after 1.348239565s: waiting for machine to come up
	I0501 03:39:38.882895   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:38.883364   68864 main.go:141] libmachine: (embed-certs-277128) DBG | unable to find current IP address of domain embed-certs-277128 in network mk-embed-certs-277128
	I0501 03:39:38.883396   68864 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:39:38.883301   70171 retry.go:31] will retry after 1.718495999s: waiting for machine to come up
	I0501 03:39:40.604329   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:40.604760   68864 main.go:141] libmachine: (embed-certs-277128) DBG | unable to find current IP address of domain embed-certs-277128 in network mk-embed-certs-277128
	I0501 03:39:40.604791   68864 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:39:40.604703   70171 retry.go:31] will retry after 2.237478005s: waiting for machine to come up
	I0501 03:39:42.843398   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:42.843920   68864 main.go:141] libmachine: (embed-certs-277128) DBG | unable to find current IP address of domain embed-certs-277128 in network mk-embed-certs-277128
	I0501 03:39:42.843949   68864 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:39:42.843869   70171 retry.go:31] will retry after 2.618059388s: waiting for machine to come up
	I0501 03:39:45.465576   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:45.465968   68864 main.go:141] libmachine: (embed-certs-277128) DBG | unable to find current IP address of domain embed-certs-277128 in network mk-embed-certs-277128
	I0501 03:39:45.465992   68864 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:39:45.465928   70171 retry.go:31] will retry after 2.895120972s: waiting for machine to come up
	I0501 03:39:48.362239   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:48.362651   68864 main.go:141] libmachine: (embed-certs-277128) DBG | unable to find current IP address of domain embed-certs-277128 in network mk-embed-certs-277128
	I0501 03:39:48.362683   68864 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:39:48.362617   70171 retry.go:31] will retry after 2.857441112s: waiting for machine to come up
	I0501 03:39:52.791989   69237 start.go:364] duration metric: took 4m2.036138912s to acquireMachinesLock for "default-k8s-diff-port-715118"
	I0501 03:39:52.792059   69237 start.go:96] Skipping create...Using existing machine configuration
	I0501 03:39:52.792071   69237 fix.go:54] fixHost starting: 
	I0501 03:39:52.792454   69237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:39:52.792492   69237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:39:52.809707   69237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46419
	I0501 03:39:52.810075   69237 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:39:52.810544   69237 main.go:141] libmachine: Using API Version  1
	I0501 03:39:52.810564   69237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:39:52.810881   69237 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:39:52.811067   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .DriverName
	I0501 03:39:52.811217   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetState
	I0501 03:39:52.812787   69237 fix.go:112] recreateIfNeeded on default-k8s-diff-port-715118: state=Stopped err=<nil>
	I0501 03:39:52.812820   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .DriverName
	W0501 03:39:52.812969   69237 fix.go:138] unexpected machine state, will restart: <nil>
	I0501 03:39:52.815136   69237 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-715118" ...
	I0501 03:39:51.223450   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.223938   68864 main.go:141] libmachine: (embed-certs-277128) Found IP for machine: 192.168.50.218
	I0501 03:39:51.223965   68864 main.go:141] libmachine: (embed-certs-277128) Reserving static IP address...
	I0501 03:39:51.223982   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has current primary IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.224375   68864 main.go:141] libmachine: (embed-certs-277128) Reserved static IP address: 192.168.50.218
	I0501 03:39:51.224440   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "embed-certs-277128", mac: "52:54:00:96:11:7d", ip: "192.168.50.218"} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:51.224454   68864 main.go:141] libmachine: (embed-certs-277128) Waiting for SSH to be available...
	I0501 03:39:51.224491   68864 main.go:141] libmachine: (embed-certs-277128) DBG | skip adding static IP to network mk-embed-certs-277128 - found existing host DHCP lease matching {name: "embed-certs-277128", mac: "52:54:00:96:11:7d", ip: "192.168.50.218"}
	I0501 03:39:51.224507   68864 main.go:141] libmachine: (embed-certs-277128) DBG | Getting to WaitForSSH function...
	I0501 03:39:51.226437   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.226733   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:51.226764   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.226863   68864 main.go:141] libmachine: (embed-certs-277128) DBG | Using SSH client type: external
	I0501 03:39:51.226886   68864 main.go:141] libmachine: (embed-certs-277128) DBG | Using SSH private key: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/embed-certs-277128/id_rsa (-rw-------)
	I0501 03:39:51.226917   68864 main.go:141] libmachine: (embed-certs-277128) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.218 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18779-13391/.minikube/machines/embed-certs-277128/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0501 03:39:51.226930   68864 main.go:141] libmachine: (embed-certs-277128) DBG | About to run SSH command:
	I0501 03:39:51.226941   68864 main.go:141] libmachine: (embed-certs-277128) DBG | exit 0
	I0501 03:39:51.354225   68864 main.go:141] libmachine: (embed-certs-277128) DBG | SSH cmd err, output: <nil>: 
	I0501 03:39:51.354641   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetConfigRaw
	I0501 03:39:51.355337   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetIP
	I0501 03:39:51.357934   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.358265   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:51.358302   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.358584   68864 profile.go:143] Saving config to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/embed-certs-277128/config.json ...
	I0501 03:39:51.358753   68864 machine.go:94] provisionDockerMachine start ...
	I0501 03:39:51.358771   68864 main.go:141] libmachine: (embed-certs-277128) Calling .DriverName
	I0501 03:39:51.358940   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHHostname
	I0501 03:39:51.361202   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.361564   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:51.361600   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.361711   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHPort
	I0501 03:39:51.361884   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:39:51.362054   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:39:51.362170   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHUsername
	I0501 03:39:51.362344   68864 main.go:141] libmachine: Using SSH client type: native
	I0501 03:39:51.362572   68864 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.218 22 <nil> <nil>}
	I0501 03:39:51.362586   68864 main.go:141] libmachine: About to run SSH command:
	hostname
	I0501 03:39:51.467448   68864 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0501 03:39:51.467480   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetMachineName
	I0501 03:39:51.467740   68864 buildroot.go:166] provisioning hostname "embed-certs-277128"
	I0501 03:39:51.467772   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetMachineName
	I0501 03:39:51.467953   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHHostname
	I0501 03:39:51.470653   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.471022   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:51.471044   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.471159   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHPort
	I0501 03:39:51.471341   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:39:51.471482   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:39:51.471590   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHUsername
	I0501 03:39:51.471729   68864 main.go:141] libmachine: Using SSH client type: native
	I0501 03:39:51.471913   68864 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.218 22 <nil> <nil>}
	I0501 03:39:51.471934   68864 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-277128 && echo "embed-certs-277128" | sudo tee /etc/hostname
	I0501 03:39:51.594372   68864 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-277128
	
	I0501 03:39:51.594422   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHHostname
	I0501 03:39:51.596978   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.597307   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:51.597334   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.597495   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHPort
	I0501 03:39:51.597710   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:39:51.597865   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:39:51.597971   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHUsername
	I0501 03:39:51.598097   68864 main.go:141] libmachine: Using SSH client type: native
	I0501 03:39:51.598250   68864 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.218 22 <nil> <nil>}
	I0501 03:39:51.598271   68864 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-277128' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-277128/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-277128' | sudo tee -a /etc/hosts; 
				fi
			fi
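The shell snippet above is what minikube runs over SSH to keep /etc/hosts consistent with the newly set hostname: rewrite an existing 127.0.1.1 entry if one is present, otherwise append one. A minimal Go sketch of templating that snippet from a hostname (buildHostsFixCommand is a hypothetical helper for illustration, not minikube's API):

    package main

    import "fmt"

    // buildHostsFixCommand (hypothetical helper) renders the /etc/hosts update
    // snippet for a given hostname: rewrite an existing 127.0.1.1 entry if one
    // exists, otherwise append a new one.
    func buildHostsFixCommand(hostname string) string {
        return fmt.Sprintf(`
        if ! grep -xq '.*\s%[1]s' /etc/hosts; then
            if grep -xq '127.0.1.1\s.*' /etc/hosts; then
                sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
            else
                echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
            fi
        fi`, hostname)
    }

    func main() {
        fmt.Println(buildHostsFixCommand("embed-certs-277128"))
    }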
	I0501 03:39:51.712791   68864 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0501 03:39:51.712825   68864 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18779-13391/.minikube CaCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18779-13391/.minikube}
	I0501 03:39:51.712850   68864 buildroot.go:174] setting up certificates
	I0501 03:39:51.712860   68864 provision.go:84] configureAuth start
	I0501 03:39:51.712869   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetMachineName
	I0501 03:39:51.713158   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetIP
	I0501 03:39:51.715577   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.715885   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:51.715918   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.716040   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHHostname
	I0501 03:39:51.718057   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.718342   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:51.718367   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.718550   68864 provision.go:143] copyHostCerts
	I0501 03:39:51.718612   68864 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem, removing ...
	I0501 03:39:51.718622   68864 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem
	I0501 03:39:51.718685   68864 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem (1078 bytes)
	I0501 03:39:51.718790   68864 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem, removing ...
	I0501 03:39:51.718798   68864 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem
	I0501 03:39:51.718823   68864 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem (1123 bytes)
	I0501 03:39:51.718881   68864 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem, removing ...
	I0501 03:39:51.718888   68864 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem
	I0501 03:39:51.718907   68864 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem (1675 bytes)
	I0501 03:39:51.718957   68864 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem org=jenkins.embed-certs-277128 san=[127.0.0.1 192.168.50.218 embed-certs-277128 localhost minikube]
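The server certificate generated here carries the SANs listed in the log (127.0.0.1, 192.168.50.218, embed-certs-277128, localhost, minikube) and is signed by the shared minikube CA. A rough, self-contained sketch of that kind of signing step with Go's crypto/x509 (illustrative only; in minikube the CA key pair would be loaded from ca.pem/ca-key.pem rather than generated, and the key sizes and validity used here are arbitrary):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // CA key pair (a stand-in; the real CA material comes from ca.pem / ca-key.pem).
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server certificate with the SANs listed in the log above.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-277128"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"embed-certs-277128", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.218")},
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }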
	I0501 03:39:52.100402   68864 provision.go:177] copyRemoteCerts
	I0501 03:39:52.100459   68864 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0501 03:39:52.100494   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHHostname
	I0501 03:39:52.103133   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:52.103363   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:52.103391   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:52.103522   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHPort
	I0501 03:39:52.103694   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:39:52.103790   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHUsername
	I0501 03:39:52.103874   68864 sshutil.go:53] new ssh client: &{IP:192.168.50.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/embed-certs-277128/id_rsa Username:docker}
	I0501 03:39:52.186017   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0501 03:39:52.211959   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0501 03:39:52.237362   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0501 03:39:52.264036   68864 provision.go:87] duration metric: took 551.163591ms to configureAuth
	I0501 03:39:52.264060   68864 buildroot.go:189] setting minikube options for container-runtime
	I0501 03:39:52.264220   68864 config.go:182] Loaded profile config "embed-certs-277128": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 03:39:52.264290   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHHostname
	I0501 03:39:52.266809   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:52.267117   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:52.267140   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:52.267336   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHPort
	I0501 03:39:52.267529   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:39:52.267713   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:39:52.267863   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHUsername
	I0501 03:39:52.268096   68864 main.go:141] libmachine: Using SSH client type: native
	I0501 03:39:52.268273   68864 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.218 22 <nil> <nil>}
	I0501 03:39:52.268290   68864 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0501 03:39:52.543539   68864 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0501 03:39:52.543569   68864 machine.go:97] duration metric: took 1.184800934s to provisionDockerMachine
	I0501 03:39:52.543585   68864 start.go:293] postStartSetup for "embed-certs-277128" (driver="kvm2")
	I0501 03:39:52.543600   68864 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0501 03:39:52.543621   68864 main.go:141] libmachine: (embed-certs-277128) Calling .DriverName
	I0501 03:39:52.543974   68864 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0501 03:39:52.544007   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHHostname
	I0501 03:39:52.546566   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:52.546918   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:52.546955   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:52.547108   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHPort
	I0501 03:39:52.547310   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:39:52.547480   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHUsername
	I0501 03:39:52.547622   68864 sshutil.go:53] new ssh client: &{IP:192.168.50.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/embed-certs-277128/id_rsa Username:docker}
	I0501 03:39:52.636313   68864 ssh_runner.go:195] Run: cat /etc/os-release
	I0501 03:39:52.641408   68864 info.go:137] Remote host: Buildroot 2023.02.9
	I0501 03:39:52.641435   68864 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13391/.minikube/addons for local assets ...
	I0501 03:39:52.641514   68864 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13391/.minikube/files for local assets ...
	I0501 03:39:52.641598   68864 filesync.go:149] local asset: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem -> 207242.pem in /etc/ssl/certs
	I0501 03:39:52.641708   68864 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0501 03:39:52.653421   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem --> /etc/ssl/certs/207242.pem (1708 bytes)
	I0501 03:39:52.681796   68864 start.go:296] duration metric: took 138.197388ms for postStartSetup
	I0501 03:39:52.681840   68864 fix.go:56] duration metric: took 19.974504059s for fixHost
	I0501 03:39:52.681866   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHHostname
	I0501 03:39:52.684189   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:52.684447   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:52.684475   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:52.684691   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHPort
	I0501 03:39:52.684901   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:39:52.685077   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:39:52.685226   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHUsername
	I0501 03:39:52.685393   68864 main.go:141] libmachine: Using SSH client type: native
	I0501 03:39:52.685556   68864 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.218 22 <nil> <nil>}
	I0501 03:39:52.685568   68864 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0501 03:39:52.791802   68864 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714534792.758254619
	
	I0501 03:39:52.791830   68864 fix.go:216] guest clock: 1714534792.758254619
	I0501 03:39:52.791841   68864 fix.go:229] Guest: 2024-05-01 03:39:52.758254619 +0000 UTC Remote: 2024-05-01 03:39:52.681844878 +0000 UTC m=+298.906990848 (delta=76.409741ms)
	I0501 03:39:52.791886   68864 fix.go:200] guest clock delta is within tolerance: 76.409741ms
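The fix-up above reads the guest clock over SSH (date +%s.%N), compares it with the host clock, and accepts the ~76ms difference because it is inside the drift tolerance. A small sketch of that comparison; the one-second tolerance used here is an assumption for illustration:

    package main

    import (
        "fmt"
        "time"
    )

    // withinTolerance reports whether the guest clock is close enough to the
    // host clock; the tolerance is an assumed value for illustration.
    func withinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        return delta, delta <= tolerance
    }

    func main() {
        guest := time.Unix(1714534792, 758254619) // guest value from the log: date +%s.%N
        host := time.Date(2024, 5, 1, 3, 39, 52, 681844878, time.UTC)
        delta, ok := withinTolerance(guest, host, time.Second)
        fmt.Printf("delta=%v within tolerance: %v\n", delta, ok)
    }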
	I0501 03:39:52.791892   68864 start.go:83] releasing machines lock for "embed-certs-277128", held for 20.08458366s
	I0501 03:39:52.791918   68864 main.go:141] libmachine: (embed-certs-277128) Calling .DriverName
	I0501 03:39:52.792188   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetIP
	I0501 03:39:52.794820   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:52.795217   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:52.795256   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:52.795427   68864 main.go:141] libmachine: (embed-certs-277128) Calling .DriverName
	I0501 03:39:52.795971   68864 main.go:141] libmachine: (embed-certs-277128) Calling .DriverName
	I0501 03:39:52.796142   68864 main.go:141] libmachine: (embed-certs-277128) Calling .DriverName
	I0501 03:39:52.796235   68864 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0501 03:39:52.796285   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHHostname
	I0501 03:39:52.796324   68864 ssh_runner.go:195] Run: cat /version.json
	I0501 03:39:52.796346   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHHostname
	I0501 03:39:52.799128   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:52.799153   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:52.799536   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:52.799570   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:52.799617   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:52.799647   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:52.799779   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHPort
	I0501 03:39:52.799878   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHPort
	I0501 03:39:52.799961   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:39:52.800048   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:39:52.800117   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHUsername
	I0501 03:39:52.800189   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHUsername
	I0501 03:39:52.800243   68864 sshutil.go:53] new ssh client: &{IP:192.168.50.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/embed-certs-277128/id_rsa Username:docker}
	I0501 03:39:52.800299   68864 sshutil.go:53] new ssh client: &{IP:192.168.50.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/embed-certs-277128/id_rsa Username:docker}
	I0501 03:39:52.901147   68864 ssh_runner.go:195] Run: systemctl --version
	I0501 03:39:52.908399   68864 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0501 03:39:53.065012   68864 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0501 03:39:53.073635   68864 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0501 03:39:53.073724   68864 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0501 03:39:53.096146   68864 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0501 03:39:53.096179   68864 start.go:494] detecting cgroup driver to use...
	I0501 03:39:53.096253   68864 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0501 03:39:53.118525   68864 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 03:39:53.136238   68864 docker.go:217] disabling cri-docker service (if available) ...
	I0501 03:39:53.136301   68864 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0501 03:39:53.152535   68864 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0501 03:39:53.171415   68864 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0501 03:39:53.297831   68864 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0501 03:39:53.479469   68864 docker.go:233] disabling docker service ...
	I0501 03:39:53.479552   68864 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0501 03:39:53.497271   68864 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0501 03:39:53.512645   68864 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0501 03:39:53.658448   68864 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0501 03:39:53.787528   68864 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0501 03:39:53.804078   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 03:39:53.836146   68864 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0501 03:39:53.836206   68864 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:39:53.853846   68864 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0501 03:39:53.853915   68864 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:39:53.866319   68864 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:39:53.878410   68864 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:39:53.890304   68864 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0501 03:39:53.903821   68864 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:39:53.916750   68864 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:39:53.938933   68864 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:39:53.952103   68864 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0501 03:39:53.964833   68864 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0501 03:39:53.964893   68864 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0501 03:39:53.983039   68864 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
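The failed sysctl probe above is tolerated: when /proc/sys/net/bridge/bridge-nf-call-iptables is absent, the br_netfilter module is loaded and IPv4 forwarding is enabled before the runtime restart. A hedged sketch of that probe-then-fallback sequence using os/exec (command strings taken from the log, error handling simplified):

    package main

    import (
        "log"
        "os/exec"
    )

    func run(name string, args ...string) error {
        out, err := exec.Command(name, args...).CombinedOutput()
        if err != nil {
            log.Printf("%s %v failed: %v\n%s", name, args, err, out)
        }
        return err
    }

    func main() {
        // Probe the netfilter sysctl first; if it is missing, load br_netfilter.
        if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
            _ = run("sudo", "modprobe", "br_netfilter")
        }
        // Enable IPv4 forwarding either way.
        _ = run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward")
    }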
	I0501 03:39:53.995830   68864 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:39:54.156748   68864 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0501 03:39:54.306973   68864 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0501 03:39:54.307051   68864 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0501 03:39:54.313515   68864 start.go:562] Will wait 60s for crictl version
	I0501 03:39:54.313569   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:39:54.317943   68864 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0501 03:39:54.356360   68864 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0501 03:39:54.356437   68864 ssh_runner.go:195] Run: crio --version
	I0501 03:39:54.391717   68864 ssh_runner.go:195] Run: crio --version
	I0501 03:39:54.428403   68864 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0501 03:39:52.816428   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .Start
	I0501 03:39:52.816592   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Ensuring networks are active...
	I0501 03:39:52.817317   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Ensuring network default is active
	I0501 03:39:52.817668   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Ensuring network mk-default-k8s-diff-port-715118 is active
	I0501 03:39:52.818040   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Getting domain xml...
	I0501 03:39:52.818777   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Creating domain...
	I0501 03:39:54.069624   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting to get IP...
	I0501 03:39:54.070436   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:39:54.070855   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | unable to find current IP address of domain default-k8s-diff-port-715118 in network mk-default-k8s-diff-port-715118
	I0501 03:39:54.070891   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | I0501 03:39:54.070820   70304 retry.go:31] will retry after 260.072623ms: waiting for machine to come up
	I0501 03:39:54.332646   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:39:54.333077   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | unable to find current IP address of domain default-k8s-diff-port-715118 in network mk-default-k8s-diff-port-715118
	I0501 03:39:54.333115   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | I0501 03:39:54.333047   70304 retry.go:31] will retry after 270.897102ms: waiting for machine to come up
	I0501 03:39:54.605705   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:39:54.606102   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | unable to find current IP address of domain default-k8s-diff-port-715118 in network mk-default-k8s-diff-port-715118
	I0501 03:39:54.606155   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | I0501 03:39:54.606070   70304 retry.go:31] will retry after 417.613249ms: waiting for machine to come up
	I0501 03:39:55.025827   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:39:55.026340   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | unable to find current IP address of domain default-k8s-diff-port-715118 in network mk-default-k8s-diff-port-715118
	I0501 03:39:55.026371   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | I0501 03:39:55.026291   70304 retry.go:31] will retry after 428.515161ms: waiting for machine to come up
	I0501 03:39:55.456828   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:39:55.457443   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | unable to find current IP address of domain default-k8s-diff-port-715118 in network mk-default-k8s-diff-port-715118
	I0501 03:39:55.457480   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | I0501 03:39:55.457405   70304 retry.go:31] will retry after 701.294363ms: waiting for machine to come up
	I0501 03:39:54.429689   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetIP
	I0501 03:39:54.432488   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:54.432817   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:54.432858   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:54.433039   68864 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0501 03:39:54.437866   68864 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0501 03:39:54.451509   68864 kubeadm.go:877] updating cluster {Name:embed-certs-277128 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.0 ClusterName:embed-certs-277128 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.218 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0501 03:39:54.451615   68864 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0501 03:39:54.451665   68864 ssh_runner.go:195] Run: sudo crictl images --output json
	I0501 03:39:54.494304   68864 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0501 03:39:54.494379   68864 ssh_runner.go:195] Run: which lz4
	I0501 03:39:54.499090   68864 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0501 03:39:54.503970   68864 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0501 03:39:54.503992   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0501 03:39:56.216407   68864 crio.go:462] duration metric: took 1.717351739s to copy over tarball
	I0501 03:39:56.216488   68864 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0501 03:39:58.703133   68864 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.48661051s)
	I0501 03:39:58.703161   68864 crio.go:469] duration metric: took 2.486721448s to extract the tarball
	I0501 03:39:58.703171   68864 ssh_runner.go:146] rm: /preloaded.tar.lz4
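Because the preloaded image set was not found in the CRI-O store, the tarball is copied into the guest and unpacked into /var with tar + lz4, then deleted. A minimal local sketch of the extraction step (the path is the placeholder from the log; assumes tar and lz4 are installed on the host):

    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func main() {
        const tarball = "/preloaded.tar.lz4" // placeholder path from the log
        // Extract the preloaded images into /var, preserving security xattrs,
        // then delete the tarball to free space (mirrors the two log steps).
        cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", tarball)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            log.Fatalf("extract failed: %v", err)
        }
        if err := os.Remove(tarball); err != nil {
            log.Fatalf("cleanup failed: %v", err)
        }
    }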
	I0501 03:39:58.751431   68864 ssh_runner.go:195] Run: sudo crictl images --output json
	I0501 03:39:58.800353   68864 crio.go:514] all images are preloaded for cri-o runtime.
	I0501 03:39:58.800379   68864 cache_images.go:84] Images are preloaded, skipping loading
	I0501 03:39:58.800389   68864 kubeadm.go:928] updating node { 192.168.50.218 8443 v1.30.0 crio true true} ...
	I0501 03:39:58.800516   68864 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-277128 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.218
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:embed-certs-277128 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0501 03:39:58.800598   68864 ssh_runner.go:195] Run: crio config
	I0501 03:39:56.159966   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:39:56.160373   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | unable to find current IP address of domain default-k8s-diff-port-715118 in network mk-default-k8s-diff-port-715118
	I0501 03:39:56.160404   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | I0501 03:39:56.160334   70304 retry.go:31] will retry after 774.079459ms: waiting for machine to come up
	I0501 03:39:56.936654   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:39:56.937201   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | unable to find current IP address of domain default-k8s-diff-port-715118 in network mk-default-k8s-diff-port-715118
	I0501 03:39:56.937232   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | I0501 03:39:56.937161   70304 retry.go:31] will retry after 877.420181ms: waiting for machine to come up
	I0501 03:39:57.816002   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:39:57.816467   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | unable to find current IP address of domain default-k8s-diff-port-715118 in network mk-default-k8s-diff-port-715118
	I0501 03:39:57.816501   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | I0501 03:39:57.816425   70304 retry.go:31] will retry after 1.477997343s: waiting for machine to come up
	I0501 03:39:59.296533   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:39:59.296970   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | unable to find current IP address of domain default-k8s-diff-port-715118 in network mk-default-k8s-diff-port-715118
	I0501 03:39:59.296995   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | I0501 03:39:59.296922   70304 retry.go:31] will retry after 1.199617253s: waiting for machine to come up
	I0501 03:40:00.498388   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:00.498817   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | unable to find current IP address of domain default-k8s-diff-port-715118 in network mk-default-k8s-diff-port-715118
	I0501 03:40:00.498845   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | I0501 03:40:00.498770   70304 retry.go:31] will retry after 2.227608697s: waiting for machine to come up
	I0501 03:39:58.855600   68864 cni.go:84] Creating CNI manager for ""
	I0501 03:39:58.855630   68864 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0501 03:39:58.855650   68864 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0501 03:39:58.855686   68864 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.218 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-277128 NodeName:embed-certs-277128 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.218"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.218 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0501 03:39:58.855826   68864 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.218
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-277128"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.218
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.218"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0501 03:39:58.855890   68864 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0501 03:39:58.868074   68864 binaries.go:44] Found k8s binaries, skipping transfer
	I0501 03:39:58.868145   68864 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0501 03:39:58.879324   68864 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0501 03:39:58.897572   68864 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0501 03:39:58.918416   68864 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
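The kubeadm options and config shown above are rendered from the cluster settings (node name, IP, API server port, CRI socket) and written out as /var/tmp/minikube/kubeadm.yaml.new. A small text/template sketch that produces just the InitConfiguration stanza from those values (the struct and template here are illustrative stand-ins, not minikube's actual types):

    package main

    import (
        "os"
        "text/template"
    )

    // nodeParams is an illustrative stand-in for the values fed into the
    // kubeadm template (node name, IP, API server port, CRI socket).
    type nodeParams struct {
        Name      string
        IP        string
        Port      int
        CRISocket string
    }

    const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.IP}}
      bindPort: {{.Port}}
    nodeRegistration:
      criSocket: {{.CRISocket}}
      name: "{{.Name}}"
      kubeletExtraArgs:
        node-ip: {{.IP}}
      taints: []
    `

    func main() {
        p := nodeParams{
            Name:      "embed-certs-277128",
            IP:        "192.168.50.218",
            Port:      8443,
            CRISocket: "unix:///var/run/crio/crio.sock",
        }
        tmpl := template.Must(template.New("init").Parse(initCfg))
        if err := tmpl.Execute(os.Stdout, p); err != nil {
            panic(err)
        }
    }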
	I0501 03:39:58.940317   68864 ssh_runner.go:195] Run: grep 192.168.50.218	control-plane.minikube.internal$ /etc/hosts
	I0501 03:39:58.944398   68864 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.218	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0501 03:39:58.959372   68864 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:39:59.094172   68864 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 03:39:59.113612   68864 certs.go:68] Setting up /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/embed-certs-277128 for IP: 192.168.50.218
	I0501 03:39:59.113653   68864 certs.go:194] generating shared ca certs ...
	I0501 03:39:59.113669   68864 certs.go:226] acquiring lock for ca certs: {Name:mk85a75d14c00780d4bf09cd9bdaf0f96f1b8fce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:39:59.113863   68864 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key
	I0501 03:39:59.113919   68864 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key
	I0501 03:39:59.113931   68864 certs.go:256] generating profile certs ...
	I0501 03:39:59.114044   68864 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/embed-certs-277128/client.key
	I0501 03:39:59.114117   68864 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/embed-certs-277128/apiserver.key.65584253
	I0501 03:39:59.114166   68864 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/embed-certs-277128/proxy-client.key
	I0501 03:39:59.114325   68864 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem (1338 bytes)
	W0501 03:39:59.114369   68864 certs.go:480] ignoring /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724_empty.pem, impossibly tiny 0 bytes
	I0501 03:39:59.114383   68864 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem (1679 bytes)
	I0501 03:39:59.114430   68864 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem (1078 bytes)
	I0501 03:39:59.114466   68864 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem (1123 bytes)
	I0501 03:39:59.114497   68864 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem (1675 bytes)
	I0501 03:39:59.114550   68864 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem (1708 bytes)
	I0501 03:39:59.115448   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0501 03:39:59.155890   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0501 03:39:59.209160   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0501 03:39:59.251552   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0501 03:39:59.288100   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/embed-certs-277128/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0501 03:39:59.325437   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/embed-certs-277128/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0501 03:39:59.352593   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/embed-certs-277128/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0501 03:39:59.378992   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/embed-certs-277128/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0501 03:39:59.405517   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem --> /usr/share/ca-certificates/20724.pem (1338 bytes)
	I0501 03:39:59.431253   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem --> /usr/share/ca-certificates/207242.pem (1708 bytes)
	I0501 03:39:59.457155   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0501 03:39:59.483696   68864 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0501 03:39:59.502758   68864 ssh_runner.go:195] Run: openssl version
	I0501 03:39:59.509307   68864 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20724.pem && ln -fs /usr/share/ca-certificates/20724.pem /etc/ssl/certs/20724.pem"
	I0501 03:39:59.521438   68864 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20724.pem
	I0501 03:39:59.526658   68864 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  1 02:20 /usr/share/ca-certificates/20724.pem
	I0501 03:39:59.526706   68864 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20724.pem
	I0501 03:39:59.533201   68864 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20724.pem /etc/ssl/certs/51391683.0"
	I0501 03:39:59.546837   68864 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/207242.pem && ln -fs /usr/share/ca-certificates/207242.pem /etc/ssl/certs/207242.pem"
	I0501 03:39:59.560612   68864 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/207242.pem
	I0501 03:39:59.565545   68864 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  1 02:20 /usr/share/ca-certificates/207242.pem
	I0501 03:39:59.565589   68864 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/207242.pem
	I0501 03:39:59.571737   68864 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/207242.pem /etc/ssl/certs/3ec20f2e.0"
	I0501 03:39:59.584602   68864 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0501 03:39:59.599088   68864 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:39:59.604230   68864 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  1 02:08 /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:39:59.604296   68864 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:39:59.610536   68864 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
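Each CA certificate placed under /usr/share/ca-certificates is hashed with "openssl x509 -hash" and symlinked into /etc/ssl/certs under that hash (b5213941.0 for minikubeCA.pem above). A sketch of that step via os/exec and os.Symlink (paths as in the log; writing /etc/ssl/certs would require root):

    package main

    import (
        "log"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        const cert = "/usr/share/ca-certificates/minikubeCA.pem"
        // Ask openssl for the subject hash used to name the trust-store symlink.
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
        if err != nil {
            log.Fatalf("openssl failed: %v", err)
        }
        hash := strings.TrimSpace(string(out))
        link := "/etc/ssl/certs/" + hash + ".0"
        // Recreate the symlink if it is missing or stale.
        _ = os.Remove(link)
        if err := os.Symlink(cert, link); err != nil {
            log.Fatalf("symlink failed: %v", err)
        }
        log.Printf("linked %s -> %s", link, cert)
    }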
	I0501 03:39:59.624810   68864 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0501 03:39:59.629692   68864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0501 03:39:59.636209   68864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0501 03:39:59.642907   68864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0501 03:39:59.649491   68864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0501 03:39:59.655702   68864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0501 03:39:59.661884   68864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
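The "openssl x509 -checkend 86400" calls above ask whether each control-plane certificate expires within the next 24 hours. The same check can be done directly with crypto/x509; a sketch (the path is one of the files from the log):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires within d,
    // mirroring `openssl x509 -checkend` (86400s == 24h in the log).
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("expires within 24h:", soon)
    }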
	I0501 03:39:59.668075   68864 kubeadm.go:391] StartCluster: {Name:embed-certs-277128 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:embed-certs-277128 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.218 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 03:39:59.668209   68864 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0501 03:39:59.668255   68864 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0501 03:39:59.712172   68864 cri.go:89] found id: ""
	I0501 03:39:59.712262   68864 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0501 03:39:59.723825   68864 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0501 03:39:59.723848   68864 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0501 03:39:59.723854   68864 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0501 03:39:59.723890   68864 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0501 03:39:59.735188   68864 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0501 03:39:59.736670   68864 kubeconfig.go:125] found "embed-certs-277128" server: "https://192.168.50.218:8443"
	I0501 03:39:59.739665   68864 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0501 03:39:59.750292   68864 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.218
	I0501 03:39:59.750329   68864 kubeadm.go:1154] stopping kube-system containers ...
	I0501 03:39:59.750339   68864 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0501 03:39:59.750388   68864 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0501 03:39:59.791334   68864 cri.go:89] found id: ""
	I0501 03:39:59.791436   68864 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0501 03:39:59.809162   68864 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0501 03:39:59.820979   68864 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0501 03:39:59.821013   68864 kubeadm.go:156] found existing configuration files:
	
	I0501 03:39:59.821072   68864 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0501 03:39:59.832368   68864 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0501 03:39:59.832443   68864 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0501 03:39:59.843920   68864 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0501 03:39:59.855489   68864 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0501 03:39:59.855562   68864 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0501 03:39:59.867337   68864 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0501 03:39:59.878582   68864 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0501 03:39:59.878659   68864 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0501 03:39:59.890049   68864 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0501 03:39:59.901054   68864 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0501 03:39:59.901110   68864 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0501 03:39:59.912900   68864 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0501 03:39:59.925358   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:00.065105   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:00.861756   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:01.089790   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:01.158944   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:01.249842   68864 api_server.go:52] waiting for apiserver process to appear ...
	I0501 03:40:01.250063   68864 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:01.750273   68864 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:02.250155   68864 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:02.291774   68864 api_server.go:72] duration metric: took 1.041932793s to wait for apiserver process to appear ...
	I0501 03:40:02.291807   68864 api_server.go:88] waiting for apiserver healthz status ...
	I0501 03:40:02.291831   68864 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I0501 03:40:02.292377   68864 api_server.go:269] stopped: https://192.168.50.218:8443/healthz: Get "https://192.168.50.218:8443/healthz": dial tcp 192.168.50.218:8443: connect: connection refused
	I0501 03:40:02.792584   68864 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I0501 03:40:02.727799   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:02.728314   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | unable to find current IP address of domain default-k8s-diff-port-715118 in network mk-default-k8s-diff-port-715118
	I0501 03:40:02.728347   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | I0501 03:40:02.728270   70304 retry.go:31] will retry after 1.844071576s: waiting for machine to come up
	I0501 03:40:04.574870   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:04.575326   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | unable to find current IP address of domain default-k8s-diff-port-715118 in network mk-default-k8s-diff-port-715118
	I0501 03:40:04.575349   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | I0501 03:40:04.575278   70304 retry.go:31] will retry after 2.989286916s: waiting for machine to come up
	I0501 03:40:04.843311   68864 api_server.go:279] https://192.168.50.218:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0501 03:40:04.843360   68864 api_server.go:103] status: https://192.168.50.218:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0501 03:40:04.843377   68864 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I0501 03:40:04.899616   68864 api_server.go:279] https://192.168.50.218:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0501 03:40:04.899655   68864 api_server.go:103] status: https://192.168.50.218:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0501 03:40:05.292097   68864 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I0501 03:40:05.300803   68864 api_server.go:279] https://192.168.50.218:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:40:05.300843   68864 api_server.go:103] status: https://192.168.50.218:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:40:05.792151   68864 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I0501 03:40:05.797124   68864 api_server.go:279] https://192.168.50.218:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:40:05.797158   68864 api_server.go:103] status: https://192.168.50.218:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:40:06.292821   68864 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I0501 03:40:06.297912   68864 api_server.go:279] https://192.168.50.218:8443/healthz returned 200:
	ok
	I0501 03:40:06.305165   68864 api_server.go:141] control plane version: v1.30.0
	I0501 03:40:06.305199   68864 api_server.go:131] duration metric: took 4.013383351s to wait for apiserver health ...
	I0501 03:40:06.305211   68864 cni.go:84] Creating CNI manager for ""
	I0501 03:40:06.305220   68864 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0501 03:40:06.306925   68864 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0501 03:40:06.308450   68864 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0501 03:40:06.325186   68864 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0501 03:40:06.380997   68864 system_pods.go:43] waiting for kube-system pods to appear ...
	I0501 03:40:06.394134   68864 system_pods.go:59] 8 kube-system pods found
	I0501 03:40:06.394178   68864 system_pods.go:61] "coredns-7db6d8ff4d-sjplt" [6701ee8e-0630-4332-b01c-26741ed3a7b7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0501 03:40:06.394191   68864 system_pods.go:61] "etcd-embed-certs-277128" [744d7481-dd80-4435-90da-685fc16a76a4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0501 03:40:06.394206   68864 system_pods.go:61] "kube-apiserver-embed-certs-277128" [2f1d0f09-b270-4cdc-af3c-e17e3a244bbf] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0501 03:40:06.394215   68864 system_pods.go:61] "kube-controller-manager-embed-certs-277128" [729aa138-dc0d-4616-97d4-1592453971b8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0501 03:40:06.394222   68864 system_pods.go:61] "kube-proxy-phx7x" [56c0381e-c140-4f69-bbe4-09d393db8b23] Running
	I0501 03:40:06.394232   68864 system_pods.go:61] "kube-scheduler-embed-certs-277128" [cc56e9bc-7f21-4f57-a65a-73a10ffd7145] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0501 03:40:06.394253   68864 system_pods.go:61] "metrics-server-569cc877fc-p8j59" [f8ad6c24-dd5d-4515-9052-c9aca7412b55] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0501 03:40:06.394258   68864 system_pods.go:61] "storage-provisioner" [785be666-58d5-4b9d-92fd-bcacdbdebeb2] Running
	I0501 03:40:06.394273   68864 system_pods.go:74] duration metric: took 13.25246ms to wait for pod list to return data ...
	I0501 03:40:06.394293   68864 node_conditions.go:102] verifying NodePressure condition ...
	I0501 03:40:06.399912   68864 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 03:40:06.399950   68864 node_conditions.go:123] node cpu capacity is 2
	I0501 03:40:06.399974   68864 node_conditions.go:105] duration metric: took 5.664461ms to run NodePressure ...
	I0501 03:40:06.399996   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:06.675573   68864 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0501 03:40:06.680567   68864 kubeadm.go:733] kubelet initialised
	I0501 03:40:06.680591   68864 kubeadm.go:734] duration metric: took 4.987942ms waiting for restarted kubelet to initialise ...
	I0501 03:40:06.680598   68864 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 03:40:06.687295   68864 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-sjplt" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:06.692224   68864 pod_ready.go:97] node "embed-certs-277128" hosting pod "coredns-7db6d8ff4d-sjplt" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:06.692248   68864 pod_ready.go:81] duration metric: took 4.930388ms for pod "coredns-7db6d8ff4d-sjplt" in "kube-system" namespace to be "Ready" ...
	E0501 03:40:06.692258   68864 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-277128" hosting pod "coredns-7db6d8ff4d-sjplt" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:06.692266   68864 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:06.699559   68864 pod_ready.go:97] node "embed-certs-277128" hosting pod "etcd-embed-certs-277128" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:06.699591   68864 pod_ready.go:81] duration metric: took 7.309622ms for pod "etcd-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	E0501 03:40:06.699602   68864 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-277128" hosting pod "etcd-embed-certs-277128" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:06.699613   68864 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:06.705459   68864 pod_ready.go:97] node "embed-certs-277128" hosting pod "kube-apiserver-embed-certs-277128" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:06.705485   68864 pod_ready.go:81] duration metric: took 5.86335ms for pod "kube-apiserver-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	E0501 03:40:06.705497   68864 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-277128" hosting pod "kube-apiserver-embed-certs-277128" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:06.705504   68864 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:06.786157   68864 pod_ready.go:97] node "embed-certs-277128" hosting pod "kube-controller-manager-embed-certs-277128" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:06.786186   68864 pod_ready.go:81] duration metric: took 80.673223ms for pod "kube-controller-manager-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	E0501 03:40:06.786198   68864 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-277128" hosting pod "kube-controller-manager-embed-certs-277128" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:06.786205   68864 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-phx7x" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:07.184262   68864 pod_ready.go:97] node "embed-certs-277128" hosting pod "kube-proxy-phx7x" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:07.184297   68864 pod_ready.go:81] duration metric: took 398.081204ms for pod "kube-proxy-phx7x" in "kube-system" namespace to be "Ready" ...
	E0501 03:40:07.184309   68864 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-277128" hosting pod "kube-proxy-phx7x" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:07.184319   68864 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:07.584569   68864 pod_ready.go:97] node "embed-certs-277128" hosting pod "kube-scheduler-embed-certs-277128" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:07.584607   68864 pod_ready.go:81] duration metric: took 400.279023ms for pod "kube-scheduler-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	E0501 03:40:07.584620   68864 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-277128" hosting pod "kube-scheduler-embed-certs-277128" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:07.584630   68864 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:07.984376   68864 pod_ready.go:97] node "embed-certs-277128" hosting pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:07.984408   68864 pod_ready.go:81] duration metric: took 399.766342ms for pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace to be "Ready" ...
	E0501 03:40:07.984419   68864 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-277128" hosting pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:07.984428   68864 pod_ready.go:38] duration metric: took 1.303821777s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 03:40:07.984448   68864 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0501 03:40:08.000370   68864 ops.go:34] apiserver oom_adj: -16
	I0501 03:40:08.000391   68864 kubeadm.go:591] duration metric: took 8.276531687s to restartPrimaryControlPlane
	I0501 03:40:08.000401   68864 kubeadm.go:393] duration metric: took 8.332343707s to StartCluster
	I0501 03:40:08.000416   68864 settings.go:142] acquiring lock: {Name:mkcfe08bcadd45d99c00ed1eaf4e9c226a489fe2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:40:08.000482   68864 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18779-13391/kubeconfig
	I0501 03:40:08.002013   68864 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/kubeconfig: {Name:mk5d1131557f885800877622151c0915337cb23a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:40:08.002343   68864 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.218 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0501 03:40:08.004301   68864 out.go:177] * Verifying Kubernetes components...
	I0501 03:40:08.002423   68864 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0501 03:40:08.002582   68864 config.go:182] Loaded profile config "embed-certs-277128": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 03:40:08.005608   68864 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-277128"
	I0501 03:40:08.005624   68864 addons.go:69] Setting metrics-server=true in profile "embed-certs-277128"
	I0501 03:40:08.005658   68864 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-277128"
	W0501 03:40:08.005670   68864 addons.go:243] addon storage-provisioner should already be in state true
	I0501 03:40:08.005609   68864 addons.go:69] Setting default-storageclass=true in profile "embed-certs-277128"
	I0501 03:40:08.005785   68864 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-277128"
	I0501 03:40:08.005659   68864 addons.go:234] Setting addon metrics-server=true in "embed-certs-277128"
	W0501 03:40:08.005819   68864 addons.go:243] addon metrics-server should already be in state true
	I0501 03:40:08.005851   68864 host.go:66] Checking if "embed-certs-277128" exists ...
	I0501 03:40:08.005613   68864 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:40:08.005695   68864 host.go:66] Checking if "embed-certs-277128" exists ...
	I0501 03:40:08.006230   68864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:40:08.006258   68864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:40:08.006291   68864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:40:08.006310   68864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:40:08.006326   68864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:40:08.006378   68864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:40:08.021231   68864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35583
	I0501 03:40:08.021276   68864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44971
	I0501 03:40:08.021621   68864 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:40:08.021673   68864 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:40:08.022126   68864 main.go:141] libmachine: Using API Version  1
	I0501 03:40:08.022146   68864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:40:08.022353   68864 main.go:141] libmachine: Using API Version  1
	I0501 03:40:08.022390   68864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:40:08.022537   68864 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:40:08.022730   68864 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:40:08.022904   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetState
	I0501 03:40:08.023118   68864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:40:08.023165   68864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:40:08.024792   68864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33047
	I0501 03:40:08.025226   68864 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:40:08.025734   68864 main.go:141] libmachine: Using API Version  1
	I0501 03:40:08.025761   68864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:40:08.026090   68864 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:40:08.026569   68864 addons.go:234] Setting addon default-storageclass=true in "embed-certs-277128"
	W0501 03:40:08.026593   68864 addons.go:243] addon default-storageclass should already be in state true
	I0501 03:40:08.026620   68864 host.go:66] Checking if "embed-certs-277128" exists ...
	I0501 03:40:08.026696   68864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:40:08.026730   68864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:40:08.026977   68864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:40:08.027033   68864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:40:08.039119   68864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42975
	I0501 03:40:08.039585   68864 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:40:08.040083   68864 main.go:141] libmachine: Using API Version  1
	I0501 03:40:08.040106   68864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:40:08.040419   68864 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:40:08.040599   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetState
	I0501 03:40:08.042228   68864 main.go:141] libmachine: (embed-certs-277128) Calling .DriverName
	I0501 03:40:08.044289   68864 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 03:40:08.045766   68864 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0501 03:40:08.045787   68864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0501 03:40:08.045804   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHHostname
	I0501 03:40:08.043677   68864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37141
	I0501 03:40:08.045633   68864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41653
	I0501 03:40:08.046247   68864 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:40:08.046326   68864 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:40:08.046989   68864 main.go:141] libmachine: Using API Version  1
	I0501 03:40:08.047012   68864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:40:08.047196   68864 main.go:141] libmachine: Using API Version  1
	I0501 03:40:08.047216   68864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:40:08.047279   68864 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:40:08.047403   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetState
	I0501 03:40:08.047515   68864 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:40:08.048047   68864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:40:08.048081   68864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:40:08.049225   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:40:08.049623   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:40:08.049649   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:40:08.049773   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHPort
	I0501 03:40:08.049915   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:40:08.050096   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHUsername
	I0501 03:40:08.050165   68864 main.go:141] libmachine: (embed-certs-277128) Calling .DriverName
	I0501 03:40:08.050297   68864 sshutil.go:53] new ssh client: &{IP:192.168.50.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/embed-certs-277128/id_rsa Username:docker}
	I0501 03:40:08.052006   68864 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0501 03:40:08.053365   68864 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0501 03:40:08.053380   68864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0501 03:40:08.053394   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHHostname
	I0501 03:40:08.056360   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:40:08.056752   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:40:08.056782   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:40:08.056892   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHPort
	I0501 03:40:08.057074   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:40:08.057215   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHUsername
	I0501 03:40:08.057334   68864 sshutil.go:53] new ssh client: &{IP:192.168.50.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/embed-certs-277128/id_rsa Username:docker}
	I0501 03:40:08.064476   68864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43029
	I0501 03:40:08.064882   68864 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:40:08.065323   68864 main.go:141] libmachine: Using API Version  1
	I0501 03:40:08.065352   68864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:40:08.065696   68864 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:40:08.065895   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetState
	I0501 03:40:08.067420   68864 main.go:141] libmachine: (embed-certs-277128) Calling .DriverName
	I0501 03:40:08.067740   68864 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0501 03:40:08.067762   68864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0501 03:40:08.067774   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHHostname
	I0501 03:40:08.070587   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:40:08.071043   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:40:08.071073   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:40:08.071225   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHPort
	I0501 03:40:08.071401   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:40:08.071554   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHUsername
	I0501 03:40:08.071688   68864 sshutil.go:53] new ssh client: &{IP:192.168.50.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/embed-certs-277128/id_rsa Username:docker}
	I0501 03:40:08.204158   68864 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 03:40:08.229990   68864 node_ready.go:35] waiting up to 6m0s for node "embed-certs-277128" to be "Ready" ...
	I0501 03:40:08.289511   68864 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0501 03:40:08.289535   68864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0501 03:40:08.301855   68864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0501 03:40:08.311966   68864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0501 03:40:08.330943   68864 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0501 03:40:08.330973   68864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0501 03:40:08.384842   68864 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0501 03:40:08.384867   68864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0501 03:40:08.445206   68864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0501 03:40:09.434390   68864 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.122391479s)
	I0501 03:40:09.434458   68864 main.go:141] libmachine: Making call to close driver server
	I0501 03:40:09.434471   68864 main.go:141] libmachine: (embed-certs-277128) Calling .Close
	I0501 03:40:09.434518   68864 main.go:141] libmachine: Making call to close driver server
	I0501 03:40:09.434541   68864 main.go:141] libmachine: (embed-certs-277128) Calling .Close
	I0501 03:40:09.434567   68864 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.132680542s)
	I0501 03:40:09.434595   68864 main.go:141] libmachine: Making call to close driver server
	I0501 03:40:09.434604   68864 main.go:141] libmachine: (embed-certs-277128) Calling .Close
	I0501 03:40:09.434833   68864 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:40:09.434859   68864 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:40:09.434870   68864 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:40:09.434872   68864 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:40:09.434881   68864 main.go:141] libmachine: Making call to close driver server
	I0501 03:40:09.434882   68864 main.go:141] libmachine: Making call to close driver server
	I0501 03:40:09.434889   68864 main.go:141] libmachine: (embed-certs-277128) Calling .Close
	I0501 03:40:09.434890   68864 main.go:141] libmachine: (embed-certs-277128) Calling .Close
	I0501 03:40:09.434936   68864 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:40:09.434949   68864 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:40:09.434967   68864 main.go:141] libmachine: Making call to close driver server
	I0501 03:40:09.434994   68864 main.go:141] libmachine: (embed-certs-277128) Calling .Close
	I0501 03:40:09.434832   68864 main.go:141] libmachine: (embed-certs-277128) DBG | Closing plugin on server side
	I0501 03:40:09.435072   68864 main.go:141] libmachine: (embed-certs-277128) DBG | Closing plugin on server side
	I0501 03:40:09.437116   68864 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:40:09.437138   68864 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:40:09.437146   68864 main.go:141] libmachine: (embed-certs-277128) DBG | Closing plugin on server side
	I0501 03:40:09.437179   68864 main.go:141] libmachine: (embed-certs-277128) DBG | Closing plugin on server side
	I0501 03:40:09.437194   68864 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:40:09.437215   68864 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:40:09.437297   68864 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:40:09.437342   68864 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:40:09.437359   68864 addons.go:470] Verifying addon metrics-server=true in "embed-certs-277128"
	I0501 03:40:09.445787   68864 main.go:141] libmachine: Making call to close driver server
	I0501 03:40:09.445817   68864 main.go:141] libmachine: (embed-certs-277128) Calling .Close
	I0501 03:40:09.446053   68864 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:40:09.446090   68864 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:40:09.446112   68864 main.go:141] libmachine: (embed-certs-277128) DBG | Closing plugin on server side
	I0501 03:40:09.448129   68864 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I0501 03:40:07.567551   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:07.567914   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | unable to find current IP address of domain default-k8s-diff-port-715118 in network mk-default-k8s-diff-port-715118
	I0501 03:40:07.567948   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | I0501 03:40:07.567860   70304 retry.go:31] will retry after 4.440791777s: waiting for machine to come up
	I0501 03:40:13.516002   69580 start.go:364] duration metric: took 3m31.9441828s to acquireMachinesLock for "old-k8s-version-503971"
	I0501 03:40:13.516087   69580 start.go:96] Skipping create...Using existing machine configuration
	I0501 03:40:13.516100   69580 fix.go:54] fixHost starting: 
	I0501 03:40:13.516559   69580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:40:13.516601   69580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:40:13.537158   69580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38535
	I0501 03:40:13.537631   69580 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:40:13.538169   69580 main.go:141] libmachine: Using API Version  1
	I0501 03:40:13.538197   69580 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:40:13.538570   69580 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:40:13.538769   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .DriverName
	I0501 03:40:13.538958   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetState
	I0501 03:40:13.540454   69580 fix.go:112] recreateIfNeeded on old-k8s-version-503971: state=Stopped err=<nil>
	I0501 03:40:13.540486   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .DriverName
	W0501 03:40:13.540787   69580 fix.go:138] unexpected machine state, will restart: <nil>
	I0501 03:40:13.542670   69580 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-503971" ...
	I0501 03:40:09.449483   68864 addons.go:505] duration metric: took 1.447068548s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass]
	I0501 03:40:10.233650   68864 node_ready.go:53] node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:12.234270   68864 node_ready.go:53] node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:12.011886   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.012305   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Found IP for machine: 192.168.72.158
	I0501 03:40:12.012335   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has current primary IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.012347   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Reserving static IP address...
	I0501 03:40:12.012759   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-715118", mac: "52:54:00:87:12:31", ip: "192.168.72.158"} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:12.012796   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | skip adding static IP to network mk-default-k8s-diff-port-715118 - found existing host DHCP lease matching {name: "default-k8s-diff-port-715118", mac: "52:54:00:87:12:31", ip: "192.168.72.158"}
	I0501 03:40:12.012809   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Reserved static IP address: 192.168.72.158
	I0501 03:40:12.012828   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for SSH to be available...
	I0501 03:40:12.012835   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | Getting to WaitForSSH function...
	I0501 03:40:12.014719   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.015044   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:12.015080   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.015193   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | Using SSH client type: external
	I0501 03:40:12.015220   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | Using SSH private key: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/default-k8s-diff-port-715118/id_rsa (-rw-------)
	I0501 03:40:12.015269   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.158 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18779-13391/.minikube/machines/default-k8s-diff-port-715118/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0501 03:40:12.015280   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | About to run SSH command:
	I0501 03:40:12.015289   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | exit 0
	I0501 03:40:12.138881   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | SSH cmd err, output: <nil>: 
	I0501 03:40:12.139286   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetConfigRaw
	I0501 03:40:12.140056   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetIP
	I0501 03:40:12.142869   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.143322   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:12.143353   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.143662   69237 profile.go:143] Saving config to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/default-k8s-diff-port-715118/config.json ...
	I0501 03:40:12.143858   69237 machine.go:94] provisionDockerMachine start ...
	I0501 03:40:12.143876   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .DriverName
	I0501 03:40:12.144117   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHHostname
	I0501 03:40:12.146145   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.146535   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:12.146563   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.146712   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHPort
	I0501 03:40:12.146889   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:40:12.147021   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:40:12.147130   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHUsername
	I0501 03:40:12.147310   69237 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:12.147558   69237 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.158 22 <nil> <nil>}
	I0501 03:40:12.147574   69237 main.go:141] libmachine: About to run SSH command:
	hostname
	I0501 03:40:12.251357   69237 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0501 03:40:12.251387   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetMachineName
	I0501 03:40:12.251629   69237 buildroot.go:166] provisioning hostname "default-k8s-diff-port-715118"
	I0501 03:40:12.251658   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetMachineName
	I0501 03:40:12.251862   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHHostname
	I0501 03:40:12.254582   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.254892   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:12.254924   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.255073   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHPort
	I0501 03:40:12.255276   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:40:12.255435   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:40:12.255575   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHUsername
	I0501 03:40:12.255744   69237 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:12.255905   69237 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.158 22 <nil> <nil>}
	I0501 03:40:12.255917   69237 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-715118 && echo "default-k8s-diff-port-715118" | sudo tee /etc/hostname
	I0501 03:40:12.377588   69237 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-715118
	
	I0501 03:40:12.377628   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHHostname
	I0501 03:40:12.380627   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.380927   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:12.380958   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.381155   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHPort
	I0501 03:40:12.381372   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:40:12.381550   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:40:12.381723   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHUsername
	I0501 03:40:12.381907   69237 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:12.382148   69237 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.158 22 <nil> <nil>}
	I0501 03:40:12.382170   69237 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-715118' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-715118/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-715118' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0501 03:40:12.494424   69237 main.go:141] libmachine: SSH cmd err, output: <nil>: 
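
The shell snippet above keeps /etc/hosts consistent with the new hostname: if no line already ends in the hostname, it either rewrites an existing 127.0.1.1 entry or appends one. Below is a minimal Go sketch of the same idempotent update; it assumes direct file access on the guest rather than minikube's SSH runner, and the hostname is simply the value shown in this log.

	// ensureHostsEntry mirrors the logged shell logic: skip if the hostname is
	// already present, otherwise rewrite the 127.0.1.1 line or append one.
	package main
	
	import (
		"os"
		"regexp"
		"strings"
	)
	
	func ensureHostsEntry(path, hostname string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		content := string(data)
		// Equivalent of: grep -xq '.*\s<hostname>' /etc/hosts
		if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(hostname) + `$`).MatchString(content) {
			return nil // hostname already mapped
		}
		line := "127.0.1.1 " + hostname
		re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		if re.MatchString(content) {
			content = re.ReplaceAllString(content, line)
		} else {
			content = strings.TrimRight(content, "\n") + "\n" + line + "\n"
		}
		return os.WriteFile(path, []byte(content), 0o644)
	}
	
	func main() {
		if err := ensureHostsEntry("/etc/hosts", "default-k8s-diff-port-715118"); err != nil {
			panic(err)
		}
	}
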
	I0501 03:40:12.494454   69237 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18779-13391/.minikube CaCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18779-13391/.minikube}
	I0501 03:40:12.494484   69237 buildroot.go:174] setting up certificates
	I0501 03:40:12.494493   69237 provision.go:84] configureAuth start
	I0501 03:40:12.494504   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetMachineName
	I0501 03:40:12.494786   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetIP
	I0501 03:40:12.497309   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.497584   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:12.497616   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.497746   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHHostname
	I0501 03:40:12.500010   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.500302   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:12.500322   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.500449   69237 provision.go:143] copyHostCerts
	I0501 03:40:12.500505   69237 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem, removing ...
	I0501 03:40:12.500524   69237 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem
	I0501 03:40:12.500598   69237 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem (1078 bytes)
	I0501 03:40:12.500759   69237 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem, removing ...
	I0501 03:40:12.500772   69237 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem
	I0501 03:40:12.500815   69237 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem (1123 bytes)
	I0501 03:40:12.500891   69237 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem, removing ...
	I0501 03:40:12.500900   69237 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem
	I0501 03:40:12.500925   69237 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem (1675 bytes)
	I0501 03:40:12.500991   69237 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-715118 san=[127.0.0.1 192.168.72.158 default-k8s-diff-port-715118 localhost minikube]
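
The server certificate generated here has to cover every name and IP in the san list above. The sketch below is not minikube's code: it shows the shape of such a certificate using Go's crypto/x509, self-signing for brevity, whereas the provisioner signs with the minikube CA key (ca.pem/ca-key.pem).

	// Illustrative only: build a server certificate covering the SANs from the
	// log line above (IPs 127.0.0.1 and 192.168.72.158, DNS names for the
	// machine, localhost and minikube), then print it as PEM.
	package main
	
	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)
	
	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-715118"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SAN set taken from the log above.
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.158")},
			DNSNames:    []string{"default-k8s-diff-port-715118", "localhost", "minikube"},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
			panic(err)
		}
	}
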
	I0501 03:40:12.779037   69237 provision.go:177] copyRemoteCerts
	I0501 03:40:12.779104   69237 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0501 03:40:12.779139   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHHostname
	I0501 03:40:12.781800   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.782159   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:12.782195   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.782356   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHPort
	I0501 03:40:12.782655   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:40:12.782812   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHUsername
	I0501 03:40:12.782946   69237 sshutil.go:53] new ssh client: &{IP:192.168.72.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/default-k8s-diff-port-715118/id_rsa Username:docker}
	I0501 03:40:12.867622   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0501 03:40:12.897105   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0501 03:40:12.926675   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0501 03:40:12.955373   69237 provision.go:87] duration metric: took 460.865556ms to configureAuth
	I0501 03:40:12.955405   69237 buildroot.go:189] setting minikube options for container-runtime
	I0501 03:40:12.955606   69237 config.go:182] Loaded profile config "default-k8s-diff-port-715118": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 03:40:12.955700   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHHostname
	I0501 03:40:12.958286   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.958632   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:12.958670   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.958800   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHPort
	I0501 03:40:12.959007   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:40:12.959225   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:40:12.959374   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHUsername
	I0501 03:40:12.959554   69237 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:12.959729   69237 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.158 22 <nil> <nil>}
	I0501 03:40:12.959748   69237 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0501 03:40:13.253328   69237 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0501 03:40:13.253356   69237 machine.go:97] duration metric: took 1.109484866s to provisionDockerMachine
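
The drop-in written just above contains the single CRIO_MINIKUBE_OPTIONS line echoed back in the command output. A tiny sketch of writing that file directly, assuming local root access instead of the SSH session used here:

	// Write the one-line sysconfig drop-in from the log, then a service
	// restart (left to systemctl) would pick it up. Not minikube's code.
	package main
	
	import "os"
	
	func main() {
		const content = "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n"
		if err := os.WriteFile("/etc/sysconfig/crio.minikube", []byte(content), 0o644); err != nil {
			panic(err)
		}
	}
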
	I0501 03:40:13.253371   69237 start.go:293] postStartSetup for "default-k8s-diff-port-715118" (driver="kvm2")
	I0501 03:40:13.253385   69237 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0501 03:40:13.253405   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .DriverName
	I0501 03:40:13.253753   69237 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0501 03:40:13.253790   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHHostname
	I0501 03:40:13.256734   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:13.257187   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:13.257214   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:13.257345   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHPort
	I0501 03:40:13.257547   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:40:13.257708   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHUsername
	I0501 03:40:13.257856   69237 sshutil.go:53] new ssh client: &{IP:192.168.72.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/default-k8s-diff-port-715118/id_rsa Username:docker}
	I0501 03:40:13.353373   69237 ssh_runner.go:195] Run: cat /etc/os-release
	I0501 03:40:13.359653   69237 info.go:137] Remote host: Buildroot 2023.02.9
	I0501 03:40:13.359679   69237 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13391/.minikube/addons for local assets ...
	I0501 03:40:13.359747   69237 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13391/.minikube/files for local assets ...
	I0501 03:40:13.359854   69237 filesync.go:149] local asset: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem -> 207242.pem in /etc/ssl/certs
	I0501 03:40:13.359964   69237 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0501 03:40:13.370608   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem --> /etc/ssl/certs/207242.pem (1708 bytes)
	I0501 03:40:13.402903   69237 start.go:296] duration metric: took 149.518346ms for postStartSetup
	I0501 03:40:13.402946   69237 fix.go:56] duration metric: took 20.610871873s for fixHost
	I0501 03:40:13.402967   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHHostname
	I0501 03:40:13.406324   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:13.406762   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:13.406792   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:13.407028   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHPort
	I0501 03:40:13.407274   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:40:13.407505   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:40:13.407645   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHUsername
	I0501 03:40:13.407831   69237 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:13.408034   69237 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.158 22 <nil> <nil>}
	I0501 03:40:13.408045   69237 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0501 03:40:13.515775   69237 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714534813.490981768
	
	I0501 03:40:13.515814   69237 fix.go:216] guest clock: 1714534813.490981768
	I0501 03:40:13.515852   69237 fix.go:229] Guest: 2024-05-01 03:40:13.490981768 +0000 UTC Remote: 2024-05-01 03:40:13.402950224 +0000 UTC m=+262.796298359 (delta=88.031544ms)
	I0501 03:40:13.515884   69237 fix.go:200] guest clock delta is within tolerance: 88.031544ms
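
The guest clock check runs date +%s.%N on the machine and compares it against the host's wall clock; only the magnitude of the difference matters. A rough sketch of that comparison, using the sample value from this log and an illustrative one-second tolerance (not necessarily minikube's threshold):

	// clockDelta parses "seconds.nanoseconds" output from the guest and
	// returns guest time minus host time.
	package main
	
	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)
	
	func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
		parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return 0, err
		}
		var nsec int64
		if len(parts) == 2 {
			// Pad/truncate the fractional part to 9 digits before parsing.
			frac := (parts[1] + "000000000")[:9]
			if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
				return 0, err
			}
		}
		return time.Unix(sec, nsec).Sub(host), nil
	}
	
	func main() {
		delta, err := clockDelta("1714534813.490981768", time.Now()) // sample value from the log
		if err != nil {
			panic(err)
		}
		if delta < 0 {
			delta = -delta
		}
		const tolerance = time.Second // illustrative threshold
		fmt.Printf("clock delta %v, within %v tolerance: %v\n", delta, tolerance, delta <= tolerance)
	}
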
	I0501 03:40:13.515891   69237 start.go:83] releasing machines lock for "default-k8s-diff-port-715118", held for 20.723857967s
	I0501 03:40:13.515976   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .DriverName
	I0501 03:40:13.516272   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetIP
	I0501 03:40:13.519627   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:13.520098   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:13.520128   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:13.520304   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .DriverName
	I0501 03:40:13.520922   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .DriverName
	I0501 03:40:13.521122   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .DriverName
	I0501 03:40:13.521212   69237 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0501 03:40:13.521292   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHHostname
	I0501 03:40:13.521355   69237 ssh_runner.go:195] Run: cat /version.json
	I0501 03:40:13.521387   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHHostname
	I0501 03:40:13.524292   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:13.524328   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:13.524612   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:13.524672   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:13.524819   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:13.524948   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:13.524989   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHPort
	I0501 03:40:13.525033   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHPort
	I0501 03:40:13.525171   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:40:13.525196   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:40:13.525306   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHUsername
	I0501 03:40:13.525401   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHUsername
	I0501 03:40:13.525490   69237 sshutil.go:53] new ssh client: &{IP:192.168.72.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/default-k8s-diff-port-715118/id_rsa Username:docker}
	I0501 03:40:13.525553   69237 sshutil.go:53] new ssh client: &{IP:192.168.72.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/default-k8s-diff-port-715118/id_rsa Username:docker}
	I0501 03:40:13.628623   69237 ssh_runner.go:195] Run: systemctl --version
	I0501 03:40:13.636013   69237 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0501 03:40:13.787414   69237 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0501 03:40:13.795777   69237 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0501 03:40:13.795867   69237 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0501 03:40:13.822287   69237 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0501 03:40:13.822326   69237 start.go:494] detecting cgroup driver to use...
	I0501 03:40:13.822507   69237 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0501 03:40:13.841310   69237 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 03:40:13.857574   69237 docker.go:217] disabling cri-docker service (if available) ...
	I0501 03:40:13.857645   69237 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0501 03:40:13.872903   69237 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0501 03:40:13.889032   69237 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0501 03:40:14.020563   69237 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0501 03:40:14.222615   69237 docker.go:233] disabling docker service ...
	I0501 03:40:14.222691   69237 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0501 03:40:14.245841   69237 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0501 03:40:14.261001   69237 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0501 03:40:14.385943   69237 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0501 03:40:14.516899   69237 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0501 03:40:14.545138   69237 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 03:40:14.570308   69237 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0501 03:40:14.570373   69237 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:14.586460   69237 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0501 03:40:14.586535   69237 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:14.598947   69237 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:14.617581   69237 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:14.630097   69237 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0501 03:40:14.642379   69237 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:14.653723   69237 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:14.674508   69237 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
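
Each of the sed one-liners above is an idempotent "set this key in /etc/crio/crio.conf.d/02-crio.conf" edit: pause image, cgroup manager, conmon cgroup and the unprivileged-port sysctl. A hypothetical Go helper with the same effect for the simple key = value cases is sketched below; the file path and key names come from the log, but the helper itself is not minikube code.

	// setKey replaces any existing `key = ...` line with `key = "value"`,
	// or appends one if the key is absent.
	package main
	
	import (
		"fmt"
		"os"
		"regexp"
	)
	
	func setKey(content, key, value string) string {
		re := regexp.MustCompile(`(?m)^\s*` + regexp.QuoteMeta(key) + `\s*=.*$`)
		line := fmt.Sprintf("%s = %q", key, value)
		if re.MatchString(content) {
			return re.ReplaceAllString(content, line)
		}
		return content + "\n" + line + "\n"
	}
	
	func main() {
		const path = "/etc/crio/crio.conf.d/02-crio.conf"
		data, err := os.ReadFile(path)
		if err != nil {
			panic(err)
		}
		out := setKey(string(data), "pause_image", "registry.k8s.io/pause:3.9")
		out = setKey(out, "cgroup_manager", "cgroupfs")
		if err := os.WriteFile(path, []byte(out), 0o644); err != nil {
			panic(err)
		}
	}
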
	I0501 03:40:14.685890   69237 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0501 03:40:14.696560   69237 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0501 03:40:14.696614   69237 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0501 03:40:14.713050   69237 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
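
The sysctl probe fails only because the br_netfilter module is not loaded yet, so the code falls back to modprobe and then enables IPv4 forwarding. A sketch of that check-and-fallback (not minikube's implementation; it must run as root):

	// Probe the bridge-netfilter sysctl, load br_netfilter if the key is
	// missing, then enable IPv4 forwarding, mirroring the logged commands.
	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
	)
	
	func main() {
		const key = "/proc/sys/net/bridge/bridge-nf-call-iptables"
		if _, err := os.Stat(key); err != nil {
			fmt.Println("bridge netfilter not available yet, loading br_netfilter:", err)
			if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
				panic(fmt.Sprintf("modprobe br_netfilter: %v: %s", err, out))
			}
		}
		// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
		if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644); err != nil {
			panic(err)
		}
		fmt.Println("netfilter prerequisites in place")
	}
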
	I0501 03:40:14.723466   69237 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:40:14.884910   69237 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0501 03:40:15.030618   69237 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0501 03:40:15.030689   69237 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0501 03:40:15.036403   69237 start.go:562] Will wait 60s for crictl version
	I0501 03:40:15.036470   69237 ssh_runner.go:195] Run: which crictl
	I0501 03:40:15.040924   69237 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0501 03:40:15.082944   69237 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0501 03:40:15.083037   69237 ssh_runner.go:195] Run: crio --version
	I0501 03:40:15.123492   69237 ssh_runner.go:195] Run: crio --version
	I0501 03:40:15.160739   69237 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0501 03:40:15.162026   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetIP
	I0501 03:40:15.164966   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:15.165378   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:15.165417   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:15.165621   69237 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0501 03:40:15.171717   69237 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0501 03:40:15.190203   69237 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-715118 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.0 ClusterName:default-k8s-diff-port-715118 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.158 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0501 03:40:15.190359   69237 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0501 03:40:15.190439   69237 ssh_runner.go:195] Run: sudo crictl images --output json
	I0501 03:40:15.240549   69237 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0501 03:40:15.240606   69237 ssh_runner.go:195] Run: which lz4
	I0501 03:40:15.246523   69237 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0501 03:40:15.253094   69237 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0501 03:40:15.253139   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0501 03:40:13.544100   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .Start
	I0501 03:40:13.544328   69580 main.go:141] libmachine: (old-k8s-version-503971) Ensuring networks are active...
	I0501 03:40:13.545238   69580 main.go:141] libmachine: (old-k8s-version-503971) Ensuring network default is active
	I0501 03:40:13.545621   69580 main.go:141] libmachine: (old-k8s-version-503971) Ensuring network mk-old-k8s-version-503971 is active
	I0501 03:40:13.546072   69580 main.go:141] libmachine: (old-k8s-version-503971) Getting domain xml...
	I0501 03:40:13.546928   69580 main.go:141] libmachine: (old-k8s-version-503971) Creating domain...
	I0501 03:40:14.858558   69580 main.go:141] libmachine: (old-k8s-version-503971) Waiting to get IP...
	I0501 03:40:14.859690   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:14.860108   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:40:14.860215   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:40:14.860103   70499 retry.go:31] will retry after 294.057322ms: waiting for machine to come up
	I0501 03:40:15.155490   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:15.155922   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:40:15.155954   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:40:15.155870   70499 retry.go:31] will retry after 281.238966ms: waiting for machine to come up
	I0501 03:40:15.439196   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:15.439735   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:40:15.439783   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:40:15.439697   70499 retry.go:31] will retry after 429.353689ms: waiting for machine to come up
	I0501 03:40:15.871266   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:15.871947   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:40:15.871970   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:40:15.871895   70499 retry.go:31] will retry after 478.685219ms: waiting for machine to come up
	I0501 03:40:16.352661   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:16.353125   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:40:16.353161   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:40:16.353087   70499 retry.go:31] will retry after 642.905156ms: waiting for machine to come up
	I0501 03:40:14.235378   68864 node_ready.go:53] node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:15.735465   68864 node_ready.go:49] node "embed-certs-277128" has status "Ready":"True"
	I0501 03:40:15.735494   68864 node_ready.go:38] duration metric: took 7.50546727s for node "embed-certs-277128" to be "Ready" ...
	I0501 03:40:15.735503   68864 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 03:40:15.743215   68864 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-sjplt" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:17.752821   68864 pod_ready.go:102] pod "coredns-7db6d8ff4d-sjplt" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:17.121023   69237 crio.go:462] duration metric: took 1.874524806s to copy over tarball
	I0501 03:40:17.121097   69237 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0501 03:40:19.792970   69237 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.671840765s)
	I0501 03:40:19.793004   69237 crio.go:469] duration metric: took 2.67194801s to extract the tarball
	I0501 03:40:19.793014   69237 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0501 03:40:19.834845   69237 ssh_runner.go:195] Run: sudo crictl images --output json
	I0501 03:40:19.896841   69237 crio.go:514] all images are preloaded for cri-o runtime.
	I0501 03:40:19.896881   69237 cache_images.go:84] Images are preloaded, skipping loading
	I0501 03:40:19.896892   69237 kubeadm.go:928] updating node { 192.168.72.158 8444 v1.30.0 crio true true} ...
	I0501 03:40:19.897027   69237 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-715118 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.158
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-715118 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0501 03:40:19.897113   69237 ssh_runner.go:195] Run: crio config
	I0501 03:40:19.953925   69237 cni.go:84] Creating CNI manager for ""
	I0501 03:40:19.953956   69237 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0501 03:40:19.953971   69237 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0501 03:40:19.953991   69237 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.158 APIServerPort:8444 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-715118 NodeName:default-k8s-diff-port-715118 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.158"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.158 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0501 03:40:19.954133   69237 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.158
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-715118"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.158
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.158"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
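
The four YAML documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are written out as a single multi-document file, /var/tmp/minikube/kubeadm.yaml.new a few lines further down, and eventually consumed by kubeadm. Below is a small sketch that sanity-checks such a file by listing each document's apiVersion and kind; it assumes the gopkg.in/yaml.v3 package and is not part of minikube.

	// Decode each YAML document in the generated kubeadm config and print its
	// apiVersion/kind as a cheap structural check.
	package main
	
	import (
		"errors"
		"fmt"
		"io"
		"os"
	
		"gopkg.in/yaml.v3"
	)
	
	func main() {
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new") // path taken from the log below
		if err != nil {
			panic(err)
		}
		defer f.Close()
	
		dec := yaml.NewDecoder(f)
		for {
			var doc struct {
				APIVersion string `yaml:"apiVersion"`
				Kind       string `yaml:"kind"`
			}
			if err := dec.Decode(&doc); err != nil {
				if errors.Is(err, io.EOF) {
					break
				}
				panic(err)
			}
			fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
		}
	}
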
	
	I0501 03:40:19.954198   69237 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0501 03:40:19.967632   69237 binaries.go:44] Found k8s binaries, skipping transfer
	I0501 03:40:19.967708   69237 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0501 03:40:19.984161   69237 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0501 03:40:20.006540   69237 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0501 03:40:20.029218   69237 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0501 03:40:20.051612   69237 ssh_runner.go:195] Run: grep 192.168.72.158	control-plane.minikube.internal$ /etc/hosts
	I0501 03:40:20.056502   69237 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.158	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0501 03:40:20.071665   69237 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:40:20.194289   69237 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 03:40:20.215402   69237 certs.go:68] Setting up /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/default-k8s-diff-port-715118 for IP: 192.168.72.158
	I0501 03:40:20.215440   69237 certs.go:194] generating shared ca certs ...
	I0501 03:40:20.215471   69237 certs.go:226] acquiring lock for ca certs: {Name:mk85a75d14c00780d4bf09cd9bdaf0f96f1b8fce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:40:20.215698   69237 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key
	I0501 03:40:20.215769   69237 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key
	I0501 03:40:20.215785   69237 certs.go:256] generating profile certs ...
	I0501 03:40:20.215922   69237 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/default-k8s-diff-port-715118/client.key
	I0501 03:40:20.216023   69237 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/default-k8s-diff-port-715118/apiserver.key.91bc3872
	I0501 03:40:20.216094   69237 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/default-k8s-diff-port-715118/proxy-client.key
	I0501 03:40:20.216275   69237 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem (1338 bytes)
	W0501 03:40:20.216321   69237 certs.go:480] ignoring /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724_empty.pem, impossibly tiny 0 bytes
	I0501 03:40:20.216337   69237 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem (1679 bytes)
	I0501 03:40:20.216375   69237 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem (1078 bytes)
	I0501 03:40:20.216439   69237 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem (1123 bytes)
	I0501 03:40:20.216472   69237 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem (1675 bytes)
	I0501 03:40:20.216560   69237 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem (1708 bytes)
	I0501 03:40:20.217306   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0501 03:40:20.256162   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0501 03:40:20.293643   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0501 03:40:20.329175   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0501 03:40:20.367715   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/default-k8s-diff-port-715118/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0501 03:40:20.400024   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/default-k8s-diff-port-715118/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0501 03:40:20.428636   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/default-k8s-diff-port-715118/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0501 03:40:20.458689   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/default-k8s-diff-port-715118/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0501 03:40:20.487619   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem --> /usr/share/ca-certificates/207242.pem (1708 bytes)
	I0501 03:40:20.518140   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0501 03:40:20.547794   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem --> /usr/share/ca-certificates/20724.pem (1338 bytes)
	I0501 03:40:20.580453   69237 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0501 03:40:20.605211   69237 ssh_runner.go:195] Run: openssl version
	I0501 03:40:20.612269   69237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/207242.pem && ln -fs /usr/share/ca-certificates/207242.pem /etc/ssl/certs/207242.pem"
	I0501 03:40:20.626575   69237 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/207242.pem
	I0501 03:40:20.632370   69237 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  1 02:20 /usr/share/ca-certificates/207242.pem
	I0501 03:40:20.632439   69237 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/207242.pem
	I0501 03:40:20.639563   69237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/207242.pem /etc/ssl/certs/3ec20f2e.0"
	I0501 03:40:16.997533   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:16.998034   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:40:16.998076   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:40:16.997984   70499 retry.go:31] will retry after 596.56948ms: waiting for machine to come up
	I0501 03:40:17.596671   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:17.597182   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:40:17.597207   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:40:17.597132   70499 retry.go:31] will retry after 770.742109ms: waiting for machine to come up
	I0501 03:40:18.369337   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:18.369833   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:40:18.369864   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:40:18.369780   70499 retry.go:31] will retry after 1.382502808s: waiting for machine to come up
	I0501 03:40:19.753936   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:19.754419   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:40:19.754458   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:40:19.754363   70499 retry.go:31] will retry after 1.344792989s: waiting for machine to come up
	I0501 03:40:21.101047   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:21.101474   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:40:21.101514   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:40:21.101442   70499 retry.go:31] will retry after 1.636964906s: waiting for machine to come up
	I0501 03:40:20.252239   68864 pod_ready.go:102] pod "coredns-7db6d8ff4d-sjplt" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:22.751407   68864 pod_ready.go:92] pod "coredns-7db6d8ff4d-sjplt" in "kube-system" namespace has status "Ready":"True"
	I0501 03:40:22.751431   68864 pod_ready.go:81] duration metric: took 7.008190087s for pod "coredns-7db6d8ff4d-sjplt" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:22.751442   68864 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:22.757104   68864 pod_ready.go:92] pod "etcd-embed-certs-277128" in "kube-system" namespace has status "Ready":"True"
	I0501 03:40:22.757124   68864 pod_ready.go:81] duration metric: took 5.677117ms for pod "etcd-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:22.757141   68864 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:22.763083   68864 pod_ready.go:92] pod "kube-apiserver-embed-certs-277128" in "kube-system" namespace has status "Ready":"True"
	I0501 03:40:22.763107   68864 pod_ready.go:81] duration metric: took 5.958961ms for pod "kube-apiserver-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:22.763119   68864 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:22.768163   68864 pod_ready.go:92] pod "kube-controller-manager-embed-certs-277128" in "kube-system" namespace has status "Ready":"True"
	I0501 03:40:22.768182   68864 pod_ready.go:81] duration metric: took 5.055934ms for pod "kube-controller-manager-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:22.768193   68864 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-phx7x" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:22.772478   68864 pod_ready.go:92] pod "kube-proxy-phx7x" in "kube-system" namespace has status "Ready":"True"
	I0501 03:40:22.772497   68864 pod_ready.go:81] duration metric: took 4.297358ms for pod "kube-proxy-phx7x" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:22.772505   68864 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:23.149692   68864 pod_ready.go:92] pod "kube-scheduler-embed-certs-277128" in "kube-system" namespace has status "Ready":"True"
	I0501 03:40:23.149726   68864 pod_ready.go:81] duration metric: took 377.213314ms for pod "kube-scheduler-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:23.149741   68864 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace to be "Ready" ...
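The pod_ready lines above show the test polling each kube-system pod until its Ready condition reports True, recording a duration metric for each one. For illustration only (this is not minikube's own code), a minimal client-go sketch of the same readiness check could look like the following; the kubeconfig path is a placeholder, while the namespace and pod name are taken from the log.

    // readiness_sketch.go -- illustrative only; mirrors the pod_ready checks logged above.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's Ready condition is True -- the same
    // condition the "pod_ready" log lines above are waiting on.
    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        // Kubeconfig path is a placeholder; namespace and pod name come from the log.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        for {
            pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-embed-certs-277128", metav1.GetOptions{})
            if err == nil && podReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
    }

The sketch polls a single pod every two seconds; pod_ready.go above does the equivalent for every control-plane component, giving up after the 6m0s timeout noted in each wait.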
	I0501 03:40:20.653202   69237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0501 03:40:20.878582   69237 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:40:20.884671   69237 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  1 02:08 /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:40:20.884755   69237 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:40:20.891633   69237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0501 03:40:20.906032   69237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20724.pem && ln -fs /usr/share/ca-certificates/20724.pem /etc/ssl/certs/20724.pem"
	I0501 03:40:20.924491   69237 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20724.pem
	I0501 03:40:20.931346   69237 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  1 02:20 /usr/share/ca-certificates/20724.pem
	I0501 03:40:20.931421   69237 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20724.pem
	I0501 03:40:20.937830   69237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20724.pem /etc/ssl/certs/51391683.0"
	I0501 03:40:20.951239   69237 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0501 03:40:20.956883   69237 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0501 03:40:20.964048   69237 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0501 03:40:20.971156   69237 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0501 03:40:20.978243   69237 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0501 03:40:20.985183   69237 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0501 03:40:20.991709   69237 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
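Before deciding whether the cluster can be restarted in place, the log above checks each control-plane certificate with openssl x509 -noout -checkend 86400, which exits non-zero if the certificate expires within the next 86400 seconds (24 hours). A minimal sketch of that check, assuming only that openssl is on PATH (illustrative, not minikube's implementation; the helper name expiresWithinDay is ours and the certificate path is copied from the log):

    // certcheck_sketch.go -- illustrative only; mirrors the openssl invocations logged above.
    package main

    import (
        "fmt"
        "os/exec"
    )

    // expiresWithinDay runs the same command as the log above:
    //   openssl x509 -noout -in <path> -checkend 86400
    // openssl exits 0 if the certificate is still valid 86400s from now, non-zero otherwise.
    func expiresWithinDay(path string) bool {
        cmd := exec.Command("openssl", "x509", "-noout", "-in", path, "-checkend", "86400")
        return cmd.Run() != nil // non-zero exit (or openssl missing) is treated as "renew"
    }

    func main() {
        // Certificate path copied from the log above.
        path := "/var/lib/minikube/certs/apiserver-kubelet-client.crt"
        if expiresWithinDay(path) {
            fmt.Println("certificate expires within 24h (or could not be checked)")
        } else {
            fmt.Println("certificate valid for at least another 24h")
        }
    }

Because all of these checks pass silently in the run above, the flow proceeds directly to StartCluster below.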
	I0501 03:40:20.998390   69237 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-715118 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-715118 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.158 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 03:40:20.998509   69237 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0501 03:40:20.998558   69237 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0501 03:40:21.051469   69237 cri.go:89] found id: ""
	I0501 03:40:21.051575   69237 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0501 03:40:21.063280   69237 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0501 03:40:21.063301   69237 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0501 03:40:21.063307   69237 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0501 03:40:21.063381   69237 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0501 03:40:21.077380   69237 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0501 03:40:21.078445   69237 kubeconfig.go:125] found "default-k8s-diff-port-715118" server: "https://192.168.72.158:8444"
	I0501 03:40:21.080872   69237 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0501 03:40:21.095004   69237 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.158
	I0501 03:40:21.095045   69237 kubeadm.go:1154] stopping kube-system containers ...
	I0501 03:40:21.095059   69237 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0501 03:40:21.095123   69237 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0501 03:40:21.151629   69237 cri.go:89] found id: ""
	I0501 03:40:21.151711   69237 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0501 03:40:21.177077   69237 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0501 03:40:21.192057   69237 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0501 03:40:21.192087   69237 kubeadm.go:156] found existing configuration files:
	
	I0501 03:40:21.192146   69237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0501 03:40:21.206784   69237 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0501 03:40:21.206870   69237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0501 03:40:21.221942   69237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0501 03:40:21.236442   69237 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0501 03:40:21.236516   69237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0501 03:40:21.251285   69237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0501 03:40:21.265997   69237 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0501 03:40:21.266049   69237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0501 03:40:21.281137   69237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0501 03:40:21.297713   69237 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0501 03:40:21.297783   69237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0501 03:40:21.314264   69237 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0501 03:40:21.328605   69237 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:21.478475   69237 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:22.161692   69237 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:22.432136   69237 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:22.514744   69237 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:22.597689   69237 api_server.go:52] waiting for apiserver process to appear ...
	I0501 03:40:22.597770   69237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:23.098146   69237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:23.597831   69237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:23.629375   69237 api_server.go:72] duration metric: took 1.031684055s to wait for apiserver process to appear ...
	I0501 03:40:23.629462   69237 api_server.go:88] waiting for apiserver healthz status ...
	I0501 03:40:23.629500   69237 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8444/healthz ...
	I0501 03:40:23.630045   69237 api_server.go:269] stopped: https://192.168.72.158:8444/healthz: Get "https://192.168.72.158:8444/healthz": dial tcp 192.168.72.158:8444: connect: connection refused
	I0501 03:40:24.129831   69237 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8444/healthz ...
	I0501 03:40:22.740241   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:22.740692   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:40:22.740722   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:40:22.740656   70499 retry.go:31] will retry after 1.899831455s: waiting for machine to come up
	I0501 03:40:24.642609   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:24.643075   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:40:24.643104   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:40:24.643019   70499 retry.go:31] will retry after 3.503333894s: waiting for machine to come up
	I0501 03:40:25.157335   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:27.160083   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:27.091079   69237 api_server.go:279] https://192.168.72.158:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0501 03:40:27.091134   69237 api_server.go:103] status: https://192.168.72.158:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0501 03:40:27.091152   69237 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8444/healthz ...
	I0501 03:40:27.163481   69237 api_server.go:279] https://192.168.72.158:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:40:27.163509   69237 api_server.go:103] status: https://192.168.72.158:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:40:27.163522   69237 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8444/healthz ...
	I0501 03:40:27.175097   69237 api_server.go:279] https://192.168.72.158:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:40:27.175129   69237 api_server.go:103] status: https://192.168.72.158:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:40:27.629613   69237 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8444/healthz ...
	I0501 03:40:27.637166   69237 api_server.go:279] https://192.168.72.158:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:40:27.637202   69237 api_server.go:103] status: https://192.168.72.158:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:40:28.130467   69237 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8444/healthz ...
	I0501 03:40:28.148799   69237 api_server.go:279] https://192.168.72.158:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:40:28.148823   69237 api_server.go:103] status: https://192.168.72.158:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:40:28.630500   69237 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8444/healthz ...
	I0501 03:40:28.642856   69237 api_server.go:279] https://192.168.72.158:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:40:28.642890   69237 api_server.go:103] status: https://192.168.72.158:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:40:29.130453   69237 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8444/healthz ...
	I0501 03:40:29.137783   69237 api_server.go:279] https://192.168.72.158:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:40:29.137819   69237 api_server.go:103] status: https://192.168.72.158:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:40:29.630448   69237 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8444/healthz ...
	I0501 03:40:29.634736   69237 api_server.go:279] https://192.168.72.158:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:40:29.634764   69237 api_server.go:103] status: https://192.168.72.158:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:40:30.130371   69237 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8444/healthz ...
	I0501 03:40:30.134727   69237 api_server.go:279] https://192.168.72.158:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:40:30.134755   69237 api_server.go:103] status: https://192.168.72.158:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:40:30.630555   69237 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8444/healthz ...
	I0501 03:40:30.637025   69237 api_server.go:279] https://192.168.72.158:8444/healthz returned 200:
	ok
	I0501 03:40:30.644179   69237 api_server.go:141] control plane version: v1.30.0
	I0501 03:40:30.644209   69237 api_server.go:131] duration metric: took 7.014727807s to wait for apiserver health ...
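The api_server.go lines above poll https://192.168.72.158:8444/healthz roughly every 500ms, logging each 403 or 500 body and stopping once the endpoint finally returns 200 with "ok". A stripped-down sketch of such a polling loop, using the URL from the log and skipping TLS verification purely to keep the example self-contained (illustrative only, not minikube's implementation):

    // healthz_sketch.go -- illustrative only; mirrors the healthz polling logged above.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // URL taken from the log; TLS verification is skipped only to keep the sketch self-contained.
        url := "https://192.168.72.158:8444/healthz"
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }

        for {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Printf("healthz: %s\n", body) // the log shows "ok" at this point
                    return
                }
                // The log shows a 403 and several 500s here; both simply mean "try again".
                fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }

In the run above the loop needed about seven seconds end to end: an initial connection refusal, a 403 for the anonymous probe, several 500s while post-start hooks completed, and finally the 200.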
	I0501 03:40:30.644217   69237 cni.go:84] Creating CNI manager for ""
	I0501 03:40:30.644223   69237 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0501 03:40:30.646018   69237 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0501 03:40:30.647222   69237 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0501 03:40:28.148102   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:28.148506   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:40:28.148547   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:40:28.148463   70499 retry.go:31] will retry after 4.150508159s: waiting for machine to come up
	I0501 03:40:33.783990   68640 start.go:364] duration metric: took 56.072338201s to acquireMachinesLock for "no-preload-892672"
	I0501 03:40:33.784047   68640 start.go:96] Skipping create...Using existing machine configuration
	I0501 03:40:33.784056   68640 fix.go:54] fixHost starting: 
	I0501 03:40:33.784468   68640 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:40:33.784504   68640 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:40:33.801460   68640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37745
	I0501 03:40:33.802023   68640 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:40:33.802634   68640 main.go:141] libmachine: Using API Version  1
	I0501 03:40:33.802669   68640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:40:33.803062   68640 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:40:33.803262   68640 main.go:141] libmachine: (no-preload-892672) Calling .DriverName
	I0501 03:40:33.803379   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetState
	I0501 03:40:33.805241   68640 fix.go:112] recreateIfNeeded on no-preload-892672: state=Stopped err=<nil>
	I0501 03:40:33.805266   68640 main.go:141] libmachine: (no-preload-892672) Calling .DriverName
	W0501 03:40:33.805452   68640 fix.go:138] unexpected machine state, will restart: <nil>
	I0501 03:40:33.807020   68640 out.go:177] * Restarting existing kvm2 VM for "no-preload-892672" ...
	I0501 03:40:29.656911   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:32.158119   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:32.303427   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.303804   69580 main.go:141] libmachine: (old-k8s-version-503971) Found IP for machine: 192.168.61.104
	I0501 03:40:32.303837   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has current primary IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.303851   69580 main.go:141] libmachine: (old-k8s-version-503971) Reserving static IP address...
	I0501 03:40:32.304254   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "old-k8s-version-503971", mac: "52:54:00:7d:68:83", ip: "192.168.61.104"} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:32.304286   69580 main.go:141] libmachine: (old-k8s-version-503971) Reserved static IP address: 192.168.61.104
	I0501 03:40:32.304305   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | skip adding static IP to network mk-old-k8s-version-503971 - found existing host DHCP lease matching {name: "old-k8s-version-503971", mac: "52:54:00:7d:68:83", ip: "192.168.61.104"}
	I0501 03:40:32.304323   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | Getting to WaitForSSH function...
	I0501 03:40:32.304337   69580 main.go:141] libmachine: (old-k8s-version-503971) Waiting for SSH to be available...
	I0501 03:40:32.306619   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.306972   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:32.307011   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.307114   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | Using SSH client type: external
	I0501 03:40:32.307138   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | Using SSH private key: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/old-k8s-version-503971/id_rsa (-rw-------)
	I0501 03:40:32.307174   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.104 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18779-13391/.minikube/machines/old-k8s-version-503971/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0501 03:40:32.307188   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | About to run SSH command:
	I0501 03:40:32.307224   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | exit 0
	I0501 03:40:32.438508   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | SSH cmd err, output: <nil>: 
	I0501 03:40:32.438882   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetConfigRaw
	I0501 03:40:32.439452   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetIP
	I0501 03:40:32.441984   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.442342   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:32.442369   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.442668   69580 profile.go:143] Saving config to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/config.json ...
	I0501 03:40:32.442875   69580 machine.go:94] provisionDockerMachine start ...
	I0501 03:40:32.442897   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .DriverName
	I0501 03:40:32.443077   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHHostname
	I0501 03:40:32.445129   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.445442   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:32.445480   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.445628   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHPort
	I0501 03:40:32.445806   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:32.445974   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:32.446122   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHUsername
	I0501 03:40:32.446314   69580 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:32.446548   69580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.104 22 <nil> <nil>}
	I0501 03:40:32.446564   69580 main.go:141] libmachine: About to run SSH command:
	hostname
	I0501 03:40:32.559346   69580 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0501 03:40:32.559379   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetMachineName
	I0501 03:40:32.559630   69580 buildroot.go:166] provisioning hostname "old-k8s-version-503971"
	I0501 03:40:32.559654   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetMachineName
	I0501 03:40:32.559832   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHHostname
	I0501 03:40:32.562176   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.562547   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:32.562582   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.562716   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHPort
	I0501 03:40:32.562892   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:32.563019   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:32.563161   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHUsername
	I0501 03:40:32.563332   69580 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:32.563545   69580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.104 22 <nil> <nil>}
	I0501 03:40:32.563564   69580 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-503971 && echo "old-k8s-version-503971" | sudo tee /etc/hostname
	I0501 03:40:32.699918   69580 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-503971
	
	I0501 03:40:32.699961   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHHostname
	I0501 03:40:32.702721   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.703134   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:32.703158   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.703361   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHPort
	I0501 03:40:32.703547   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:32.703744   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:32.703881   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHUsername
	I0501 03:40:32.704037   69580 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:32.704199   69580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.104 22 <nil> <nil>}
	I0501 03:40:32.704215   69580 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-503971' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-503971/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-503971' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0501 03:40:32.830277   69580 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0501 03:40:32.830307   69580 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18779-13391/.minikube CaCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18779-13391/.minikube}
	I0501 03:40:32.830323   69580 buildroot.go:174] setting up certificates
	I0501 03:40:32.830331   69580 provision.go:84] configureAuth start
	I0501 03:40:32.830340   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetMachineName
	I0501 03:40:32.830629   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetIP
	I0501 03:40:32.833575   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.833887   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:32.833932   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.834070   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHHostname
	I0501 03:40:32.836309   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.836664   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:32.836691   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.836824   69580 provision.go:143] copyHostCerts
	I0501 03:40:32.836885   69580 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem, removing ...
	I0501 03:40:32.836895   69580 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem
	I0501 03:40:32.836945   69580 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem (1078 bytes)
	I0501 03:40:32.837046   69580 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem, removing ...
	I0501 03:40:32.837054   69580 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem
	I0501 03:40:32.837072   69580 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem (1123 bytes)
	I0501 03:40:32.837129   69580 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem, removing ...
	I0501 03:40:32.837136   69580 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem
	I0501 03:40:32.837152   69580 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem (1675 bytes)
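The three found/rm/cp sequences above are minikube's copyHostCerts step refreshing ca.pem, cert.pem and key.pem in the profile directory. As a rough illustration only (not minikube's exec_runner code; the helper name and paths are made up), the overwrite-then-copy pattern looks like this in Go:

	package main

	import (
		"fmt"
		"io"
		"os"
	)

	// overwriteCopy mirrors the logged pattern: if dst already exists it is
	// removed first, then src is copied into its place with the given mode.
	func overwriteCopy(src, dst string, mode os.FileMode) error {
		if _, err := os.Stat(dst); err == nil {
			if err := os.Remove(dst); err != nil {
				return fmt.Errorf("rm %s: %w", dst, err)
			}
		}
		in, err := os.Open(src)
		if err != nil {
			return err
		}
		defer in.Close()
		out, err := os.OpenFile(dst, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, mode)
		if err != nil {
			return err
		}
		defer out.Close()
		_, err = io.Copy(out, in)
		return err
	}

	func main() {
		// Placeholder paths for illustration only.
		if err := overwriteCopy("certs/ca.pem", "ca.pem", 0o600); err != nil {
			fmt.Println("copy failed:", err)
		}
	}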
	I0501 03:40:32.837202   69580 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-503971 san=[127.0.0.1 192.168.61.104 localhost minikube old-k8s-version-503971]
	I0501 03:40:33.047948   69580 provision.go:177] copyRemoteCerts
	I0501 03:40:33.048004   69580 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0501 03:40:33.048030   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHHostname
	I0501 03:40:33.050591   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.050975   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:33.051012   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.051142   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHPort
	I0501 03:40:33.051310   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:33.051465   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHUsername
	I0501 03:40:33.051574   69580 sshutil.go:53] new ssh client: &{IP:192.168.61.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/old-k8s-version-503971/id_rsa Username:docker}
	I0501 03:40:33.143991   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0501 03:40:33.175494   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0501 03:40:33.204770   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0501 03:40:33.232728   69580 provision.go:87] duration metric: took 402.386279ms to configureAuth
	I0501 03:40:33.232756   69580 buildroot.go:189] setting minikube options for container-runtime
	I0501 03:40:33.232962   69580 config.go:182] Loaded profile config "old-k8s-version-503971": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0501 03:40:33.233051   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHHostname
	I0501 03:40:33.235656   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.236006   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:33.236038   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.236162   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHPort
	I0501 03:40:33.236339   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:33.236484   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:33.236633   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHUsername
	I0501 03:40:33.236817   69580 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:33.236980   69580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.104 22 <nil> <nil>}
	I0501 03:40:33.236997   69580 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0501 03:40:33.526370   69580 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0501 03:40:33.526419   69580 machine.go:97] duration metric: took 1.083510254s to provisionDockerMachine
	I0501 03:40:33.526432   69580 start.go:293] postStartSetup for "old-k8s-version-503971" (driver="kvm2")
	I0501 03:40:33.526443   69580 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0501 03:40:33.526470   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .DriverName
	I0501 03:40:33.526788   69580 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0501 03:40:33.526831   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHHostname
	I0501 03:40:33.529815   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.530209   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:33.530268   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.530364   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHPort
	I0501 03:40:33.530559   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:33.530741   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHUsername
	I0501 03:40:33.530909   69580 sshutil.go:53] new ssh client: &{IP:192.168.61.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/old-k8s-version-503971/id_rsa Username:docker}
	I0501 03:40:33.620224   69580 ssh_runner.go:195] Run: cat /etc/os-release
	I0501 03:40:33.625417   69580 info.go:137] Remote host: Buildroot 2023.02.9
	I0501 03:40:33.625447   69580 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13391/.minikube/addons for local assets ...
	I0501 03:40:33.625511   69580 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13391/.minikube/files for local assets ...
	I0501 03:40:33.625594   69580 filesync.go:149] local asset: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem -> 207242.pem in /etc/ssl/certs
	I0501 03:40:33.625691   69580 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0501 03:40:33.637311   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem --> /etc/ssl/certs/207242.pem (1708 bytes)
	I0501 03:40:33.666707   69580 start.go:296] duration metric: took 140.263297ms for postStartSetup
	I0501 03:40:33.666740   69580 fix.go:56] duration metric: took 20.150640355s for fixHost
	I0501 03:40:33.666758   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHHostname
	I0501 03:40:33.669394   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.669822   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:33.669852   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.669963   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHPort
	I0501 03:40:33.670213   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:33.670388   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:33.670589   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHUsername
	I0501 03:40:33.670794   69580 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:33.670972   69580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.104 22 <nil> <nil>}
	I0501 03:40:33.670984   69580 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0501 03:40:33.783810   69580 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714534833.728910946
	
	I0501 03:40:33.783839   69580 fix.go:216] guest clock: 1714534833.728910946
	I0501 03:40:33.783850   69580 fix.go:229] Guest: 2024-05-01 03:40:33.728910946 +0000 UTC Remote: 2024-05-01 03:40:33.666743363 +0000 UTC m=+232.246108464 (delta=62.167583ms)
	I0501 03:40:33.783893   69580 fix.go:200] guest clock delta is within tolerance: 62.167583ms
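The fix.go lines above read the guest clock over SSH (the date command), compare it against the host-side timestamp, and only act when the delta exceeds a tolerance. A minimal sketch of that comparison, reusing the two timestamps from the log (the function name and the 2s tolerance are assumptions, not minikube's actual values):

	package main

	import (
		"fmt"
		"time"
	)

	// clockDelta returns the absolute difference between guest and host time
	// and whether it falls within the given tolerance.
	func clockDelta(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
		d := guest.Sub(host)
		if d < 0 {
			d = -d
		}
		return d, d <= tolerance
	}

	func main() {
		guest := time.Unix(0, 1714534833728910946) // guest clock reported in the log
		host := time.Unix(0, 1714534833666743363)  // remote timestamp reported in the log
		d, ok := clockDelta(guest, host, 2*time.Second) // tolerance value assumed
		fmt.Printf("delta=%v withinTolerance=%v\n", d, ok) // prints delta=62.167583ms withinTolerance=true
	}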
	I0501 03:40:33.783903   69580 start.go:83] releasing machines lock for "old-k8s-version-503971", held for 20.267840723s
	I0501 03:40:33.783933   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .DriverName
	I0501 03:40:33.784203   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetIP
	I0501 03:40:33.786846   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.787202   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:33.787230   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.787385   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .DriverName
	I0501 03:40:33.787837   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .DriverName
	I0501 03:40:33.787997   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .DriverName
	I0501 03:40:33.788085   69580 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0501 03:40:33.788126   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHHostname
	I0501 03:40:33.788252   69580 ssh_runner.go:195] Run: cat /version.json
	I0501 03:40:33.788279   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHHostname
	I0501 03:40:33.790748   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.791086   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.791118   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:33.791142   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.791435   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHPort
	I0501 03:40:33.791491   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:33.791532   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.791618   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:33.791740   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHPort
	I0501 03:40:33.791815   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHUsername
	I0501 03:40:33.791937   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:33.792014   69580 sshutil.go:53] new ssh client: &{IP:192.168.61.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/old-k8s-version-503971/id_rsa Username:docker}
	I0501 03:40:33.792069   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHUsername
	I0501 03:40:33.792206   69580 sshutil.go:53] new ssh client: &{IP:192.168.61.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/old-k8s-version-503971/id_rsa Username:docker}
	I0501 03:40:33.876242   69580 ssh_runner.go:195] Run: systemctl --version
	I0501 03:40:33.901692   69580 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0501 03:40:34.056758   69580 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0501 03:40:34.065070   69580 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0501 03:40:34.065156   69580 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0501 03:40:34.085337   69580 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0501 03:40:34.085364   69580 start.go:494] detecting cgroup driver to use...
	I0501 03:40:34.085432   69580 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0501 03:40:34.102723   69580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 03:40:34.118792   69580 docker.go:217] disabling cri-docker service (if available) ...
	I0501 03:40:34.118847   69580 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0501 03:40:34.133978   69580 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0501 03:40:34.153890   69580 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0501 03:40:34.283815   69580 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0501 03:40:34.475851   69580 docker.go:233] disabling docker service ...
	I0501 03:40:34.475926   69580 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0501 03:40:34.500769   69580 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0501 03:40:34.517315   69580 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0501 03:40:34.674322   69580 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0501 03:40:34.833281   69580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0501 03:40:34.852610   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 03:40:34.879434   69580 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0501 03:40:34.879517   69580 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:34.892197   69580 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0501 03:40:34.892269   69580 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:34.904437   69580 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:34.919950   69580 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:34.933772   69580 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0501 03:40:34.947563   69580 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0501 03:40:34.965724   69580 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0501 03:40:34.965795   69580 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0501 03:40:34.984251   69580 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0501 03:40:34.997050   69580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:40:35.155852   69580 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0501 03:40:35.362090   69580 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0501 03:40:35.362164   69580 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0501 03:40:35.368621   69580 start.go:562] Will wait 60s for crictl version
	I0501 03:40:35.368701   69580 ssh_runner.go:195] Run: which crictl
	I0501 03:40:35.373792   69580 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0501 03:40:35.436905   69580 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0501 03:40:35.437018   69580 ssh_runner.go:195] Run: crio --version
	I0501 03:40:35.485130   69580 ssh_runner.go:195] Run: crio --version
	I0501 03:40:35.528700   69580 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0501 03:40:30.661395   69237 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0501 03:40:30.682810   69237 system_pods.go:43] waiting for kube-system pods to appear ...
	I0501 03:40:30.694277   69237 system_pods.go:59] 8 kube-system pods found
	I0501 03:40:30.694326   69237 system_pods.go:61] "coredns-7db6d8ff4d-9r7dt" [75d43a25-d309-427e-befc-7f1851b90d8c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0501 03:40:30.694343   69237 system_pods.go:61] "etcd-default-k8s-diff-port-715118" [21f6a4cd-f662-4865-9208-83959f0a6782] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0501 03:40:30.694354   69237 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-715118" [4dc3e45e-a5d8-480f-a8e8-763ecab0976b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0501 03:40:30.694369   69237 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-715118" [340580a3-040e-48fc-b89c-36a4f6fccfc2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0501 03:40:30.694376   69237 system_pods.go:61] "kube-proxy-vg7ts" [e55f3363-178c-427a-819d-0dc94c3116f3] Running
	I0501 03:40:30.694388   69237 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-715118" [b850fc4a-da6b-4714-98bb-e36e185880dd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0501 03:40:30.694417   69237 system_pods.go:61] "metrics-server-569cc877fc-2btjj" [9b8ff94d-9e59-46d4-ac6d-7accca8b3552] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0501 03:40:30.694427   69237 system_pods.go:61] "storage-provisioner" [d44a3cf1-c8a5-4a20-8dd6-b854680b33b9] Running
	I0501 03:40:30.694435   69237 system_pods.go:74] duration metric: took 11.599113ms to wait for pod list to return data ...
	I0501 03:40:30.694449   69237 node_conditions.go:102] verifying NodePressure condition ...
	I0501 03:40:30.697795   69237 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 03:40:30.697825   69237 node_conditions.go:123] node cpu capacity is 2
	I0501 03:40:30.697838   69237 node_conditions.go:105] duration metric: took 3.383507ms to run NodePressure ...
	I0501 03:40:30.697858   69237 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:30.978827   69237 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0501 03:40:30.984628   69237 kubeadm.go:733] kubelet initialised
	I0501 03:40:30.984650   69237 kubeadm.go:734] duration metric: took 5.799905ms waiting for restarted kubelet to initialise ...
	I0501 03:40:30.984656   69237 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 03:40:30.992354   69237 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-9r7dt" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:30.999663   69237 pod_ready.go:97] node "default-k8s-diff-port-715118" hosting pod "coredns-7db6d8ff4d-9r7dt" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-715118" has status "Ready":"False"
	I0501 03:40:30.999690   69237 pod_ready.go:81] duration metric: took 7.312969ms for pod "coredns-7db6d8ff4d-9r7dt" in "kube-system" namespace to be "Ready" ...
	E0501 03:40:30.999700   69237 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-715118" hosting pod "coredns-7db6d8ff4d-9r7dt" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-715118" has status "Ready":"False"
	I0501 03:40:30.999706   69237 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:31.006163   69237 pod_ready.go:97] node "default-k8s-diff-port-715118" hosting pod "etcd-default-k8s-diff-port-715118" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-715118" has status "Ready":"False"
	I0501 03:40:31.006187   69237 pod_ready.go:81] duration metric: took 6.471262ms for pod "etcd-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	E0501 03:40:31.006199   69237 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-715118" hosting pod "etcd-default-k8s-diff-port-715118" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-715118" has status "Ready":"False"
	I0501 03:40:31.006208   69237 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:31.011772   69237 pod_ready.go:97] node "default-k8s-diff-port-715118" hosting pod "kube-apiserver-default-k8s-diff-port-715118" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-715118" has status "Ready":"False"
	I0501 03:40:31.011793   69237 pod_ready.go:81] duration metric: took 5.576722ms for pod "kube-apiserver-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	E0501 03:40:31.011803   69237 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-715118" hosting pod "kube-apiserver-default-k8s-diff-port-715118" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-715118" has status "Ready":"False"
	I0501 03:40:31.011810   69237 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:31.086163   69237 pod_ready.go:97] node "default-k8s-diff-port-715118" hosting pod "kube-controller-manager-default-k8s-diff-port-715118" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-715118" has status "Ready":"False"
	I0501 03:40:31.086194   69237 pod_ready.go:81] duration metric: took 74.377197ms for pod "kube-controller-manager-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	E0501 03:40:31.086207   69237 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-715118" hosting pod "kube-controller-manager-default-k8s-diff-port-715118" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-715118" has status "Ready":"False"
	I0501 03:40:31.086214   69237 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-vg7ts" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:31.487056   69237 pod_ready.go:92] pod "kube-proxy-vg7ts" in "kube-system" namespace has status "Ready":"True"
	I0501 03:40:31.487078   69237 pod_ready.go:81] duration metric: took 400.857543ms for pod "kube-proxy-vg7ts" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:31.487088   69237 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:33.502448   69237 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-715118" in "kube-system" namespace has status "Ready":"False"
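The pod_ready.go lines above poll each system-critical pod until its Ready condition turns True or the 4m0s budget runs out. A hedged client-go sketch of that kind of wait (this is not minikube's pod_ready implementation; the kubeconfig path is a placeholder, while the namespace and pod name are taken from the log):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's Ready condition is True.
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
		defer cancel()
		for {
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "kube-proxy-vg7ts", metav1.GetOptions{})
			if err == nil && podReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			select {
			case <-ctx.Done():
				fmt.Println("timed out waiting for pod to become Ready")
				return
			case <-time.After(2 * time.Second): // poll interval assumed
			}
		}
	}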
	I0501 03:40:35.530015   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetIP
	I0501 03:40:35.533706   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:35.534178   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:35.534254   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:35.534515   69580 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0501 03:40:35.541542   69580 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0501 03:40:35.563291   69580 kubeadm.go:877] updating cluster {Name:old-k8s-version-503971 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-503971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.104 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I0501 03:40:35.563434   69580 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0501 03:40:35.563512   69580 ssh_runner.go:195] Run: sudo crictl images --output json
	I0501 03:40:35.646548   69580 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0501 03:40:35.646635   69580 ssh_runner.go:195] Run: which lz4
	I0501 03:40:35.652824   69580 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0501 03:40:35.660056   69580 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0501 03:40:35.660099   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0501 03:40:33.808828   68640 main.go:141] libmachine: (no-preload-892672) Calling .Start
	I0501 03:40:33.809083   68640 main.go:141] libmachine: (no-preload-892672) Ensuring networks are active...
	I0501 03:40:33.809829   68640 main.go:141] libmachine: (no-preload-892672) Ensuring network default is active
	I0501 03:40:33.810166   68640 main.go:141] libmachine: (no-preload-892672) Ensuring network mk-no-preload-892672 is active
	I0501 03:40:33.810632   68640 main.go:141] libmachine: (no-preload-892672) Getting domain xml...
	I0501 03:40:33.811386   68640 main.go:141] libmachine: (no-preload-892672) Creating domain...
	I0501 03:40:35.133886   68640 main.go:141] libmachine: (no-preload-892672) Waiting to get IP...
	I0501 03:40:35.134756   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:35.135216   68640 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:40:35.135280   68640 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:40:35.135178   70664 retry.go:31] will retry after 275.796908ms: waiting for machine to come up
	I0501 03:40:35.412670   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:35.413206   68640 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:40:35.413232   68640 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:40:35.413162   70664 retry.go:31] will retry after 326.173381ms: waiting for machine to come up
	I0501 03:40:35.740734   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:35.741314   68640 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:40:35.741342   68640 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:40:35.741260   70664 retry.go:31] will retry after 476.50915ms: waiting for machine to come up
	I0501 03:40:36.219908   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:36.220440   68640 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:40:36.220473   68640 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:40:36.220399   70664 retry.go:31] will retry after 377.277784ms: waiting for machine to come up
	I0501 03:40:36.598936   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:36.599391   68640 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:40:36.599417   68640 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:40:36.599348   70664 retry.go:31] will retry after 587.166276ms: waiting for machine to come up
	I0501 03:40:37.188757   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:37.189406   68640 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:40:37.189441   68640 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:40:37.189311   70664 retry.go:31] will retry after 801.958256ms: waiting for machine to come up
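The retry.go lines interleaved above wait for the no-preload-892672 VM to obtain an IP, sleeping a roughly growing, jittered interval between attempts. A small sketch of that wait-with-backoff pattern (illustrative only; the initial delay, growth factor and jitter are assumptions, not minikube's retry parameters):

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// waitFor polls check up to attempts times, sleeping a growing, jittered
	// delay between polls, and reports whether check ever returned true.
	func waitFor(check func() bool, attempts int) bool {
		delay := 250 * time.Millisecond
		for i := 0; i < attempts; i++ {
			if check() {
				return true
			}
			// Jitter keeps concurrent waiters from polling in lockstep.
			sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
			fmt.Printf("retry %d: will retry after %v\n", i+1, sleep)
			time.Sleep(sleep)
			delay = delay * 3 / 2
		}
		return check()
	}

	func main() {
		// Stand-in condition: pretend the "machine" comes up after two seconds.
		deadline := time.Now().Add(2 * time.Second)
		got := waitFor(func() bool { return time.Now().After(deadline) }, 10)
		fmt.Println("machine came up:", got)
	}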
	I0501 03:40:34.658104   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:36.660517   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:35.998453   69237 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-715118" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:38.495088   69237 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-715118" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:39.004175   69237 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-715118" in "kube-system" namespace has status "Ready":"True"
	I0501 03:40:39.004198   69237 pod_ready.go:81] duration metric: took 7.517103824s for pod "kube-scheduler-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:39.004209   69237 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:37.870306   69580 crio.go:462] duration metric: took 2.217531377s to copy over tarball
	I0501 03:40:37.870393   69580 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0501 03:40:37.992669   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:37.993052   68640 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:40:37.993080   68640 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:40:37.993016   70664 retry.go:31] will retry after 1.085029482s: waiting for machine to come up
	I0501 03:40:39.079315   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:39.079739   68640 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:40:39.079779   68640 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:40:39.079682   70664 retry.go:31] will retry after 1.140448202s: waiting for machine to come up
	I0501 03:40:40.221645   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:40.222165   68640 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:40:40.222192   68640 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:40:40.222103   70664 retry.go:31] will retry after 1.434247869s: waiting for machine to come up
	I0501 03:40:41.658447   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:41.659034   68640 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:40:41.659072   68640 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:40:41.659003   70664 retry.go:31] will retry after 1.759453732s: waiting for machine to come up
	I0501 03:40:39.157834   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:41.164729   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:43.658248   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:41.014770   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:43.513038   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:45.516821   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:41.534681   69580 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.664236925s)
	I0501 03:40:41.599216   69580 crio.go:469] duration metric: took 3.72886857s to extract the tarball
	I0501 03:40:41.599238   69580 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0501 03:40:41.649221   69580 ssh_runner.go:195] Run: sudo crictl images --output json
	I0501 03:40:41.697169   69580 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0501 03:40:41.697198   69580 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0501 03:40:41.697302   69580 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0501 03:40:41.697346   69580 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0501 03:40:41.697367   69580 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0501 03:40:41.697352   69580 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0501 03:40:41.697375   69580 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0501 03:40:41.697329   69580 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0501 03:40:41.697275   69580 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 03:40:41.697329   69580 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0501 03:40:41.698950   69580 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0501 03:40:41.699010   69580 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0501 03:40:41.699114   69580 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0501 03:40:41.699251   69580 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 03:40:41.699292   69580 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0501 03:40:41.699020   69580 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0501 03:40:41.699550   69580 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0501 03:40:41.699715   69580 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0501 03:40:41.830042   69580 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0501 03:40:41.881770   69580 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0501 03:40:41.881834   69580 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0501 03:40:41.881896   69580 ssh_runner.go:195] Run: which crictl
	I0501 03:40:41.887083   69580 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0501 03:40:41.894597   69580 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0501 03:40:41.935993   69580 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0501 03:40:41.937339   69580 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0501 03:40:41.961728   69580 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0501 03:40:41.961778   69580 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0501 03:40:41.961827   69580 ssh_runner.go:195] Run: which crictl
	I0501 03:40:42.004327   69580 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0501 03:40:42.004395   69580 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0501 03:40:42.004435   69580 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0501 03:40:42.004493   69580 ssh_runner.go:195] Run: which crictl
	I0501 03:40:42.053743   69580 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0501 03:40:42.055914   69580 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0501 03:40:42.056267   69580 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0501 03:40:42.056610   69580 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0501 03:40:42.060229   69580 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0501 03:40:42.070489   69580 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0501 03:40:42.127829   69580 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0501 03:40:42.127880   69580 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0501 03:40:42.127927   69580 ssh_runner.go:195] Run: which crictl
	I0501 03:40:42.201731   69580 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0501 03:40:42.201783   69580 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0501 03:40:42.201814   69580 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0501 03:40:42.201842   69580 ssh_runner.go:195] Run: which crictl
	I0501 03:40:42.211112   69580 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0501 03:40:42.211163   69580 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0501 03:40:42.211227   69580 ssh_runner.go:195] Run: which crictl
	I0501 03:40:42.217794   69580 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0501 03:40:42.217835   69580 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0501 03:40:42.217873   69580 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0501 03:40:42.217917   69580 ssh_runner.go:195] Run: which crictl
	I0501 03:40:42.217873   69580 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0501 03:40:42.220250   69580 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0501 03:40:42.274880   69580 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0501 03:40:42.294354   69580 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0501 03:40:42.294436   69580 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0501 03:40:42.305191   69580 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0501 03:40:42.342502   69580 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0501 03:40:42.560474   69580 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 03:40:42.712970   69580 cache_images.go:92] duration metric: took 1.015752585s to LoadCachedImages
	W0501 03:40:42.713057   69580 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	I0501 03:40:42.713074   69580 kubeadm.go:928] updating node { 192.168.61.104 8443 v1.20.0 crio true true} ...
	I0501 03:40:42.713227   69580 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-503971 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.104
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-503971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0501 03:40:42.713323   69580 ssh_runner.go:195] Run: crio config
	I0501 03:40:42.771354   69580 cni.go:84] Creating CNI manager for ""
	I0501 03:40:42.771384   69580 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0501 03:40:42.771403   69580 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0501 03:40:42.771428   69580 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.104 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-503971 NodeName:old-k8s-version-503971 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.104"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.104 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0501 03:40:42.771644   69580 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.104
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-503971"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.104
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.104"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0501 03:40:42.771722   69580 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0501 03:40:42.784978   69580 binaries.go:44] Found k8s binaries, skipping transfer
	I0501 03:40:42.785057   69580 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0501 03:40:42.800945   69580 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0501 03:40:42.824293   69580 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0501 03:40:42.845949   69580 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
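The kubeadm.yaml staged above embeds the KubeletConfiguration printed earlier, whose evictionHard disk thresholds are all "0%", which effectively disables disk-pressure eviction (matching the inline "disable disk resource management" comment). A minimal sketch for confirming what the kubelet actually receives once kubeadm writes its config, assuming the /var/lib/kubelet/config.yaml path that appears later in this log:

# Sketch (run on the node): show the eviction thresholds handed to the kubelet.
sudo grep -A4 '^evictionHard' /var/lib/kubelet/config.yaml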
	I0501 03:40:42.867390   69580 ssh_runner.go:195] Run: grep 192.168.61.104	control-plane.minikube.internal$ /etc/hosts
	I0501 03:40:42.872038   69580 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.104	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
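The one-liner above keeps /etc/hosts idempotent: any existing control-plane.minikube.internal entry is stripped and the current node IP is appended, with the rewrite going through a temp file before being copied back. Restated as a sketch with the IP pulled out as a variable (value taken from this log):

# Sketch: pin control-plane.minikube.internal to the node IP in /etc/hosts.
NODE_IP=192.168.61.104
{ grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
  printf '%s\tcontrol-plane.minikube.internal\n' "$NODE_IP"
} > /tmp/hosts.$$ && sudo cp /tmp/hosts.$$ /etc/hosts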
	I0501 03:40:42.890213   69580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:40:43.041533   69580 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 03:40:43.070048   69580 certs.go:68] Setting up /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971 for IP: 192.168.61.104
	I0501 03:40:43.070075   69580 certs.go:194] generating shared ca certs ...
	I0501 03:40:43.070097   69580 certs.go:226] acquiring lock for ca certs: {Name:mk85a75d14c00780d4bf09cd9bdaf0f96f1b8fce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:40:43.070315   69580 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key
	I0501 03:40:43.070388   69580 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key
	I0501 03:40:43.070419   69580 certs.go:256] generating profile certs ...
	I0501 03:40:43.070558   69580 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/client.key
	I0501 03:40:43.070631   69580 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/apiserver.key.760b883a
	I0501 03:40:43.070670   69580 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/proxy-client.key
	I0501 03:40:43.070804   69580 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem (1338 bytes)
	W0501 03:40:43.070852   69580 certs.go:480] ignoring /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724_empty.pem, impossibly tiny 0 bytes
	I0501 03:40:43.070865   69580 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem (1679 bytes)
	I0501 03:40:43.070914   69580 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem (1078 bytes)
	I0501 03:40:43.070955   69580 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem (1123 bytes)
	I0501 03:40:43.070985   69580 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem (1675 bytes)
	I0501 03:40:43.071044   69580 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem (1708 bytes)
	I0501 03:40:43.071869   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0501 03:40:43.110078   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0501 03:40:43.164382   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0501 03:40:43.197775   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0501 03:40:43.230575   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0501 03:40:43.260059   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0501 03:40:43.288704   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0501 03:40:43.315417   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0501 03:40:43.363440   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0501 03:40:43.396043   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem --> /usr/share/ca-certificates/20724.pem (1338 bytes)
	I0501 03:40:43.425997   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem --> /usr/share/ca-certificates/207242.pem (1708 bytes)
	I0501 03:40:43.456927   69580 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0501 03:40:43.478177   69580 ssh_runner.go:195] Run: openssl version
	I0501 03:40:43.484513   69580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0501 03:40:43.497230   69580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:40:43.504025   69580 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  1 02:08 /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:40:43.504112   69580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:40:43.513309   69580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0501 03:40:43.528592   69580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20724.pem && ln -fs /usr/share/ca-certificates/20724.pem /etc/ssl/certs/20724.pem"
	I0501 03:40:43.544560   69580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20724.pem
	I0501 03:40:43.550975   69580 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  1 02:20 /usr/share/ca-certificates/20724.pem
	I0501 03:40:43.551047   69580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20724.pem
	I0501 03:40:43.559214   69580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20724.pem /etc/ssl/certs/51391683.0"
	I0501 03:40:43.575362   69580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/207242.pem && ln -fs /usr/share/ca-certificates/207242.pem /etc/ssl/certs/207242.pem"
	I0501 03:40:43.587848   69580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/207242.pem
	I0501 03:40:43.593131   69580 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  1 02:20 /usr/share/ca-certificates/207242.pem
	I0501 03:40:43.593183   69580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/207242.pem
	I0501 03:40:43.600365   69580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/207242.pem /etc/ssl/certs/3ec20f2e.0"
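The ln -fs commands above follow OpenSSL's c_rehash convention: each CA in /etc/ssl/certs is reachable through a symlink named after its subject-name hash with a ".0" suffix (b5213941.0, 51391683.0 and 3ec20f2e.0 in this run), which is how OpenSSL-linked programs look up a trusted issuer. A sketch of how one of those links is derived:

# Sketch: compute the subject hash and create the c_rehash-style symlink.
HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"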
	I0501 03:40:43.613912   69580 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0501 03:40:43.619576   69580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0501 03:40:43.628551   69580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0501 03:40:43.637418   69580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0501 03:40:43.645060   69580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0501 03:40:43.654105   69580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0501 03:40:43.663501   69580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
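The run of openssl checks above verifies that none of the control-plane client and serving certificates expires within the next 24 hours (86400 seconds); -checkend exits non-zero when a certificate is that close to expiry. The same check as a loop, with the certificate names taken from the log:

# Sketch (run on the node): warn for any certificate expiring within 24h.
CERTS=/var/lib/minikube/certs
for crt in apiserver-kubelet-client apiserver-etcd-client front-proxy-client \
           etcd/server etcd/healthcheck-client etcd/peer; do
  sudo openssl x509 -noout -in "$CERTS/$crt.crt" -checkend 86400 \
    || echo "$crt.crt expires within 24h (or could not be read)"
done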
	I0501 03:40:43.670855   69580 kubeadm.go:391] StartCluster: {Name:old-k8s-version-503971 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-503971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.104 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 03:40:43.670937   69580 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0501 03:40:43.670982   69580 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0501 03:40:43.720350   69580 cri.go:89] found id: ""
	I0501 03:40:43.720419   69580 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0501 03:40:43.732518   69580 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0501 03:40:43.732544   69580 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0501 03:40:43.732552   69580 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0501 03:40:43.732612   69580 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0501 03:40:43.743804   69580 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0501 03:40:43.745071   69580 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-503971" does not appear in /home/jenkins/minikube-integration/18779-13391/kubeconfig
	I0501 03:40:43.745785   69580 kubeconfig.go:62] /home/jenkins/minikube-integration/18779-13391/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-503971" cluster setting kubeconfig missing "old-k8s-version-503971" context setting]
	I0501 03:40:43.747054   69580 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/kubeconfig: {Name:mk5d1131557f885800877622151c0915337cb23a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:40:43.748989   69580 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0501 03:40:43.760349   69580 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.104
	I0501 03:40:43.760389   69580 kubeadm.go:1154] stopping kube-system containers ...
	I0501 03:40:43.760403   69580 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0501 03:40:43.760473   69580 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0501 03:40:43.804745   69580 cri.go:89] found id: ""
	I0501 03:40:43.804841   69580 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0501 03:40:43.825960   69580 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0501 03:40:43.838038   69580 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0501 03:40:43.838062   69580 kubeadm.go:156] found existing configuration files:
	
	I0501 03:40:43.838115   69580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0501 03:40:43.849075   69580 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0501 03:40:43.849164   69580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0501 03:40:43.860634   69580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0501 03:40:43.871244   69580 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0501 03:40:43.871313   69580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0501 03:40:43.882184   69580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0501 03:40:43.893193   69580 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0501 03:40:43.893254   69580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0501 03:40:43.904257   69580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0501 03:40:43.915414   69580 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0501 03:40:43.915492   69580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0501 03:40:43.927372   69580 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0501 03:40:43.939117   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:44.098502   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:45.150125   69580 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.051581029s)
	I0501 03:40:45.150161   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:45.443307   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:45.563369   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
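Because the kubeconfig and manifest files were missing, the restart path above rebuilds the control plane piece by piece rather than running a full kubeadm init: certs, kubeconfigs, kubelet start, static control-plane manifests, then local etcd. A condensed sketch of that sequence, with the binary and config paths taken from the log:

# Sketch (run on the node): re-run the individual kubeadm init phases.
K8S_BIN=/var/lib/minikube/binaries/v1.20.0
CFG=/var/tmp/minikube/kubeadm.yaml
for phase in 'certs all' 'kubeconfig all' 'kubelet-start' 'control-plane all' 'etcd local'; do
  # $phase is intentionally unquoted so "certs all" splits into two arguments.
  sudo env PATH="$K8S_BIN:$PATH" kubeadm init phase $phase --config "$CFG"
done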
	I0501 03:40:45.678620   69580 api_server.go:52] waiting for apiserver process to appear ...
	I0501 03:40:45.678731   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:46.179675   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:43.419480   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:43.419952   68640 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:40:43.419980   68640 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:40:43.419907   70664 retry.go:31] will retry after 2.329320519s: waiting for machine to come up
	I0501 03:40:45.751405   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:45.751871   68640 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:40:45.751902   68640 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:40:45.751822   70664 retry.go:31] will retry after 3.262804058s: waiting for machine to come up
	I0501 03:40:45.659845   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:48.157145   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:48.013520   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:50.514729   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:46.679449   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:47.179179   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:47.678890   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:48.179190   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:48.679276   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:49.179698   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:49.679121   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:50.179723   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:50.679603   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:51.179094   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:49.016460   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:49.016856   68640 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:40:49.016878   68640 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:40:49.016826   70664 retry.go:31] will retry after 3.440852681s: waiting for machine to come up
	I0501 03:40:52.461349   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:52.461771   68640 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:40:52.461800   68640 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:40:52.461722   70664 retry.go:31] will retry after 4.871322728s: waiting for machine to come up
	I0501 03:40:50.157703   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:52.655677   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:53.011851   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:55.510458   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:51.679850   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:52.179568   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:52.679100   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:53.179470   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:53.679115   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:54.178815   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:54.679769   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:55.179576   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:55.678864   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:56.179617   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:57.335855   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.336228   68640 main.go:141] libmachine: (no-preload-892672) Found IP for machine: 192.168.39.144
	I0501 03:40:57.336263   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has current primary IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.336281   68640 main.go:141] libmachine: (no-preload-892672) Reserving static IP address...
	I0501 03:40:57.336629   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "no-preload-892672", mac: "52:54:00:c7:6d:9a", ip: "192.168.39.144"} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:40:57.336649   68640 main.go:141] libmachine: (no-preload-892672) DBG | skip adding static IP to network mk-no-preload-892672 - found existing host DHCP lease matching {name: "no-preload-892672", mac: "52:54:00:c7:6d:9a", ip: "192.168.39.144"}
	I0501 03:40:57.336661   68640 main.go:141] libmachine: (no-preload-892672) Reserved static IP address: 192.168.39.144
	I0501 03:40:57.336671   68640 main.go:141] libmachine: (no-preload-892672) Waiting for SSH to be available...
	I0501 03:40:57.336680   68640 main.go:141] libmachine: (no-preload-892672) DBG | Getting to WaitForSSH function...
	I0501 03:40:57.338862   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.339135   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:40:57.339163   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.339268   68640 main.go:141] libmachine: (no-preload-892672) DBG | Using SSH client type: external
	I0501 03:40:57.339296   68640 main.go:141] libmachine: (no-preload-892672) DBG | Using SSH private key: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/no-preload-892672/id_rsa (-rw-------)
	I0501 03:40:57.339328   68640 main.go:141] libmachine: (no-preload-892672) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.144 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18779-13391/.minikube/machines/no-preload-892672/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0501 03:40:57.339341   68640 main.go:141] libmachine: (no-preload-892672) DBG | About to run SSH command:
	I0501 03:40:57.339370   68640 main.go:141] libmachine: (no-preload-892672) DBG | exit 0
	I0501 03:40:57.466775   68640 main.go:141] libmachine: (no-preload-892672) DBG | SSH cmd err, output: <nil>: 
	I0501 03:40:57.467183   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetConfigRaw
	I0501 03:40:57.467890   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetIP
	I0501 03:40:57.470097   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.470527   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:40:57.470555   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.470767   68640 profile.go:143] Saving config to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/no-preload-892672/config.json ...
	I0501 03:40:57.470929   68640 machine.go:94] provisionDockerMachine start ...
	I0501 03:40:57.470950   68640 main.go:141] libmachine: (no-preload-892672) Calling .DriverName
	I0501 03:40:57.471177   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:40:57.473301   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.473599   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:40:57.473626   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.473724   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHPort
	I0501 03:40:57.473863   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:40:57.474032   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:40:57.474181   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHUsername
	I0501 03:40:57.474337   68640 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:57.474545   68640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I0501 03:40:57.474558   68640 main.go:141] libmachine: About to run SSH command:
	hostname
	I0501 03:40:57.591733   68640 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0501 03:40:57.591766   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetMachineName
	I0501 03:40:57.592016   68640 buildroot.go:166] provisioning hostname "no-preload-892672"
	I0501 03:40:57.592048   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetMachineName
	I0501 03:40:57.592308   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:40:57.595192   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.595593   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:40:57.595618   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.595697   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHPort
	I0501 03:40:57.595891   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:40:57.596041   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:40:57.596192   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHUsername
	I0501 03:40:57.596376   68640 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:57.596544   68640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I0501 03:40:57.596559   68640 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-892672 && echo "no-preload-892672" | sudo tee /etc/hostname
	I0501 03:40:57.727738   68640 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-892672
	
	I0501 03:40:57.727770   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:40:57.730673   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.731033   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:40:57.731066   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.731202   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHPort
	I0501 03:40:57.731383   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:40:57.731577   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:40:57.731744   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHUsername
	I0501 03:40:57.731936   68640 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:57.732155   68640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I0501 03:40:57.732173   68640 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-892672' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-892672/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-892672' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0501 03:40:57.857465   68640 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0501 03:40:57.857492   68640 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18779-13391/.minikube CaCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18779-13391/.minikube}
	I0501 03:40:57.857515   68640 buildroot.go:174] setting up certificates
	I0501 03:40:57.857524   68640 provision.go:84] configureAuth start
	I0501 03:40:57.857532   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetMachineName
	I0501 03:40:57.857791   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetIP
	I0501 03:40:57.860530   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.860881   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:40:57.860911   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.861035   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:40:57.863122   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.863445   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:40:57.863472   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.863565   68640 provision.go:143] copyHostCerts
	I0501 03:40:57.863614   68640 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem, removing ...
	I0501 03:40:57.863624   68640 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem
	I0501 03:40:57.863689   68640 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem (1078 bytes)
	I0501 03:40:57.863802   68640 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem, removing ...
	I0501 03:40:57.863814   68640 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem
	I0501 03:40:57.863843   68640 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem (1123 bytes)
	I0501 03:40:57.863928   68640 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem, removing ...
	I0501 03:40:57.863938   68640 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem
	I0501 03:40:57.863962   68640 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem (1675 bytes)
	I0501 03:40:57.864040   68640 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem org=jenkins.no-preload-892672 san=[127.0.0.1 192.168.39.144 localhost minikube no-preload-892672]
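provision.go above reissues the machine server certificate with a SAN list covering the loopback address, the VM IP, localhost, minikube and the machine name; copyRemoteCerts then pushes it to /etc/docker on the node. A quick sketch for inspecting the SANs that actually end up in the generated certificate, using the path from the log:

# Sketch (run on the host): print the Subject Alternative Name extension.
openssl x509 -noout -text \
  -in /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem \
  | grep -A1 'Subject Alternative Name'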
	I0501 03:40:54.658003   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:56.658041   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:58.125270   68640 provision.go:177] copyRemoteCerts
	I0501 03:40:58.125321   68640 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0501 03:40:58.125342   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:40:58.127890   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:58.128299   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:40:58.128330   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:58.128469   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHPort
	I0501 03:40:58.128645   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:40:58.128809   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHUsername
	I0501 03:40:58.128941   68640 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/no-preload-892672/id_rsa Username:docker}
	I0501 03:40:58.222112   68640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0501 03:40:58.249760   68640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0501 03:40:58.277574   68640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0501 03:40:58.304971   68640 provision.go:87] duration metric: took 447.420479ms to configureAuth
	I0501 03:40:58.305017   68640 buildroot.go:189] setting minikube options for container-runtime
	I0501 03:40:58.305270   68640 config.go:182] Loaded profile config "no-preload-892672": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 03:40:58.305434   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:40:58.308098   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:58.308487   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:40:58.308528   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:58.308658   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHPort
	I0501 03:40:58.308857   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:40:58.309025   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:40:58.309173   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHUsername
	I0501 03:40:58.309354   68640 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:58.309510   68640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I0501 03:40:58.309526   68640 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0501 03:40:58.609833   68640 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0501 03:40:58.609859   68640 machine.go:97] duration metric: took 1.138916322s to provisionDockerMachine
	I0501 03:40:58.609873   68640 start.go:293] postStartSetup for "no-preload-892672" (driver="kvm2")
	I0501 03:40:58.609885   68640 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0501 03:40:58.609905   68640 main.go:141] libmachine: (no-preload-892672) Calling .DriverName
	I0501 03:40:58.610271   68640 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0501 03:40:58.610307   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:40:58.612954   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:58.613308   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:40:58.613322   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:58.613485   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHPort
	I0501 03:40:58.613683   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:40:58.613871   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHUsername
	I0501 03:40:58.614005   68640 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/no-preload-892672/id_rsa Username:docker}
	I0501 03:40:58.702752   68640 ssh_runner.go:195] Run: cat /etc/os-release
	I0501 03:40:58.707441   68640 info.go:137] Remote host: Buildroot 2023.02.9
	I0501 03:40:58.707468   68640 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13391/.minikube/addons for local assets ...
	I0501 03:40:58.707577   68640 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13391/.minikube/files for local assets ...
	I0501 03:40:58.707646   68640 filesync.go:149] local asset: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem -> 207242.pem in /etc/ssl/certs
	I0501 03:40:58.707728   68640 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0501 03:40:58.718247   68640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem --> /etc/ssl/certs/207242.pem (1708 bytes)
	I0501 03:40:58.745184   68640 start.go:296] duration metric: took 135.29943ms for postStartSetup
	I0501 03:40:58.745218   68640 fix.go:56] duration metric: took 24.96116093s for fixHost
	I0501 03:40:58.745236   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:40:58.747809   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:58.748228   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:40:58.748261   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:58.748380   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHPort
	I0501 03:40:58.748591   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:40:58.748747   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:40:58.748870   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHUsername
	I0501 03:40:58.749049   68640 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:58.749262   68640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I0501 03:40:58.749275   68640 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0501 03:40:58.867651   68640 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714534858.808639015
	
	I0501 03:40:58.867676   68640 fix.go:216] guest clock: 1714534858.808639015
	I0501 03:40:58.867686   68640 fix.go:229] Guest: 2024-05-01 03:40:58.808639015 +0000 UTC Remote: 2024-05-01 03:40:58.745221709 +0000 UTC m=+370.854832040 (delta=63.417306ms)
	I0501 03:40:58.867735   68640 fix.go:200] guest clock delta is within tolerance: 63.417306ms
	I0501 03:40:58.867746   68640 start.go:83] releasing machines lock for "no-preload-892672", held for 25.083724737s
	I0501 03:40:58.867770   68640 main.go:141] libmachine: (no-preload-892672) Calling .DriverName
	I0501 03:40:58.868053   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetIP
	I0501 03:40:58.871193   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:58.871618   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:40:58.871664   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:58.871815   68640 main.go:141] libmachine: (no-preload-892672) Calling .DriverName
	I0501 03:40:58.872441   68640 main.go:141] libmachine: (no-preload-892672) Calling .DriverName
	I0501 03:40:58.872665   68640 main.go:141] libmachine: (no-preload-892672) Calling .DriverName
	I0501 03:40:58.872750   68640 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0501 03:40:58.872787   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:40:58.872918   68640 ssh_runner.go:195] Run: cat /version.json
	I0501 03:40:58.872946   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:40:58.875797   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:58.875976   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:58.876230   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:40:58.876341   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:58.876377   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHPort
	I0501 03:40:58.876502   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:40:58.876539   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:58.876587   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:40:58.876756   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHUsername
	I0501 03:40:58.876894   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHPort
	I0501 03:40:58.876969   68640 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/no-preload-892672/id_rsa Username:docker}
	I0501 03:40:58.877057   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:40:58.877246   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHUsername
	I0501 03:40:58.877424   68640 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/no-preload-892672/id_rsa Username:docker}
	I0501 03:40:58.983384   68640 ssh_runner.go:195] Run: systemctl --version
	I0501 03:40:58.991625   68640 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0501 03:40:59.143916   68640 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0501 03:40:59.151065   68640 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0501 03:40:59.151124   68640 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0501 03:40:59.168741   68640 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0501 03:40:59.168763   68640 start.go:494] detecting cgroup driver to use...
	I0501 03:40:59.168825   68640 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0501 03:40:59.188524   68640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 03:40:59.205602   68640 docker.go:217] disabling cri-docker service (if available) ...
	I0501 03:40:59.205668   68640 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0501 03:40:59.221173   68640 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0501 03:40:59.236546   68640 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0501 03:40:59.364199   68640 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0501 03:40:59.533188   68640 docker.go:233] disabling docker service ...
	I0501 03:40:59.533266   68640 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0501 03:40:59.549488   68640 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0501 03:40:59.562910   68640 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0501 03:40:59.705451   68640 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0501 03:40:59.843226   68640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0501 03:40:59.858878   68640 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 03:40:59.882729   68640 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0501 03:40:59.882808   68640 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:59.895678   68640 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0501 03:40:59.895763   68640 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:59.908439   68640 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:59.921319   68640 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:59.934643   68640 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0501 03:40:59.947416   68640 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:59.959887   68640 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:59.981849   68640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:59.994646   68640 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0501 03:41:00.006059   68640 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0501 03:41:00.006133   68640 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0501 03:41:00.024850   68640 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
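	The three steps above verify the kernel knobs CRI-O needs before restart: net.bridge.bridge-nf-call-iptables (loading br_netfilter when the sysctl file is missing) and net.ipv4.ip_forward. A minimal Go sketch of the same check, illustrative only and not the code the test harness runs; it reads /proc/sys directly instead of shelling out to sysctl and modprobe:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// readKnob returns the current value of a /proc/sys entry.
	func readKnob(path string) (string, error) {
		b, err := os.ReadFile(path)
		if err != nil {
			return "", err // e.g. br_netfilter not loaded yet
		}
		return strings.TrimSpace(string(b)), nil
	}

	func main() {
		for _, p := range []string{
			"/proc/sys/net/bridge/bridge-nf-call-iptables",
			"/proc/sys/net/ipv4/ip_forward",
		} {
			v, err := readKnob(p)
			if err != nil {
				fmt.Printf("%s: %v\n", p, err)
				continue
			}
			fmt.Printf("%s = %s\n", p, v)
		}
	}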
	I0501 03:41:00.036834   68640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:41:00.161283   68640 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0501 03:41:00.312304   68640 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0501 03:41:00.312375   68640 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0501 03:41:00.317980   68640 start.go:562] Will wait 60s for crictl version
	I0501 03:41:00.318043   68640 ssh_runner.go:195] Run: which crictl
	I0501 03:41:00.322780   68640 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0501 03:41:00.362830   68640 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0501 03:41:00.362920   68640 ssh_runner.go:195] Run: crio --version
	I0501 03:41:00.399715   68640 ssh_runner.go:195] Run: crio --version
	I0501 03:41:00.432510   68640 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0501 03:40:57.511719   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:00.013693   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:56.679034   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:57.179062   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:57.679579   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:58.179221   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:58.679728   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:59.178851   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:59.679647   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:00.179397   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:00.678839   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:01.179679   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:00.433777   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetIP
	I0501 03:41:00.436557   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:41:00.436892   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:41:00.436920   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:41:00.437124   68640 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0501 03:41:00.441861   68640 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0501 03:41:00.455315   68640 kubeadm.go:877] updating cluster {Name:no-preload-892672 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.0 ClusterName:no-preload-892672 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.144 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0501 03:41:00.455417   68640 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0501 03:41:00.455462   68640 ssh_runner.go:195] Run: sudo crictl images --output json
	I0501 03:41:00.496394   68640 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0501 03:41:00.496422   68640 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.0 registry.k8s.io/kube-controller-manager:v1.30.0 registry.k8s.io/kube-scheduler:v1.30.0 registry.k8s.io/kube-proxy:v1.30.0 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0501 03:41:00.496508   68640 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 03:41:00.496532   68640 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0501 03:41:00.496551   68640 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0
	I0501 03:41:00.496581   68640 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0501 03:41:00.496679   68640 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0
	I0501 03:41:00.496701   68640 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0501 03:41:00.496736   68640 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0501 03:41:00.496529   68640 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0
	I0501 03:41:00.498207   68640 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0
	I0501 03:41:00.498227   68640 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0501 03:41:00.498246   68640 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0
	I0501 03:41:00.498250   68640 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0
	I0501 03:41:00.498270   68640 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0501 03:41:00.498254   68640 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0501 03:41:00.498298   68640 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0501 03:41:00.498477   68640 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 03:41:00.617430   68640 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.0
	I0501 03:41:00.621346   68640 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.0
	I0501 03:41:00.622759   68640 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0501 03:41:00.628313   68640 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.0
	I0501 03:41:00.629087   68640 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0501 03:41:00.633625   68640 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0501 03:41:00.652130   68640 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.0
	I0501 03:41:00.722500   68640 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.0" does not exist at hash "c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0" in container runtime
	I0501 03:41:00.722554   68640 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.0
	I0501 03:41:00.722623   68640 ssh_runner.go:195] Run: which crictl
	I0501 03:41:00.796476   68640 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.0" does not exist at hash "259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced" in container runtime
	I0501 03:41:00.796530   68640 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.0
	I0501 03:41:00.796580   68640 ssh_runner.go:195] Run: which crictl
	I0501 03:41:00.944235   68640 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.0" does not exist at hash "c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b" in container runtime
	I0501 03:41:00.944262   68640 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0501 03:41:00.944289   68640 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0501 03:41:00.944297   68640 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0501 03:41:00.944305   68640 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0501 03:41:00.944325   68640 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0501 03:41:00.944344   68640 ssh_runner.go:195] Run: which crictl
	I0501 03:41:00.944357   68640 ssh_runner.go:195] Run: which crictl
	I0501 03:41:00.944398   68640 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.0" needs transfer: "registry.k8s.io/kube-proxy:v1.30.0" does not exist at hash "a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b" in container runtime
	I0501 03:41:00.944348   68640 ssh_runner.go:195] Run: which crictl
	I0501 03:41:00.944434   68640 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.0
	I0501 03:41:00.944422   68640 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0
	I0501 03:41:00.944452   68640 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0
	I0501 03:41:00.944464   68640 ssh_runner.go:195] Run: which crictl
	I0501 03:41:00.998765   68640 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.0
	I0501 03:41:00.998791   68640 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0
	I0501 03:41:00.998846   68640 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0501 03:41:00.998891   68640 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0501 03:41:01.017469   68640 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.0
	I0501 03:41:01.017494   68640 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0
	I0501 03:41:01.017584   68640 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0501 03:41:01.018040   68640 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0501 03:41:01.105445   68640 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0501 03:41:01.105517   68640 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0
	I0501 03:41:01.105560   68640 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.12-0
	I0501 03:41:01.105583   68640 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.0 (exists)
	I0501 03:41:01.105595   68640 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0501 03:41:01.105635   68640 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0501 03:41:01.105645   68640 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0501 03:41:01.105734   68640 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.0 (exists)
	I0501 03:41:01.105814   68640 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0
	I0501 03:41:01.105888   68640 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.30.0
	I0501 03:41:01.120943   68640 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0501 03:41:01.121044   68640 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0501 03:41:01.127975   68640 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0501 03:41:01.359381   68640 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 03:40:59.156924   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:01.659307   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:03.661498   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:02.511652   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:05.011220   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:01.679527   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:02.179435   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:02.679626   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:03.179351   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:03.679618   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:04.179426   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:04.678853   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:05.179143   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:05.679065   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:06.179513   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:04.315680   68640 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.30.0: (3.210016587s)
	I0501 03:41:04.315725   68640 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.0 (exists)
	I0501 03:41:04.315756   68640 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.30.0: (3.209843913s)
	I0501 03:41:04.315784   68640 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1: (3.194721173s)
	I0501 03:41:04.315799   68640 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0: (3.210139611s)
	I0501 03:41:04.315812   68640 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.0 (exists)
	I0501 03:41:04.315813   68640 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0501 03:41:04.315813   68640 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0 from cache
	I0501 03:41:04.315844   68640 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.956432506s)
	I0501 03:41:04.315859   68640 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0501 03:41:04.315902   68640 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0501 03:41:04.315905   68640 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0501 03:41:04.315927   68640 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 03:41:04.315962   68640 ssh_runner.go:195] Run: which crictl
	I0501 03:41:05.691351   68640 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0: (1.375419764s)
	I0501 03:41:05.691394   68640 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0 from cache
	I0501 03:41:05.691418   68640 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0501 03:41:05.691467   68640 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0501 03:41:05.691477   68640 ssh_runner.go:235] Completed: which crictl: (1.375499162s)
	I0501 03:41:05.691529   68640 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 03:41:06.159381   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:08.659756   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:07.012126   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:09.511459   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:06.679246   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:07.179629   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:07.679601   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:08.179634   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:08.679603   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:09.179675   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:09.678837   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:10.178860   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:10.679638   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:11.179802   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:09.757005   68640 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (4.065509843s)
	I0501 03:41:09.757044   68640 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0501 03:41:09.757079   68640 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0501 03:41:09.757093   68640 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (4.065539206s)
	I0501 03:41:09.757137   68640 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0501 03:41:09.757158   68640 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0501 03:41:09.757222   68640 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0501 03:41:12.125691   68640 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0: (2.368504788s)
	I0501 03:41:12.125729   68640 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0 from cache
	I0501 03:41:12.125726   68640 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.368475622s)
	I0501 03:41:12.125755   68640 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0501 03:41:12.125754   68640 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.0
	I0501 03:41:12.125817   68640 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0
	I0501 03:41:11.157019   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:13.157632   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:11.513027   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:14.013463   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:11.679355   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:12.178847   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:12.679660   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:13.179641   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:13.678808   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:14.178955   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:14.679651   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:15.179623   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:15.678862   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:16.179775   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:14.315765   68640 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0: (2.18991878s)
	I0501 03:41:14.315791   68640 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0 from cache
	I0501 03:41:14.315835   68640 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0501 03:41:14.315911   68640 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0501 03:41:16.401221   68640 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.085281928s)
	I0501 03:41:16.401261   68640 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0501 03:41:16.401291   68640 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0501 03:41:16.401335   68640 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0501 03:41:17.152926   68640 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0501 03:41:17.152969   68640 cache_images.go:123] Successfully loaded all cached images
	I0501 03:41:17.152976   68640 cache_images.go:92] duration metric: took 16.656540612s to LoadCachedImages
	I0501 03:41:17.152989   68640 kubeadm.go:928] updating node { 192.168.39.144 8443 v1.30.0 crio true true} ...
	I0501 03:41:17.153119   68640 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-892672 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.144
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:no-preload-892672 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0501 03:41:17.153241   68640 ssh_runner.go:195] Run: crio config
	I0501 03:41:17.207153   68640 cni.go:84] Creating CNI manager for ""
	I0501 03:41:17.207181   68640 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0501 03:41:17.207196   68640 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0501 03:41:17.207225   68640 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.144 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-892672 NodeName:no-preload-892672 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.144"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.144 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0501 03:41:17.207407   68640 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.144
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-892672"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.144
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.144"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
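	The generated KubeletConfiguration above pins cgroupDriver to cgroupfs and the CRI endpoint to unix:///var/run/crio/crio.sock, matching the crio.conf edits made earlier in this log. A minimal sketch of reading those fields back out of such a YAML, illustrative only and not minikube code (assumes the gopkg.in/yaml.v3 module is available):

	package main

	import (
		"fmt"
		"log"

		"gopkg.in/yaml.v3"
	)

	// kubeletConfig captures just the fields this check cares about.
	type kubeletConfig struct {
		CgroupDriver             string `yaml:"cgroupDriver"`
		ContainerRuntimeEndpoint string `yaml:"containerRuntimeEndpoint"`
		FailSwapOn               bool   `yaml:"failSwapOn"`
	}

	const sample = `apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failSwapOn: false
	`

	func main() {
		var cfg kubeletConfig
		if err := yaml.Unmarshal([]byte(sample), &cfg); err != nil {
			log.Fatal(err)
		}
		fmt.Printf("cgroupDriver=%s endpoint=%s failSwapOn=%v\n",
			cfg.CgroupDriver, cfg.ContainerRuntimeEndpoint, cfg.FailSwapOn)
	}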
	
	I0501 03:41:17.207488   68640 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0501 03:41:17.221033   68640 binaries.go:44] Found k8s binaries, skipping transfer
	I0501 03:41:17.221099   68640 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0501 03:41:17.232766   68640 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0501 03:41:17.252543   68640 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0501 03:41:17.272030   68640 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0501 03:41:17.291541   68640 ssh_runner.go:195] Run: grep 192.168.39.144	control-plane.minikube.internal$ /etc/hosts
	I0501 03:41:17.295801   68640 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.144	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0501 03:41:17.309880   68640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:41:17.432917   68640 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 03:41:17.452381   68640 certs.go:68] Setting up /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/no-preload-892672 for IP: 192.168.39.144
	I0501 03:41:17.452406   68640 certs.go:194] generating shared ca certs ...
	I0501 03:41:17.452425   68640 certs.go:226] acquiring lock for ca certs: {Name:mk85a75d14c00780d4bf09cd9bdaf0f96f1b8fce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:41:17.452606   68640 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key
	I0501 03:41:17.452655   68640 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key
	I0501 03:41:17.452669   68640 certs.go:256] generating profile certs ...
	I0501 03:41:17.452746   68640 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/no-preload-892672/client.key
	I0501 03:41:17.452809   68640 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/no-preload-892672/apiserver.key.3644a8af
	I0501 03:41:17.452848   68640 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/no-preload-892672/proxy-client.key
	I0501 03:41:17.452963   68640 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem (1338 bytes)
	W0501 03:41:17.453007   68640 certs.go:480] ignoring /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724_empty.pem, impossibly tiny 0 bytes
	I0501 03:41:17.453021   68640 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem (1679 bytes)
	I0501 03:41:17.453050   68640 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem (1078 bytes)
	I0501 03:41:17.453083   68640 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem (1123 bytes)
	I0501 03:41:17.453116   68640 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem (1675 bytes)
	I0501 03:41:17.453166   68640 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem (1708 bytes)
	I0501 03:41:17.453767   68640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0501 03:41:17.490616   68640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0501 03:41:17.545217   68640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0501 03:41:17.576908   68640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0501 03:41:17.607371   68640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/no-preload-892672/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0501 03:41:17.657675   68640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/no-preload-892672/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0501 03:41:17.684681   68640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/no-preload-892672/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0501 03:41:17.716319   68640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/no-preload-892672/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0501 03:41:17.745731   68640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0501 03:41:17.770939   68640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem --> /usr/share/ca-certificates/20724.pem (1338 bytes)
	I0501 03:41:17.796366   68640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem --> /usr/share/ca-certificates/207242.pem (1708 bytes)
	I0501 03:41:17.823301   68640 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0501 03:41:17.841496   68640 ssh_runner.go:195] Run: openssl version
	I0501 03:41:17.848026   68640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0501 03:41:17.860734   68640 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:41:17.865978   68640 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  1 02:08 /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:41:17.866037   68640 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:41:17.872644   68640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0501 03:41:17.886241   68640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20724.pem && ln -fs /usr/share/ca-certificates/20724.pem /etc/ssl/certs/20724.pem"
	I0501 03:41:17.899619   68640 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20724.pem
	I0501 03:41:17.904664   68640 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  1 02:20 /usr/share/ca-certificates/20724.pem
	I0501 03:41:17.904701   68640 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20724.pem
	I0501 03:41:17.910799   68640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20724.pem /etc/ssl/certs/51391683.0"
	I0501 03:41:17.923007   68640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/207242.pem && ln -fs /usr/share/ca-certificates/207242.pem /etc/ssl/certs/207242.pem"
	I0501 03:41:15.657403   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:18.156777   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:16.511834   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:18.512735   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:20.513144   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:16.679614   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:17.179604   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:17.679100   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:18.179166   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:18.679202   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:19.179631   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:19.679583   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:20.179584   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:20.679493   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:21.178945   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:17.935647   68640 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/207242.pem
	I0501 03:41:17.942147   68640 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  1 02:20 /usr/share/ca-certificates/207242.pem
	I0501 03:41:17.942187   68640 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/207242.pem
	I0501 03:41:17.948468   68640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/207242.pem /etc/ssl/certs/3ec20f2e.0"
	I0501 03:41:17.962737   68640 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0501 03:41:17.968953   68640 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0501 03:41:17.975849   68640 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0501 03:41:17.982324   68640 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0501 03:41:17.988930   68640 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0501 03:41:17.995221   68640 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0501 03:41:18.001868   68640 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0501 03:41:18.008701   68640 kubeadm.go:391] StartCluster: {Name:no-preload-892672 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.0 ClusterName:no-preload-892672 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.144 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 03:41:18.008831   68640 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0501 03:41:18.008893   68640 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0501 03:41:18.056939   68640 cri.go:89] found id: ""
	I0501 03:41:18.057005   68640 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0501 03:41:18.070898   68640 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0501 03:41:18.070921   68640 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0501 03:41:18.070926   68640 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0501 03:41:18.070968   68640 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0501 03:41:18.083907   68640 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0501 03:41:18.085116   68640 kubeconfig.go:125] found "no-preload-892672" server: "https://192.168.39.144:8443"
	I0501 03:41:18.088582   68640 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0501 03:41:18.101426   68640 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.144
	I0501 03:41:18.101471   68640 kubeadm.go:1154] stopping kube-system containers ...
	I0501 03:41:18.101493   68640 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0501 03:41:18.101543   68640 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0501 03:41:18.153129   68640 cri.go:89] found id: ""
	I0501 03:41:18.153193   68640 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0501 03:41:18.173100   68640 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0501 03:41:18.188443   68640 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0501 03:41:18.188463   68640 kubeadm.go:156] found existing configuration files:
	
	I0501 03:41:18.188509   68640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0501 03:41:18.202153   68640 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0501 03:41:18.202204   68640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0501 03:41:18.215390   68640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0501 03:41:18.227339   68640 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0501 03:41:18.227404   68640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0501 03:41:18.239160   68640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0501 03:41:18.251992   68640 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0501 03:41:18.252053   68640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0501 03:41:18.265088   68640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0501 03:41:18.277922   68640 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0501 03:41:18.277983   68640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0501 03:41:18.291307   68640 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0501 03:41:18.304879   68640 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:41:18.417921   68640 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:41:19.350848   68640 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:41:19.586348   68640 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:41:19.761056   68640 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:41:19.867315   68640 api_server.go:52] waiting for apiserver process to appear ...
	I0501 03:41:19.867435   68640 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:20.368520   68640 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:20.868444   68640 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:20.913411   68640 api_server.go:72] duration metric: took 1.046095165s to wait for apiserver process to appear ...
	I0501 03:41:20.913444   68640 api_server.go:88] waiting for apiserver healthz status ...
	I0501 03:41:20.913469   68640 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0501 03:41:20.914000   68640 api_server.go:269] stopped: https://192.168.39.144:8443/healthz: Get "https://192.168.39.144:8443/healthz": dial tcp 192.168.39.144:8443: connect: connection refused
	I0501 03:41:21.414544   68640 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
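	The wait loop above polls https://192.168.39.144:8443/healthz until the restarted apiserver answers; connection-refused, 403 and 500 responses all count as not-ready-yet. A minimal sketch of that kind of readiness poll, illustrative only and not the minikube implementation (the URL and timeout are placeholders, and TLS verification is skipped the way an anonymous bootstrap probe might):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns 200 OK or timeout elapses.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // apiserver reports healthy
				}
				// 403/500 means the server is up but not fully ready yet
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver not healthy after %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.39.144:8443/healthz", time.Minute); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("healthz ok")
	}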
	I0501 03:41:20.658333   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:23.157298   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:23.011395   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:25.012164   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:21.678785   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:22.179435   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:22.679615   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:23.179610   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:23.679473   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:24.179613   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:24.679672   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:25.179400   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:25.679793   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:26.179809   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
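
The run of `sudo pgrep -xnf kube-apiserver.*minikube.*` lines above is a roughly 500ms polling loop waiting for a kube-apiserver process to appear on the node. A minimal sketch of that poll follows; the pgrep pattern is copied from the log, while the 2-minute deadline is an assumed value for illustration.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// Poll for a kube-apiserver process the way the log above does: pgrep -xnf
// against the full command line, every 500ms, until a deadline. pgrep exits
// non-zero when nothing matches, which keeps the loop going.
func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil && len(out) > 0 {
			fmt.Printf("apiserver pid: %s", out)
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for kube-apiserver process")
}
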
	I0501 03:41:24.166756   68640 api_server.go:279] https://192.168.39.144:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0501 03:41:24.166786   68640 api_server.go:103] status: https://192.168.39.144:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0501 03:41:24.166807   68640 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0501 03:41:24.205679   68640 api_server.go:279] https://192.168.39.144:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0501 03:41:24.205713   68640 api_server.go:103] status: https://192.168.39.144:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0501 03:41:24.414055   68640 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0501 03:41:24.420468   68640 api_server.go:279] https://192.168.39.144:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:41:24.420502   68640 api_server.go:103] status: https://192.168.39.144:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:41:24.914021   68640 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0501 03:41:24.919717   68640 api_server.go:279] https://192.168.39.144:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:41:24.919754   68640 api_server.go:103] status: https://192.168.39.144:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:41:25.414015   68640 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0501 03:41:25.422149   68640 api_server.go:279] https://192.168.39.144:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:41:25.422180   68640 api_server.go:103] status: https://192.168.39.144:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:41:25.913751   68640 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0501 03:41:25.917839   68640 api_server.go:279] https://192.168.39.144:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:41:25.917865   68640 api_server.go:103] status: https://192.168.39.144:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:41:26.414458   68640 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0501 03:41:26.419346   68640 api_server.go:279] https://192.168.39.144:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:41:26.419367   68640 api_server.go:103] status: https://192.168.39.144:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:41:26.913912   68640 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0501 03:41:26.918504   68640 api_server.go:279] https://192.168.39.144:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:41:26.918537   68640 api_server.go:103] status: https://192.168.39.144:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:41:27.413693   68640 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0501 03:41:27.421752   68640 api_server.go:279] https://192.168.39.144:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:41:27.421776   68640 api_server.go:103] status: https://192.168.39.144:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:41:27.913582   68640 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0501 03:41:27.918116   68640 api_server.go:279] https://192.168.39.144:8443/healthz returned 200:
	ok
	I0501 03:41:27.927764   68640 api_server.go:141] control plane version: v1.30.0
	I0501 03:41:27.927790   68640 api_server.go:131] duration metric: took 7.014339409s to wait for apiserver health ...
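
The healthz sequence above moves through the usual startup progression: connection refused while the apiserver binds, 403 for the anonymous probe before RBAC bootstrap roles exist, 500 while individual post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes, apiservice-discovery-controller) are still failing, and finally 200 "ok". A minimal sketch of such a poll is below; the endpoint is the one from the log, but InsecureSkipVerify and the interval are illustrative assumptions, and a real client would trust the cluster CA instead.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// Poll the apiserver /healthz endpoint until it returns 200 "ok", mirroring
// the progression in the log above. Certificate verification is skipped only
// to keep the sketch self-contained.
func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for {
		resp, err := client.Get("https://192.168.39.144:8443/healthz")
		if err != nil {
			fmt.Println("healthz:", err)
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}
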
	I0501 03:41:27.927799   68640 cni.go:84] Creating CNI manager for ""
	I0501 03:41:27.927805   68640 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0501 03:41:27.929889   68640 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0501 03:41:27.931210   68640 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0501 03:41:25.158177   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:27.656879   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:27.511692   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:30.010468   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:26.679430   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:27.179043   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:27.678801   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:28.179629   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:28.679111   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:29.179599   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:29.679624   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:30.179585   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:30.679442   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:31.179530   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:27.945852   68640 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
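
The scp above installs the bridge CNI configuration chosen earlier ("kvm2" driver + "crio" runtime, recommending bridge). The log shows only the copy of the 496-byte /etc/cni/net.d/1-k8s.conflist, not its content, so the sketch below embeds a representative bridge + host-local conflist purely for illustration; every field value is an assumption and the real file may differ.

package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// Representative bridge CNI conflist of the kind written to
// /etc/cni/net.d/1-k8s.conflist above. All values are assumptions based on
// the standard bridge and host-local plugins, not the actual file content.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "addIf": "true",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    }
  ]
}`

func main() {
	var parsed map[string]interface{}
	if err := json.Unmarshal([]byte(conflist), &parsed); err != nil {
		log.Fatalf("invalid conflist: %v", err)
	}
	fmt.Println("conflist parses; plugins:", parsed["plugins"])
}
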
	I0501 03:41:27.968311   68640 system_pods.go:43] waiting for kube-system pods to appear ...
	I0501 03:41:27.981571   68640 system_pods.go:59] 8 kube-system pods found
	I0501 03:41:27.981609   68640 system_pods.go:61] "coredns-7db6d8ff4d-v8bqq" [bf389521-9f19-4f2b-83a5-6d469c7ce0fe] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0501 03:41:27.981615   68640 system_pods.go:61] "etcd-no-preload-892672" [108fce6d-03f3-4bb9-a410-a58c58e8f186] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0501 03:41:27.981621   68640 system_pods.go:61] "kube-apiserver-no-preload-892672" [a18b7242-1865-4a67-aab6-c6cc19552326] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0501 03:41:27.981629   68640 system_pods.go:61] "kube-controller-manager-no-preload-892672" [318d39e1-5265-42e5-a3d5-4408b7b73542] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0501 03:41:27.981636   68640 system_pods.go:61] "kube-proxy-dwvdl" [f7a97598-aaa1-4df5-8d6a-8f6286568ad6] Running
	I0501 03:41:27.981642   68640 system_pods.go:61] "kube-scheduler-no-preload-892672" [cbf1c183-16df-42c8-b1c8-b9adf3c25a7a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0501 03:41:27.981647   68640 system_pods.go:61] "metrics-server-569cc877fc-k8jnl" [1dd0fb29-4d90-41c8-9de2-d163eeb0247b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0501 03:41:27.981651   68640 system_pods.go:61] "storage-provisioner" [fc703ab1-f14b-4766-8ee2-a43477d3df21] Running
	I0501 03:41:27.981657   68640 system_pods.go:74] duration metric: took 13.322893ms to wait for pod list to return data ...
	I0501 03:41:27.981667   68640 node_conditions.go:102] verifying NodePressure condition ...
	I0501 03:41:27.985896   68640 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 03:41:27.985931   68640 node_conditions.go:123] node cpu capacity is 2
	I0501 03:41:27.985944   68640 node_conditions.go:105] duration metric: took 4.271726ms to run NodePressure ...
	I0501 03:41:27.985966   68640 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:41:28.269675   68640 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0501 03:41:28.276487   68640 kubeadm.go:733] kubelet initialised
	I0501 03:41:28.276512   68640 kubeadm.go:734] duration metric: took 6.808875ms waiting for restarted kubelet to initialise ...
	I0501 03:41:28.276522   68640 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 03:41:28.287109   68640 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-v8bqq" in "kube-system" namespace to be "Ready" ...
	I0501 03:41:28.297143   68640 pod_ready.go:97] node "no-preload-892672" hosting pod "coredns-7db6d8ff4d-v8bqq" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-892672" has status "Ready":"False"
	I0501 03:41:28.297185   68640 pod_ready.go:81] duration metric: took 10.040841ms for pod "coredns-7db6d8ff4d-v8bqq" in "kube-system" namespace to be "Ready" ...
	E0501 03:41:28.297198   68640 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-892672" hosting pod "coredns-7db6d8ff4d-v8bqq" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-892672" has status "Ready":"False"
	I0501 03:41:28.297206   68640 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	I0501 03:41:28.307648   68640 pod_ready.go:97] node "no-preload-892672" hosting pod "etcd-no-preload-892672" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-892672" has status "Ready":"False"
	I0501 03:41:28.307682   68640 pod_ready.go:81] duration metric: took 10.464199ms for pod "etcd-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	E0501 03:41:28.307695   68640 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-892672" hosting pod "etcd-no-preload-892672" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-892672" has status "Ready":"False"
	I0501 03:41:28.307707   68640 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	I0501 03:41:30.319652   68640 pod_ready.go:102] pod "kube-apiserver-no-preload-892672" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:32.821375   68640 pod_ready.go:102] pod "kube-apiserver-no-preload-892672" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:29.657167   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:32.157549   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:32.012009   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:34.511543   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:31.679423   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:32.179628   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:32.679456   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:33.179336   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:33.679221   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:34.178900   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:34.679236   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:35.179595   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:35.679520   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:36.179639   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:35.317202   68640 pod_ready.go:102] pod "kube-apiserver-no-preload-892672" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:37.318125   68640 pod_ready.go:92] pod "kube-apiserver-no-preload-892672" in "kube-system" namespace has status "Ready":"True"
	I0501 03:41:37.318157   68640 pod_ready.go:81] duration metric: took 9.010440772s for pod "kube-apiserver-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	I0501 03:41:37.318170   68640 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	I0501 03:41:37.327390   68640 pod_ready.go:92] pod "kube-controller-manager-no-preload-892672" in "kube-system" namespace has status "Ready":"True"
	I0501 03:41:37.327412   68640 pod_ready.go:81] duration metric: took 9.233689ms for pod "kube-controller-manager-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	I0501 03:41:37.327425   68640 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-dwvdl" in "kube-system" namespace to be "Ready" ...
	I0501 03:41:37.333971   68640 pod_ready.go:92] pod "kube-proxy-dwvdl" in "kube-system" namespace has status "Ready":"True"
	I0501 03:41:37.333994   68640 pod_ready.go:81] duration metric: took 6.561014ms for pod "kube-proxy-dwvdl" in "kube-system" namespace to be "Ready" ...
	I0501 03:41:37.334006   68640 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	I0501 03:41:37.338637   68640 pod_ready.go:92] pod "kube-scheduler-no-preload-892672" in "kube-system" namespace has status "Ready":"True"
	I0501 03:41:37.338657   68640 pod_ready.go:81] duration metric: took 4.644395ms for pod "kube-scheduler-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	I0501 03:41:37.338665   68640 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace to be "Ready" ...
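
The pod_ready lines above poll each system-critical pod every couple of seconds until its PodReady condition reports True (or the 4m0s budget runs out, as happens later for the metrics-server pods). A minimal client-go sketch of that readiness check follows; the kubeconfig path and pod name are placeholders, and minikube's own helper additionally tolerates NotFound pods and not-Ready nodes, which this sketch omits.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the named kube-system pod has a PodReady
// condition with status True, the same signal the pod_ready log lines key on.
func podReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		ok, err := podReady(context.Background(), cs, "kube-apiserver-no-preload-892672")
		fmt.Println("ready:", ok, "err:", err)
		if ok {
			return
		}
		time.Sleep(2 * time.Second)
	}
}
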
	I0501 03:41:34.657958   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:36.658191   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:36.512234   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:39.012636   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:36.678883   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:37.179198   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:37.679101   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:38.179088   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:38.679354   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:39.179163   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:39.678809   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:40.179768   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:40.679046   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:41.179618   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:39.346054   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:41.346434   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:39.157142   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:41.656902   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:41.510939   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:43.511571   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:45.511959   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:41.679751   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:42.178848   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:42.679525   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:43.179706   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:43.679665   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:44.179053   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:44.679615   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:45.178830   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:45.679547   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:41:45.679620   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:41:45.718568   69580 cri.go:89] found id: ""
	I0501 03:41:45.718597   69580 logs.go:276] 0 containers: []
	W0501 03:41:45.718611   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:41:45.718619   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:41:45.718678   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:41:45.755572   69580 cri.go:89] found id: ""
	I0501 03:41:45.755596   69580 logs.go:276] 0 containers: []
	W0501 03:41:45.755604   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:41:45.755609   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:41:45.755654   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:41:45.793411   69580 cri.go:89] found id: ""
	I0501 03:41:45.793440   69580 logs.go:276] 0 containers: []
	W0501 03:41:45.793450   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:41:45.793458   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:41:45.793526   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:41:45.834547   69580 cri.go:89] found id: ""
	I0501 03:41:45.834572   69580 logs.go:276] 0 containers: []
	W0501 03:41:45.834579   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:41:45.834585   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:41:45.834668   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:41:45.873293   69580 cri.go:89] found id: ""
	I0501 03:41:45.873321   69580 logs.go:276] 0 containers: []
	W0501 03:41:45.873332   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:41:45.873348   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:41:45.873411   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:41:45.911703   69580 cri.go:89] found id: ""
	I0501 03:41:45.911734   69580 logs.go:276] 0 containers: []
	W0501 03:41:45.911745   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:41:45.911766   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:41:45.911826   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:41:45.949577   69580 cri.go:89] found id: ""
	I0501 03:41:45.949602   69580 logs.go:276] 0 containers: []
	W0501 03:41:45.949610   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:41:45.949616   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:41:45.949666   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:41:45.986174   69580 cri.go:89] found id: ""
	I0501 03:41:45.986199   69580 logs.go:276] 0 containers: []
	W0501 03:41:45.986207   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
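
Before falling back to journal logs, the harness asks CRI-O (via crictl) whether any control-plane containers exist at all, one component name at a time; every query above returns an empty list. A short sketch of that loop is below; the component names and crictl flags are taken from the log, and it requires crictl plus a reachable CRI endpoint.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// For each control-plane component, list all containers in any state whose
// name matches, and report how many IDs come back (zero in the log above).
func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		ids := strings.Fields(string(out))
		fmt.Printf("%s: %d containers (err=%v)\n", name, len(ids), err)
	}
}
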
	I0501 03:41:45.986216   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:41:45.986228   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:41:46.041028   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:41:46.041064   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:41:46.057097   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:41:46.057126   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:41:46.195021   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:41:46.195042   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:41:46.195055   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:41:46.261153   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:41:46.261197   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
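
With no containers to inspect, the harness gathers host-level diagnostics instead: the kubelet and CRI-O journals, dmesg, `kubectl describe nodes` (which fails here with connection refused on localhost:8443), and a final container-status listing. The sketch below simply runs those same commands in order through bash; the command strings are copied verbatim from the log lines above.

package main

import (
	"fmt"
	"os/exec"
)

// Run the diagnostic commands gathered in the log above, in order. Each is
// executed through bash so the pipelines and backticks behave as shown.
func main() {
	cmds := []string{
		`sudo journalctl -u kubelet -n 400`,
		`sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`,
		`sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig`,
		`sudo journalctl -u crio -n 400`,
		"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	for _, c := range cmds {
		out, err := exec.Command("/bin/bash", "-c", c).CombinedOutput()
		fmt.Printf("$ %s\nerr=%v\n%s\n", c, err, out)
	}
}
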
	I0501 03:41:43.845096   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:45.845950   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:47.849620   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:44.157041   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:46.158028   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:48.658062   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:48.011975   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:50.512345   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:48.809274   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:48.824295   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:41:48.824369   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:41:48.869945   69580 cri.go:89] found id: ""
	I0501 03:41:48.869975   69580 logs.go:276] 0 containers: []
	W0501 03:41:48.869985   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:41:48.869993   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:41:48.870053   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:41:48.918088   69580 cri.go:89] found id: ""
	I0501 03:41:48.918113   69580 logs.go:276] 0 containers: []
	W0501 03:41:48.918122   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:41:48.918131   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:41:48.918190   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:41:48.958102   69580 cri.go:89] found id: ""
	I0501 03:41:48.958132   69580 logs.go:276] 0 containers: []
	W0501 03:41:48.958143   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:41:48.958149   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:41:48.958207   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:41:48.997163   69580 cri.go:89] found id: ""
	I0501 03:41:48.997194   69580 logs.go:276] 0 containers: []
	W0501 03:41:48.997211   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:41:48.997218   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:41:48.997284   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:41:49.040132   69580 cri.go:89] found id: ""
	I0501 03:41:49.040156   69580 logs.go:276] 0 containers: []
	W0501 03:41:49.040164   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:41:49.040170   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:41:49.040228   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:41:49.079680   69580 cri.go:89] found id: ""
	I0501 03:41:49.079712   69580 logs.go:276] 0 containers: []
	W0501 03:41:49.079724   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:41:49.079732   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:41:49.079790   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:41:49.120577   69580 cri.go:89] found id: ""
	I0501 03:41:49.120610   69580 logs.go:276] 0 containers: []
	W0501 03:41:49.120623   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:41:49.120630   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:41:49.120700   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:41:49.167098   69580 cri.go:89] found id: ""
	I0501 03:41:49.167123   69580 logs.go:276] 0 containers: []
	W0501 03:41:49.167133   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:41:49.167141   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:41:49.167152   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:41:49.242834   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:41:49.242868   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:41:49.264011   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:41:49.264033   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:41:49.367711   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:41:49.367739   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:41:49.367764   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:41:49.441925   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:41:49.441964   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:41:50.346009   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:52.346333   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:51.156287   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:53.657588   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:53.010720   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:55.012329   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:51.986536   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:52.001651   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:41:52.001734   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:41:52.039550   69580 cri.go:89] found id: ""
	I0501 03:41:52.039571   69580 logs.go:276] 0 containers: []
	W0501 03:41:52.039579   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:41:52.039584   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:41:52.039636   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:41:52.082870   69580 cri.go:89] found id: ""
	I0501 03:41:52.082892   69580 logs.go:276] 0 containers: []
	W0501 03:41:52.082900   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:41:52.082905   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:41:52.082949   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:41:52.126970   69580 cri.go:89] found id: ""
	I0501 03:41:52.126996   69580 logs.go:276] 0 containers: []
	W0501 03:41:52.127009   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:41:52.127014   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:41:52.127076   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:41:52.169735   69580 cri.go:89] found id: ""
	I0501 03:41:52.169761   69580 logs.go:276] 0 containers: []
	W0501 03:41:52.169769   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:41:52.169774   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:41:52.169826   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:41:52.207356   69580 cri.go:89] found id: ""
	I0501 03:41:52.207392   69580 logs.go:276] 0 containers: []
	W0501 03:41:52.207404   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:41:52.207412   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:41:52.207472   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:41:52.250074   69580 cri.go:89] found id: ""
	I0501 03:41:52.250102   69580 logs.go:276] 0 containers: []
	W0501 03:41:52.250113   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:41:52.250121   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:41:52.250180   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:41:52.290525   69580 cri.go:89] found id: ""
	I0501 03:41:52.290550   69580 logs.go:276] 0 containers: []
	W0501 03:41:52.290558   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:41:52.290564   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:41:52.290610   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:41:52.336058   69580 cri.go:89] found id: ""
	I0501 03:41:52.336084   69580 logs.go:276] 0 containers: []
	W0501 03:41:52.336092   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:41:52.336103   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:41:52.336118   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:41:52.392738   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:41:52.392773   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:41:52.408475   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:41:52.408503   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:41:52.493567   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:41:52.493594   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:41:52.493608   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:41:52.566550   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:41:52.566583   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
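
The cycle above is minikube's wait loop on this node: it probes each control-plane component with `sudo crictl ps -a --quiet --name=<component>`, finds no containers, and then falls back to gathering the kubelet, dmesg and CRI-O journals plus a `kubectl describe nodes` that fails because nothing answers on localhost:8443. The sketch below is a hypothetical stand-alone Go program, not minikube's own code, that re-runs the same crictl probes by hand; it assumes crictl is on PATH and that it runs with sudo rights inside the node (for example after `minikube ssh`).

    package main

    // Illustrative sketch only: repeat the CRI probes seen in the log and
    // report whether any control-plane container exists on this node.
    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
        }
        for _, name := range components {
            // Same query minikube logs above: all containers matching the name, IDs only.
            out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
            if err != nil {
                fmt.Printf("%s: crictl failed: %v\n", name, err)
                continue
            }
            ids := strings.Fields(string(out))
            if len(ids) == 0 {
                fmt.Printf("no container found matching %q\n", name)
            } else {
                fmt.Printf("%s: %v\n", name, ids)
            }
        }
    }

An empty list for every component matches the repeated "0 containers" lines above.
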
	I0501 03:41:55.117129   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:55.134840   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:41:55.134918   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:41:55.193990   69580 cri.go:89] found id: ""
	I0501 03:41:55.194019   69580 logs.go:276] 0 containers: []
	W0501 03:41:55.194029   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:41:55.194038   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:41:55.194100   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:41:55.261710   69580 cri.go:89] found id: ""
	I0501 03:41:55.261743   69580 logs.go:276] 0 containers: []
	W0501 03:41:55.261754   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:41:55.261761   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:41:55.261823   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:41:55.302432   69580 cri.go:89] found id: ""
	I0501 03:41:55.302468   69580 logs.go:276] 0 containers: []
	W0501 03:41:55.302480   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:41:55.302488   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:41:55.302550   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:41:55.346029   69580 cri.go:89] found id: ""
	I0501 03:41:55.346058   69580 logs.go:276] 0 containers: []
	W0501 03:41:55.346067   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:41:55.346073   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:41:55.346117   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:41:55.393206   69580 cri.go:89] found id: ""
	I0501 03:41:55.393229   69580 logs.go:276] 0 containers: []
	W0501 03:41:55.393236   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:41:55.393242   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:41:55.393295   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:41:55.437908   69580 cri.go:89] found id: ""
	I0501 03:41:55.437940   69580 logs.go:276] 0 containers: []
	W0501 03:41:55.437952   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:41:55.437960   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:41:55.438020   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:41:55.480439   69580 cri.go:89] found id: ""
	I0501 03:41:55.480468   69580 logs.go:276] 0 containers: []
	W0501 03:41:55.480480   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:41:55.480488   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:41:55.480589   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:41:55.524782   69580 cri.go:89] found id: ""
	I0501 03:41:55.524811   69580 logs.go:276] 0 containers: []
	W0501 03:41:55.524819   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:41:55.524828   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:41:55.524840   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:41:55.604337   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:41:55.604373   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:41:55.649427   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:41:55.649455   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:41:55.707928   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:41:55.707976   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:41:55.723289   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:41:55.723316   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:41:55.805146   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:41:54.347203   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:56.847806   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:55.658387   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:58.156886   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:57.511280   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:59.511460   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
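
The interleaved pod_ready lines come from the parallel StartStop runs (processes 68640, 68864 and 69237), each polling whether its metrics-server pod reports Ready; the log shows the condition staying "False". Below is a hypothetical manual check of the same condition, not the test's own code: the pod name is copied from the log, and it assumes kubectl is already pointed at the matching cluster (the --context flag used elsewhere in this report is omitted).

    package main

    // Illustrative sketch only: read the Ready condition of one of the
    // metrics-server pods named in the log via kubectl's jsonpath output.
    import (
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command(
            "kubectl", "-n", "kube-system", "get", "pod",
            "metrics-server-569cc877fc-k8jnl",
            "-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`,
        ).CombinedOutput()
        if err != nil {
            fmt.Printf("kubectl failed: %v\n%s\n", err, out)
            return
        }
        fmt.Printf("Ready=%s\n", out) // the report shows this staying "False"
    }
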
	I0501 03:41:58.306145   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:58.322207   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:41:58.322280   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:41:58.370291   69580 cri.go:89] found id: ""
	I0501 03:41:58.370319   69580 logs.go:276] 0 containers: []
	W0501 03:41:58.370331   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:41:58.370338   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:41:58.370417   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:41:58.421230   69580 cri.go:89] found id: ""
	I0501 03:41:58.421256   69580 logs.go:276] 0 containers: []
	W0501 03:41:58.421264   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:41:58.421270   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:41:58.421317   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:41:58.463694   69580 cri.go:89] found id: ""
	I0501 03:41:58.463724   69580 logs.go:276] 0 containers: []
	W0501 03:41:58.463735   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:41:58.463743   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:41:58.463797   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:41:58.507756   69580 cri.go:89] found id: ""
	I0501 03:41:58.507785   69580 logs.go:276] 0 containers: []
	W0501 03:41:58.507791   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:41:58.507797   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:41:58.507870   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:41:58.554852   69580 cri.go:89] found id: ""
	I0501 03:41:58.554884   69580 logs.go:276] 0 containers: []
	W0501 03:41:58.554895   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:41:58.554903   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:41:58.554969   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:41:58.602467   69580 cri.go:89] found id: ""
	I0501 03:41:58.602495   69580 logs.go:276] 0 containers: []
	W0501 03:41:58.602505   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:41:58.602511   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:41:58.602561   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:41:58.652718   69580 cri.go:89] found id: ""
	I0501 03:41:58.652749   69580 logs.go:276] 0 containers: []
	W0501 03:41:58.652759   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:41:58.652766   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:41:58.652837   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:41:58.694351   69580 cri.go:89] found id: ""
	I0501 03:41:58.694377   69580 logs.go:276] 0 containers: []
	W0501 03:41:58.694385   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:41:58.694393   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:41:58.694434   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:41:58.779878   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:41:58.779911   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:41:58.826733   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:41:58.826768   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:41:58.883808   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:41:58.883842   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:41:58.900463   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:41:58.900495   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:41:58.991346   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:41:59.345807   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:01.846099   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:00.157131   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:02.157204   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:01.511711   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:03.512536   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:01.492396   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:01.508620   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:01.508756   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:01.555669   69580 cri.go:89] found id: ""
	I0501 03:42:01.555696   69580 logs.go:276] 0 containers: []
	W0501 03:42:01.555712   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:01.555720   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:01.555782   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:01.597591   69580 cri.go:89] found id: ""
	I0501 03:42:01.597615   69580 logs.go:276] 0 containers: []
	W0501 03:42:01.597626   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:01.597635   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:01.597693   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:01.636259   69580 cri.go:89] found id: ""
	I0501 03:42:01.636286   69580 logs.go:276] 0 containers: []
	W0501 03:42:01.636297   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:01.636305   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:01.636361   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:01.684531   69580 cri.go:89] found id: ""
	I0501 03:42:01.684562   69580 logs.go:276] 0 containers: []
	W0501 03:42:01.684572   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:01.684579   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:01.684647   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:01.725591   69580 cri.go:89] found id: ""
	I0501 03:42:01.725621   69580 logs.go:276] 0 containers: []
	W0501 03:42:01.725628   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:01.725652   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:01.725718   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:01.767868   69580 cri.go:89] found id: ""
	I0501 03:42:01.767901   69580 logs.go:276] 0 containers: []
	W0501 03:42:01.767910   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:01.767917   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:01.767977   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:01.817590   69580 cri.go:89] found id: ""
	I0501 03:42:01.817618   69580 logs.go:276] 0 containers: []
	W0501 03:42:01.817629   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:01.817637   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:01.817697   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:01.863549   69580 cri.go:89] found id: ""
	I0501 03:42:01.863576   69580 logs.go:276] 0 containers: []
	W0501 03:42:01.863586   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:01.863595   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:01.863607   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:01.879134   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:01.879162   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:01.967015   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:01.967043   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:01.967059   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:02.051576   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:02.051614   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:02.095614   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:02.095644   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:04.652974   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:04.671018   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:04.671103   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:04.712392   69580 cri.go:89] found id: ""
	I0501 03:42:04.712425   69580 logs.go:276] 0 containers: []
	W0501 03:42:04.712435   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:04.712442   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:04.712503   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:04.756854   69580 cri.go:89] found id: ""
	I0501 03:42:04.756881   69580 logs.go:276] 0 containers: []
	W0501 03:42:04.756893   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:04.756900   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:04.756962   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:04.797665   69580 cri.go:89] found id: ""
	I0501 03:42:04.797694   69580 logs.go:276] 0 containers: []
	W0501 03:42:04.797703   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:04.797709   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:04.797756   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:04.838441   69580 cri.go:89] found id: ""
	I0501 03:42:04.838472   69580 logs.go:276] 0 containers: []
	W0501 03:42:04.838483   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:04.838491   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:04.838556   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:04.879905   69580 cri.go:89] found id: ""
	I0501 03:42:04.879935   69580 logs.go:276] 0 containers: []
	W0501 03:42:04.879945   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:04.879952   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:04.880012   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:04.924759   69580 cri.go:89] found id: ""
	I0501 03:42:04.924792   69580 logs.go:276] 0 containers: []
	W0501 03:42:04.924804   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:04.924813   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:04.924879   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:04.965638   69580 cri.go:89] found id: ""
	I0501 03:42:04.965663   69580 logs.go:276] 0 containers: []
	W0501 03:42:04.965670   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:04.965676   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:04.965721   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:05.013127   69580 cri.go:89] found id: ""
	I0501 03:42:05.013153   69580 logs.go:276] 0 containers: []
	W0501 03:42:05.013163   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:05.013173   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:05.013185   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:05.108388   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:05.108409   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:05.108422   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:05.198239   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:05.198281   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:05.241042   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:05.241076   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:05.299017   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:05.299069   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:04.345910   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:06.346830   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:04.657438   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:06.657707   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:06.011511   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:08.016548   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:10.510503   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:07.815458   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:07.832047   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:07.832125   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:07.882950   69580 cri.go:89] found id: ""
	I0501 03:42:07.882985   69580 logs.go:276] 0 containers: []
	W0501 03:42:07.882996   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:07.883002   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:07.883051   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:07.928086   69580 cri.go:89] found id: ""
	I0501 03:42:07.928111   69580 logs.go:276] 0 containers: []
	W0501 03:42:07.928119   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:07.928124   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:07.928177   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:07.976216   69580 cri.go:89] found id: ""
	I0501 03:42:07.976250   69580 logs.go:276] 0 containers: []
	W0501 03:42:07.976268   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:07.976274   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:07.976331   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:08.019903   69580 cri.go:89] found id: ""
	I0501 03:42:08.019932   69580 logs.go:276] 0 containers: []
	W0501 03:42:08.019943   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:08.019951   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:08.020009   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:08.075980   69580 cri.go:89] found id: ""
	I0501 03:42:08.076004   69580 logs.go:276] 0 containers: []
	W0501 03:42:08.076012   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:08.076018   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:08.076065   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:08.114849   69580 cri.go:89] found id: ""
	I0501 03:42:08.114881   69580 logs.go:276] 0 containers: []
	W0501 03:42:08.114891   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:08.114897   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:08.114955   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:08.159427   69580 cri.go:89] found id: ""
	I0501 03:42:08.159457   69580 logs.go:276] 0 containers: []
	W0501 03:42:08.159468   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:08.159476   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:08.159543   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:08.200117   69580 cri.go:89] found id: ""
	I0501 03:42:08.200151   69580 logs.go:276] 0 containers: []
	W0501 03:42:08.200163   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:08.200182   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:08.200197   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:08.281926   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:08.281972   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:08.331393   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:08.331429   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:08.386758   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:08.386793   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:08.402551   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:08.402581   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:08.489678   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
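
Every describe-nodes attempt above fails with a refused connection to localhost:8443, the address where the kube-apiserver would normally be listening on this node. A minimal sketch for confirming that nothing is serving that port follows; it assumes it is run on the node itself, and the address and port are taken from the error text rather than from any minikube configuration.

    package main

    // Illustrative sketch only: check whether anything accepts TCP
    // connections on the apiserver address named in the error above.
    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "127.0.0.1:8443", 2*time.Second)
        if err != nil {
            // Expected while the log keeps reporting "connection refused".
            fmt.Printf("apiserver not reachable: %v\n", err)
            return
        }
        conn.Close()
        fmt.Println("something is listening on 127.0.0.1:8443")
    }
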
	I0501 03:42:10.990653   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:11.007879   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:11.007958   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:11.049842   69580 cri.go:89] found id: ""
	I0501 03:42:11.049867   69580 logs.go:276] 0 containers: []
	W0501 03:42:11.049879   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:11.049885   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:11.049933   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:11.091946   69580 cri.go:89] found id: ""
	I0501 03:42:11.091980   69580 logs.go:276] 0 containers: []
	W0501 03:42:11.091992   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:11.092000   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:11.092079   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:11.140100   69580 cri.go:89] found id: ""
	I0501 03:42:11.140129   69580 logs.go:276] 0 containers: []
	W0501 03:42:11.140138   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:11.140144   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:11.140207   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:11.182796   69580 cri.go:89] found id: ""
	I0501 03:42:11.182821   69580 logs.go:276] 0 containers: []
	W0501 03:42:11.182832   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:11.182838   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:11.182896   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:11.222985   69580 cri.go:89] found id: ""
	I0501 03:42:11.223016   69580 logs.go:276] 0 containers: []
	W0501 03:42:11.223027   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:11.223033   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:11.223114   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:11.265793   69580 cri.go:89] found id: ""
	I0501 03:42:11.265818   69580 logs.go:276] 0 containers: []
	W0501 03:42:11.265830   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:11.265838   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:11.265913   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:11.309886   69580 cri.go:89] found id: ""
	I0501 03:42:11.309912   69580 logs.go:276] 0 containers: []
	W0501 03:42:11.309924   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:11.309931   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:11.309989   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:11.357757   69580 cri.go:89] found id: ""
	I0501 03:42:11.357791   69580 logs.go:276] 0 containers: []
	W0501 03:42:11.357803   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:11.357823   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:11.357839   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:11.412668   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:11.412704   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:11.428380   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:11.428422   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0501 03:42:08.347511   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:10.846691   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:09.156632   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:11.158047   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:13.657603   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:12.512713   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:15.011382   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	W0501 03:42:11.521898   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:11.521924   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:11.521940   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:11.607081   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:11.607116   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:14.153054   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:14.173046   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:14.173150   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:14.219583   69580 cri.go:89] found id: ""
	I0501 03:42:14.219605   69580 logs.go:276] 0 containers: []
	W0501 03:42:14.219613   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:14.219619   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:14.219664   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:14.260316   69580 cri.go:89] found id: ""
	I0501 03:42:14.260349   69580 logs.go:276] 0 containers: []
	W0501 03:42:14.260357   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:14.260366   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:14.260420   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:14.305049   69580 cri.go:89] found id: ""
	I0501 03:42:14.305085   69580 logs.go:276] 0 containers: []
	W0501 03:42:14.305109   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:14.305117   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:14.305198   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:14.359589   69580 cri.go:89] found id: ""
	I0501 03:42:14.359614   69580 logs.go:276] 0 containers: []
	W0501 03:42:14.359622   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:14.359628   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:14.359672   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:14.403867   69580 cri.go:89] found id: ""
	I0501 03:42:14.403895   69580 logs.go:276] 0 containers: []
	W0501 03:42:14.403904   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:14.403910   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:14.403987   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:14.446626   69580 cri.go:89] found id: ""
	I0501 03:42:14.446655   69580 logs.go:276] 0 containers: []
	W0501 03:42:14.446675   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:14.446683   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:14.446754   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:14.490983   69580 cri.go:89] found id: ""
	I0501 03:42:14.491016   69580 logs.go:276] 0 containers: []
	W0501 03:42:14.491028   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:14.491036   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:14.491117   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:14.534180   69580 cri.go:89] found id: ""
	I0501 03:42:14.534205   69580 logs.go:276] 0 containers: []
	W0501 03:42:14.534213   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:14.534221   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:14.534236   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:14.621433   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:14.621491   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:14.680265   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:14.680310   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:14.738943   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:14.738983   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:14.754145   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:14.754176   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:14.839974   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:13.347081   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:15.847072   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:17.847749   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:16.157433   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:18.158120   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:17.017276   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:19.514339   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:17.340948   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:17.360007   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:17.360068   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:17.403201   69580 cri.go:89] found id: ""
	I0501 03:42:17.403231   69580 logs.go:276] 0 containers: []
	W0501 03:42:17.403239   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:17.403245   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:17.403301   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:17.442940   69580 cri.go:89] found id: ""
	I0501 03:42:17.442966   69580 logs.go:276] 0 containers: []
	W0501 03:42:17.442975   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:17.442981   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:17.443038   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:17.487219   69580 cri.go:89] found id: ""
	I0501 03:42:17.487248   69580 logs.go:276] 0 containers: []
	W0501 03:42:17.487259   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:17.487267   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:17.487324   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:17.528551   69580 cri.go:89] found id: ""
	I0501 03:42:17.528583   69580 logs.go:276] 0 containers: []
	W0501 03:42:17.528593   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:17.528601   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:17.528668   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:17.577005   69580 cri.go:89] found id: ""
	I0501 03:42:17.577041   69580 logs.go:276] 0 containers: []
	W0501 03:42:17.577052   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:17.577061   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:17.577132   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:17.618924   69580 cri.go:89] found id: ""
	I0501 03:42:17.618949   69580 logs.go:276] 0 containers: []
	W0501 03:42:17.618957   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:17.618963   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:17.619022   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:17.660487   69580 cri.go:89] found id: ""
	I0501 03:42:17.660514   69580 logs.go:276] 0 containers: []
	W0501 03:42:17.660525   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:17.660532   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:17.660592   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:17.701342   69580 cri.go:89] found id: ""
	I0501 03:42:17.701370   69580 logs.go:276] 0 containers: []
	W0501 03:42:17.701378   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:17.701387   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:17.701400   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:17.757034   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:17.757069   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:17.772955   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:17.772984   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:17.888062   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:17.888088   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:17.888101   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:17.969274   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:17.969312   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:20.521053   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:20.536065   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:20.536141   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:20.577937   69580 cri.go:89] found id: ""
	I0501 03:42:20.577967   69580 logs.go:276] 0 containers: []
	W0501 03:42:20.577977   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:20.577986   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:20.578055   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:20.626690   69580 cri.go:89] found id: ""
	I0501 03:42:20.626714   69580 logs.go:276] 0 containers: []
	W0501 03:42:20.626722   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:20.626728   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:20.626809   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:20.670849   69580 cri.go:89] found id: ""
	I0501 03:42:20.670872   69580 logs.go:276] 0 containers: []
	W0501 03:42:20.670881   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:20.670886   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:20.670946   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:20.711481   69580 cri.go:89] found id: ""
	I0501 03:42:20.711511   69580 logs.go:276] 0 containers: []
	W0501 03:42:20.711522   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:20.711531   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:20.711596   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:20.753413   69580 cri.go:89] found id: ""
	I0501 03:42:20.753443   69580 logs.go:276] 0 containers: []
	W0501 03:42:20.753452   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:20.753459   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:20.753536   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:20.791424   69580 cri.go:89] found id: ""
	I0501 03:42:20.791452   69580 logs.go:276] 0 containers: []
	W0501 03:42:20.791461   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:20.791466   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:20.791526   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:20.833718   69580 cri.go:89] found id: ""
	I0501 03:42:20.833740   69580 logs.go:276] 0 containers: []
	W0501 03:42:20.833748   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:20.833752   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:20.833799   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:20.879788   69580 cri.go:89] found id: ""
	I0501 03:42:20.879818   69580 logs.go:276] 0 containers: []
	W0501 03:42:20.879828   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:20.879839   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:20.879855   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:20.895266   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:20.895304   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:20.976429   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:20.976452   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:20.976465   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:21.063573   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:21.063611   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:21.113510   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:21.113543   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:20.346735   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:22.347096   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:20.658642   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:22.659841   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:22.011045   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:24.012756   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:23.672203   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:23.687849   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:23.687946   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:23.731428   69580 cri.go:89] found id: ""
	I0501 03:42:23.731455   69580 logs.go:276] 0 containers: []
	W0501 03:42:23.731467   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:23.731473   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:23.731534   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:23.772219   69580 cri.go:89] found id: ""
	I0501 03:42:23.772248   69580 logs.go:276] 0 containers: []
	W0501 03:42:23.772259   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:23.772266   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:23.772369   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:23.837203   69580 cri.go:89] found id: ""
	I0501 03:42:23.837235   69580 logs.go:276] 0 containers: []
	W0501 03:42:23.837247   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:23.837255   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:23.837317   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:23.884681   69580 cri.go:89] found id: ""
	I0501 03:42:23.884709   69580 logs.go:276] 0 containers: []
	W0501 03:42:23.884716   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:23.884722   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:23.884783   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:23.927544   69580 cri.go:89] found id: ""
	I0501 03:42:23.927576   69580 logs.go:276] 0 containers: []
	W0501 03:42:23.927584   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:23.927590   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:23.927652   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:23.970428   69580 cri.go:89] found id: ""
	I0501 03:42:23.970457   69580 logs.go:276] 0 containers: []
	W0501 03:42:23.970467   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:23.970476   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:23.970541   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:24.010545   69580 cri.go:89] found id: ""
	I0501 03:42:24.010573   69580 logs.go:276] 0 containers: []
	W0501 03:42:24.010583   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:24.010593   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:24.010653   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:24.053547   69580 cri.go:89] found id: ""
	I0501 03:42:24.053574   69580 logs.go:276] 0 containers: []
	W0501 03:42:24.053582   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:24.053591   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:24.053602   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:24.108416   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:24.108452   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:24.124052   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:24.124083   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:24.209024   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:24.209048   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:24.209063   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:24.291644   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:24.291693   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
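
Note: the cycle above shows the start-up poller listing CRI containers by name over SSH (cri.go) and finding none for any control-plane component, which produces the repeated "No container was found matching" warnings. A minimal sketch of that kind of check, assuming crictl is invoked locally rather than through ssh_runner and using a hypothetical listContainers helper:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers returns the IDs printed by `crictl ps -a --quiet --name=<name>`,
// mirroring the checks in the log above; an empty result corresponds to the
// "No container was found matching" warnings.
func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name", name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := listContainers(name)
		fmt.Println(name, ids, err)
	}
}
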
	I0501 03:42:24.846439   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:26.846750   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:25.157009   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:27.657022   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:26.510679   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:28.511049   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:30.511542   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:26.840623   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:26.856231   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:26.856320   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:26.897988   69580 cri.go:89] found id: ""
	I0501 03:42:26.898022   69580 logs.go:276] 0 containers: []
	W0501 03:42:26.898033   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:26.898041   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:26.898109   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:26.937608   69580 cri.go:89] found id: ""
	I0501 03:42:26.937638   69580 logs.go:276] 0 containers: []
	W0501 03:42:26.937660   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:26.937668   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:26.937731   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:26.979799   69580 cri.go:89] found id: ""
	I0501 03:42:26.979836   69580 logs.go:276] 0 containers: []
	W0501 03:42:26.979847   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:26.979854   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:26.979922   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:27.018863   69580 cri.go:89] found id: ""
	I0501 03:42:27.018896   69580 logs.go:276] 0 containers: []
	W0501 03:42:27.018903   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:27.018909   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:27.018959   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:27.057864   69580 cri.go:89] found id: ""
	I0501 03:42:27.057893   69580 logs.go:276] 0 containers: []
	W0501 03:42:27.057904   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:27.057912   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:27.057982   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:27.102909   69580 cri.go:89] found id: ""
	I0501 03:42:27.102939   69580 logs.go:276] 0 containers: []
	W0501 03:42:27.102950   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:27.102958   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:27.103019   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:27.148292   69580 cri.go:89] found id: ""
	I0501 03:42:27.148326   69580 logs.go:276] 0 containers: []
	W0501 03:42:27.148336   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:27.148344   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:27.148407   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:27.197557   69580 cri.go:89] found id: ""
	I0501 03:42:27.197581   69580 logs.go:276] 0 containers: []
	W0501 03:42:27.197588   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:27.197596   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:27.197609   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:27.281768   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:27.281793   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:27.281806   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:27.361496   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:27.361528   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:27.407640   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:27.407675   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:27.472533   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:27.472576   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
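
Note: every "describe nodes" attempt in these cycles fails with "The connection to the server localhost:8443 was refused", i.e. the API server is not listening on the node. A minimal sketch, assuming it is run on the node itself, for confirming that the apiserver port is closed rather than merely slow:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// localhost:8443 is the endpoint the failing kubectl calls above are using.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// A "connection refused" here matches the describe-nodes failures in the log.
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is open")
}
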
	I0501 03:42:29.987773   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:30.003511   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:30.003619   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:30.049330   69580 cri.go:89] found id: ""
	I0501 03:42:30.049363   69580 logs.go:276] 0 containers: []
	W0501 03:42:30.049377   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:30.049384   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:30.049439   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:30.088521   69580 cri.go:89] found id: ""
	I0501 03:42:30.088549   69580 logs.go:276] 0 containers: []
	W0501 03:42:30.088560   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:30.088568   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:30.088624   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:30.132731   69580 cri.go:89] found id: ""
	I0501 03:42:30.132765   69580 logs.go:276] 0 containers: []
	W0501 03:42:30.132777   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:30.132784   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:30.132847   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:30.178601   69580 cri.go:89] found id: ""
	I0501 03:42:30.178639   69580 logs.go:276] 0 containers: []
	W0501 03:42:30.178648   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:30.178656   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:30.178714   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:30.230523   69580 cri.go:89] found id: ""
	I0501 03:42:30.230549   69580 logs.go:276] 0 containers: []
	W0501 03:42:30.230561   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:30.230569   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:30.230632   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:30.289234   69580 cri.go:89] found id: ""
	I0501 03:42:30.289262   69580 logs.go:276] 0 containers: []
	W0501 03:42:30.289270   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:30.289277   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:30.289342   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:30.332596   69580 cri.go:89] found id: ""
	I0501 03:42:30.332627   69580 logs.go:276] 0 containers: []
	W0501 03:42:30.332637   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:30.332644   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:30.332710   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:30.383871   69580 cri.go:89] found id: ""
	I0501 03:42:30.383901   69580 logs.go:276] 0 containers: []
	W0501 03:42:30.383908   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:30.383917   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:30.383929   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:30.464382   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:30.464404   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:30.464417   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:30.550604   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:30.550637   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:30.594927   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:30.594959   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:30.648392   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:30.648426   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:28.847271   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:31.345865   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:29.657316   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:31.657435   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:32.511887   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:35.011677   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
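
Note: the interleaved pod_ready.go lines come from the other StartStop tests running in parallel, each polling whether its metrics-server pod has reached the Ready condition. A rough client-go sketch of that readiness check (hypothetical helper; the kubeconfig path and pod name are assumptions for illustration):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the named pod has condition Ready=True;
// the tests above wrap a check like this in a poll loop with a timeout.
func isPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	// Kubeconfig path is an assumption for the sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ready, err := isPodReady(context.Background(), cs, "kube-system", "metrics-server-569cc877fc-2btjj")
	fmt.Println(ready, err)
}
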
	I0501 03:42:33.167591   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:33.183804   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:33.183874   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:33.223501   69580 cri.go:89] found id: ""
	I0501 03:42:33.223525   69580 logs.go:276] 0 containers: []
	W0501 03:42:33.223532   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:33.223539   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:33.223600   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:33.268674   69580 cri.go:89] found id: ""
	I0501 03:42:33.268705   69580 logs.go:276] 0 containers: []
	W0501 03:42:33.268741   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:33.268749   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:33.268807   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:33.310613   69580 cri.go:89] found id: ""
	I0501 03:42:33.310655   69580 logs.go:276] 0 containers: []
	W0501 03:42:33.310666   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:33.310674   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:33.310737   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:33.353156   69580 cri.go:89] found id: ""
	I0501 03:42:33.353177   69580 logs.go:276] 0 containers: []
	W0501 03:42:33.353184   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:33.353189   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:33.353237   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:33.389702   69580 cri.go:89] found id: ""
	I0501 03:42:33.389730   69580 logs.go:276] 0 containers: []
	W0501 03:42:33.389743   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:33.389751   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:33.389817   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:33.431244   69580 cri.go:89] found id: ""
	I0501 03:42:33.431275   69580 logs.go:276] 0 containers: []
	W0501 03:42:33.431290   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:33.431298   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:33.431384   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:33.472382   69580 cri.go:89] found id: ""
	I0501 03:42:33.472412   69580 logs.go:276] 0 containers: []
	W0501 03:42:33.472423   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:33.472431   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:33.472519   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:33.517042   69580 cri.go:89] found id: ""
	I0501 03:42:33.517064   69580 logs.go:276] 0 containers: []
	W0501 03:42:33.517071   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:33.517079   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:33.517091   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:33.573343   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:33.573372   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:33.588932   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:33.588963   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:33.674060   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:33.674090   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:33.674106   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:33.756635   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:33.756684   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:36.300909   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:36.320407   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:36.320474   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:36.367236   69580 cri.go:89] found id: ""
	I0501 03:42:36.367261   69580 logs.go:276] 0 containers: []
	W0501 03:42:36.367269   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:36.367274   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:36.367335   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:36.406440   69580 cri.go:89] found id: ""
	I0501 03:42:36.406471   69580 logs.go:276] 0 containers: []
	W0501 03:42:36.406482   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:36.406489   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:36.406552   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:36.443931   69580 cri.go:89] found id: ""
	I0501 03:42:36.443957   69580 logs.go:276] 0 containers: []
	W0501 03:42:36.443964   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:36.443969   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:36.444024   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:33.844832   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:35.845476   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:37.846291   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:34.156976   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:36.657001   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:38.657056   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:37.510534   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:39.511335   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:36.486169   69580 cri.go:89] found id: ""
	I0501 03:42:36.486200   69580 logs.go:276] 0 containers: []
	W0501 03:42:36.486213   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:36.486220   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:36.486276   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:36.532211   69580 cri.go:89] found id: ""
	I0501 03:42:36.532237   69580 logs.go:276] 0 containers: []
	W0501 03:42:36.532246   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:36.532251   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:36.532311   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:36.571889   69580 cri.go:89] found id: ""
	I0501 03:42:36.571921   69580 logs.go:276] 0 containers: []
	W0501 03:42:36.571933   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:36.571940   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:36.572000   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:36.612126   69580 cri.go:89] found id: ""
	I0501 03:42:36.612159   69580 logs.go:276] 0 containers: []
	W0501 03:42:36.612170   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:36.612177   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:36.612238   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:36.654067   69580 cri.go:89] found id: ""
	I0501 03:42:36.654096   69580 logs.go:276] 0 containers: []
	W0501 03:42:36.654106   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:36.654117   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:36.654129   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:36.740205   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:36.740226   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:36.740237   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:36.821403   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:36.821437   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:36.874829   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:36.874867   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:36.928312   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:36.928342   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:39.444598   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:39.460086   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:39.460151   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:39.500833   69580 cri.go:89] found id: ""
	I0501 03:42:39.500859   69580 logs.go:276] 0 containers: []
	W0501 03:42:39.500870   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:39.500879   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:39.500936   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:39.544212   69580 cri.go:89] found id: ""
	I0501 03:42:39.544238   69580 logs.go:276] 0 containers: []
	W0501 03:42:39.544248   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:39.544260   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:39.544326   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:39.582167   69580 cri.go:89] found id: ""
	I0501 03:42:39.582200   69580 logs.go:276] 0 containers: []
	W0501 03:42:39.582218   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:39.582231   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:39.582296   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:39.624811   69580 cri.go:89] found id: ""
	I0501 03:42:39.624837   69580 logs.go:276] 0 containers: []
	W0501 03:42:39.624848   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:39.624855   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:39.624913   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:39.666001   69580 cri.go:89] found id: ""
	I0501 03:42:39.666030   69580 logs.go:276] 0 containers: []
	W0501 03:42:39.666041   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:39.666048   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:39.666111   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:39.708790   69580 cri.go:89] found id: ""
	I0501 03:42:39.708820   69580 logs.go:276] 0 containers: []
	W0501 03:42:39.708831   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:39.708839   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:39.708896   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:39.750585   69580 cri.go:89] found id: ""
	I0501 03:42:39.750609   69580 logs.go:276] 0 containers: []
	W0501 03:42:39.750617   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:39.750622   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:39.750670   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:39.798576   69580 cri.go:89] found id: ""
	I0501 03:42:39.798612   69580 logs.go:276] 0 containers: []
	W0501 03:42:39.798624   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:39.798636   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:39.798651   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:39.891759   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:39.891782   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:39.891797   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:39.974419   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:39.974462   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:40.020700   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:40.020728   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:40.073946   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:40.073980   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:40.345975   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:42.350579   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:40.657403   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:42.658271   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:41.511780   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:43.512428   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:42.590933   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:42.606044   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:42.606120   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:42.653074   69580 cri.go:89] found id: ""
	I0501 03:42:42.653104   69580 logs.go:276] 0 containers: []
	W0501 03:42:42.653115   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:42.653123   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:42.653195   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:42.693770   69580 cri.go:89] found id: ""
	I0501 03:42:42.693809   69580 logs.go:276] 0 containers: []
	W0501 03:42:42.693821   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:42.693829   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:42.693885   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:42.739087   69580 cri.go:89] found id: ""
	I0501 03:42:42.739115   69580 logs.go:276] 0 containers: []
	W0501 03:42:42.739125   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:42.739133   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:42.739196   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:42.779831   69580 cri.go:89] found id: ""
	I0501 03:42:42.779863   69580 logs.go:276] 0 containers: []
	W0501 03:42:42.779876   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:42.779885   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:42.779950   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:42.826759   69580 cri.go:89] found id: ""
	I0501 03:42:42.826791   69580 logs.go:276] 0 containers: []
	W0501 03:42:42.826799   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:42.826804   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:42.826854   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:42.872602   69580 cri.go:89] found id: ""
	I0501 03:42:42.872629   69580 logs.go:276] 0 containers: []
	W0501 03:42:42.872640   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:42.872648   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:42.872707   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:42.913833   69580 cri.go:89] found id: ""
	I0501 03:42:42.913862   69580 logs.go:276] 0 containers: []
	W0501 03:42:42.913872   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:42.913879   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:42.913936   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:42.953629   69580 cri.go:89] found id: ""
	I0501 03:42:42.953657   69580 logs.go:276] 0 containers: []
	W0501 03:42:42.953667   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:42.953679   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:42.953695   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:42.968420   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:42.968447   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:43.046840   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:43.046874   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:43.046898   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:43.135453   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:43.135492   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:43.184103   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:43.184141   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:45.738246   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:45.753193   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:45.753258   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:45.791191   69580 cri.go:89] found id: ""
	I0501 03:42:45.791216   69580 logs.go:276] 0 containers: []
	W0501 03:42:45.791224   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:45.791236   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:45.791285   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:45.831935   69580 cri.go:89] found id: ""
	I0501 03:42:45.831967   69580 logs.go:276] 0 containers: []
	W0501 03:42:45.831978   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:45.831986   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:45.832041   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:45.869492   69580 cri.go:89] found id: ""
	I0501 03:42:45.869517   69580 logs.go:276] 0 containers: []
	W0501 03:42:45.869529   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:45.869536   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:45.869593   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:45.910642   69580 cri.go:89] found id: ""
	I0501 03:42:45.910672   69580 logs.go:276] 0 containers: []
	W0501 03:42:45.910682   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:45.910691   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:45.910754   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:45.951489   69580 cri.go:89] found id: ""
	I0501 03:42:45.951518   69580 logs.go:276] 0 containers: []
	W0501 03:42:45.951528   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:45.951535   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:45.951582   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:45.991388   69580 cri.go:89] found id: ""
	I0501 03:42:45.991410   69580 logs.go:276] 0 containers: []
	W0501 03:42:45.991418   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:45.991423   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:45.991467   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:46.036524   69580 cri.go:89] found id: ""
	I0501 03:42:46.036546   69580 logs.go:276] 0 containers: []
	W0501 03:42:46.036553   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:46.036560   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:46.036622   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:46.087472   69580 cri.go:89] found id: ""
	I0501 03:42:46.087495   69580 logs.go:276] 0 containers: []
	W0501 03:42:46.087504   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:46.087513   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:46.087526   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:46.101283   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:46.101314   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:46.176459   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:46.176491   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:46.176506   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:46.261921   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:46.261956   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:46.309879   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:46.309910   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:44.846042   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:47.349023   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:44.658318   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:47.155780   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:46.011347   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:48.511156   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:50.512175   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:48.867064   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:48.884082   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:48.884192   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:48.929681   69580 cri.go:89] found id: ""
	I0501 03:42:48.929708   69580 logs.go:276] 0 containers: []
	W0501 03:42:48.929716   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:48.929722   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:48.929789   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:48.977850   69580 cri.go:89] found id: ""
	I0501 03:42:48.977882   69580 logs.go:276] 0 containers: []
	W0501 03:42:48.977894   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:48.977901   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:48.977962   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:49.022590   69580 cri.go:89] found id: ""
	I0501 03:42:49.022619   69580 logs.go:276] 0 containers: []
	W0501 03:42:49.022629   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:49.022637   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:49.022706   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:49.064092   69580 cri.go:89] found id: ""
	I0501 03:42:49.064122   69580 logs.go:276] 0 containers: []
	W0501 03:42:49.064143   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:49.064152   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:49.064220   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:49.103962   69580 cri.go:89] found id: ""
	I0501 03:42:49.103990   69580 logs.go:276] 0 containers: []
	W0501 03:42:49.104002   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:49.104009   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:49.104070   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:49.144566   69580 cri.go:89] found id: ""
	I0501 03:42:49.144596   69580 logs.go:276] 0 containers: []
	W0501 03:42:49.144604   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:49.144610   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:49.144669   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:49.183110   69580 cri.go:89] found id: ""
	I0501 03:42:49.183141   69580 logs.go:276] 0 containers: []
	W0501 03:42:49.183161   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:49.183166   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:49.183239   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:49.225865   69580 cri.go:89] found id: ""
	I0501 03:42:49.225890   69580 logs.go:276] 0 containers: []
	W0501 03:42:49.225902   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:49.225912   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:49.225926   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:49.312967   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:49.313005   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:49.361171   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:49.361206   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:49.418731   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:49.418780   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:49.436976   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:49.437007   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:49.517994   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:49.848517   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:52.346908   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:49.160713   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:51.656444   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:53.659040   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:53.011092   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:55.011811   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:52.018675   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:52.033946   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:52.034022   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:52.081433   69580 cri.go:89] found id: ""
	I0501 03:42:52.081465   69580 logs.go:276] 0 containers: []
	W0501 03:42:52.081477   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:52.081485   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:52.081544   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:52.123914   69580 cri.go:89] found id: ""
	I0501 03:42:52.123947   69580 logs.go:276] 0 containers: []
	W0501 03:42:52.123958   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:52.123966   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:52.124023   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:52.164000   69580 cri.go:89] found id: ""
	I0501 03:42:52.164020   69580 logs.go:276] 0 containers: []
	W0501 03:42:52.164027   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:52.164033   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:52.164086   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:52.205984   69580 cri.go:89] found id: ""
	I0501 03:42:52.206011   69580 logs.go:276] 0 containers: []
	W0501 03:42:52.206023   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:52.206031   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:52.206096   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:52.252743   69580 cri.go:89] found id: ""
	I0501 03:42:52.252766   69580 logs.go:276] 0 containers: []
	W0501 03:42:52.252774   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:52.252779   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:52.252839   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:52.296814   69580 cri.go:89] found id: ""
	I0501 03:42:52.296838   69580 logs.go:276] 0 containers: []
	W0501 03:42:52.296856   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:52.296864   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:52.296928   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:52.335996   69580 cri.go:89] found id: ""
	I0501 03:42:52.336023   69580 logs.go:276] 0 containers: []
	W0501 03:42:52.336034   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:52.336042   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:52.336105   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:52.377470   69580 cri.go:89] found id: ""
	I0501 03:42:52.377498   69580 logs.go:276] 0 containers: []
	W0501 03:42:52.377513   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:52.377524   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:52.377540   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:52.432644   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:52.432680   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:52.447518   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:52.447552   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:52.530967   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:52.530992   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:52.531005   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:52.612280   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:52.612327   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:55.170134   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:55.185252   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:55.185328   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:55.227741   69580 cri.go:89] found id: ""
	I0501 03:42:55.227764   69580 logs.go:276] 0 containers: []
	W0501 03:42:55.227771   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:55.227777   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:55.227820   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:55.270796   69580 cri.go:89] found id: ""
	I0501 03:42:55.270823   69580 logs.go:276] 0 containers: []
	W0501 03:42:55.270834   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:55.270840   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:55.270898   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:55.312146   69580 cri.go:89] found id: ""
	I0501 03:42:55.312171   69580 logs.go:276] 0 containers: []
	W0501 03:42:55.312180   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:55.312190   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:55.312236   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:55.354410   69580 cri.go:89] found id: ""
	I0501 03:42:55.354436   69580 logs.go:276] 0 containers: []
	W0501 03:42:55.354445   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:55.354450   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:55.354509   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:55.393550   69580 cri.go:89] found id: ""
	I0501 03:42:55.393580   69580 logs.go:276] 0 containers: []
	W0501 03:42:55.393589   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:55.393594   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:55.393651   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:55.431468   69580 cri.go:89] found id: ""
	I0501 03:42:55.431497   69580 logs.go:276] 0 containers: []
	W0501 03:42:55.431507   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:55.431514   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:55.431566   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:55.470491   69580 cri.go:89] found id: ""
	I0501 03:42:55.470513   69580 logs.go:276] 0 containers: []
	W0501 03:42:55.470520   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:55.470526   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:55.470571   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:55.509849   69580 cri.go:89] found id: ""
	I0501 03:42:55.509875   69580 logs.go:276] 0 containers: []
	W0501 03:42:55.509885   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:55.509894   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:55.509909   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:55.566680   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:55.566762   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:55.584392   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:55.584423   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:55.663090   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:55.663116   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:55.663131   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:55.741459   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:55.741494   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
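Each of these gathering passes runs the same CRI queries, and the repeated `found id: ""` / `0 containers: []` pairs above mean crictl returned no container IDs for any control-plane name. A minimal way to reproduce that check by hand on the node (the profile name is a placeholder, not taken from this log):

	$ minikube ssh -p <profile>                              # <profile> is illustrative
	$ sudo crictl ps -a --quiet --name=kube-apiserver        # same command as in the log
	# empty output here is exactly what the log records as: found id: "" / 0 containers: []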
	I0501 03:42:54.846549   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:56.848989   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:56.156918   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:58.157016   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:57.012980   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:59.513719   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:58.294435   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:58.310204   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:58.310267   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:58.350292   69580 cri.go:89] found id: ""
	I0501 03:42:58.350322   69580 logs.go:276] 0 containers: []
	W0501 03:42:58.350334   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:58.350343   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:58.350431   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:58.395998   69580 cri.go:89] found id: ""
	I0501 03:42:58.396029   69580 logs.go:276] 0 containers: []
	W0501 03:42:58.396041   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:58.396049   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:58.396131   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:58.434371   69580 cri.go:89] found id: ""
	I0501 03:42:58.434414   69580 logs.go:276] 0 containers: []
	W0501 03:42:58.434427   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:58.434434   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:58.434493   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:58.473457   69580 cri.go:89] found id: ""
	I0501 03:42:58.473489   69580 logs.go:276] 0 containers: []
	W0501 03:42:58.473499   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:58.473507   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:58.473572   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:58.515172   69580 cri.go:89] found id: ""
	I0501 03:42:58.515201   69580 logs.go:276] 0 containers: []
	W0501 03:42:58.515212   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:58.515221   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:58.515291   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:58.560305   69580 cri.go:89] found id: ""
	I0501 03:42:58.560333   69580 logs.go:276] 0 containers: []
	W0501 03:42:58.560341   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:58.560348   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:58.560407   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:58.617980   69580 cri.go:89] found id: ""
	I0501 03:42:58.618005   69580 logs.go:276] 0 containers: []
	W0501 03:42:58.618013   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:58.618019   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:58.618080   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:58.659800   69580 cri.go:89] found id: ""
	I0501 03:42:58.659827   69580 logs.go:276] 0 containers: []
	W0501 03:42:58.659838   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:58.659848   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:58.659862   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:58.718134   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:58.718169   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:58.733972   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:58.734001   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:58.813055   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:58.813082   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:58.813099   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:58.897293   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:58.897331   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:01.442980   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:01.459602   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:01.459687   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:58.849599   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:01.346264   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:00.157322   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:02.657002   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:02.012753   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:04.510896   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:01.502817   69580 cri.go:89] found id: ""
	I0501 03:43:01.502848   69580 logs.go:276] 0 containers: []
	W0501 03:43:01.502857   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:01.502863   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:01.502924   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:01.547251   69580 cri.go:89] found id: ""
	I0501 03:43:01.547289   69580 logs.go:276] 0 containers: []
	W0501 03:43:01.547301   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:01.547308   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:01.547376   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:01.590179   69580 cri.go:89] found id: ""
	I0501 03:43:01.590211   69580 logs.go:276] 0 containers: []
	W0501 03:43:01.590221   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:01.590228   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:01.590296   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:01.628772   69580 cri.go:89] found id: ""
	I0501 03:43:01.628814   69580 logs.go:276] 0 containers: []
	W0501 03:43:01.628826   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:01.628834   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:01.628893   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:01.677414   69580 cri.go:89] found id: ""
	I0501 03:43:01.677440   69580 logs.go:276] 0 containers: []
	W0501 03:43:01.677448   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:01.677453   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:01.677500   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:01.723107   69580 cri.go:89] found id: ""
	I0501 03:43:01.723139   69580 logs.go:276] 0 containers: []
	W0501 03:43:01.723152   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:01.723160   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:01.723225   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:01.771846   69580 cri.go:89] found id: ""
	I0501 03:43:01.771873   69580 logs.go:276] 0 containers: []
	W0501 03:43:01.771883   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:01.771890   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:01.771952   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:01.818145   69580 cri.go:89] found id: ""
	I0501 03:43:01.818179   69580 logs.go:276] 0 containers: []
	W0501 03:43:01.818191   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:01.818202   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:01.818218   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:01.881502   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:01.881546   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:01.897580   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:01.897614   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:01.981959   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:01.981980   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:01.981996   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:02.066228   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:02.066269   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
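The recurring `describe nodes` failures all reduce to the same symptom: nothing is listening on the apiserver port, so kubectl is refused at localhost:8443. A quick confirmation from inside the node is to probe the health endpoint directly; this is a sketch, with the port taken from the error text above:

	$ sudo curl -ksS https://localhost:8443/healthz || echo "apiserver not reachable"
	# "connection refused" here matches the kubectl error recorded in the log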
	I0501 03:43:04.609855   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:04.626885   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:04.626962   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:04.668248   69580 cri.go:89] found id: ""
	I0501 03:43:04.668277   69580 logs.go:276] 0 containers: []
	W0501 03:43:04.668290   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:04.668298   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:04.668364   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:04.711032   69580 cri.go:89] found id: ""
	I0501 03:43:04.711057   69580 logs.go:276] 0 containers: []
	W0501 03:43:04.711068   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:04.711076   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:04.711136   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:04.754197   69580 cri.go:89] found id: ""
	I0501 03:43:04.754232   69580 logs.go:276] 0 containers: []
	W0501 03:43:04.754241   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:04.754248   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:04.754317   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:04.801062   69580 cri.go:89] found id: ""
	I0501 03:43:04.801089   69580 logs.go:276] 0 containers: []
	W0501 03:43:04.801097   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:04.801103   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:04.801163   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:04.849425   69580 cri.go:89] found id: ""
	I0501 03:43:04.849454   69580 logs.go:276] 0 containers: []
	W0501 03:43:04.849465   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:04.849473   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:04.849536   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:04.892555   69580 cri.go:89] found id: ""
	I0501 03:43:04.892589   69580 logs.go:276] 0 containers: []
	W0501 03:43:04.892597   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:04.892603   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:04.892661   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:04.934101   69580 cri.go:89] found id: ""
	I0501 03:43:04.934129   69580 logs.go:276] 0 containers: []
	W0501 03:43:04.934137   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:04.934142   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:04.934191   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:04.985720   69580 cri.go:89] found id: ""
	I0501 03:43:04.985747   69580 logs.go:276] 0 containers: []
	W0501 03:43:04.985760   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:04.985773   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:04.985789   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:05.060634   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:05.060692   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:05.082007   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:05.082036   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:05.164613   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:05.164636   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:05.164652   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:05.244064   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:05.244103   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:03.845495   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:06.346757   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:05.157929   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:07.657094   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:06.511168   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:08.511512   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:10.511984   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
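Interleaved with the gathering passes, three other profiles (pids 68640, 68864 and 69237) keep polling metrics-server pods that never report Ready. The usual next step when a pod is stuck like this is to describe it and read its Events; the label selector below is the one the minikube metrics-server addon normally applies and is an assumption here, as is the context name:

	$ kubectl --context <profile> -n kube-system get pods -l k8s-app=metrics-server -o wide
	$ kubectl --context <profile> -n kube-system describe pod -l k8s-app=metrics-server
	# look for failing readiness probes or image pull errors in the Events section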
	I0501 03:43:07.793867   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:07.811161   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:07.811236   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:07.850738   69580 cri.go:89] found id: ""
	I0501 03:43:07.850765   69580 logs.go:276] 0 containers: []
	W0501 03:43:07.850775   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:07.850782   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:07.850841   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:07.892434   69580 cri.go:89] found id: ""
	I0501 03:43:07.892466   69580 logs.go:276] 0 containers: []
	W0501 03:43:07.892476   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:07.892483   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:07.892543   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:07.934093   69580 cri.go:89] found id: ""
	I0501 03:43:07.934122   69580 logs.go:276] 0 containers: []
	W0501 03:43:07.934133   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:07.934141   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:07.934200   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:07.976165   69580 cri.go:89] found id: ""
	I0501 03:43:07.976196   69580 logs.go:276] 0 containers: []
	W0501 03:43:07.976205   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:07.976216   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:07.976278   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:08.016925   69580 cri.go:89] found id: ""
	I0501 03:43:08.016956   69580 logs.go:276] 0 containers: []
	W0501 03:43:08.016968   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:08.016975   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:08.017038   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:08.063385   69580 cri.go:89] found id: ""
	I0501 03:43:08.063438   69580 logs.go:276] 0 containers: []
	W0501 03:43:08.063454   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:08.063465   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:08.063551   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:08.103586   69580 cri.go:89] found id: ""
	I0501 03:43:08.103610   69580 logs.go:276] 0 containers: []
	W0501 03:43:08.103618   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:08.103628   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:08.103672   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:08.142564   69580 cri.go:89] found id: ""
	I0501 03:43:08.142594   69580 logs.go:276] 0 containers: []
	W0501 03:43:08.142605   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:08.142617   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:08.142635   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:08.231532   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:08.231556   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:08.231571   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:08.311009   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:08.311053   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:08.357841   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:08.357877   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:08.409577   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:08.409610   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
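The kubelet and dmesg steps are plain journalctl and dmesg invocations over the last 400 lines. When reading a failure like this one it is usually quicker to filter those units down to errors than to scan the full tail; a minimal filter over the same commands the log runs:

	$ sudo journalctl -u kubelet -n 400 --no-pager | grep -iE 'error|fail'
	$ sudo journalctl -u crio -n 400 --no-pager | grep -iE 'error|fail'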
	I0501 03:43:10.924898   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:10.941525   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:10.941591   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:11.009214   69580 cri.go:89] found id: ""
	I0501 03:43:11.009238   69580 logs.go:276] 0 containers: []
	W0501 03:43:11.009247   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:11.009255   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:11.009316   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:11.072233   69580 cri.go:89] found id: ""
	I0501 03:43:11.072259   69580 logs.go:276] 0 containers: []
	W0501 03:43:11.072267   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:11.072273   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:11.072327   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:11.111662   69580 cri.go:89] found id: ""
	I0501 03:43:11.111691   69580 logs.go:276] 0 containers: []
	W0501 03:43:11.111701   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:11.111708   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:11.111765   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:11.151540   69580 cri.go:89] found id: ""
	I0501 03:43:11.151570   69580 logs.go:276] 0 containers: []
	W0501 03:43:11.151580   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:11.151594   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:11.151656   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:11.194030   69580 cri.go:89] found id: ""
	I0501 03:43:11.194064   69580 logs.go:276] 0 containers: []
	W0501 03:43:11.194076   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:11.194083   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:11.194146   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:11.233010   69580 cri.go:89] found id: ""
	I0501 03:43:11.233045   69580 logs.go:276] 0 containers: []
	W0501 03:43:11.233056   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:11.233063   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:11.233117   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:11.270979   69580 cri.go:89] found id: ""
	I0501 03:43:11.271009   69580 logs.go:276] 0 containers: []
	W0501 03:43:11.271019   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:11.271026   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:11.271088   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:11.312338   69580 cri.go:89] found id: ""
	I0501 03:43:11.312369   69580 logs.go:276] 0 containers: []
	W0501 03:43:11.312381   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:11.312393   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:11.312408   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:11.364273   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:11.364307   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:11.418603   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:11.418634   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:11.433409   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:11.433438   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0501 03:43:08.349537   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:10.845566   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:12.846699   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:10.157910   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:12.657859   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:12.512669   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:15.013314   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	W0501 03:43:11.511243   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:11.511265   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:11.511280   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:14.089834   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:14.104337   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:14.104419   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:14.148799   69580 cri.go:89] found id: ""
	I0501 03:43:14.148826   69580 logs.go:276] 0 containers: []
	W0501 03:43:14.148833   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:14.148839   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:14.148904   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:14.191330   69580 cri.go:89] found id: ""
	I0501 03:43:14.191366   69580 logs.go:276] 0 containers: []
	W0501 03:43:14.191378   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:14.191386   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:14.191448   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:14.245978   69580 cri.go:89] found id: ""
	I0501 03:43:14.246010   69580 logs.go:276] 0 containers: []
	W0501 03:43:14.246018   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:14.246024   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:14.246093   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:14.287188   69580 cri.go:89] found id: ""
	I0501 03:43:14.287215   69580 logs.go:276] 0 containers: []
	W0501 03:43:14.287223   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:14.287228   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:14.287276   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:14.328060   69580 cri.go:89] found id: ""
	I0501 03:43:14.328093   69580 logs.go:276] 0 containers: []
	W0501 03:43:14.328104   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:14.328113   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:14.328179   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:14.370734   69580 cri.go:89] found id: ""
	I0501 03:43:14.370765   69580 logs.go:276] 0 containers: []
	W0501 03:43:14.370776   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:14.370783   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:14.370837   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:14.414690   69580 cri.go:89] found id: ""
	I0501 03:43:14.414713   69580 logs.go:276] 0 containers: []
	W0501 03:43:14.414721   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:14.414726   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:14.414790   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:14.459030   69580 cri.go:89] found id: ""
	I0501 03:43:14.459060   69580 logs.go:276] 0 containers: []
	W0501 03:43:14.459072   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:14.459083   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:14.459098   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:14.519728   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:14.519761   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:14.535841   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:14.535871   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:14.615203   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:14.615231   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:14.615249   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:14.707677   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:14.707725   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
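The `container status` pass is a small shell fallback chain: it runs whichever crictl `which` resolves to (or the bare name if none is found) and only consults docker if that listing fails. The one-liner from the log, spelled out:

	CRICTL="$(which crictl || echo crictl)"
	sudo "$CRICTL" ps -a || sudo docker ps -a   # docker is only consulted if the crictl listing fails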
	I0501 03:43:15.345927   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:17.846732   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:14.657956   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:17.156935   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:17.512424   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:20.012471   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:17.254918   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:17.270643   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:17.270698   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:17.310692   69580 cri.go:89] found id: ""
	I0501 03:43:17.310724   69580 logs.go:276] 0 containers: []
	W0501 03:43:17.310732   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:17.310739   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:17.310806   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:17.349932   69580 cri.go:89] found id: ""
	I0501 03:43:17.349959   69580 logs.go:276] 0 containers: []
	W0501 03:43:17.349969   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:17.349976   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:17.350040   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:17.393073   69580 cri.go:89] found id: ""
	I0501 03:43:17.393099   69580 logs.go:276] 0 containers: []
	W0501 03:43:17.393109   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:17.393116   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:17.393176   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:17.429736   69580 cri.go:89] found id: ""
	I0501 03:43:17.429763   69580 logs.go:276] 0 containers: []
	W0501 03:43:17.429773   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:17.429787   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:17.429858   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:17.473052   69580 cri.go:89] found id: ""
	I0501 03:43:17.473085   69580 logs.go:276] 0 containers: []
	W0501 03:43:17.473097   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:17.473105   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:17.473168   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:17.514035   69580 cri.go:89] found id: ""
	I0501 03:43:17.514062   69580 logs.go:276] 0 containers: []
	W0501 03:43:17.514071   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:17.514078   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:17.514126   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:17.553197   69580 cri.go:89] found id: ""
	I0501 03:43:17.553225   69580 logs.go:276] 0 containers: []
	W0501 03:43:17.553234   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:17.553240   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:17.553300   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:17.592170   69580 cri.go:89] found id: ""
	I0501 03:43:17.592192   69580 logs.go:276] 0 containers: []
	W0501 03:43:17.592199   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:17.592208   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:17.592220   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:17.647549   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:17.647584   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:17.663084   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:17.663114   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:17.748357   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:17.748385   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:17.748401   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:17.832453   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:17.832491   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:20.375927   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:20.391840   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:20.391918   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:20.434158   69580 cri.go:89] found id: ""
	I0501 03:43:20.434185   69580 logs.go:276] 0 containers: []
	W0501 03:43:20.434193   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:20.434198   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:20.434254   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:20.477209   69580 cri.go:89] found id: ""
	I0501 03:43:20.477237   69580 logs.go:276] 0 containers: []
	W0501 03:43:20.477253   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:20.477259   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:20.477309   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:20.517227   69580 cri.go:89] found id: ""
	I0501 03:43:20.517260   69580 logs.go:276] 0 containers: []
	W0501 03:43:20.517270   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:20.517282   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:20.517340   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:20.555771   69580 cri.go:89] found id: ""
	I0501 03:43:20.555802   69580 logs.go:276] 0 containers: []
	W0501 03:43:20.555812   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:20.555820   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:20.555866   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:20.598177   69580 cri.go:89] found id: ""
	I0501 03:43:20.598200   69580 logs.go:276] 0 containers: []
	W0501 03:43:20.598213   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:20.598218   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:20.598326   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:20.637336   69580 cri.go:89] found id: ""
	I0501 03:43:20.637364   69580 logs.go:276] 0 containers: []
	W0501 03:43:20.637373   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:20.637378   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:20.637435   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:20.687736   69580 cri.go:89] found id: ""
	I0501 03:43:20.687761   69580 logs.go:276] 0 containers: []
	W0501 03:43:20.687768   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:20.687782   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:20.687840   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:20.726102   69580 cri.go:89] found id: ""
	I0501 03:43:20.726135   69580 logs.go:276] 0 containers: []
	W0501 03:43:20.726143   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:20.726154   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:20.726169   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:20.780874   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:20.780905   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:20.795798   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:20.795836   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:20.882337   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:20.882367   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:20.882381   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:20.962138   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:20.962188   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:20.345887   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:22.346061   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:19.157165   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:21.657358   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:22.015676   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:24.511682   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:23.512174   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:23.528344   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:23.528417   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:23.567182   69580 cri.go:89] found id: ""
	I0501 03:43:23.567212   69580 logs.go:276] 0 containers: []
	W0501 03:43:23.567222   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:23.567230   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:23.567291   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:23.607522   69580 cri.go:89] found id: ""
	I0501 03:43:23.607556   69580 logs.go:276] 0 containers: []
	W0501 03:43:23.607567   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:23.607574   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:23.607637   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:23.650932   69580 cri.go:89] found id: ""
	I0501 03:43:23.650959   69580 logs.go:276] 0 containers: []
	W0501 03:43:23.650970   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:23.650976   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:23.651035   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:23.695392   69580 cri.go:89] found id: ""
	I0501 03:43:23.695419   69580 logs.go:276] 0 containers: []
	W0501 03:43:23.695428   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:23.695436   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:23.695514   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:23.736577   69580 cri.go:89] found id: ""
	I0501 03:43:23.736607   69580 logs.go:276] 0 containers: []
	W0501 03:43:23.736619   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:23.736627   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:23.736685   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:23.776047   69580 cri.go:89] found id: ""
	I0501 03:43:23.776070   69580 logs.go:276] 0 containers: []
	W0501 03:43:23.776077   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:23.776082   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:23.776134   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:23.813896   69580 cri.go:89] found id: ""
	I0501 03:43:23.813934   69580 logs.go:276] 0 containers: []
	W0501 03:43:23.813943   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:23.813949   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:23.813997   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:23.858898   69580 cri.go:89] found id: ""
	I0501 03:43:23.858925   69580 logs.go:276] 0 containers: []
	W0501 03:43:23.858936   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:23.858947   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:23.858964   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:23.901796   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:23.901850   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:23.957009   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:23.957040   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:23.972811   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:23.972839   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:24.055535   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:24.055557   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:24.055576   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:24.845310   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:26.847397   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:24.157453   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:26.661073   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:27.012181   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:29.511387   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:26.640114   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:26.657217   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:26.657285   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:26.701191   69580 cri.go:89] found id: ""
	I0501 03:43:26.701218   69580 logs.go:276] 0 containers: []
	W0501 03:43:26.701227   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:26.701232   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:26.701287   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:26.740710   69580 cri.go:89] found id: ""
	I0501 03:43:26.740737   69580 logs.go:276] 0 containers: []
	W0501 03:43:26.740745   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:26.740750   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:26.740808   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:26.778682   69580 cri.go:89] found id: ""
	I0501 03:43:26.778710   69580 logs.go:276] 0 containers: []
	W0501 03:43:26.778724   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:26.778730   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:26.778789   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:26.822143   69580 cri.go:89] found id: ""
	I0501 03:43:26.822190   69580 logs.go:276] 0 containers: []
	W0501 03:43:26.822201   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:26.822209   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:26.822270   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:26.865938   69580 cri.go:89] found id: ""
	I0501 03:43:26.865976   69580 logs.go:276] 0 containers: []
	W0501 03:43:26.865988   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:26.865996   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:26.866058   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:26.914939   69580 cri.go:89] found id: ""
	I0501 03:43:26.914969   69580 logs.go:276] 0 containers: []
	W0501 03:43:26.914979   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:26.914986   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:26.915043   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:26.961822   69580 cri.go:89] found id: ""
	I0501 03:43:26.961850   69580 logs.go:276] 0 containers: []
	W0501 03:43:26.961860   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:26.961867   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:26.961920   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:27.005985   69580 cri.go:89] found id: ""
	I0501 03:43:27.006012   69580 logs.go:276] 0 containers: []
	W0501 03:43:27.006021   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:27.006032   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:27.006046   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:27.058265   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:27.058303   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:27.076270   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:27.076308   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:27.152627   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:27.152706   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:27.152728   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:27.229638   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:27.229678   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:29.775960   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:29.792849   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:29.792925   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:29.832508   69580 cri.go:89] found id: ""
	I0501 03:43:29.832537   69580 logs.go:276] 0 containers: []
	W0501 03:43:29.832551   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:29.832559   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:29.832617   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:29.873160   69580 cri.go:89] found id: ""
	I0501 03:43:29.873188   69580 logs.go:276] 0 containers: []
	W0501 03:43:29.873199   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:29.873207   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:29.873271   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:29.919431   69580 cri.go:89] found id: ""
	I0501 03:43:29.919459   69580 logs.go:276] 0 containers: []
	W0501 03:43:29.919468   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:29.919474   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:29.919533   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:29.967944   69580 cri.go:89] found id: ""
	I0501 03:43:29.967976   69580 logs.go:276] 0 containers: []
	W0501 03:43:29.967987   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:29.967995   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:29.968060   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:30.011626   69580 cri.go:89] found id: ""
	I0501 03:43:30.011657   69580 logs.go:276] 0 containers: []
	W0501 03:43:30.011669   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:30.011678   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:30.011743   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:30.051998   69580 cri.go:89] found id: ""
	I0501 03:43:30.052020   69580 logs.go:276] 0 containers: []
	W0501 03:43:30.052028   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:30.052034   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:30.052095   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:30.094140   69580 cri.go:89] found id: ""
	I0501 03:43:30.094164   69580 logs.go:276] 0 containers: []
	W0501 03:43:30.094172   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:30.094179   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:30.094253   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:30.132363   69580 cri.go:89] found id: ""
	I0501 03:43:30.132391   69580 logs.go:276] 0 containers: []
	W0501 03:43:30.132399   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:30.132411   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:30.132422   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:30.221368   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:30.221410   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:30.271279   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:30.271317   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:30.325549   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:30.325586   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:30.345337   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:30.345376   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:30.427552   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:29.347108   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:31.846435   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:29.156483   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:31.156871   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:33.157355   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:32.015498   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:34.511190   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:32.928667   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:32.945489   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:32.945557   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:32.989604   69580 cri.go:89] found id: ""
	I0501 03:43:32.989628   69580 logs.go:276] 0 containers: []
	W0501 03:43:32.989636   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:32.989642   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:32.989701   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:33.030862   69580 cri.go:89] found id: ""
	I0501 03:43:33.030892   69580 logs.go:276] 0 containers: []
	W0501 03:43:33.030903   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:33.030912   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:33.030977   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:33.079795   69580 cri.go:89] found id: ""
	I0501 03:43:33.079827   69580 logs.go:276] 0 containers: []
	W0501 03:43:33.079835   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:33.079841   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:33.079898   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:33.120612   69580 cri.go:89] found id: ""
	I0501 03:43:33.120636   69580 logs.go:276] 0 containers: []
	W0501 03:43:33.120644   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:33.120649   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:33.120694   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:33.161824   69580 cri.go:89] found id: ""
	I0501 03:43:33.161851   69580 logs.go:276] 0 containers: []
	W0501 03:43:33.161861   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:33.161868   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:33.161924   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:33.200068   69580 cri.go:89] found id: ""
	I0501 03:43:33.200098   69580 logs.go:276] 0 containers: []
	W0501 03:43:33.200107   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:33.200113   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:33.200175   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:33.239314   69580 cri.go:89] found id: ""
	I0501 03:43:33.239341   69580 logs.go:276] 0 containers: []
	W0501 03:43:33.239351   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:33.239359   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:33.239427   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:33.281381   69580 cri.go:89] found id: ""
	I0501 03:43:33.281408   69580 logs.go:276] 0 containers: []
	W0501 03:43:33.281419   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:33.281431   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:33.281447   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:33.297992   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:33.298047   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:33.383273   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:33.383292   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:33.383303   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:33.465256   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:33.465289   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:33.509593   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:33.509621   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:36.065074   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:36.081361   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:36.081429   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:36.130394   69580 cri.go:89] found id: ""
	I0501 03:43:36.130436   69580 logs.go:276] 0 containers: []
	W0501 03:43:36.130448   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:36.130456   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:36.130524   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:36.171013   69580 cri.go:89] found id: ""
	I0501 03:43:36.171038   69580 logs.go:276] 0 containers: []
	W0501 03:43:36.171046   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:36.171052   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:36.171099   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:36.215372   69580 cri.go:89] found id: ""
	I0501 03:43:36.215411   69580 logs.go:276] 0 containers: []
	W0501 03:43:36.215424   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:36.215431   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:36.215493   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:36.257177   69580 cri.go:89] found id: ""
	I0501 03:43:36.257204   69580 logs.go:276] 0 containers: []
	W0501 03:43:36.257216   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:36.257223   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:36.257293   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:36.299035   69580 cri.go:89] found id: ""
	I0501 03:43:36.299066   69580 logs.go:276] 0 containers: []
	W0501 03:43:36.299085   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:36.299094   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:36.299166   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:36.339060   69580 cri.go:89] found id: ""
	I0501 03:43:36.339087   69580 logs.go:276] 0 containers: []
	W0501 03:43:36.339097   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:36.339105   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:36.339163   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:36.379982   69580 cri.go:89] found id: ""
	I0501 03:43:36.380016   69580 logs.go:276] 0 containers: []
	W0501 03:43:36.380028   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:36.380037   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:36.380100   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:36.419702   69580 cri.go:89] found id: ""
	I0501 03:43:36.419734   69580 logs.go:276] 0 containers: []
	W0501 03:43:36.419746   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:36.419758   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:36.419780   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:33.846499   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:35.846579   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:37.852802   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:35.159724   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:37.657040   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:36.516601   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:39.012001   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:36.472553   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:36.472774   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:36.488402   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:36.488439   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:36.566390   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:36.566433   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:36.566446   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:36.643493   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:36.643527   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:39.199060   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:39.216612   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:39.216695   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:39.262557   69580 cri.go:89] found id: ""
	I0501 03:43:39.262581   69580 logs.go:276] 0 containers: []
	W0501 03:43:39.262589   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:39.262595   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:39.262642   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:39.331051   69580 cri.go:89] found id: ""
	I0501 03:43:39.331076   69580 logs.go:276] 0 containers: []
	W0501 03:43:39.331093   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:39.331098   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:39.331162   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:39.382033   69580 cri.go:89] found id: ""
	I0501 03:43:39.382058   69580 logs.go:276] 0 containers: []
	W0501 03:43:39.382066   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:39.382071   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:39.382122   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:39.424019   69580 cri.go:89] found id: ""
	I0501 03:43:39.424049   69580 logs.go:276] 0 containers: []
	W0501 03:43:39.424058   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:39.424064   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:39.424120   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:39.465787   69580 cri.go:89] found id: ""
	I0501 03:43:39.465833   69580 logs.go:276] 0 containers: []
	W0501 03:43:39.465846   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:39.465855   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:39.465916   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:39.507746   69580 cri.go:89] found id: ""
	I0501 03:43:39.507781   69580 logs.go:276] 0 containers: []
	W0501 03:43:39.507791   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:39.507798   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:39.507861   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:39.550737   69580 cri.go:89] found id: ""
	I0501 03:43:39.550768   69580 logs.go:276] 0 containers: []
	W0501 03:43:39.550775   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:39.550781   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:39.550831   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:39.592279   69580 cri.go:89] found id: ""
	I0501 03:43:39.592329   69580 logs.go:276] 0 containers: []
	W0501 03:43:39.592343   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:39.592356   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:39.592373   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:39.648858   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:39.648896   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:39.665316   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:39.665343   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:39.743611   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:39.743632   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:39.743646   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:39.829285   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:39.829322   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:40.347121   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:42.845466   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:39.657888   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:41.657976   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:41.512061   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:44.017693   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:42.374457   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:42.389944   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:42.390002   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:42.431270   69580 cri.go:89] found id: ""
	I0501 03:43:42.431294   69580 logs.go:276] 0 containers: []
	W0501 03:43:42.431302   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:42.431308   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:42.431366   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:42.470515   69580 cri.go:89] found id: ""
	I0501 03:43:42.470546   69580 logs.go:276] 0 containers: []
	W0501 03:43:42.470558   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:42.470566   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:42.470619   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:42.518472   69580 cri.go:89] found id: ""
	I0501 03:43:42.518494   69580 logs.go:276] 0 containers: []
	W0501 03:43:42.518501   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:42.518506   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:42.518555   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:42.562192   69580 cri.go:89] found id: ""
	I0501 03:43:42.562220   69580 logs.go:276] 0 containers: []
	W0501 03:43:42.562231   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:42.562239   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:42.562300   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:42.599372   69580 cri.go:89] found id: ""
	I0501 03:43:42.599403   69580 logs.go:276] 0 containers: []
	W0501 03:43:42.599414   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:42.599422   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:42.599483   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:42.636738   69580 cri.go:89] found id: ""
	I0501 03:43:42.636766   69580 logs.go:276] 0 containers: []
	W0501 03:43:42.636777   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:42.636786   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:42.636845   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:42.682087   69580 cri.go:89] found id: ""
	I0501 03:43:42.682115   69580 logs.go:276] 0 containers: []
	W0501 03:43:42.682125   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:42.682133   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:42.682198   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:42.724280   69580 cri.go:89] found id: ""
	I0501 03:43:42.724316   69580 logs.go:276] 0 containers: []
	W0501 03:43:42.724328   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:42.724340   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:42.724354   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:42.771667   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:42.771702   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:42.827390   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:42.827428   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:42.843452   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:42.843480   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:42.925544   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:42.925563   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:42.925577   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:45.515104   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:45.529545   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:45.529619   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:45.573451   69580 cri.go:89] found id: ""
	I0501 03:43:45.573475   69580 logs.go:276] 0 containers: []
	W0501 03:43:45.573483   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:45.573489   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:45.573536   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:45.613873   69580 cri.go:89] found id: ""
	I0501 03:43:45.613897   69580 logs.go:276] 0 containers: []
	W0501 03:43:45.613905   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:45.613910   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:45.613954   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:45.660195   69580 cri.go:89] found id: ""
	I0501 03:43:45.660215   69580 logs.go:276] 0 containers: []
	W0501 03:43:45.660221   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:45.660226   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:45.660284   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:45.703539   69580 cri.go:89] found id: ""
	I0501 03:43:45.703566   69580 logs.go:276] 0 containers: []
	W0501 03:43:45.703574   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:45.703580   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:45.703637   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:45.754635   69580 cri.go:89] found id: ""
	I0501 03:43:45.754659   69580 logs.go:276] 0 containers: []
	W0501 03:43:45.754668   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:45.754675   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:45.754738   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:45.800836   69580 cri.go:89] found id: ""
	I0501 03:43:45.800866   69580 logs.go:276] 0 containers: []
	W0501 03:43:45.800884   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:45.800892   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:45.800955   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:45.859057   69580 cri.go:89] found id: ""
	I0501 03:43:45.859084   69580 logs.go:276] 0 containers: []
	W0501 03:43:45.859092   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:45.859098   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:45.859145   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:45.913173   69580 cri.go:89] found id: ""
	I0501 03:43:45.913204   69580 logs.go:276] 0 containers: []
	W0501 03:43:45.913216   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:45.913227   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:45.913243   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:45.930050   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:45.930087   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:46.006047   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:46.006081   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:46.006097   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:46.086630   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:46.086666   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:46.134635   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:46.134660   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:45.347071   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:47.845983   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:44.157143   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:46.157880   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:48.656747   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:46.510981   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:48.512854   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:48.690330   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:48.705024   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:48.705093   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:48.750244   69580 cri.go:89] found id: ""
	I0501 03:43:48.750278   69580 logs.go:276] 0 containers: []
	W0501 03:43:48.750299   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:48.750307   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:48.750377   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:48.791231   69580 cri.go:89] found id: ""
	I0501 03:43:48.791264   69580 logs.go:276] 0 containers: []
	W0501 03:43:48.791276   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:48.791283   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:48.791348   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:48.834692   69580 cri.go:89] found id: ""
	I0501 03:43:48.834720   69580 logs.go:276] 0 containers: []
	W0501 03:43:48.834731   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:48.834739   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:48.834809   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:48.877383   69580 cri.go:89] found id: ""
	I0501 03:43:48.877415   69580 logs.go:276] 0 containers: []
	W0501 03:43:48.877424   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:48.877430   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:48.877479   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:48.919728   69580 cri.go:89] found id: ""
	I0501 03:43:48.919756   69580 logs.go:276] 0 containers: []
	W0501 03:43:48.919767   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:48.919775   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:48.919836   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:48.962090   69580 cri.go:89] found id: ""
	I0501 03:43:48.962122   69580 logs.go:276] 0 containers: []
	W0501 03:43:48.962137   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:48.962144   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:48.962205   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:48.998456   69580 cri.go:89] found id: ""
	I0501 03:43:48.998487   69580 logs.go:276] 0 containers: []
	W0501 03:43:48.998498   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:48.998506   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:48.998566   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:49.042591   69580 cri.go:89] found id: ""
	I0501 03:43:49.042623   69580 logs.go:276] 0 containers: []
	W0501 03:43:49.042633   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:49.042645   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:49.042661   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:49.088533   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:49.088571   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:49.145252   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:49.145288   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:49.163093   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:49.163120   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:49.240805   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:49.240831   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:49.240844   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:49.848864   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:52.347128   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:50.656790   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:52.658130   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:51.011713   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:53.510598   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:55.512900   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:51.825530   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:51.839596   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:51.839669   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:51.879493   69580 cri.go:89] found id: ""
	I0501 03:43:51.879516   69580 logs.go:276] 0 containers: []
	W0501 03:43:51.879524   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:51.879530   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:51.879585   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:51.921577   69580 cri.go:89] found id: ""
	I0501 03:43:51.921608   69580 logs.go:276] 0 containers: []
	W0501 03:43:51.921620   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:51.921627   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:51.921693   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:51.961000   69580 cri.go:89] found id: ""
	I0501 03:43:51.961028   69580 logs.go:276] 0 containers: []
	W0501 03:43:51.961037   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:51.961043   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:51.961103   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:52.006087   69580 cri.go:89] found id: ""
	I0501 03:43:52.006118   69580 logs.go:276] 0 containers: []
	W0501 03:43:52.006129   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:52.006137   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:52.006201   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:52.047196   69580 cri.go:89] found id: ""
	I0501 03:43:52.047228   69580 logs.go:276] 0 containers: []
	W0501 03:43:52.047239   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:52.047250   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:52.047319   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:52.086380   69580 cri.go:89] found id: ""
	I0501 03:43:52.086423   69580 logs.go:276] 0 containers: []
	W0501 03:43:52.086434   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:52.086442   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:52.086499   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:52.128824   69580 cri.go:89] found id: ""
	I0501 03:43:52.128851   69580 logs.go:276] 0 containers: []
	W0501 03:43:52.128861   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:52.128868   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:52.128933   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:52.168743   69580 cri.go:89] found id: ""
	I0501 03:43:52.168769   69580 logs.go:276] 0 containers: []
	W0501 03:43:52.168776   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:52.168788   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:52.168802   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:52.184391   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:52.184419   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:52.268330   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:52.268368   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:52.268386   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:52.350556   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:52.350586   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:52.395930   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:52.395967   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:54.952879   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:54.968440   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:54.968517   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:55.008027   69580 cri.go:89] found id: ""
	I0501 03:43:55.008056   69580 logs.go:276] 0 containers: []
	W0501 03:43:55.008067   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:55.008074   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:55.008137   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:55.048848   69580 cri.go:89] found id: ""
	I0501 03:43:55.048869   69580 logs.go:276] 0 containers: []
	W0501 03:43:55.048877   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:55.048882   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:55.048931   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:55.085886   69580 cri.go:89] found id: ""
	I0501 03:43:55.085910   69580 logs.go:276] 0 containers: []
	W0501 03:43:55.085919   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:55.085924   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:55.085971   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:55.119542   69580 cri.go:89] found id: ""
	I0501 03:43:55.119567   69580 logs.go:276] 0 containers: []
	W0501 03:43:55.119574   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:55.119580   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:55.119636   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:55.158327   69580 cri.go:89] found id: ""
	I0501 03:43:55.158357   69580 logs.go:276] 0 containers: []
	W0501 03:43:55.158367   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:55.158374   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:55.158449   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:55.200061   69580 cri.go:89] found id: ""
	I0501 03:43:55.200085   69580 logs.go:276] 0 containers: []
	W0501 03:43:55.200093   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:55.200100   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:55.200146   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:55.239446   69580 cri.go:89] found id: ""
	I0501 03:43:55.239476   69580 logs.go:276] 0 containers: []
	W0501 03:43:55.239487   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:55.239493   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:55.239557   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:55.275593   69580 cri.go:89] found id: ""
	I0501 03:43:55.275623   69580 logs.go:276] 0 containers: []
	W0501 03:43:55.275635   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:55.275646   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:55.275662   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:55.356701   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:55.356724   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:55.356740   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:55.437445   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:55.437483   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:55.489024   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:55.489051   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:55.548083   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:55.548114   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:54.845529   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:57.348771   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:55.158591   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:57.657361   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:58.010099   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:00.010511   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:58.067063   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:58.080485   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:58.080539   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:58.121459   69580 cri.go:89] found id: ""
	I0501 03:43:58.121488   69580 logs.go:276] 0 containers: []
	W0501 03:43:58.121498   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:58.121505   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:58.121562   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:58.161445   69580 cri.go:89] found id: ""
	I0501 03:43:58.161479   69580 logs.go:276] 0 containers: []
	W0501 03:43:58.161489   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:58.161499   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:58.161560   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:58.203216   69580 cri.go:89] found id: ""
	I0501 03:43:58.203238   69580 logs.go:276] 0 containers: []
	W0501 03:43:58.203246   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:58.203251   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:58.203297   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:58.239496   69580 cri.go:89] found id: ""
	I0501 03:43:58.239526   69580 logs.go:276] 0 containers: []
	W0501 03:43:58.239538   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:58.239546   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:58.239605   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:58.280331   69580 cri.go:89] found id: ""
	I0501 03:43:58.280359   69580 logs.go:276] 0 containers: []
	W0501 03:43:58.280370   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:58.280378   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:58.280438   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:58.318604   69580 cri.go:89] found id: ""
	I0501 03:43:58.318634   69580 logs.go:276] 0 containers: []
	W0501 03:43:58.318646   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:58.318653   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:58.318712   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:58.359360   69580 cri.go:89] found id: ""
	I0501 03:43:58.359383   69580 logs.go:276] 0 containers: []
	W0501 03:43:58.359392   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:58.359398   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:58.359446   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:58.401172   69580 cri.go:89] found id: ""
	I0501 03:43:58.401202   69580 logs.go:276] 0 containers: []
	W0501 03:43:58.401211   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:58.401220   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:58.401232   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:58.416877   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:58.416907   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:58.489812   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:58.489835   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:58.489849   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:58.574971   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:58.575004   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:58.619526   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:58.619557   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:01.173759   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:01.187838   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:01.187922   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:01.227322   69580 cri.go:89] found id: ""
	I0501 03:44:01.227355   69580 logs.go:276] 0 containers: []
	W0501 03:44:01.227366   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:01.227372   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:01.227432   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:01.268418   69580 cri.go:89] found id: ""
	I0501 03:44:01.268453   69580 logs.go:276] 0 containers: []
	W0501 03:44:01.268465   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:01.268472   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:01.268530   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:01.314641   69580 cri.go:89] found id: ""
	I0501 03:44:01.314667   69580 logs.go:276] 0 containers: []
	W0501 03:44:01.314675   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:01.314681   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:01.314739   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:01.361237   69580 cri.go:89] found id: ""
	I0501 03:44:01.361272   69580 logs.go:276] 0 containers: []
	W0501 03:44:01.361288   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:01.361294   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:01.361348   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:01.400650   69580 cri.go:89] found id: ""
	I0501 03:44:01.400676   69580 logs.go:276] 0 containers: []
	W0501 03:44:01.400684   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:01.400690   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:01.400739   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:01.447998   69580 cri.go:89] found id: ""
	I0501 03:44:01.448023   69580 logs.go:276] 0 containers: []
	W0501 03:44:01.448032   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:01.448040   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:01.448101   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:59.845726   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:02.345826   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:00.155851   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:02.155998   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:02.010828   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:04.014801   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:01.492172   69580 cri.go:89] found id: ""
	I0501 03:44:01.492199   69580 logs.go:276] 0 containers: []
	W0501 03:44:01.492207   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:01.492213   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:01.492265   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:01.538589   69580 cri.go:89] found id: ""
	I0501 03:44:01.538617   69580 logs.go:276] 0 containers: []
	W0501 03:44:01.538628   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:01.538638   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:01.538653   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:01.592914   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:01.592952   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:01.611706   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:01.611754   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:01.693469   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:01.693488   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:01.693501   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:01.774433   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:01.774470   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:04.321593   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:04.335428   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:04.335497   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:04.378479   69580 cri.go:89] found id: ""
	I0501 03:44:04.378505   69580 logs.go:276] 0 containers: []
	W0501 03:44:04.378516   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:04.378525   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:04.378585   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:04.420025   69580 cri.go:89] found id: ""
	I0501 03:44:04.420050   69580 logs.go:276] 0 containers: []
	W0501 03:44:04.420059   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:04.420065   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:04.420113   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:04.464009   69580 cri.go:89] found id: ""
	I0501 03:44:04.464039   69580 logs.go:276] 0 containers: []
	W0501 03:44:04.464047   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:04.464052   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:04.464113   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:04.502039   69580 cri.go:89] found id: ""
	I0501 03:44:04.502069   69580 logs.go:276] 0 containers: []
	W0501 03:44:04.502081   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:04.502088   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:04.502150   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:04.544566   69580 cri.go:89] found id: ""
	I0501 03:44:04.544593   69580 logs.go:276] 0 containers: []
	W0501 03:44:04.544605   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:04.544614   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:04.544672   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:04.584067   69580 cri.go:89] found id: ""
	I0501 03:44:04.584095   69580 logs.go:276] 0 containers: []
	W0501 03:44:04.584104   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:04.584112   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:04.584174   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:04.625165   69580 cri.go:89] found id: ""
	I0501 03:44:04.625197   69580 logs.go:276] 0 containers: []
	W0501 03:44:04.625210   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:04.625219   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:04.625292   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:04.667796   69580 cri.go:89] found id: ""
	I0501 03:44:04.667830   69580 logs.go:276] 0 containers: []
	W0501 03:44:04.667839   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:04.667850   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:04.667868   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:04.722269   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:04.722303   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:04.738232   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:04.738265   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:04.821551   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:04.821578   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:04.821595   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:04.902575   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:04.902618   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:04.346197   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:06.845552   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:04.157333   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:06.157366   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:08.656837   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:06.513484   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:09.012004   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:07.449793   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:07.466348   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:07.466450   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:07.510325   69580 cri.go:89] found id: ""
	I0501 03:44:07.510352   69580 logs.go:276] 0 containers: []
	W0501 03:44:07.510363   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:07.510371   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:07.510450   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:07.550722   69580 cri.go:89] found id: ""
	I0501 03:44:07.550748   69580 logs.go:276] 0 containers: []
	W0501 03:44:07.550756   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:07.550762   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:07.550810   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:07.589592   69580 cri.go:89] found id: ""
	I0501 03:44:07.589617   69580 logs.go:276] 0 containers: []
	W0501 03:44:07.589625   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:07.589630   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:07.589678   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:07.631628   69580 cri.go:89] found id: ""
	I0501 03:44:07.631655   69580 logs.go:276] 0 containers: []
	W0501 03:44:07.631662   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:07.631668   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:07.631726   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:07.674709   69580 cri.go:89] found id: ""
	I0501 03:44:07.674743   69580 logs.go:276] 0 containers: []
	W0501 03:44:07.674753   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:07.674760   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:07.674811   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:07.714700   69580 cri.go:89] found id: ""
	I0501 03:44:07.714767   69580 logs.go:276] 0 containers: []
	W0501 03:44:07.714788   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:07.714797   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:07.714856   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:07.753440   69580 cri.go:89] found id: ""
	I0501 03:44:07.753467   69580 logs.go:276] 0 containers: []
	W0501 03:44:07.753478   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:07.753485   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:07.753549   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:07.791579   69580 cri.go:89] found id: ""
	I0501 03:44:07.791606   69580 logs.go:276] 0 containers: []
	W0501 03:44:07.791617   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:07.791628   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:07.791644   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:07.845568   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:07.845606   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:07.861861   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:07.861885   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:07.941719   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:07.941743   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:07.941757   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:08.022684   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:08.022720   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:10.575417   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:10.593408   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:10.593468   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:10.641322   69580 cri.go:89] found id: ""
	I0501 03:44:10.641357   69580 logs.go:276] 0 containers: []
	W0501 03:44:10.641370   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:10.641378   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:10.641442   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:10.686330   69580 cri.go:89] found id: ""
	I0501 03:44:10.686358   69580 logs.go:276] 0 containers: []
	W0501 03:44:10.686368   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:10.686377   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:10.686458   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:10.734414   69580 cri.go:89] found id: ""
	I0501 03:44:10.734444   69580 logs.go:276] 0 containers: []
	W0501 03:44:10.734456   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:10.734463   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:10.734527   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:10.776063   69580 cri.go:89] found id: ""
	I0501 03:44:10.776095   69580 logs.go:276] 0 containers: []
	W0501 03:44:10.776106   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:10.776113   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:10.776176   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:10.819035   69580 cri.go:89] found id: ""
	I0501 03:44:10.819065   69580 logs.go:276] 0 containers: []
	W0501 03:44:10.819076   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:10.819084   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:10.819150   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:10.868912   69580 cri.go:89] found id: ""
	I0501 03:44:10.868938   69580 logs.go:276] 0 containers: []
	W0501 03:44:10.868946   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:10.868952   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:10.869000   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:10.910517   69580 cri.go:89] found id: ""
	I0501 03:44:10.910549   69580 logs.go:276] 0 containers: []
	W0501 03:44:10.910572   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:10.910581   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:10.910678   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:10.949267   69580 cri.go:89] found id: ""
	I0501 03:44:10.949297   69580 logs.go:276] 0 containers: []
	W0501 03:44:10.949306   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:10.949314   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:10.949327   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:11.004731   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:11.004779   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:11.022146   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:11.022174   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:11.108992   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:11.109020   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:11.109035   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:11.192571   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:11.192605   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:08.846431   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:11.346295   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:10.657938   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:13.156112   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:11.012040   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:13.512166   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:15.512232   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:13.739336   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:13.758622   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:13.758721   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:13.805395   69580 cri.go:89] found id: ""
	I0501 03:44:13.805423   69580 logs.go:276] 0 containers: []
	W0501 03:44:13.805434   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:13.805442   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:13.805523   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:13.847372   69580 cri.go:89] found id: ""
	I0501 03:44:13.847400   69580 logs.go:276] 0 containers: []
	W0501 03:44:13.847409   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:13.847417   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:13.847474   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:13.891842   69580 cri.go:89] found id: ""
	I0501 03:44:13.891867   69580 logs.go:276] 0 containers: []
	W0501 03:44:13.891874   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:13.891880   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:13.891935   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:13.933382   69580 cri.go:89] found id: ""
	I0501 03:44:13.933411   69580 logs.go:276] 0 containers: []
	W0501 03:44:13.933422   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:13.933430   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:13.933490   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:13.973955   69580 cri.go:89] found id: ""
	I0501 03:44:13.973980   69580 logs.go:276] 0 containers: []
	W0501 03:44:13.973991   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:13.974000   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:13.974053   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:14.015202   69580 cri.go:89] found id: ""
	I0501 03:44:14.015226   69580 logs.go:276] 0 containers: []
	W0501 03:44:14.015234   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:14.015240   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:14.015287   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:14.057441   69580 cri.go:89] found id: ""
	I0501 03:44:14.057471   69580 logs.go:276] 0 containers: []
	W0501 03:44:14.057483   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:14.057491   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:14.057551   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:14.099932   69580 cri.go:89] found id: ""
	I0501 03:44:14.099961   69580 logs.go:276] 0 containers: []
	W0501 03:44:14.099972   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:14.099983   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:14.099996   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:14.160386   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:14.160418   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:14.176880   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:14.176908   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:14.272137   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:14.272155   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:14.272168   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:14.366523   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:14.366571   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:13.349770   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:15.351345   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:17.845182   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:15.156569   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:17.157994   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:17.512836   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:20.012034   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:16.914394   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:16.930976   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:16.931038   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:16.977265   69580 cri.go:89] found id: ""
	I0501 03:44:16.977294   69580 logs.go:276] 0 containers: []
	W0501 03:44:16.977303   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:16.977309   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:16.977363   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:17.015656   69580 cri.go:89] found id: ""
	I0501 03:44:17.015686   69580 logs.go:276] 0 containers: []
	W0501 03:44:17.015694   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:17.015700   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:17.015768   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:17.056079   69580 cri.go:89] found id: ""
	I0501 03:44:17.056111   69580 logs.go:276] 0 containers: []
	W0501 03:44:17.056121   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:17.056129   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:17.056188   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:17.099504   69580 cri.go:89] found id: ""
	I0501 03:44:17.099528   69580 logs.go:276] 0 containers: []
	W0501 03:44:17.099536   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:17.099542   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:17.099606   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:17.141371   69580 cri.go:89] found id: ""
	I0501 03:44:17.141401   69580 logs.go:276] 0 containers: []
	W0501 03:44:17.141410   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:17.141417   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:17.141484   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:17.184143   69580 cri.go:89] found id: ""
	I0501 03:44:17.184167   69580 logs.go:276] 0 containers: []
	W0501 03:44:17.184179   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:17.184193   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:17.184246   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:17.224012   69580 cri.go:89] found id: ""
	I0501 03:44:17.224049   69580 logs.go:276] 0 containers: []
	W0501 03:44:17.224061   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:17.224069   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:17.224136   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:17.268185   69580 cri.go:89] found id: ""
	I0501 03:44:17.268216   69580 logs.go:276] 0 containers: []
	W0501 03:44:17.268224   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:17.268233   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:17.268248   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:17.351342   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:17.351392   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:17.398658   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:17.398689   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:17.452476   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:17.452517   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:17.468734   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:17.468771   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:17.558971   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:20.059342   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:20.075707   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:20.075791   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:20.114436   69580 cri.go:89] found id: ""
	I0501 03:44:20.114472   69580 logs.go:276] 0 containers: []
	W0501 03:44:20.114486   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:20.114495   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:20.114562   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:20.155607   69580 cri.go:89] found id: ""
	I0501 03:44:20.155638   69580 logs.go:276] 0 containers: []
	W0501 03:44:20.155649   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:20.155657   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:20.155715   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:20.198188   69580 cri.go:89] found id: ""
	I0501 03:44:20.198218   69580 logs.go:276] 0 containers: []
	W0501 03:44:20.198227   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:20.198234   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:20.198291   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:20.237183   69580 cri.go:89] found id: ""
	I0501 03:44:20.237213   69580 logs.go:276] 0 containers: []
	W0501 03:44:20.237223   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:20.237232   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:20.237286   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:20.279289   69580 cri.go:89] found id: ""
	I0501 03:44:20.279320   69580 logs.go:276] 0 containers: []
	W0501 03:44:20.279332   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:20.279341   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:20.279409   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:20.334066   69580 cri.go:89] found id: ""
	I0501 03:44:20.334091   69580 logs.go:276] 0 containers: []
	W0501 03:44:20.334112   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:20.334121   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:20.334181   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:20.385740   69580 cri.go:89] found id: ""
	I0501 03:44:20.385775   69580 logs.go:276] 0 containers: []
	W0501 03:44:20.385785   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:20.385796   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:20.385860   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:20.425151   69580 cri.go:89] found id: ""
	I0501 03:44:20.425176   69580 logs.go:276] 0 containers: []
	W0501 03:44:20.425183   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:20.425193   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:20.425214   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:20.472563   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:20.472605   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:20.526589   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:20.526626   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:20.541978   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:20.542013   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:20.619513   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:20.619540   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:20.619555   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:19.846208   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:22.345166   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:19.658986   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:22.156821   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:23.159267   68864 pod_ready.go:81] duration metric: took 4m0.009511824s for pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace to be "Ready" ...
	E0501 03:44:23.159296   68864 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0501 03:44:23.159308   68864 pod_ready.go:38] duration metric: took 4m7.423794373s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 03:44:23.159327   68864 api_server.go:52] waiting for apiserver process to appear ...
	I0501 03:44:23.159362   68864 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:23.159422   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:23.225563   68864 cri.go:89] found id: "a96815c49ac45b34c359d7171b30be5c25cee80fae08d947fc004d479693df00"
	I0501 03:44:23.225590   68864 cri.go:89] found id: ""
	I0501 03:44:23.225607   68864 logs.go:276] 1 containers: [a96815c49ac45b34c359d7171b30be5c25cee80fae08d947fc004d479693df00]
	I0501 03:44:23.225663   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:23.231542   68864 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:23.231598   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:23.290847   68864 cri.go:89] found id: "d109948ffbbddd2e38484d83d2260690afce3c51e16abff0fe8713e844bfdcb3"
	I0501 03:44:23.290871   68864 cri.go:89] found id: ""
	I0501 03:44:23.290878   68864 logs.go:276] 1 containers: [d109948ffbbddd2e38484d83d2260690afce3c51e16abff0fe8713e844bfdcb3]
	I0501 03:44:23.290926   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:23.295697   68864 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:23.295755   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:23.348625   68864 cri.go:89] found id: "e3c74de489af3d78a4b0cf620ed16e1fba7c9ce1c7021a15c5b4bd241b7a934a"
	I0501 03:44:23.348652   68864 cri.go:89] found id: ""
	I0501 03:44:23.348661   68864 logs.go:276] 1 containers: [e3c74de489af3d78a4b0cf620ed16e1fba7c9ce1c7021a15c5b4bd241b7a934a]
	I0501 03:44:23.348717   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:23.355801   68864 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:23.355896   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:23.409428   68864 cri.go:89] found id: "1813f35574f4f0398e7d482fe77c5d7f8490726666a277035bda0a5cff80af6e"
	I0501 03:44:23.409461   68864 cri.go:89] found id: ""
	I0501 03:44:23.409471   68864 logs.go:276] 1 containers: [1813f35574f4f0398e7d482fe77c5d7f8490726666a277035bda0a5cff80af6e]
	I0501 03:44:23.409530   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:23.416480   68864 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:23.416560   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:23.466642   68864 cri.go:89] found id: "94afdb03c382269209154b2e73edbf200662e89960c248ba775a0ef2b8fbe6b1"
	I0501 03:44:23.466672   68864 cri.go:89] found id: ""
	I0501 03:44:23.466681   68864 logs.go:276] 1 containers: [94afdb03c382269209154b2e73edbf200662e89960c248ba775a0ef2b8fbe6b1]
	I0501 03:44:23.466739   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:23.472831   68864 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:23.472906   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:23.524815   68864 cri.go:89] found id: "7e7158f7ff3922e97ba5ee97108acc1d0ba0350dff2f9aa56c2f71519108791c"
	I0501 03:44:23.524841   68864 cri.go:89] found id: ""
	I0501 03:44:23.524850   68864 logs.go:276] 1 containers: [7e7158f7ff3922e97ba5ee97108acc1d0ba0350dff2f9aa56c2f71519108791c]
	I0501 03:44:23.524902   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:23.532092   68864 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:23.532161   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:23.577262   68864 cri.go:89] found id: ""
	I0501 03:44:23.577292   68864 logs.go:276] 0 containers: []
	W0501 03:44:23.577305   68864 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:23.577312   68864 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0501 03:44:23.577374   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0501 03:44:23.623597   68864 cri.go:89] found id: "f9a8d2f0f9453969b410fb7f880f6cf4c7e6e2f3c17a687583906efe4181dd17"
	I0501 03:44:23.623626   68864 cri.go:89] found id: "aaae36261c5ce1accf3bfb5993be5a256a037a640d5f6f30de8384900dda06b2"
	I0501 03:44:23.623632   68864 cri.go:89] found id: ""
	I0501 03:44:23.623640   68864 logs.go:276] 2 containers: [f9a8d2f0f9453969b410fb7f880f6cf4c7e6e2f3c17a687583906efe4181dd17 aaae36261c5ce1accf3bfb5993be5a256a037a640d5f6f30de8384900dda06b2]
	I0501 03:44:23.623702   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:23.630189   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:23.635673   68864 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:23.635694   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:22.012084   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:24.511736   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:23.203031   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:23.219964   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:23.220043   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:23.264287   69580 cri.go:89] found id: ""
	I0501 03:44:23.264315   69580 logs.go:276] 0 containers: []
	W0501 03:44:23.264323   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:23.264328   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:23.264395   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:23.310337   69580 cri.go:89] found id: ""
	I0501 03:44:23.310366   69580 logs.go:276] 0 containers: []
	W0501 03:44:23.310375   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:23.310383   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:23.310461   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:23.364550   69580 cri.go:89] found id: ""
	I0501 03:44:23.364577   69580 logs.go:276] 0 containers: []
	W0501 03:44:23.364588   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:23.364596   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:23.364676   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:23.412620   69580 cri.go:89] found id: ""
	I0501 03:44:23.412647   69580 logs.go:276] 0 containers: []
	W0501 03:44:23.412657   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:23.412665   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:23.412726   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:23.461447   69580 cri.go:89] found id: ""
	I0501 03:44:23.461477   69580 logs.go:276] 0 containers: []
	W0501 03:44:23.461488   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:23.461496   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:23.461558   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:23.514868   69580 cri.go:89] found id: ""
	I0501 03:44:23.514896   69580 logs.go:276] 0 containers: []
	W0501 03:44:23.514915   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:23.514924   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:23.514984   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:23.559171   69580 cri.go:89] found id: ""
	I0501 03:44:23.559200   69580 logs.go:276] 0 containers: []
	W0501 03:44:23.559210   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:23.559218   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:23.559284   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:23.601713   69580 cri.go:89] found id: ""
	I0501 03:44:23.601740   69580 logs.go:276] 0 containers: []
	W0501 03:44:23.601749   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:23.601760   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:23.601772   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:23.656147   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:23.656187   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:23.673507   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:23.673545   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:23.771824   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:23.771846   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:23.771861   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:23.861128   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:23.861161   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:26.406507   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:26.421836   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:26.421894   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:26.462758   69580 cri.go:89] found id: ""
	I0501 03:44:26.462785   69580 logs.go:276] 0 containers: []
	W0501 03:44:26.462796   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:26.462804   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:26.462860   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:24.346534   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:26.847370   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:24.220047   68864 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:24.220087   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:24.279596   68864 logs.go:123] Gathering logs for kube-apiserver [a96815c49ac45b34c359d7171b30be5c25cee80fae08d947fc004d479693df00] ...
	I0501 03:44:24.279633   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a96815c49ac45b34c359d7171b30be5c25cee80fae08d947fc004d479693df00"
	I0501 03:44:24.336092   68864 logs.go:123] Gathering logs for etcd [d109948ffbbddd2e38484d83d2260690afce3c51e16abff0fe8713e844bfdcb3] ...
	I0501 03:44:24.336128   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d109948ffbbddd2e38484d83d2260690afce3c51e16abff0fe8713e844bfdcb3"
	I0501 03:44:24.396117   68864 logs.go:123] Gathering logs for storage-provisioner [f9a8d2f0f9453969b410fb7f880f6cf4c7e6e2f3c17a687583906efe4181dd17] ...
	I0501 03:44:24.396145   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9a8d2f0f9453969b410fb7f880f6cf4c7e6e2f3c17a687583906efe4181dd17"
	I0501 03:44:24.443608   68864 logs.go:123] Gathering logs for storage-provisioner [aaae36261c5ce1accf3bfb5993be5a256a037a640d5f6f30de8384900dda06b2] ...
	I0501 03:44:24.443644   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aaae36261c5ce1accf3bfb5993be5a256a037a640d5f6f30de8384900dda06b2"
	I0501 03:44:24.499533   68864 logs.go:123] Gathering logs for kube-controller-manager [7e7158f7ff3922e97ba5ee97108acc1d0ba0350dff2f9aa56c2f71519108791c] ...
	I0501 03:44:24.499560   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e7158f7ff3922e97ba5ee97108acc1d0ba0350dff2f9aa56c2f71519108791c"
	I0501 03:44:24.562990   68864 logs.go:123] Gathering logs for container status ...
	I0501 03:44:24.563028   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:24.622630   68864 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:24.622671   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:24.641106   68864 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:24.641145   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0501 03:44:24.781170   68864 logs.go:123] Gathering logs for coredns [e3c74de489af3d78a4b0cf620ed16e1fba7c9ce1c7021a15c5b4bd241b7a934a] ...
	I0501 03:44:24.781203   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3c74de489af3d78a4b0cf620ed16e1fba7c9ce1c7021a15c5b4bd241b7a934a"
	I0501 03:44:24.824616   68864 logs.go:123] Gathering logs for kube-scheduler [1813f35574f4f0398e7d482fe77c5d7f8490726666a277035bda0a5cff80af6e] ...
	I0501 03:44:24.824643   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1813f35574f4f0398e7d482fe77c5d7f8490726666a277035bda0a5cff80af6e"
	I0501 03:44:24.871956   68864 logs.go:123] Gathering logs for kube-proxy [94afdb03c382269209154b2e73edbf200662e89960c248ba775a0ef2b8fbe6b1] ...
	I0501 03:44:24.871992   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94afdb03c382269209154b2e73edbf200662e89960c248ba775a0ef2b8fbe6b1"
	I0501 03:44:27.424582   68864 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:27.447490   68864 api_server.go:72] duration metric: took 4m19.445111196s to wait for apiserver process to appear ...
	I0501 03:44:27.447522   68864 api_server.go:88] waiting for apiserver healthz status ...
	I0501 03:44:27.447555   68864 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:27.447601   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:27.494412   68864 cri.go:89] found id: "a96815c49ac45b34c359d7171b30be5c25cee80fae08d947fc004d479693df00"
	I0501 03:44:27.494437   68864 cri.go:89] found id: ""
	I0501 03:44:27.494445   68864 logs.go:276] 1 containers: [a96815c49ac45b34c359d7171b30be5c25cee80fae08d947fc004d479693df00]
	I0501 03:44:27.494490   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:27.503782   68864 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:27.503853   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:27.550991   68864 cri.go:89] found id: "d109948ffbbddd2e38484d83d2260690afce3c51e16abff0fe8713e844bfdcb3"
	I0501 03:44:27.551018   68864 cri.go:89] found id: ""
	I0501 03:44:27.551026   68864 logs.go:276] 1 containers: [d109948ffbbddd2e38484d83d2260690afce3c51e16abff0fe8713e844bfdcb3]
	I0501 03:44:27.551073   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:27.556919   68864 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:27.556983   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:27.606005   68864 cri.go:89] found id: "e3c74de489af3d78a4b0cf620ed16e1fba7c9ce1c7021a15c5b4bd241b7a934a"
	I0501 03:44:27.606033   68864 cri.go:89] found id: ""
	I0501 03:44:27.606042   68864 logs.go:276] 1 containers: [e3c74de489af3d78a4b0cf620ed16e1fba7c9ce1c7021a15c5b4bd241b7a934a]
	I0501 03:44:27.606100   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:27.611639   68864 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:27.611706   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:27.661151   68864 cri.go:89] found id: "1813f35574f4f0398e7d482fe77c5d7f8490726666a277035bda0a5cff80af6e"
	I0501 03:44:27.661172   68864 cri.go:89] found id: ""
	I0501 03:44:27.661179   68864 logs.go:276] 1 containers: [1813f35574f4f0398e7d482fe77c5d7f8490726666a277035bda0a5cff80af6e]
	I0501 03:44:27.661278   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:27.666443   68864 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:27.666514   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:27.712387   68864 cri.go:89] found id: "94afdb03c382269209154b2e73edbf200662e89960c248ba775a0ef2b8fbe6b1"
	I0501 03:44:27.712416   68864 cri.go:89] found id: ""
	I0501 03:44:27.712424   68864 logs.go:276] 1 containers: [94afdb03c382269209154b2e73edbf200662e89960c248ba775a0ef2b8fbe6b1]
	I0501 03:44:27.712480   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:27.717280   68864 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:27.717342   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:27.767124   68864 cri.go:89] found id: "7e7158f7ff3922e97ba5ee97108acc1d0ba0350dff2f9aa56c2f71519108791c"
	I0501 03:44:27.767154   68864 cri.go:89] found id: ""
	I0501 03:44:27.767163   68864 logs.go:276] 1 containers: [7e7158f7ff3922e97ba5ee97108acc1d0ba0350dff2f9aa56c2f71519108791c]
	I0501 03:44:27.767215   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:27.773112   68864 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:27.773183   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:27.829966   68864 cri.go:89] found id: ""
	I0501 03:44:27.829991   68864 logs.go:276] 0 containers: []
	W0501 03:44:27.829999   68864 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:27.830005   68864 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0501 03:44:27.830056   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0501 03:44:27.873391   68864 cri.go:89] found id: "f9a8d2f0f9453969b410fb7f880f6cf4c7e6e2f3c17a687583906efe4181dd17"
	I0501 03:44:27.873415   68864 cri.go:89] found id: "aaae36261c5ce1accf3bfb5993be5a256a037a640d5f6f30de8384900dda06b2"
	I0501 03:44:27.873419   68864 cri.go:89] found id: ""
	I0501 03:44:27.873426   68864 logs.go:276] 2 containers: [f9a8d2f0f9453969b410fb7f880f6cf4c7e6e2f3c17a687583906efe4181dd17 aaae36261c5ce1accf3bfb5993be5a256a037a640d5f6f30de8384900dda06b2]
	I0501 03:44:27.873473   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:27.878537   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:27.883518   68864 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:27.883543   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0501 03:44:28.012337   68864 logs.go:123] Gathering logs for kube-apiserver [a96815c49ac45b34c359d7171b30be5c25cee80fae08d947fc004d479693df00] ...
	I0501 03:44:28.012377   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a96815c49ac45b34c359d7171b30be5c25cee80fae08d947fc004d479693df00"
	I0501 03:44:28.063686   68864 logs.go:123] Gathering logs for etcd [d109948ffbbddd2e38484d83d2260690afce3c51e16abff0fe8713e844bfdcb3] ...
	I0501 03:44:28.063715   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d109948ffbbddd2e38484d83d2260690afce3c51e16abff0fe8713e844bfdcb3"
	I0501 03:44:28.116507   68864 logs.go:123] Gathering logs for storage-provisioner [aaae36261c5ce1accf3bfb5993be5a256a037a640d5f6f30de8384900dda06b2] ...
	I0501 03:44:28.116535   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aaae36261c5ce1accf3bfb5993be5a256a037a640d5f6f30de8384900dda06b2"
	I0501 03:44:28.165593   68864 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:28.165636   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:28.595278   68864 logs.go:123] Gathering logs for container status ...
	I0501 03:44:28.595333   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:28.645790   68864 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:28.645836   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:28.662952   68864 logs.go:123] Gathering logs for coredns [e3c74de489af3d78a4b0cf620ed16e1fba7c9ce1c7021a15c5b4bd241b7a934a] ...
	I0501 03:44:28.662984   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3c74de489af3d78a4b0cf620ed16e1fba7c9ce1c7021a15c5b4bd241b7a934a"
	I0501 03:44:28.710273   68864 logs.go:123] Gathering logs for kube-scheduler [1813f35574f4f0398e7d482fe77c5d7f8490726666a277035bda0a5cff80af6e] ...
	I0501 03:44:28.710302   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1813f35574f4f0398e7d482fe77c5d7f8490726666a277035bda0a5cff80af6e"
	I0501 03:44:28.761838   68864 logs.go:123] Gathering logs for kube-proxy [94afdb03c382269209154b2e73edbf200662e89960c248ba775a0ef2b8fbe6b1] ...
	I0501 03:44:28.761872   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94afdb03c382269209154b2e73edbf200662e89960c248ba775a0ef2b8fbe6b1"
	I0501 03:44:28.810775   68864 logs.go:123] Gathering logs for kube-controller-manager [7e7158f7ff3922e97ba5ee97108acc1d0ba0350dff2f9aa56c2f71519108791c] ...
	I0501 03:44:28.810808   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e7158f7ff3922e97ba5ee97108acc1d0ba0350dff2f9aa56c2f71519108791c"
	I0501 03:44:27.012119   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:29.510651   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:26.505067   69580 cri.go:89] found id: ""
	I0501 03:44:26.505098   69580 logs.go:276] 0 containers: []
	W0501 03:44:26.505110   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:26.505121   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:26.505182   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:26.544672   69580 cri.go:89] found id: ""
	I0501 03:44:26.544699   69580 logs.go:276] 0 containers: []
	W0501 03:44:26.544711   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:26.544717   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:26.544764   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:26.590579   69580 cri.go:89] found id: ""
	I0501 03:44:26.590605   69580 logs.go:276] 0 containers: []
	W0501 03:44:26.590614   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:26.590620   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:26.590670   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:26.637887   69580 cri.go:89] found id: ""
	I0501 03:44:26.637920   69580 logs.go:276] 0 containers: []
	W0501 03:44:26.637930   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:26.637939   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:26.637998   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:26.686778   69580 cri.go:89] found id: ""
	I0501 03:44:26.686807   69580 logs.go:276] 0 containers: []
	W0501 03:44:26.686815   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:26.686821   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:26.686882   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:26.729020   69580 cri.go:89] found id: ""
	I0501 03:44:26.729045   69580 logs.go:276] 0 containers: []
	W0501 03:44:26.729054   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:26.729060   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:26.729124   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:26.769022   69580 cri.go:89] found id: ""
	I0501 03:44:26.769043   69580 logs.go:276] 0 containers: []
	W0501 03:44:26.769051   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
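	The cri.go lines above enumerate control-plane containers by running sudo crictl ps -a --quiet --name=<component> for each component and recording the returned IDs. A rough Go sketch of that enumeration, assuming crictl is available on the host and keeping the output parsing simple, is:

	    // Sketch of the container enumeration the cri.go lines above perform:
	    // `crictl ps -a --quiet --name=<component>` for each control-plane piece.
	    // Requires crictl on the host; error handling and parsing are simplified.
	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	    )

	    func main() {
	        components := []string{"kube-apiserver", "etcd", "coredns",
	            "kube-scheduler", "kube-proxy", "kube-controller-manager"}
	        for _, name := range components {
	            out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
	                "--name="+name).Output()
	            if err != nil {
	                fmt.Printf("%s: crictl failed: %v\n", name, err)
	                continue
	            }
	            ids := strings.Fields(string(out))
	            fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
	        }
	    }

	Each empty result corresponds to a "No container was found matching ..." warning in the log.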
	I0501 03:44:26.769059   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:26.769073   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:26.854985   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:26.855011   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:26.855024   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:26.937031   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:26.937063   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:27.006267   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:27.006301   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:27.080503   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:27.080545   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:29.598176   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:29.614465   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:29.614523   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:29.662384   69580 cri.go:89] found id: ""
	I0501 03:44:29.662421   69580 logs.go:276] 0 containers: []
	W0501 03:44:29.662433   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:29.662439   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:29.662483   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:29.705262   69580 cri.go:89] found id: ""
	I0501 03:44:29.705286   69580 logs.go:276] 0 containers: []
	W0501 03:44:29.705295   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:29.705300   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:29.705345   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:29.752308   69580 cri.go:89] found id: ""
	I0501 03:44:29.752335   69580 logs.go:276] 0 containers: []
	W0501 03:44:29.752343   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:29.752349   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:29.752403   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:29.802702   69580 cri.go:89] found id: ""
	I0501 03:44:29.802729   69580 logs.go:276] 0 containers: []
	W0501 03:44:29.802741   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:29.802749   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:29.802814   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:29.854112   69580 cri.go:89] found id: ""
	I0501 03:44:29.854138   69580 logs.go:276] 0 containers: []
	W0501 03:44:29.854149   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:29.854157   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:29.854217   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:29.898447   69580 cri.go:89] found id: ""
	I0501 03:44:29.898470   69580 logs.go:276] 0 containers: []
	W0501 03:44:29.898480   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:29.898486   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:29.898545   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:29.938832   69580 cri.go:89] found id: ""
	I0501 03:44:29.938862   69580 logs.go:276] 0 containers: []
	W0501 03:44:29.938873   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:29.938881   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:29.938948   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:29.987697   69580 cri.go:89] found id: ""
	I0501 03:44:29.987721   69580 logs.go:276] 0 containers: []
	W0501 03:44:29.987730   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:29.987738   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:29.987753   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:30.042446   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:30.042473   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:30.095358   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:30.095389   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:30.110745   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:30.110782   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:30.190923   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:30.190951   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:30.190965   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:29.346013   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:31.347513   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:28.868838   68864 logs.go:123] Gathering logs for storage-provisioner [f9a8d2f0f9453969b410fb7f880f6cf4c7e6e2f3c17a687583906efe4181dd17] ...
	I0501 03:44:28.868876   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9a8d2f0f9453969b410fb7f880f6cf4c7e6e2f3c17a687583906efe4181dd17"
	I0501 03:44:28.912436   68864 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:28.912474   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:31.469456   68864 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I0501 03:44:31.478498   68864 api_server.go:279] https://192.168.50.218:8443/healthz returned 200:
	ok
	I0501 03:44:31.479838   68864 api_server.go:141] control plane version: v1.30.0
	I0501 03:44:31.479861   68864 api_server.go:131] duration metric: took 4.032331979s to wait for apiserver health ...
	I0501 03:44:31.479869   68864 system_pods.go:43] waiting for kube-system pods to appear ...
	I0501 03:44:31.479889   68864 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:31.479930   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:31.531068   68864 cri.go:89] found id: "a96815c49ac45b34c359d7171b30be5c25cee80fae08d947fc004d479693df00"
	I0501 03:44:31.531088   68864 cri.go:89] found id: ""
	I0501 03:44:31.531095   68864 logs.go:276] 1 containers: [a96815c49ac45b34c359d7171b30be5c25cee80fae08d947fc004d479693df00]
	I0501 03:44:31.531137   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:31.536216   68864 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:31.536292   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:31.584155   68864 cri.go:89] found id: "d109948ffbbddd2e38484d83d2260690afce3c51e16abff0fe8713e844bfdcb3"
	I0501 03:44:31.584183   68864 cri.go:89] found id: ""
	I0501 03:44:31.584194   68864 logs.go:276] 1 containers: [d109948ffbbddd2e38484d83d2260690afce3c51e16abff0fe8713e844bfdcb3]
	I0501 03:44:31.584250   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:31.589466   68864 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:31.589528   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:31.639449   68864 cri.go:89] found id: "e3c74de489af3d78a4b0cf620ed16e1fba7c9ce1c7021a15c5b4bd241b7a934a"
	I0501 03:44:31.639476   68864 cri.go:89] found id: ""
	I0501 03:44:31.639484   68864 logs.go:276] 1 containers: [e3c74de489af3d78a4b0cf620ed16e1fba7c9ce1c7021a15c5b4bd241b7a934a]
	I0501 03:44:31.639535   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:31.644684   68864 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:31.644750   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:31.702095   68864 cri.go:89] found id: "1813f35574f4f0398e7d482fe77c5d7f8490726666a277035bda0a5cff80af6e"
	I0501 03:44:31.702119   68864 cri.go:89] found id: ""
	I0501 03:44:31.702125   68864 logs.go:276] 1 containers: [1813f35574f4f0398e7d482fe77c5d7f8490726666a277035bda0a5cff80af6e]
	I0501 03:44:31.702173   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:31.707443   68864 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:31.707508   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:31.758582   68864 cri.go:89] found id: "94afdb03c382269209154b2e73edbf200662e89960c248ba775a0ef2b8fbe6b1"
	I0501 03:44:31.758603   68864 cri.go:89] found id: ""
	I0501 03:44:31.758610   68864 logs.go:276] 1 containers: [94afdb03c382269209154b2e73edbf200662e89960c248ba775a0ef2b8fbe6b1]
	I0501 03:44:31.758656   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:31.764261   68864 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:31.764325   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:31.813385   68864 cri.go:89] found id: "7e7158f7ff3922e97ba5ee97108acc1d0ba0350dff2f9aa56c2f71519108791c"
	I0501 03:44:31.813407   68864 cri.go:89] found id: ""
	I0501 03:44:31.813414   68864 logs.go:276] 1 containers: [7e7158f7ff3922e97ba5ee97108acc1d0ba0350dff2f9aa56c2f71519108791c]
	I0501 03:44:31.813457   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:31.818289   68864 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:31.818348   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:31.862788   68864 cri.go:89] found id: ""
	I0501 03:44:31.862814   68864 logs.go:276] 0 containers: []
	W0501 03:44:31.862824   68864 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:31.862832   68864 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0501 03:44:31.862890   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0501 03:44:31.912261   68864 cri.go:89] found id: "f9a8d2f0f9453969b410fb7f880f6cf4c7e6e2f3c17a687583906efe4181dd17"
	I0501 03:44:31.912284   68864 cri.go:89] found id: "aaae36261c5ce1accf3bfb5993be5a256a037a640d5f6f30de8384900dda06b2"
	I0501 03:44:31.912298   68864 cri.go:89] found id: ""
	I0501 03:44:31.912312   68864 logs.go:276] 2 containers: [f9a8d2f0f9453969b410fb7f880f6cf4c7e6e2f3c17a687583906efe4181dd17 aaae36261c5ce1accf3bfb5993be5a256a037a640d5f6f30de8384900dda06b2]
	I0501 03:44:31.912367   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:31.917696   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:31.922432   68864 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:31.922450   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:32.332797   68864 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:32.332836   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:32.396177   68864 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:32.396214   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0501 03:44:32.511915   68864 logs.go:123] Gathering logs for etcd [d109948ffbbddd2e38484d83d2260690afce3c51e16abff0fe8713e844bfdcb3] ...
	I0501 03:44:32.511953   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d109948ffbbddd2e38484d83d2260690afce3c51e16abff0fe8713e844bfdcb3"
	I0501 03:44:32.564447   68864 logs.go:123] Gathering logs for kube-proxy [94afdb03c382269209154b2e73edbf200662e89960c248ba775a0ef2b8fbe6b1] ...
	I0501 03:44:32.564475   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94afdb03c382269209154b2e73edbf200662e89960c248ba775a0ef2b8fbe6b1"
	I0501 03:44:32.610196   68864 logs.go:123] Gathering logs for kube-controller-manager [7e7158f7ff3922e97ba5ee97108acc1d0ba0350dff2f9aa56c2f71519108791c] ...
	I0501 03:44:32.610235   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e7158f7ff3922e97ba5ee97108acc1d0ba0350dff2f9aa56c2f71519108791c"
	I0501 03:44:32.665262   68864 logs.go:123] Gathering logs for storage-provisioner [aaae36261c5ce1accf3bfb5993be5a256a037a640d5f6f30de8384900dda06b2] ...
	I0501 03:44:32.665314   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aaae36261c5ce1accf3bfb5993be5a256a037a640d5f6f30de8384900dda06b2"
	I0501 03:44:32.707346   68864 logs.go:123] Gathering logs for container status ...
	I0501 03:44:32.707377   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:32.757693   68864 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:32.757726   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:32.775720   68864 logs.go:123] Gathering logs for kube-apiserver [a96815c49ac45b34c359d7171b30be5c25cee80fae08d947fc004d479693df00] ...
	I0501 03:44:32.775759   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a96815c49ac45b34c359d7171b30be5c25cee80fae08d947fc004d479693df00"
	I0501 03:44:32.831002   68864 logs.go:123] Gathering logs for coredns [e3c74de489af3d78a4b0cf620ed16e1fba7c9ce1c7021a15c5b4bd241b7a934a] ...
	I0501 03:44:32.831039   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3c74de489af3d78a4b0cf620ed16e1fba7c9ce1c7021a15c5b4bd241b7a934a"
	I0501 03:44:32.878365   68864 logs.go:123] Gathering logs for kube-scheduler [1813f35574f4f0398e7d482fe77c5d7f8490726666a277035bda0a5cff80af6e] ...
	I0501 03:44:32.878416   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1813f35574f4f0398e7d482fe77c5d7f8490726666a277035bda0a5cff80af6e"
	I0501 03:44:32.935752   68864 logs.go:123] Gathering logs for storage-provisioner [f9a8d2f0f9453969b410fb7f880f6cf4c7e6e2f3c17a687583906efe4181dd17] ...
	I0501 03:44:32.935791   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9a8d2f0f9453969b410fb7f880f6cf4c7e6e2f3c17a687583906efe4181dd17"
	I0501 03:44:35.492575   68864 system_pods.go:59] 8 kube-system pods found
	I0501 03:44:35.492603   68864 system_pods.go:61] "coredns-7db6d8ff4d-sjplt" [6701ee8e-0630-4332-b01c-26741ed3a7b7] Running
	I0501 03:44:35.492607   68864 system_pods.go:61] "etcd-embed-certs-277128" [744d7481-dd80-4435-90da-685fc16a76a4] Running
	I0501 03:44:35.492612   68864 system_pods.go:61] "kube-apiserver-embed-certs-277128" [2f1d0f09-b270-4cdc-af3c-e17e3a244bbf] Running
	I0501 03:44:35.492616   68864 system_pods.go:61] "kube-controller-manager-embed-certs-277128" [729aa138-dc0d-4616-97d4-1592453971b8] Running
	I0501 03:44:35.492619   68864 system_pods.go:61] "kube-proxy-phx7x" [56c0381e-c140-4f69-bbe4-09d393db8b23] Running
	I0501 03:44:35.492621   68864 system_pods.go:61] "kube-scheduler-embed-certs-277128" [cc56e9bc-7f21-4f57-a65a-73a10ffd7145] Running
	I0501 03:44:35.492627   68864 system_pods.go:61] "metrics-server-569cc877fc-p8j59" [f8ad6c24-dd5d-4515-9052-c9aca7412b55] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0501 03:44:35.492631   68864 system_pods.go:61] "storage-provisioner" [785be666-58d5-4b9d-92fd-bcacdbdebeb2] Running
	I0501 03:44:35.492638   68864 system_pods.go:74] duration metric: took 4.012764043s to wait for pod list to return data ...
	I0501 03:44:35.492644   68864 default_sa.go:34] waiting for default service account to be created ...
	I0501 03:44:35.494580   68864 default_sa.go:45] found service account: "default"
	I0501 03:44:35.494599   68864 default_sa.go:55] duration metric: took 1.949121ms for default service account to be created ...
	I0501 03:44:35.494606   68864 system_pods.go:116] waiting for k8s-apps to be running ...
	I0501 03:44:35.499484   68864 system_pods.go:86] 8 kube-system pods found
	I0501 03:44:35.499507   68864 system_pods.go:89] "coredns-7db6d8ff4d-sjplt" [6701ee8e-0630-4332-b01c-26741ed3a7b7] Running
	I0501 03:44:35.499514   68864 system_pods.go:89] "etcd-embed-certs-277128" [744d7481-dd80-4435-90da-685fc16a76a4] Running
	I0501 03:44:35.499519   68864 system_pods.go:89] "kube-apiserver-embed-certs-277128" [2f1d0f09-b270-4cdc-af3c-e17e3a244bbf] Running
	I0501 03:44:35.499523   68864 system_pods.go:89] "kube-controller-manager-embed-certs-277128" [729aa138-dc0d-4616-97d4-1592453971b8] Running
	I0501 03:44:35.499526   68864 system_pods.go:89] "kube-proxy-phx7x" [56c0381e-c140-4f69-bbe4-09d393db8b23] Running
	I0501 03:44:35.499531   68864 system_pods.go:89] "kube-scheduler-embed-certs-277128" [cc56e9bc-7f21-4f57-a65a-73a10ffd7145] Running
	I0501 03:44:35.499537   68864 system_pods.go:89] "metrics-server-569cc877fc-p8j59" [f8ad6c24-dd5d-4515-9052-c9aca7412b55] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0501 03:44:35.499544   68864 system_pods.go:89] "storage-provisioner" [785be666-58d5-4b9d-92fd-bcacdbdebeb2] Running
	I0501 03:44:35.499550   68864 system_pods.go:126] duration metric: took 4.939659ms to wait for k8s-apps to be running ...
	I0501 03:44:35.499559   68864 system_svc.go:44] waiting for kubelet service to be running ....
	I0501 03:44:35.499599   68864 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 03:44:35.518471   68864 system_svc.go:56] duration metric: took 18.902776ms WaitForService to wait for kubelet
	I0501 03:44:35.518498   68864 kubeadm.go:576] duration metric: took 4m27.516125606s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0501 03:44:35.518521   68864 node_conditions.go:102] verifying NodePressure condition ...
	I0501 03:44:35.521936   68864 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 03:44:35.521956   68864 node_conditions.go:123] node cpu capacity is 2
	I0501 03:44:35.521966   68864 node_conditions.go:105] duration metric: took 3.439997ms to run NodePressure ...
	I0501 03:44:35.521976   68864 start.go:240] waiting for startup goroutines ...
	I0501 03:44:35.521983   68864 start.go:245] waiting for cluster config update ...
	I0501 03:44:35.521994   68864 start.go:254] writing updated cluster config ...
	I0501 03:44:35.522311   68864 ssh_runner.go:195] Run: rm -f paused
	I0501 03:44:35.572130   68864 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0501 03:44:35.573709   68864 out.go:177] * Done! kubectl is now configured to use "embed-certs-277128" cluster and "default" namespace by default
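	The api_server.go lines above poll https://192.168.50.218:8443/healthz until it returns 200 "ok" before the pod and service-account checks proceed. A stripped-down sketch of that kind of poll, using the endpoint from the log and skipping TLS verification (an assumption for illustration, not necessarily how minikube handles certificates), is:

	    // Minimal healthz poll similar in spirit to the api_server.go lines above.
	    // Illustrative only; endpoint taken from the log, TLS verification skipped.
	    package main

	    import (
	        "crypto/tls"
	        "fmt"
	        "io"
	        "net/http"
	        "time"
	    )

	    func main() {
	        client := &http.Client{
	            Timeout: 5 * time.Second,
	            Transport: &http.Transport{
	                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	            },
	        }
	        for i := 0; i < 30; i++ {
	            resp, err := client.Get("https://192.168.50.218:8443/healthz")
	            if err == nil {
	                body, _ := io.ReadAll(resp.Body)
	                resp.Body.Close()
	                fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
	                if resp.StatusCode == http.StatusOK {
	                    return
	                }
	            }
	            time.Sleep(2 * time.Second)
	        }
	    }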
	I0501 03:44:31.512755   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:34.011892   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:32.772208   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:32.791063   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:32.791145   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:32.856883   69580 cri.go:89] found id: ""
	I0501 03:44:32.856909   69580 logs.go:276] 0 containers: []
	W0501 03:44:32.856920   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:32.856927   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:32.856988   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:32.928590   69580 cri.go:89] found id: ""
	I0501 03:44:32.928625   69580 logs.go:276] 0 containers: []
	W0501 03:44:32.928637   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:32.928644   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:32.928707   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:32.978068   69580 cri.go:89] found id: ""
	I0501 03:44:32.978100   69580 logs.go:276] 0 containers: []
	W0501 03:44:32.978113   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:32.978120   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:32.978184   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:33.018873   69580 cri.go:89] found id: ""
	I0501 03:44:33.018897   69580 logs.go:276] 0 containers: []
	W0501 03:44:33.018905   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:33.018911   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:33.018970   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:33.060633   69580 cri.go:89] found id: ""
	I0501 03:44:33.060661   69580 logs.go:276] 0 containers: []
	W0501 03:44:33.060673   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:33.060681   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:33.060735   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:33.099862   69580 cri.go:89] found id: ""
	I0501 03:44:33.099891   69580 logs.go:276] 0 containers: []
	W0501 03:44:33.099900   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:33.099906   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:33.099953   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:33.139137   69580 cri.go:89] found id: ""
	I0501 03:44:33.139163   69580 logs.go:276] 0 containers: []
	W0501 03:44:33.139171   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:33.139177   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:33.139224   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:33.178800   69580 cri.go:89] found id: ""
	I0501 03:44:33.178826   69580 logs.go:276] 0 containers: []
	W0501 03:44:33.178834   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:33.178842   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:33.178856   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:33.233811   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:33.233842   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:33.248931   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:33.248958   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:33.325530   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:33.325551   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:33.325563   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:33.412071   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:33.412103   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:35.954706   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:35.970256   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:35.970333   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:36.010417   69580 cri.go:89] found id: ""
	I0501 03:44:36.010443   69580 logs.go:276] 0 containers: []
	W0501 03:44:36.010452   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:36.010459   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:36.010524   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:36.051571   69580 cri.go:89] found id: ""
	I0501 03:44:36.051600   69580 logs.go:276] 0 containers: []
	W0501 03:44:36.051611   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:36.051619   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:36.051683   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:36.092148   69580 cri.go:89] found id: ""
	I0501 03:44:36.092176   69580 logs.go:276] 0 containers: []
	W0501 03:44:36.092185   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:36.092190   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:36.092247   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:36.136243   69580 cri.go:89] found id: ""
	I0501 03:44:36.136282   69580 logs.go:276] 0 containers: []
	W0501 03:44:36.136290   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:36.136296   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:36.136342   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:36.178154   69580 cri.go:89] found id: ""
	I0501 03:44:36.178183   69580 logs.go:276] 0 containers: []
	W0501 03:44:36.178193   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:36.178200   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:36.178264   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:36.217050   69580 cri.go:89] found id: ""
	I0501 03:44:36.217077   69580 logs.go:276] 0 containers: []
	W0501 03:44:36.217089   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:36.217096   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:36.217172   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:36.260438   69580 cri.go:89] found id: ""
	I0501 03:44:36.260470   69580 logs.go:276] 0 containers: []
	W0501 03:44:36.260481   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:36.260488   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:36.260546   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:36.303410   69580 cri.go:89] found id: ""
	I0501 03:44:36.303436   69580 logs.go:276] 0 containers: []
	W0501 03:44:36.303448   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:36.303459   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:36.303475   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:36.390427   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:36.390468   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:36.433631   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:36.433663   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:33.845863   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:35.847896   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:36.012448   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:38.510722   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:39.005005   69237 pod_ready.go:81] duration metric: took 4m0.000783466s for pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace to be "Ready" ...
	E0501 03:44:39.005036   69237 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace to be "Ready" (will not retry!)
	I0501 03:44:39.005057   69237 pod_ready.go:38] duration metric: took 4m8.020392425s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 03:44:39.005089   69237 kubeadm.go:591] duration metric: took 4m17.941775807s to restartPrimaryControlPlane
	W0501 03:44:39.005175   69237 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0501 03:44:39.005208   69237 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
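	The pod_ready.go lines above show a 4m0s wait for the metrics-server pod's "Ready" condition that eventually times out, after which the control plane is reset. For reference, a simplified client-go check of a pod's Ready condition (kubeconfig path and pod name taken from the log and only valid on the node itself; everything else is an assumption, not minikube's implementation) looks like:

	    // Sketch of checking a pod's Ready condition with client-go, similar in
	    // spirit to the pod_ready.go wait above. Paths and names are placeholders.
	    package main

	    import (
	        "context"
	        "fmt"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    func podReady(pod *corev1.Pod) bool {
	        for _, c := range pod.Status.Conditions {
	            if c.Type == corev1.PodReady {
	                return c.Status == corev1.ConditionTrue
	            }
	        }
	        return false
	    }

	    func main() {
	        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	        if err != nil {
	            panic(err)
	        }
	        cs, err := kubernetes.NewForConfig(cfg)
	        if err != nil {
	            panic(err)
	        }
	        pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
	            "metrics-server-569cc877fc-2btjj", metav1.GetOptions{})
	        if err != nil {
	            panic(err)
	        }
	        fmt.Println("Ready:", podReady(pod))
	    }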
	I0501 03:44:36.486334   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:36.486365   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:36.502145   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:36.502175   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:36.586733   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:39.087607   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:39.102475   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:39.102552   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:39.141916   69580 cri.go:89] found id: ""
	I0501 03:44:39.141947   69580 logs.go:276] 0 containers: []
	W0501 03:44:39.141958   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:39.141964   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:39.142012   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:39.188472   69580 cri.go:89] found id: ""
	I0501 03:44:39.188501   69580 logs.go:276] 0 containers: []
	W0501 03:44:39.188512   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:39.188520   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:39.188582   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:39.243282   69580 cri.go:89] found id: ""
	I0501 03:44:39.243306   69580 logs.go:276] 0 containers: []
	W0501 03:44:39.243313   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:39.243318   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:39.243377   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:39.288254   69580 cri.go:89] found id: ""
	I0501 03:44:39.288284   69580 logs.go:276] 0 containers: []
	W0501 03:44:39.288296   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:39.288304   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:39.288379   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:39.330846   69580 cri.go:89] found id: ""
	I0501 03:44:39.330879   69580 logs.go:276] 0 containers: []
	W0501 03:44:39.330892   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:39.330901   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:39.330969   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:39.377603   69580 cri.go:89] found id: ""
	I0501 03:44:39.377632   69580 logs.go:276] 0 containers: []
	W0501 03:44:39.377642   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:39.377649   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:39.377710   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:39.421545   69580 cri.go:89] found id: ""
	I0501 03:44:39.421574   69580 logs.go:276] 0 containers: []
	W0501 03:44:39.421585   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:39.421594   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:39.421653   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:39.463394   69580 cri.go:89] found id: ""
	I0501 03:44:39.463424   69580 logs.go:276] 0 containers: []
	W0501 03:44:39.463435   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:39.463447   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:39.463464   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:39.552196   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:39.552218   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:39.552229   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:39.648509   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:39.648549   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:39.702829   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:39.702866   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:39.757712   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:39.757746   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:38.347120   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:40.355310   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:42.847346   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:42.273443   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:42.289788   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:42.289856   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:42.336802   69580 cri.go:89] found id: ""
	I0501 03:44:42.336833   69580 logs.go:276] 0 containers: []
	W0501 03:44:42.336846   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:42.336854   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:42.336919   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:42.387973   69580 cri.go:89] found id: ""
	I0501 03:44:42.388017   69580 logs.go:276] 0 containers: []
	W0501 03:44:42.388028   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:42.388036   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:42.388103   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:42.444866   69580 cri.go:89] found id: ""
	I0501 03:44:42.444895   69580 logs.go:276] 0 containers: []
	W0501 03:44:42.444906   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:42.444914   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:42.444987   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:42.493647   69580 cri.go:89] found id: ""
	I0501 03:44:42.493676   69580 logs.go:276] 0 containers: []
	W0501 03:44:42.493686   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:42.493692   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:42.493748   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:42.535046   69580 cri.go:89] found id: ""
	I0501 03:44:42.535075   69580 logs.go:276] 0 containers: []
	W0501 03:44:42.535086   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:42.535093   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:42.535161   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:42.579453   69580 cri.go:89] found id: ""
	I0501 03:44:42.579486   69580 logs.go:276] 0 containers: []
	W0501 03:44:42.579499   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:42.579507   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:42.579568   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:42.621903   69580 cri.go:89] found id: ""
	I0501 03:44:42.621931   69580 logs.go:276] 0 containers: []
	W0501 03:44:42.621942   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:42.621950   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:42.622009   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:42.666202   69580 cri.go:89] found id: ""
	I0501 03:44:42.666232   69580 logs.go:276] 0 containers: []
	W0501 03:44:42.666243   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:42.666257   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:42.666272   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:42.736032   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:42.736078   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:42.750773   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:42.750799   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:42.836942   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:42.836975   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:42.836997   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:42.930660   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:42.930695   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:45.479619   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:45.495112   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:45.495174   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:45.536693   69580 cri.go:89] found id: ""
	I0501 03:44:45.536722   69580 logs.go:276] 0 containers: []
	W0501 03:44:45.536730   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:45.536737   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:45.536785   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:45.577838   69580 cri.go:89] found id: ""
	I0501 03:44:45.577866   69580 logs.go:276] 0 containers: []
	W0501 03:44:45.577876   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:45.577894   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:45.577958   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:45.615842   69580 cri.go:89] found id: ""
	I0501 03:44:45.615868   69580 logs.go:276] 0 containers: []
	W0501 03:44:45.615879   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:45.615892   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:45.615953   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:45.654948   69580 cri.go:89] found id: ""
	I0501 03:44:45.654972   69580 logs.go:276] 0 containers: []
	W0501 03:44:45.654980   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:45.654986   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:45.655042   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:45.695104   69580 cri.go:89] found id: ""
	I0501 03:44:45.695129   69580 logs.go:276] 0 containers: []
	W0501 03:44:45.695138   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:45.695145   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:45.695212   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:45.737609   69580 cri.go:89] found id: ""
	I0501 03:44:45.737633   69580 logs.go:276] 0 containers: []
	W0501 03:44:45.737641   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:45.737647   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:45.737693   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:45.778655   69580 cri.go:89] found id: ""
	I0501 03:44:45.778685   69580 logs.go:276] 0 containers: []
	W0501 03:44:45.778696   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:45.778702   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:45.778781   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:45.819430   69580 cri.go:89] found id: ""
	I0501 03:44:45.819452   69580 logs.go:276] 0 containers: []
	W0501 03:44:45.819460   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:45.819469   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:45.819485   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:45.875879   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:45.875911   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:45.892035   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:45.892062   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:45.975803   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:45.975836   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:45.975853   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:46.058183   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:46.058222   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
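The repeated "listing CRI containers in root ... / No container was found matching ..." pairs above come from shelling out to crictl with a name filter and checking whether any container IDs come back. A minimal local sketch of that probe is below (illustrative only, not minikube's actual cri.go code; it assumes crictl is installed and the caller can sudo):

// crictl_check.go - sketch of the "listing CRI containers" probe seen above:
// run `crictl ps -a --quiet --name=<name>` and report whether any IDs were found.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	// Container names taken from the log above.
	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := listContainerIDs(name)
		if err != nil {
			fmt.Printf("error listing %q: %v\n", name, err)
			continue
		}
		if len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", name)
		} else {
			fmt.Printf("%q: %d container(s)\n", name, len(ids))
		}
	}
}

An empty result for every control-plane component, as seen here, is what pushes minikube to fall back to gathering kubelet, dmesg, CRI-O and container-status output for diagnostics.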
	I0501 03:44:45.345465   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:47.346947   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:48.604991   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:48.621226   69580 kubeadm.go:591] duration metric: took 4m4.888665162s to restartPrimaryControlPlane
	W0501 03:44:48.621351   69580 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0501 03:44:48.621407   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0501 03:44:49.654748   69580 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.033320548s)
	I0501 03:44:49.654838   69580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 03:44:49.671511   69580 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0501 03:44:49.684266   69580 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0501 03:44:49.697079   69580 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0501 03:44:49.697101   69580 kubeadm.go:156] found existing configuration files:
	
	I0501 03:44:49.697159   69580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0501 03:44:49.710609   69580 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0501 03:44:49.710692   69580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0501 03:44:49.723647   69580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0501 03:44:49.736855   69580 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0501 03:44:49.737023   69580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0501 03:44:49.748842   69580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0501 03:44:49.760856   69580 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0501 03:44:49.760923   69580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0501 03:44:49.772685   69580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0501 03:44:49.784035   69580 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0501 03:44:49.784114   69580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
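The sequence above is the stale-kubeconfig cleanup: each /etc/kubernetes/*.conf is grepped for the expected control-plane endpoint and deleted when the endpoint is absent (here the files are missing entirely, so each grep exits with status 2 and the rm is effectively a no-op). A minimal sketch of the same check-then-remove pattern, run locally rather than over SSH as minikube does:

// stale_conf_cleanup.go - keep a kubeconfig only if it references the expected
// control-plane endpoint; remove it otherwise. Illustrative only.
package main

import (
	"fmt"
	"os"
	"strings"
)

func cleanupStaleConf(endpoint string, paths []string) {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err != nil {
			// File missing (as in the log above): nothing to clean up.
			fmt.Printf("skipping %s: %v\n", p, err)
			continue
		}
		if !strings.Contains(string(data), endpoint) {
			// Endpoint not referenced: treat the file as stale and remove it.
			fmt.Printf("removing stale %s\n", p)
			os.Remove(p)
		}
	}
}

func main() {
	// Endpoint and paths as they appear in the log above.
	cleanupStaleConf("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}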
	I0501 03:44:49.795699   69580 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0501 03:44:49.869387   69580 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0501 03:44:49.869481   69580 kubeadm.go:309] [preflight] Running pre-flight checks
	I0501 03:44:50.028858   69580 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0501 03:44:50.028999   69580 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0501 03:44:50.029182   69580 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0501 03:44:50.242773   69580 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0501 03:44:50.244816   69580 out.go:204]   - Generating certificates and keys ...
	I0501 03:44:50.244918   69580 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0501 03:44:50.245008   69580 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0501 03:44:50.245111   69580 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0501 03:44:50.245216   69580 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0501 03:44:50.245331   69580 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0501 03:44:50.245424   69580 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0501 03:44:50.245490   69580 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0501 03:44:50.245556   69580 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0501 03:44:50.245629   69580 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0501 03:44:50.245724   69580 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0501 03:44:50.245784   69580 kubeadm.go:309] [certs] Using the existing "sa" key
	I0501 03:44:50.245877   69580 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0501 03:44:50.501955   69580 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0501 03:44:50.683749   69580 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0501 03:44:50.905745   69580 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0501 03:44:51.005912   69580 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0501 03:44:51.025470   69580 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0501 03:44:51.029411   69580 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0501 03:44:51.029859   69580 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0501 03:44:51.181498   69580 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0501 03:44:51.183222   69580 out.go:204]   - Booting up control plane ...
	I0501 03:44:51.183334   69580 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0501 03:44:51.200394   69580 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0501 03:44:51.201612   69580 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0501 03:44:51.202445   69580 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0501 03:44:51.204681   69580 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0501 03:44:49.847629   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:52.345383   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:54.346479   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:56.348560   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:58.846207   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:01.345790   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:03.847746   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:06.346172   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:08.346693   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:10.846797   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:11.778923   69237 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.773690939s)
	I0501 03:45:11.778992   69237 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 03:45:11.796337   69237 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0501 03:45:11.810167   69237 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0501 03:45:11.822425   69237 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0501 03:45:11.822457   69237 kubeadm.go:156] found existing configuration files:
	
	I0501 03:45:11.822514   69237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0501 03:45:11.834539   69237 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0501 03:45:11.834596   69237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0501 03:45:11.848336   69237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0501 03:45:11.860459   69237 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0501 03:45:11.860535   69237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0501 03:45:11.873903   69237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0501 03:45:11.887353   69237 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0501 03:45:11.887427   69237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0501 03:45:11.900805   69237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0501 03:45:11.912512   69237 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0501 03:45:11.912572   69237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0501 03:45:11.924870   69237 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0501 03:45:12.149168   69237 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0501 03:45:13.348651   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:15.847148   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:20.882309   69237 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0501 03:45:20.882382   69237 kubeadm.go:309] [preflight] Running pre-flight checks
	I0501 03:45:20.882472   69237 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0501 03:45:20.882602   69237 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0501 03:45:20.882741   69237 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0501 03:45:20.882836   69237 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0501 03:45:20.884733   69237 out.go:204]   - Generating certificates and keys ...
	I0501 03:45:20.884837   69237 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0501 03:45:20.884894   69237 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0501 03:45:20.884996   69237 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0501 03:45:20.885106   69237 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0501 03:45:20.885209   69237 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0501 03:45:20.885316   69237 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0501 03:45:20.885400   69237 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0501 03:45:20.885483   69237 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0501 03:45:20.885590   69237 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0501 03:45:20.885702   69237 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0501 03:45:20.885759   69237 kubeadm.go:309] [certs] Using the existing "sa" key
	I0501 03:45:20.885838   69237 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0501 03:45:20.885915   69237 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0501 03:45:20.885996   69237 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0501 03:45:20.886074   69237 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0501 03:45:20.886164   69237 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0501 03:45:20.886233   69237 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0501 03:45:20.886362   69237 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0501 03:45:20.886492   69237 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0501 03:45:20.888113   69237 out.go:204]   - Booting up control plane ...
	I0501 03:45:20.888194   69237 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0501 03:45:20.888264   69237 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0501 03:45:20.888329   69237 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0501 03:45:20.888458   69237 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0501 03:45:20.888570   69237 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0501 03:45:20.888627   69237 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0501 03:45:20.888777   69237 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0501 03:45:20.888863   69237 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0501 03:45:20.888964   69237 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 502.867448ms
	I0501 03:45:20.889080   69237 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0501 03:45:20.889177   69237 kubeadm.go:309] [api-check] The API server is healthy after 5.503139909s
	I0501 03:45:20.889335   69237 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0501 03:45:20.889506   69237 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0501 03:45:20.889579   69237 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0501 03:45:20.889817   69237 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-715118 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0501 03:45:20.889868   69237 kubeadm.go:309] [bootstrap-token] Using token: 2vhvw6.gdesonhc2twrukzt
	I0501 03:45:20.892253   69237 out.go:204]   - Configuring RBAC rules ...
	I0501 03:45:20.892395   69237 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0501 03:45:20.892475   69237 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0501 03:45:20.892652   69237 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0501 03:45:20.892812   69237 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0501 03:45:20.892931   69237 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0501 03:45:20.893040   69237 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0501 03:45:20.893201   69237 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0501 03:45:20.893264   69237 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0501 03:45:20.893309   69237 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0501 03:45:20.893319   69237 kubeadm.go:309] 
	I0501 03:45:20.893367   69237 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0501 03:45:20.893373   69237 kubeadm.go:309] 
	I0501 03:45:20.893450   69237 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0501 03:45:20.893458   69237 kubeadm.go:309] 
	I0501 03:45:20.893481   69237 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0501 03:45:20.893544   69237 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0501 03:45:20.893591   69237 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0501 03:45:20.893597   69237 kubeadm.go:309] 
	I0501 03:45:20.893643   69237 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0501 03:45:20.893650   69237 kubeadm.go:309] 
	I0501 03:45:20.893685   69237 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0501 03:45:20.893690   69237 kubeadm.go:309] 
	I0501 03:45:20.893741   69237 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0501 03:45:20.893805   69237 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0501 03:45:20.893858   69237 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0501 03:45:20.893863   69237 kubeadm.go:309] 
	I0501 03:45:20.893946   69237 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0501 03:45:20.894035   69237 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0501 03:45:20.894045   69237 kubeadm.go:309] 
	I0501 03:45:20.894139   69237 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token 2vhvw6.gdesonhc2twrukzt \
	I0501 03:45:20.894267   69237 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:bd94cc6acfb95fe36f70b12c6a1f2980601b8235e7c4dea65cebdf35eb514754 \
	I0501 03:45:20.894294   69237 kubeadm.go:309] 	--control-plane 
	I0501 03:45:20.894301   69237 kubeadm.go:309] 
	I0501 03:45:20.894368   69237 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0501 03:45:20.894375   69237 kubeadm.go:309] 
	I0501 03:45:20.894498   69237 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token 2vhvw6.gdesonhc2twrukzt \
	I0501 03:45:20.894605   69237 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:bd94cc6acfb95fe36f70b12c6a1f2980601b8235e7c4dea65cebdf35eb514754 
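The --discovery-token-ca-cert-hash value in the join command above is the SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info. It can be recomputed on the control-plane node from /etc/kubernetes/pki/ca.crt, for example with a short Go program like the sketch below (standard kubeadm certificate layout assumed):

// ca_hash.go - recompute the discovery-token-ca-cert-hash printed above.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// Hash the DER-encoded Subject Public Key Info, as kubeadm does.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
}

The output should match the sha256:bd94cc6a... value embedded in the join command when run against the same CA.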
	I0501 03:45:20.894616   69237 cni.go:84] Creating CNI manager for ""
	I0501 03:45:20.894623   69237 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0501 03:45:20.896151   69237 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0501 03:45:18.346276   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:20.846958   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:20.897443   69237 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0501 03:45:20.911935   69237 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
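The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist above is the bridge CNI configuration; its contents are not printed in the log. For orientation only, a representative conflist for the upstream CNI bridge plugin looks like the following (illustrative subnet and values, not the exact file minikube wrote):

{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}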
	I0501 03:45:20.941109   69237 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0501 03:45:20.941193   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:20.941249   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-715118 minikube.k8s.io/updated_at=2024_05_01T03_45_20_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e minikube.k8s.io/name=default-k8s-diff-port-715118 minikube.k8s.io/primary=true
	I0501 03:45:20.971300   69237 ops.go:34] apiserver oom_adj: -16
	I0501 03:45:21.143744   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:21.643800   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:22.144096   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:22.643852   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:23.144726   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:23.644174   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:24.143735   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:24.643947   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:25.143871   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:25.644557   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:23.345774   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:25.346189   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:27.348026   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:26.144443   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:26.643761   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:27.144691   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:27.644445   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:28.144006   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:28.643904   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:29.144077   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:29.644690   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:30.144692   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:30.644604   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:31.207553   69580 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0501 03:45:31.208328   69580 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0501 03:45:31.208516   69580 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0501 03:45:29.845785   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:32.348020   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:31.144517   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:31.644673   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:32.143793   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:32.644380   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:33.144729   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:33.644415   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:33.752056   69237 kubeadm.go:1107] duration metric: took 12.810918189s to wait for elevateKubeSystemPrivileges
	W0501 03:45:33.752096   69237 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0501 03:45:33.752105   69237 kubeadm.go:393] duration metric: took 5m12.753721662s to StartCluster
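The burst of identical "kubectl get sa default" runs above is minikube polling, at roughly 500ms intervals, until the default service account exists so that the cluster-admin role binding can take effect; the log reports the wait took about 12.8s here. A minimal sketch of that wait loop (assumes kubectl on PATH; minikube itself invokes the versioned binary under /var/lib/minikube/binaries over SSH):

// wait_sa.go - retry `kubectl get sa default` until it succeeds or times out.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

func waitForDefaultServiceAccount(kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil // the default service account exists; RBAC setup can proceed
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	}
	return fmt.Errorf("default service account not ready after %s", timeout)
}

func main() {
	// Kubeconfig path as used in the log above.
	if err := waitForDefaultServiceAccount("/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
		log.Fatal(err)
	}
	fmt.Println("default service account is ready")
}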
	I0501 03:45:33.752124   69237 settings.go:142] acquiring lock: {Name:mkcfe08bcadd45d99c00ed1eaf4e9c226a489fe2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:45:33.752219   69237 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18779-13391/kubeconfig
	I0501 03:45:33.753829   69237 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/kubeconfig: {Name:mk5d1131557f885800877622151c0915337cb23a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:45:33.754094   69237 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.158 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0501 03:45:33.755764   69237 out.go:177] * Verifying Kubernetes components...
	I0501 03:45:33.754178   69237 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0501 03:45:33.754310   69237 config.go:182] Loaded profile config "default-k8s-diff-port-715118": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 03:45:33.757144   69237 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:45:33.757151   69237 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-715118"
	I0501 03:45:33.757172   69237 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-715118"
	I0501 03:45:33.757189   69237 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-715118"
	I0501 03:45:33.757213   69237 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-715118"
	I0501 03:45:33.757221   69237 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-715118"
	W0501 03:45:33.757230   69237 addons.go:243] addon metrics-server should already be in state true
	I0501 03:45:33.757264   69237 host.go:66] Checking if "default-k8s-diff-port-715118" exists ...
	I0501 03:45:33.757180   69237 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-715118"
	W0501 03:45:33.757295   69237 addons.go:243] addon storage-provisioner should already be in state true
	I0501 03:45:33.757355   69237 host.go:66] Checking if "default-k8s-diff-port-715118" exists ...
	I0501 03:45:33.757596   69237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:45:33.757624   69237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:45:33.757630   69237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:45:33.757762   69237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:45:33.757808   69237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:45:33.757662   69237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:45:33.773846   69237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44313
	I0501 03:45:33.774442   69237 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:45:33.775002   69237 main.go:141] libmachine: Using API Version  1
	I0501 03:45:33.775024   69237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:45:33.775438   69237 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:45:33.776086   69237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:45:33.776117   69237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:45:33.777715   69237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37079
	I0501 03:45:33.777835   69237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38097
	I0501 03:45:33.778170   69237 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:45:33.778240   69237 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:45:33.778701   69237 main.go:141] libmachine: Using API Version  1
	I0501 03:45:33.778734   69237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:45:33.778778   69237 main.go:141] libmachine: Using API Version  1
	I0501 03:45:33.778795   69237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:45:33.779142   69237 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:45:33.779150   69237 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:45:33.779427   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetState
	I0501 03:45:33.779721   69237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:45:33.779769   69237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:45:33.783493   69237 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-715118"
	W0501 03:45:33.783519   69237 addons.go:243] addon default-storageclass should already be in state true
	I0501 03:45:33.783551   69237 host.go:66] Checking if "default-k8s-diff-port-715118" exists ...
	I0501 03:45:33.783922   69237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:45:33.783965   69237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:45:33.795373   69237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43491
	I0501 03:45:33.795988   69237 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:45:33.796557   69237 main.go:141] libmachine: Using API Version  1
	I0501 03:45:33.796579   69237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:45:33.796931   69237 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:45:33.797093   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetState
	I0501 03:45:33.797155   69237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46755
	I0501 03:45:33.797806   69237 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:45:33.798383   69237 main.go:141] libmachine: Using API Version  1
	I0501 03:45:33.798442   69237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:45:33.798848   69237 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:45:33.799052   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetState
	I0501 03:45:33.799105   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .DriverName
	I0501 03:45:33.801809   69237 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0501 03:45:33.800600   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .DriverName
	I0501 03:45:33.803752   69237 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0501 03:45:33.803779   69237 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0501 03:45:33.803800   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHHostname
	I0501 03:45:33.805235   69237 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 03:45:33.804172   69237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35709
	I0501 03:45:33.806635   69237 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0501 03:45:33.806651   69237 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0501 03:45:33.806670   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHHostname
	I0501 03:45:33.806889   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:45:33.806967   69237 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:45:33.807292   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:45:33.807426   69237 main.go:141] libmachine: Using API Version  1
	I0501 03:45:33.807428   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHPort
	I0501 03:45:33.807437   69237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:45:33.807449   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:45:33.807578   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:45:33.807680   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHUsername
	I0501 03:45:33.807799   69237 sshutil.go:53] new ssh client: &{IP:192.168.72.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/default-k8s-diff-port-715118/id_rsa Username:docker}
	I0501 03:45:33.808171   69237 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:45:33.808625   69237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:45:33.808660   69237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:45:33.810668   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:45:33.811266   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:45:33.811297   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:45:33.811595   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHPort
	I0501 03:45:33.811794   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:45:33.811983   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHUsername
	I0501 03:45:33.812124   69237 sshutil.go:53] new ssh client: &{IP:192.168.72.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/default-k8s-diff-port-715118/id_rsa Username:docker}
	I0501 03:45:33.825315   69237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35461
	I0501 03:45:33.825891   69237 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:45:33.826334   69237 main.go:141] libmachine: Using API Version  1
	I0501 03:45:33.826351   69237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:45:33.826679   69237 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:45:33.826912   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetState
	I0501 03:45:33.828659   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .DriverName
	I0501 03:45:33.828931   69237 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0501 03:45:33.828946   69237 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0501 03:45:33.828963   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHHostname
	I0501 03:45:33.832151   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:45:33.832632   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:45:33.832656   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:45:33.832863   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHPort
	I0501 03:45:33.833010   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:45:33.833146   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHUsername
	I0501 03:45:33.833302   69237 sshutil.go:53] new ssh client: &{IP:192.168.72.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/default-k8s-diff-port-715118/id_rsa Username:docker}
	I0501 03:45:34.014287   69237 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 03:45:34.047199   69237 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-715118" to be "Ready" ...
	I0501 03:45:34.069000   69237 node_ready.go:49] node "default-k8s-diff-port-715118" has status "Ready":"True"
	I0501 03:45:34.069023   69237 node_ready.go:38] duration metric: took 21.790599ms for node "default-k8s-diff-port-715118" to be "Ready" ...
	I0501 03:45:34.069033   69237 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 03:45:34.077182   69237 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-bg755" in "kube-system" namespace to be "Ready" ...
	I0501 03:45:34.151001   69237 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0501 03:45:34.166362   69237 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0501 03:45:34.166385   69237 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0501 03:45:34.214624   69237 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0501 03:45:34.329110   69237 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0501 03:45:34.329133   69237 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0501 03:45:34.436779   69237 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0501 03:45:34.436804   69237 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0501 03:45:34.611410   69237 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0501 03:45:34.698997   69237 main.go:141] libmachine: Making call to close driver server
	I0501 03:45:34.699026   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .Close
	I0501 03:45:34.699321   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | Closing plugin on server side
	I0501 03:45:34.699389   69237 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:45:34.699408   69237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:45:34.699423   69237 main.go:141] libmachine: Making call to close driver server
	I0501 03:45:34.699437   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .Close
	I0501 03:45:34.699684   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | Closing plugin on server side
	I0501 03:45:34.699726   69237 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:45:34.699734   69237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:45:34.708143   69237 main.go:141] libmachine: Making call to close driver server
	I0501 03:45:34.708171   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .Close
	I0501 03:45:34.708438   69237 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:45:34.708457   69237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:45:34.708463   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | Closing plugin on server side
	I0501 03:45:35.510225   69237 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.295555956s)
	I0501 03:45:35.510274   69237 main.go:141] libmachine: Making call to close driver server
	I0501 03:45:35.510286   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .Close
	I0501 03:45:35.510700   69237 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:45:35.510721   69237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:45:35.510732   69237 main.go:141] libmachine: Making call to close driver server
	I0501 03:45:35.510728   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | Closing plugin on server side
	I0501 03:45:35.510740   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .Close
	I0501 03:45:35.510961   69237 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:45:35.510979   69237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:45:35.510983   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | Closing plugin on server side
	I0501 03:45:35.845633   69237 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.234178466s)
	I0501 03:45:35.845691   69237 main.go:141] libmachine: Making call to close driver server
	I0501 03:45:35.845708   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .Close
	I0501 03:45:35.845997   69237 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:45:35.846017   69237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:45:35.846027   69237 main.go:141] libmachine: Making call to close driver server
	I0501 03:45:35.846026   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | Closing plugin on server side
	I0501 03:45:35.846036   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .Close
	I0501 03:45:35.847736   69237 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:45:35.847767   69237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:45:35.847781   69237 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-715118"
	I0501 03:45:35.847786   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | Closing plugin on server side
	I0501 03:45:35.849438   69237 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0501 03:45:36.209029   69580 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0501 03:45:36.209300   69580 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0501 03:45:34.848699   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:37.338985   68640 pod_ready.go:81] duration metric: took 4m0.000306796s for pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace to be "Ready" ...
	E0501 03:45:37.339010   68640 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace to be "Ready" (will not retry!)
	I0501 03:45:37.339029   68640 pod_ready.go:38] duration metric: took 4m9.062496127s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 03:45:37.339089   68640 kubeadm.go:591] duration metric: took 4m19.268153875s to restartPrimaryControlPlane
	W0501 03:45:37.339148   68640 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0501 03:45:37.339176   68640 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0501 03:45:35.851156   69237 addons.go:505] duration metric: took 2.096980743s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
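
The addon enablement logged above is, at its core, `kubectl apply` run on the node with an explicit KUBECONFIG, once per addon manifest bundle. Below is a minimal Go sketch of that pattern, reusing the paths shown in the log; the helper function is illustrative only and is not minikube's ssh_runner code.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyAddonManifests runs `kubectl apply -f ...` with an explicit KUBECONFIG,
// mirroring the commands visible in the log above. Illustrative sketch only.
func applyAddonManifests(kubeconfig, kubectlBin string, manifests []string) error {
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command(kubectlBin, args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	err := applyAddonManifests(
		"/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.30.0/kubectl",
		[]string{
			"/etc/kubernetes/addons/metrics-apiservice.yaml",
			"/etc/kubernetes/addons/metrics-server-deployment.yaml",
			"/etc/kubernetes/addons/metrics-server-rbac.yaml",
			"/etc/kubernetes/addons/metrics-server-service.yaml",
		},
	)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
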
	I0501 03:45:36.085176   69237 pod_ready.go:102] pod "coredns-7db6d8ff4d-bg755" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:36.585390   69237 pod_ready.go:92] pod "coredns-7db6d8ff4d-bg755" in "kube-system" namespace has status "Ready":"True"
	I0501 03:45:36.585415   69237 pod_ready.go:81] duration metric: took 2.508204204s for pod "coredns-7db6d8ff4d-bg755" in "kube-system" namespace to be "Ready" ...
	I0501 03:45:36.585428   69237 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-mp6f5" in "kube-system" namespace to be "Ready" ...
	I0501 03:45:36.594575   69237 pod_ready.go:92] pod "coredns-7db6d8ff4d-mp6f5" in "kube-system" namespace has status "Ready":"True"
	I0501 03:45:36.594600   69237 pod_ready.go:81] duration metric: took 9.163923ms for pod "coredns-7db6d8ff4d-mp6f5" in "kube-system" namespace to be "Ready" ...
	I0501 03:45:36.594613   69237 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	I0501 03:45:36.606784   69237 pod_ready.go:92] pod "etcd-default-k8s-diff-port-715118" in "kube-system" namespace has status "Ready":"True"
	I0501 03:45:36.606807   69237 pod_ready.go:81] duration metric: took 12.186129ms for pod "etcd-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	I0501 03:45:36.606819   69237 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	I0501 03:45:36.617373   69237 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-715118" in "kube-system" namespace has status "Ready":"True"
	I0501 03:45:36.617394   69237 pod_ready.go:81] duration metric: took 10.566278ms for pod "kube-apiserver-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	I0501 03:45:36.617404   69237 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	I0501 03:45:36.622441   69237 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-715118" in "kube-system" namespace has status "Ready":"True"
	I0501 03:45:36.622460   69237 pod_ready.go:81] duration metric: took 5.049948ms for pod "kube-controller-manager-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	I0501 03:45:36.622469   69237 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2knrp" in "kube-system" namespace to be "Ready" ...
	I0501 03:45:36.981490   69237 pod_ready.go:92] pod "kube-proxy-2knrp" in "kube-system" namespace has status "Ready":"True"
	I0501 03:45:36.981513   69237 pod_ready.go:81] duration metric: took 359.038927ms for pod "kube-proxy-2knrp" in "kube-system" namespace to be "Ready" ...
	I0501 03:45:36.981523   69237 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	I0501 03:45:37.381970   69237 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-715118" in "kube-system" namespace has status "Ready":"True"
	I0501 03:45:37.381999   69237 pod_ready.go:81] duration metric: took 400.468372ms for pod "kube-scheduler-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	I0501 03:45:37.382011   69237 pod_ready.go:38] duration metric: took 3.312967983s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 03:45:37.382028   69237 api_server.go:52] waiting for apiserver process to appear ...
	I0501 03:45:37.382091   69237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:45:37.401961   69237 api_server.go:72] duration metric: took 3.647829991s to wait for apiserver process to appear ...
	I0501 03:45:37.401992   69237 api_server.go:88] waiting for apiserver healthz status ...
	I0501 03:45:37.402016   69237 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8444/healthz ...
	I0501 03:45:37.407177   69237 api_server.go:279] https://192.168.72.158:8444/healthz returned 200:
	ok
	I0501 03:45:37.408020   69237 api_server.go:141] control plane version: v1.30.0
	I0501 03:45:37.408037   69237 api_server.go:131] duration metric: took 6.036621ms to wait for apiserver health ...
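
The healthz step above is a plain HTTPS GET against the apiserver's /healthz endpoint, treating a 200 response with body "ok" as healthy. A minimal sketch of that probe follows, assuming the endpoint address copied from the log; certificate verification is skipped only because the sketch has no access to the cluster CA.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz performs one GET against the apiserver healthz endpoint and
// reports whether it returned HTTP 200. Sketch only, not minikube code.
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The apiserver cert is signed by the cluster CA; skipping
			// verification here is purely for the illustration.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
	return nil
}

func main() {
	if err := checkHealthz("https://192.168.72.158:8444/healthz"); err != nil {
		fmt.Println("apiserver not healthy yet:", err)
	}
}
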
	I0501 03:45:37.408046   69237 system_pods.go:43] waiting for kube-system pods to appear ...
	I0501 03:45:37.586052   69237 system_pods.go:59] 9 kube-system pods found
	I0501 03:45:37.586081   69237 system_pods.go:61] "coredns-7db6d8ff4d-bg755" [884d489a-bc1e-442c-8e00-4616f983d3e9] Running
	I0501 03:45:37.586085   69237 system_pods.go:61] "coredns-7db6d8ff4d-mp6f5" [4c8550d0-0029-48f1-a892-1800f6639c75] Running
	I0501 03:45:37.586090   69237 system_pods.go:61] "etcd-default-k8s-diff-port-715118" [12be9bec-1d84-49ee-898c-499ff75a8026] Running
	I0501 03:45:37.586094   69237 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-715118" [ae9a476b-03cf-4d4d-9990-5e760db82e60] Running
	I0501 03:45:37.586098   69237 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-715118" [542bbe50-58b6-40fb-b81b-0cc2444a3401] Running
	I0501 03:45:37.586101   69237 system_pods.go:61] "kube-proxy-2knrp" [cf1406ff-8a6e-49bb-b180-1e72f4b54fbf] Running
	I0501 03:45:37.586104   69237 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-715118" [d24f02a2-67a9-4f28-9acc-445e0e74a68d] Running
	I0501 03:45:37.586109   69237 system_pods.go:61] "metrics-server-569cc877fc-xwxx9" [a66f5df4-355c-47f0-8b6e-da29e1c4394e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0501 03:45:37.586113   69237 system_pods.go:61] "storage-provisioner" [debb3a59-143a-46d3-87da-c2403e264861] Running
	I0501 03:45:37.586123   69237 system_pods.go:74] duration metric: took 178.07045ms to wait for pod list to return data ...
	I0501 03:45:37.586132   69237 default_sa.go:34] waiting for default service account to be created ...
	I0501 03:45:37.780696   69237 default_sa.go:45] found service account: "default"
	I0501 03:45:37.780720   69237 default_sa.go:55] duration metric: took 194.582743ms for default service account to be created ...
	I0501 03:45:37.780728   69237 system_pods.go:116] waiting for k8s-apps to be running ...
	I0501 03:45:37.985342   69237 system_pods.go:86] 9 kube-system pods found
	I0501 03:45:37.985368   69237 system_pods.go:89] "coredns-7db6d8ff4d-bg755" [884d489a-bc1e-442c-8e00-4616f983d3e9] Running
	I0501 03:45:37.985374   69237 system_pods.go:89] "coredns-7db6d8ff4d-mp6f5" [4c8550d0-0029-48f1-a892-1800f6639c75] Running
	I0501 03:45:37.985378   69237 system_pods.go:89] "etcd-default-k8s-diff-port-715118" [12be9bec-1d84-49ee-898c-499ff75a8026] Running
	I0501 03:45:37.985383   69237 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-715118" [ae9a476b-03cf-4d4d-9990-5e760db82e60] Running
	I0501 03:45:37.985387   69237 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-715118" [542bbe50-58b6-40fb-b81b-0cc2444a3401] Running
	I0501 03:45:37.985391   69237 system_pods.go:89] "kube-proxy-2knrp" [cf1406ff-8a6e-49bb-b180-1e72f4b54fbf] Running
	I0501 03:45:37.985395   69237 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-715118" [d24f02a2-67a9-4f28-9acc-445e0e74a68d] Running
	I0501 03:45:37.985401   69237 system_pods.go:89] "metrics-server-569cc877fc-xwxx9" [a66f5df4-355c-47f0-8b6e-da29e1c4394e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0501 03:45:37.985405   69237 system_pods.go:89] "storage-provisioner" [debb3a59-143a-46d3-87da-c2403e264861] Running
	I0501 03:45:37.985412   69237 system_pods.go:126] duration metric: took 204.679545ms to wait for k8s-apps to be running ...
	I0501 03:45:37.985418   69237 system_svc.go:44] waiting for kubelet service to be running ....
	I0501 03:45:37.985463   69237 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 03:45:38.002421   69237 system_svc.go:56] duration metric: took 16.992346ms WaitForService to wait for kubelet
	I0501 03:45:38.002458   69237 kubeadm.go:576] duration metric: took 4.248332952s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0501 03:45:38.002477   69237 node_conditions.go:102] verifying NodePressure condition ...
	I0501 03:45:38.181465   69237 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 03:45:38.181496   69237 node_conditions.go:123] node cpu capacity is 2
	I0501 03:45:38.181510   69237 node_conditions.go:105] duration metric: took 179.027834ms to run NodePressure ...
	I0501 03:45:38.181524   69237 start.go:240] waiting for startup goroutines ...
	I0501 03:45:38.181534   69237 start.go:245] waiting for cluster config update ...
	I0501 03:45:38.181547   69237 start.go:254] writing updated cluster config ...
	I0501 03:45:38.181810   69237 ssh_runner.go:195] Run: rm -f paused
	I0501 03:45:38.244075   69237 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0501 03:45:38.246261   69237 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-715118" cluster and "default" namespace by default
	I0501 03:45:46.209837   69580 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0501 03:45:46.210120   69580 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0501 03:46:06.211471   69580 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0501 03:46:06.211673   69580 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0501 03:46:09.967666   68640 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.628454657s)
	I0501 03:46:09.967737   68640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 03:46:09.985802   68640 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0501 03:46:09.996494   68640 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0501 03:46:10.006956   68640 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0501 03:46:10.006979   68640 kubeadm.go:156] found existing configuration files:
	
	I0501 03:46:10.007025   68640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0501 03:46:10.017112   68640 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0501 03:46:10.017174   68640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0501 03:46:10.027747   68640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0501 03:46:10.037853   68640 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0501 03:46:10.037910   68640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0501 03:46:10.048023   68640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0501 03:46:10.057354   68640 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0501 03:46:10.057408   68640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0501 03:46:10.067352   68640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0501 03:46:10.076696   68640 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0501 03:46:10.076741   68640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
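
The grep/rm sequence above amounts to: keep each kubeconfig under /etc/kubernetes only if it already points at https://control-plane.minikube.internal:8443, otherwise delete it so `kubeadm init` can regenerate it. A sketch of that cleanup logic, using the file names and endpoint from the log; the function itself is an illustration, not minikube's implementation.

package main

import (
	"fmt"
	"os"
	"strings"
)

// cleanStaleKubeconfigs removes any config that is missing or does not
// reference the expected control-plane endpoint (rm -f semantics).
func cleanStaleKubeconfigs(endpoint string, files []string) {
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err == nil && strings.Contains(string(data), endpoint) {
			continue // config already targets the expected endpoint, keep it
		}
		_ = os.Remove(f) // ignore "no such file", matching rm -f
		fmt.Printf("removed stale config %s\n", f)
	}
}

func main() {
	cleanStaleKubeconfigs("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}
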
	I0501 03:46:10.086799   68640 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0501 03:46:10.150816   68640 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0501 03:46:10.150871   68640 kubeadm.go:309] [preflight] Running pre-flight checks
	I0501 03:46:10.325430   68640 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0501 03:46:10.325546   68640 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0501 03:46:10.325669   68640 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0501 03:46:10.581934   68640 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0501 03:46:10.585119   68640 out.go:204]   - Generating certificates and keys ...
	I0501 03:46:10.585221   68640 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0501 03:46:10.585319   68640 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0501 03:46:10.585416   68640 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0501 03:46:10.585522   68640 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0501 03:46:10.585620   68640 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0501 03:46:10.585695   68640 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0501 03:46:10.585781   68640 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0501 03:46:10.585861   68640 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0501 03:46:10.585959   68640 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0501 03:46:10.586064   68640 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0501 03:46:10.586116   68640 kubeadm.go:309] [certs] Using the existing "sa" key
	I0501 03:46:10.586208   68640 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0501 03:46:10.789482   68640 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0501 03:46:10.991219   68640 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0501 03:46:11.194897   68640 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0501 03:46:11.411926   68640 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0501 03:46:11.994791   68640 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0501 03:46:11.995468   68640 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0501 03:46:11.998463   68640 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0501 03:46:12.000394   68640 out.go:204]   - Booting up control plane ...
	I0501 03:46:12.000521   68640 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0501 03:46:12.000632   68640 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0501 03:46:12.000721   68640 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0501 03:46:12.022371   68640 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0501 03:46:12.023628   68640 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0501 03:46:12.023709   68640 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0501 03:46:12.178475   68640 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0501 03:46:12.178615   68640 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0501 03:46:12.680307   68640 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 502.179909ms
	I0501 03:46:12.680409   68640 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0501 03:46:18.182830   68640 kubeadm.go:309] [api-check] The API server is healthy after 5.502227274s
	I0501 03:46:18.197822   68640 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0501 03:46:18.217282   68640 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0501 03:46:18.247591   68640 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0501 03:46:18.247833   68640 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-892672 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0501 03:46:18.259687   68640 kubeadm.go:309] [bootstrap-token] Using token: 8rc6kt.ele1oeavg6hezahw
	I0501 03:46:18.261204   68640 out.go:204]   - Configuring RBAC rules ...
	I0501 03:46:18.261333   68640 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0501 03:46:18.272461   68640 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0501 03:46:18.284615   68640 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0501 03:46:18.288686   68640 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0501 03:46:18.292005   68640 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0501 03:46:18.295772   68640 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0501 03:46:18.591035   68640 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0501 03:46:19.028299   68640 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0501 03:46:19.598192   68640 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0501 03:46:19.598219   68640 kubeadm.go:309] 
	I0501 03:46:19.598323   68640 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0501 03:46:19.598337   68640 kubeadm.go:309] 
	I0501 03:46:19.598490   68640 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0501 03:46:19.598514   68640 kubeadm.go:309] 
	I0501 03:46:19.598542   68640 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0501 03:46:19.598604   68640 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0501 03:46:19.598648   68640 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0501 03:46:19.598673   68640 kubeadm.go:309] 
	I0501 03:46:19.598771   68640 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0501 03:46:19.598784   68640 kubeadm.go:309] 
	I0501 03:46:19.598850   68640 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0501 03:46:19.598860   68640 kubeadm.go:309] 
	I0501 03:46:19.598963   68640 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0501 03:46:19.599069   68640 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0501 03:46:19.599167   68640 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0501 03:46:19.599183   68640 kubeadm.go:309] 
	I0501 03:46:19.599283   68640 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0501 03:46:19.599389   68640 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0501 03:46:19.599400   68640 kubeadm.go:309] 
	I0501 03:46:19.599500   68640 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 8rc6kt.ele1oeavg6hezahw \
	I0501 03:46:19.599626   68640 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:bd94cc6acfb95fe36f70b12c6a1f2980601b8235e7c4dea65cebdf35eb514754 \
	I0501 03:46:19.599666   68640 kubeadm.go:309] 	--control-plane 
	I0501 03:46:19.599676   68640 kubeadm.go:309] 
	I0501 03:46:19.599779   68640 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0501 03:46:19.599807   68640 kubeadm.go:309] 
	I0501 03:46:19.599931   68640 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 8rc6kt.ele1oeavg6hezahw \
	I0501 03:46:19.600079   68640 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:bd94cc6acfb95fe36f70b12c6a1f2980601b8235e7c4dea65cebdf35eb514754 
	I0501 03:46:19.600763   68640 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0501 03:46:19.600786   68640 cni.go:84] Creating CNI manager for ""
	I0501 03:46:19.600792   68640 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0501 03:46:19.602473   68640 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0501 03:46:19.603816   68640 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0501 03:46:19.621706   68640 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
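
The 496-byte /etc/cni/net.d/1-k8s.conflist copied above is not reproduced in the log. For orientation, the constant below is only a representative bridge CNI chain (bridge plugin, host-local IPAM, portmap); the subnet, bridge name, and field choices are assumptions, not the contents of minikube's actual file.

package main

import "fmt"

// exampleBridgeConflist is a hypothetical, minimal bridge CNI configuration
// of the general shape a 1-k8s.conflist would take. Values are illustrative.
const exampleBridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}`

func main() {
	fmt.Println(exampleBridgeConflist)
}
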
	I0501 03:46:19.649643   68640 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0501 03:46:19.649762   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:19.649787   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-892672 minikube.k8s.io/updated_at=2024_05_01T03_46_19_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e minikube.k8s.io/name=no-preload-892672 minikube.k8s.io/primary=true
	I0501 03:46:19.892482   68640 ops.go:34] apiserver oom_adj: -16
	I0501 03:46:19.892631   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:20.393436   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:20.893412   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:21.393634   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:21.893273   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:22.393031   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:22.893498   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:23.393599   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:23.893024   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:24.393544   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:24.893431   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:25.393290   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:25.892718   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:26.392928   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:26.893101   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:27.393045   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:27.892722   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:28.393102   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:28.892871   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:29.392650   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:29.893034   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:30.393561   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:30.893661   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:31.393235   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:31.892889   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:32.393263   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:32.893427   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:33.046965   68640 kubeadm.go:1107] duration metric: took 13.397277159s to wait for elevateKubeSystemPrivileges
	W0501 03:46:33.047010   68640 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0501 03:46:33.047020   68640 kubeadm.go:393] duration metric: took 5m15.038324633s to StartCluster
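
The burst of repeated "kubectl get sa default" calls above is a simple poll: retry roughly every 500ms until the "default" service account exists, which signals that kube-system bootstrapping has finished. A minimal sketch of that loop, using the binary and kubeconfig paths from the log; the timeout value is an assumption and the helper is not minikube code.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultServiceAccount polls `kubectl get sa default` until it
// succeeds or the timeout expires.
func waitForDefaultServiceAccount(kubectlBin, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command(kubectlBin, "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil // service account exists
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for default service account")
}

func main() {
	err := waitForDefaultServiceAccount(
		"/var/lib/minikube/binaries/v1.30.0/kubectl",
		"/var/lib/minikube/kubeconfig",
		2*time.Minute,
	)
	fmt.Println("done, err =", err)
}
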
	I0501 03:46:33.047042   68640 settings.go:142] acquiring lock: {Name:mkcfe08bcadd45d99c00ed1eaf4e9c226a489fe2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:46:33.047126   68640 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18779-13391/kubeconfig
	I0501 03:46:33.048731   68640 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/kubeconfig: {Name:mk5d1131557f885800877622151c0915337cb23a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:46:33.048988   68640 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.144 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0501 03:46:33.050376   68640 out.go:177] * Verifying Kubernetes components...
	I0501 03:46:33.049030   68640 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0501 03:46:33.049253   68640 config.go:182] Loaded profile config "no-preload-892672": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 03:46:33.051595   68640 addons.go:69] Setting storage-provisioner=true in profile "no-preload-892672"
	I0501 03:46:33.051616   68640 addons.go:69] Setting metrics-server=true in profile "no-preload-892672"
	I0501 03:46:33.051639   68640 addons.go:234] Setting addon storage-provisioner=true in "no-preload-892672"
	I0501 03:46:33.051644   68640 addons.go:234] Setting addon metrics-server=true in "no-preload-892672"
	W0501 03:46:33.051649   68640 addons.go:243] addon storage-provisioner should already be in state true
	W0501 03:46:33.051653   68640 addons.go:243] addon metrics-server should already be in state true
	I0501 03:46:33.051675   68640 host.go:66] Checking if "no-preload-892672" exists ...
	I0501 03:46:33.051680   68640 host.go:66] Checking if "no-preload-892672" exists ...
	I0501 03:46:33.051599   68640 addons.go:69] Setting default-storageclass=true in profile "no-preload-892672"
	I0501 03:46:33.051760   68640 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-892672"
	I0501 03:46:33.051600   68640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:46:33.052016   68640 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:46:33.052047   68640 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:46:33.052064   68640 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:46:33.052095   68640 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:46:33.052110   68640 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:46:33.052135   68640 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:46:33.068515   68640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46407
	I0501 03:46:33.069115   68640 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:46:33.069702   68640 main.go:141] libmachine: Using API Version  1
	I0501 03:46:33.069728   68640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:46:33.070085   68640 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:46:33.070731   68640 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:46:33.070763   68640 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:46:33.072166   68640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46609
	I0501 03:46:33.072179   68640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46109
	I0501 03:46:33.072632   68640 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:46:33.072770   68640 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:46:33.073161   68640 main.go:141] libmachine: Using API Version  1
	I0501 03:46:33.073180   68640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:46:33.073318   68640 main.go:141] libmachine: Using API Version  1
	I0501 03:46:33.073333   68640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:46:33.073467   68640 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:46:33.073893   68640 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:46:33.074056   68640 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:46:33.074065   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetState
	I0501 03:46:33.074092   68640 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:46:33.077976   68640 addons.go:234] Setting addon default-storageclass=true in "no-preload-892672"
	W0501 03:46:33.077997   68640 addons.go:243] addon default-storageclass should already be in state true
	I0501 03:46:33.078110   68640 host.go:66] Checking if "no-preload-892672" exists ...
	I0501 03:46:33.078535   68640 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:46:33.078566   68640 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:46:33.092605   68640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39707
	I0501 03:46:33.092996   68640 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:46:33.093578   68640 main.go:141] libmachine: Using API Version  1
	I0501 03:46:33.093597   68640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:46:33.093602   68640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36671
	I0501 03:46:33.093778   68640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34727
	I0501 03:46:33.093893   68640 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:46:33.094117   68640 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:46:33.094169   68640 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:46:33.094250   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetState
	I0501 03:46:33.094577   68640 main.go:141] libmachine: Using API Version  1
	I0501 03:46:33.094602   68640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:46:33.094986   68640 main.go:141] libmachine: Using API Version  1
	I0501 03:46:33.095004   68640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:46:33.095062   68640 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:46:33.095389   68640 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:46:33.096401   68640 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:46:33.096423   68640 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:46:33.096665   68640 main.go:141] libmachine: (no-preload-892672) Calling .DriverName
	I0501 03:46:33.096678   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetState
	I0501 03:46:33.098465   68640 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 03:46:33.099842   68640 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0501 03:46:33.099861   68640 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0501 03:46:33.099879   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:46:33.098734   68640 main.go:141] libmachine: (no-preload-892672) Calling .DriverName
	I0501 03:46:33.101305   68640 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0501 03:46:33.102491   68640 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0501 03:46:33.102512   68640 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0501 03:46:33.102531   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:46:33.103006   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:46:33.103617   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:46:33.103641   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:46:33.103799   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHPort
	I0501 03:46:33.103977   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:46:33.104143   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHUsername
	I0501 03:46:33.104272   68640 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/no-preload-892672/id_rsa Username:docker}
	I0501 03:46:33.105452   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:46:33.105795   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:46:33.105821   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:46:33.106142   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHPort
	I0501 03:46:33.106290   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:46:33.106392   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHUsername
	I0501 03:46:33.106511   68640 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/no-preload-892672/id_rsa Username:docker}
	I0501 03:46:33.113012   68640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42615
	I0501 03:46:33.113365   68640 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:46:33.113813   68640 main.go:141] libmachine: Using API Version  1
	I0501 03:46:33.113834   68640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:46:33.114127   68640 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:46:33.114304   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetState
	I0501 03:46:33.115731   68640 main.go:141] libmachine: (no-preload-892672) Calling .DriverName
	I0501 03:46:33.115997   68640 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0501 03:46:33.116010   68640 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0501 03:46:33.116023   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:46:33.119272   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:46:33.119644   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:46:33.119661   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:46:33.119845   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHPort
	I0501 03:46:33.120223   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:46:33.120358   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHUsername
	I0501 03:46:33.120449   68640 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/no-preload-892672/id_rsa Username:docker}
	I0501 03:46:33.296711   68640 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 03:46:33.342215   68640 node_ready.go:35] waiting up to 6m0s for node "no-preload-892672" to be "Ready" ...
	I0501 03:46:33.355677   68640 node_ready.go:49] node "no-preload-892672" has status "Ready":"True"
	I0501 03:46:33.355707   68640 node_ready.go:38] duration metric: took 13.392381ms for node "no-preload-892672" to be "Ready" ...
	I0501 03:46:33.355718   68640 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 03:46:33.367706   68640 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-57k52" in "kube-system" namespace to be "Ready" ...
	I0501 03:46:33.413444   68640 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0501 03:46:33.418869   68640 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0501 03:46:33.439284   68640 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0501 03:46:33.439318   68640 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0501 03:46:33.512744   68640 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0501 03:46:33.512768   68640 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0501 03:46:33.594777   68640 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0501 03:46:33.594798   68640 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0501 03:46:33.658506   68640 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0501 03:46:34.013890   68640 main.go:141] libmachine: Making call to close driver server
	I0501 03:46:34.013919   68640 main.go:141] libmachine: (no-preload-892672) Calling .Close
	I0501 03:46:34.014023   68640 main.go:141] libmachine: Making call to close driver server
	I0501 03:46:34.014056   68640 main.go:141] libmachine: (no-preload-892672) Calling .Close
	I0501 03:46:34.014250   68640 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:46:34.014284   68640 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:46:34.014297   68640 main.go:141] libmachine: Making call to close driver server
	I0501 03:46:34.014306   68640 main.go:141] libmachine: (no-preload-892672) Calling .Close
	I0501 03:46:34.014353   68640 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:46:34.014370   68640 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:46:34.014383   68640 main.go:141] libmachine: Making call to close driver server
	I0501 03:46:34.014393   68640 main.go:141] libmachine: (no-preload-892672) Calling .Close
	I0501 03:46:34.014642   68640 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:46:34.014664   68640 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:46:34.016263   68640 main.go:141] libmachine: (no-preload-892672) DBG | Closing plugin on server side
	I0501 03:46:34.016263   68640 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:46:34.016288   68640 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:46:34.031961   68640 main.go:141] libmachine: Making call to close driver server
	I0501 03:46:34.031996   68640 main.go:141] libmachine: (no-preload-892672) Calling .Close
	I0501 03:46:34.032303   68640 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:46:34.032324   68640 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:46:34.260154   68640 main.go:141] libmachine: Making call to close driver server
	I0501 03:46:34.260180   68640 main.go:141] libmachine: (no-preload-892672) Calling .Close
	I0501 03:46:34.260600   68640 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:46:34.260629   68640 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:46:34.260641   68640 main.go:141] libmachine: Making call to close driver server
	I0501 03:46:34.260650   68640 main.go:141] libmachine: (no-preload-892672) Calling .Close
	I0501 03:46:34.260876   68640 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:46:34.260888   68640 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:46:34.260899   68640 addons.go:470] Verifying addon metrics-server=true in "no-preload-892672"
	I0501 03:46:34.262520   68640 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0501 03:46:34.264176   68640 addons.go:505] duration metric: took 1.215147486s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0501 03:46:35.384910   68640 pod_ready.go:102] pod "coredns-7db6d8ff4d-57k52" in "kube-system" namespace has status "Ready":"False"
	I0501 03:46:36.377298   68640 pod_ready.go:92] pod "coredns-7db6d8ff4d-57k52" in "kube-system" namespace has status "Ready":"True"
	I0501 03:46:36.377321   68640 pod_ready.go:81] duration metric: took 3.009581117s for pod "coredns-7db6d8ff4d-57k52" in "kube-system" namespace to be "Ready" ...
	I0501 03:46:36.377331   68640 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-c6lnj" in "kube-system" namespace to be "Ready" ...
	I0501 03:46:36.383022   68640 pod_ready.go:92] pod "coredns-7db6d8ff4d-c6lnj" in "kube-system" namespace has status "Ready":"True"
	I0501 03:46:36.383042   68640 pod_ready.go:81] duration metric: took 5.704691ms for pod "coredns-7db6d8ff4d-c6lnj" in "kube-system" namespace to be "Ready" ...
	I0501 03:46:36.383051   68640 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	I0501 03:46:36.387456   68640 pod_ready.go:92] pod "etcd-no-preload-892672" in "kube-system" namespace has status "Ready":"True"
	I0501 03:46:36.387476   68640 pod_ready.go:81] duration metric: took 4.418883ms for pod "etcd-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	I0501 03:46:36.387485   68640 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	I0501 03:46:36.392348   68640 pod_ready.go:92] pod "kube-apiserver-no-preload-892672" in "kube-system" namespace has status "Ready":"True"
	I0501 03:46:36.392366   68640 pod_ready.go:81] duration metric: took 4.874928ms for pod "kube-apiserver-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	I0501 03:46:36.392375   68640 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	I0501 03:46:36.397155   68640 pod_ready.go:92] pod "kube-controller-manager-no-preload-892672" in "kube-system" namespace has status "Ready":"True"
	I0501 03:46:36.397175   68640 pod_ready.go:81] duration metric: took 4.794583ms for pod "kube-controller-manager-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	I0501 03:46:36.397185   68640 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-czsqz" in "kube-system" namespace to be "Ready" ...
	I0501 03:46:36.774003   68640 pod_ready.go:92] pod "kube-proxy-czsqz" in "kube-system" namespace has status "Ready":"True"
	I0501 03:46:36.774025   68640 pod_ready.go:81] duration metric: took 376.83321ms for pod "kube-proxy-czsqz" in "kube-system" namespace to be "Ready" ...
	I0501 03:46:36.774036   68640 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	I0501 03:46:37.171504   68640 pod_ready.go:92] pod "kube-scheduler-no-preload-892672" in "kube-system" namespace has status "Ready":"True"
	I0501 03:46:37.171526   68640 pod_ready.go:81] duration metric: took 397.484706ms for pod "kube-scheduler-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	I0501 03:46:37.171535   68640 pod_ready.go:38] duration metric: took 3.815806043s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 03:46:37.171549   68640 api_server.go:52] waiting for apiserver process to appear ...
	I0501 03:46:37.171609   68640 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:46:37.189446   68640 api_server.go:72] duration metric: took 4.140414812s to wait for apiserver process to appear ...
	I0501 03:46:37.189473   68640 api_server.go:88] waiting for apiserver healthz status ...
	I0501 03:46:37.189494   68640 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0501 03:46:37.195052   68640 api_server.go:279] https://192.168.39.144:8443/healthz returned 200:
	ok
	I0501 03:46:37.196163   68640 api_server.go:141] control plane version: v1.30.0
	I0501 03:46:37.196183   68640 api_server.go:131] duration metric: took 6.703804ms to wait for apiserver health ...
	I0501 03:46:37.196191   68640 system_pods.go:43] waiting for kube-system pods to appear ...
	I0501 03:46:37.375742   68640 system_pods.go:59] 9 kube-system pods found
	I0501 03:46:37.375775   68640 system_pods.go:61] "coredns-7db6d8ff4d-57k52" [f98cb358-71ba-49c5-8213-0f3160c6e38b] Running
	I0501 03:46:37.375784   68640 system_pods.go:61] "coredns-7db6d8ff4d-c6lnj" [f8b8c1f1-7696-43f2-98be-339f99963e7c] Running
	I0501 03:46:37.375789   68640 system_pods.go:61] "etcd-no-preload-892672" [5f92eb1b-6611-4663-95f0-8c071a3a37c9] Running
	I0501 03:46:37.375796   68640 system_pods.go:61] "kube-apiserver-no-preload-892672" [90bcaa82-61b0-49d5-b50c-76288b099683] Running
	I0501 03:46:37.375804   68640 system_pods.go:61] "kube-controller-manager-no-preload-892672" [f80af654-aa81-4cd2-b5ce-4f31f6e49e9f] Running
	I0501 03:46:37.375809   68640 system_pods.go:61] "kube-proxy-czsqz" [4254b019-b6c8-4ff9-a361-c96eaf20dc65] Running
	I0501 03:46:37.375813   68640 system_pods.go:61] "kube-scheduler-no-preload-892672" [6753a5df-86d1-47bf-9514-6b8352acf969] Running
	I0501 03:46:37.375824   68640 system_pods.go:61] "metrics-server-569cc877fc-5m5qf" [a1ec3e6c-fe90-4168-b0ec-54f82f17b46d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0501 03:46:37.375830   68640 system_pods.go:61] "storage-provisioner" [b55b7e8b-4de0-40f8-96ff-bf0b550699d1] Running
	I0501 03:46:37.375841   68640 system_pods.go:74] duration metric: took 179.642731ms to wait for pod list to return data ...
	I0501 03:46:37.375857   68640 default_sa.go:34] waiting for default service account to be created ...
	I0501 03:46:37.572501   68640 default_sa.go:45] found service account: "default"
	I0501 03:46:37.572530   68640 default_sa.go:55] duration metric: took 196.664812ms for default service account to be created ...
	I0501 03:46:37.572542   68640 system_pods.go:116] waiting for k8s-apps to be running ...
	I0501 03:46:37.778012   68640 system_pods.go:86] 9 kube-system pods found
	I0501 03:46:37.778053   68640 system_pods.go:89] "coredns-7db6d8ff4d-57k52" [f98cb358-71ba-49c5-8213-0f3160c6e38b] Running
	I0501 03:46:37.778062   68640 system_pods.go:89] "coredns-7db6d8ff4d-c6lnj" [f8b8c1f1-7696-43f2-98be-339f99963e7c] Running
	I0501 03:46:37.778068   68640 system_pods.go:89] "etcd-no-preload-892672" [5f92eb1b-6611-4663-95f0-8c071a3a37c9] Running
	I0501 03:46:37.778075   68640 system_pods.go:89] "kube-apiserver-no-preload-892672" [90bcaa82-61b0-49d5-b50c-76288b099683] Running
	I0501 03:46:37.778082   68640 system_pods.go:89] "kube-controller-manager-no-preload-892672" [f80af654-aa81-4cd2-b5ce-4f31f6e49e9f] Running
	I0501 03:46:37.778088   68640 system_pods.go:89] "kube-proxy-czsqz" [4254b019-b6c8-4ff9-a361-c96eaf20dc65] Running
	I0501 03:46:37.778094   68640 system_pods.go:89] "kube-scheduler-no-preload-892672" [6753a5df-86d1-47bf-9514-6b8352acf969] Running
	I0501 03:46:37.778104   68640 system_pods.go:89] "metrics-server-569cc877fc-5m5qf" [a1ec3e6c-fe90-4168-b0ec-54f82f17b46d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0501 03:46:37.778112   68640 system_pods.go:89] "storage-provisioner" [b55b7e8b-4de0-40f8-96ff-bf0b550699d1] Running
	I0501 03:46:37.778127   68640 system_pods.go:126] duration metric: took 205.578312ms to wait for k8s-apps to be running ...
	I0501 03:46:37.778148   68640 system_svc.go:44] waiting for kubelet service to be running ....
	I0501 03:46:37.778215   68640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 03:46:37.794660   68640 system_svc.go:56] duration metric: took 16.509214ms WaitForService to wait for kubelet
	I0501 03:46:37.794694   68640 kubeadm.go:576] duration metric: took 4.745668881s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0501 03:46:37.794721   68640 node_conditions.go:102] verifying NodePressure condition ...
	I0501 03:46:37.972621   68640 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 03:46:37.972647   68640 node_conditions.go:123] node cpu capacity is 2
	I0501 03:46:37.972660   68640 node_conditions.go:105] duration metric: took 177.933367ms to run NodePressure ...
	I0501 03:46:37.972676   68640 start.go:240] waiting for startup goroutines ...
	I0501 03:46:37.972684   68640 start.go:245] waiting for cluster config update ...
	I0501 03:46:37.972699   68640 start.go:254] writing updated cluster config ...
	I0501 03:46:37.972951   68640 ssh_runner.go:195] Run: rm -f paused
	I0501 03:46:38.023054   68640 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0501 03:46:38.025098   68640 out.go:177] * Done! kubectl is now configured to use "no-preload-892672" cluster and "default" namespace by default
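	The healthz probe and kubelet service check in this start-up sequence can also be run directly; a minimal sketch (the endpoint 192.168.39.144:8443 and the profile name come from this run, and -k only skips TLS verification for a quick manual check):

		curl -sk https://192.168.39.144:8443/healthz          # expect: ok
		minikube -p no-preload-892672 ssh -- sudo systemctl is-active kubelet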
	I0501 03:46:46.214470   69580 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0501 03:46:46.214695   69580 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0501 03:46:46.214721   69580 kubeadm.go:309] 
	I0501 03:46:46.214770   69580 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0501 03:46:46.214837   69580 kubeadm.go:309] 		timed out waiting for the condition
	I0501 03:46:46.214875   69580 kubeadm.go:309] 
	I0501 03:46:46.214936   69580 kubeadm.go:309] 	This error is likely caused by:
	I0501 03:46:46.214983   69580 kubeadm.go:309] 		- The kubelet is not running
	I0501 03:46:46.215076   69580 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0501 03:46:46.215084   69580 kubeadm.go:309] 
	I0501 03:46:46.215169   69580 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0501 03:46:46.215201   69580 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0501 03:46:46.215233   69580 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0501 03:46:46.215239   69580 kubeadm.go:309] 
	I0501 03:46:46.215380   69580 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0501 03:46:46.215489   69580 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0501 03:46:46.215505   69580 kubeadm.go:309] 
	I0501 03:46:46.215657   69580 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0501 03:46:46.215782   69580 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0501 03:46:46.215882   69580 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0501 03:46:46.215972   69580 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0501 03:46:46.215984   69580 kubeadm.go:309] 
	I0501 03:46:46.217243   69580 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0501 03:46:46.217352   69580 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0501 03:46:46.217426   69580 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0501 03:46:46.217550   69580 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
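	The kubelet-check failure above is easiest to triage with the commands kubeadm itself suggests, run on the node; the profile for this 69580 run is not shown in this excerpt, so <profile> below is a placeholder:

		minikube -p <profile> ssh
		sudo systemctl status kubelet
		sudo journalctl -xeu kubelet | tail -n 100
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause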
	
	I0501 03:46:46.217611   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0501 03:46:47.375634   69580 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.157990231s)
	I0501 03:46:47.375723   69580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 03:46:47.392333   69580 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0501 03:46:47.404983   69580 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0501 03:46:47.405007   69580 kubeadm.go:156] found existing configuration files:
	
	I0501 03:46:47.405054   69580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0501 03:46:47.417437   69580 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0501 03:46:47.417501   69580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0501 03:46:47.429929   69580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0501 03:46:47.441141   69580 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0501 03:46:47.441215   69580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0501 03:46:47.453012   69580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0501 03:46:47.463702   69580 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0501 03:46:47.463759   69580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0501 03:46:47.474783   69580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0501 03:46:47.485793   69580 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0501 03:46:47.485853   69580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
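	The grep/rm pairs above amount to a stale-kubeconfig sweep: any file under /etc/kubernetes that does not reference https://control-plane.minikube.internal:8443 is removed before kubeadm init is retried. Roughly, as a shell loop (an illustrative paraphrase, not minikube's actual code):

		for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
		  sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
		done

	In this run the four files were already missing (grep reports "No such file or directory"), so the removals are no-ops.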
	I0501 03:46:47.497706   69580 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0501 03:46:47.588221   69580 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0501 03:46:47.588340   69580 kubeadm.go:309] [preflight] Running pre-flight checks
	I0501 03:46:47.759631   69580 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0501 03:46:47.759801   69580 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0501 03:46:47.759949   69580 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0501 03:46:47.978077   69580 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0501 03:46:47.980130   69580 out.go:204]   - Generating certificates and keys ...
	I0501 03:46:47.980240   69580 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0501 03:46:47.980323   69580 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0501 03:46:47.980455   69580 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0501 03:46:47.980579   69580 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0501 03:46:47.980679   69580 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0501 03:46:47.980771   69580 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0501 03:46:47.980864   69580 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0501 03:46:47.981256   69580 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0501 03:46:47.981616   69580 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0501 03:46:47.981858   69580 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0501 03:46:47.981907   69580 kubeadm.go:309] [certs] Using the existing "sa" key
	I0501 03:46:47.981991   69580 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0501 03:46:48.100377   69580 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0501 03:46:48.463892   69580 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0501 03:46:48.521991   69580 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0501 03:46:48.735222   69580 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0501 03:46:48.753098   69580 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0501 03:46:48.756950   69580 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0501 03:46:48.757379   69580 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0501 03:46:48.937039   69580 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0501 03:46:48.939065   69580 out.go:204]   - Booting up control plane ...
	I0501 03:46:48.939183   69580 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0501 03:46:48.961380   69580 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0501 03:46:48.962890   69580 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0501 03:46:48.963978   69580 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0501 03:46:48.971754   69580 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0501 03:47:28.974873   69580 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0501 03:47:28.975296   69580 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0501 03:47:28.975545   69580 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0501 03:47:33.976469   69580 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0501 03:47:33.976699   69580 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0501 03:47:43.977443   69580 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0501 03:47:43.977663   69580 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0501 03:48:03.979113   69580 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0501 03:48:03.979409   69580 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0501 03:48:43.982479   69580 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0501 03:48:43.982781   69580 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0501 03:48:43.983363   69580 kubeadm.go:309] 
	I0501 03:48:43.983427   69580 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0501 03:48:43.983484   69580 kubeadm.go:309] 		timed out waiting for the condition
	I0501 03:48:43.983490   69580 kubeadm.go:309] 
	I0501 03:48:43.983520   69580 kubeadm.go:309] 	This error is likely caused by:
	I0501 03:48:43.983547   69580 kubeadm.go:309] 		- The kubelet is not running
	I0501 03:48:43.983633   69580 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0501 03:48:43.983637   69580 kubeadm.go:309] 
	I0501 03:48:43.983721   69580 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0501 03:48:43.983748   69580 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0501 03:48:43.983774   69580 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0501 03:48:43.983778   69580 kubeadm.go:309] 
	I0501 03:48:43.983861   69580 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0501 03:48:43.983928   69580 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0501 03:48:43.983932   69580 kubeadm.go:309] 
	I0501 03:48:43.984023   69580 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0501 03:48:43.984094   69580 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0501 03:48:43.984155   69580 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0501 03:48:43.984212   69580 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0501 03:48:43.984216   69580 kubeadm.go:309] 
	I0501 03:48:43.985577   69580 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0501 03:48:43.985777   69580 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0501 03:48:43.985875   69580 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0501 03:48:43.985971   69580 kubeadm.go:393] duration metric: took 8m0.315126498s to StartCluster
	I0501 03:48:43.986025   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:48:43.986092   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:48:44.038296   69580 cri.go:89] found id: ""
	I0501 03:48:44.038328   69580 logs.go:276] 0 containers: []
	W0501 03:48:44.038339   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:48:44.038346   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:48:44.038426   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:48:44.081855   69580 cri.go:89] found id: ""
	I0501 03:48:44.081891   69580 logs.go:276] 0 containers: []
	W0501 03:48:44.081904   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:48:44.081913   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:48:44.081996   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:48:44.131400   69580 cri.go:89] found id: ""
	I0501 03:48:44.131435   69580 logs.go:276] 0 containers: []
	W0501 03:48:44.131445   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:48:44.131451   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:48:44.131519   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:48:44.178274   69580 cri.go:89] found id: ""
	I0501 03:48:44.178302   69580 logs.go:276] 0 containers: []
	W0501 03:48:44.178310   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:48:44.178316   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:48:44.178376   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:48:44.223087   69580 cri.go:89] found id: ""
	I0501 03:48:44.223115   69580 logs.go:276] 0 containers: []
	W0501 03:48:44.223125   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:48:44.223133   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:48:44.223196   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:48:44.266093   69580 cri.go:89] found id: ""
	I0501 03:48:44.266122   69580 logs.go:276] 0 containers: []
	W0501 03:48:44.266135   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:48:44.266143   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:48:44.266204   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:48:44.307766   69580 cri.go:89] found id: ""
	I0501 03:48:44.307795   69580 logs.go:276] 0 containers: []
	W0501 03:48:44.307806   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:48:44.307813   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:48:44.307876   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:48:44.348548   69580 cri.go:89] found id: ""
	I0501 03:48:44.348576   69580 logs.go:276] 0 containers: []
	W0501 03:48:44.348585   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:48:44.348594   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:48:44.348614   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:48:44.394160   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:48:44.394209   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:48:44.449845   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:48:44.449879   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:48:44.467663   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:48:44.467694   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:48:44.556150   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:48:44.556183   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:48:44.556199   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0501 03:48:44.661110   69580 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0501 03:48:44.661169   69580 out.go:239] * 
	W0501 03:48:44.661226   69580 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0501 03:48:44.661246   69580 out.go:239] * 
	W0501 03:48:44.662064   69580 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0501 03:48:44.665608   69580 out.go:177] 
	W0501 03:48:44.666799   69580 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0501 03:48:44.666851   69580 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0501 03:48:44.666870   69580 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0501 03:48:44.668487   69580 out.go:177] 
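	The suggested follow-up is a retry with the kubelet cgroup driver pinned to systemd; a sketch of what that retry could look like (profile name is a placeholder, --container-runtime and --kubernetes-version are inferred from this job, and only --extra-config comes verbatim from the suggestion above):

		minikube -p <profile> ssh -- sudo journalctl -xeu kubelet | tail -n 50
		minikube start -p <profile> --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd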
	
	
	==> CRI-O <==
	May 01 03:54:40 default-k8s-diff-port-715118 crio[726]: time="2024-05-01 03:54:40.494963085Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=97fcd254-5888-4781-b227-0cb346a140f1 name=/runtime.v1.RuntimeService/Version
	May 01 03:54:40 default-k8s-diff-port-715118 crio[726]: time="2024-05-01 03:54:40.497278652Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9a8b17c3-d093-41cf-abf9-b7f6964c69ec name=/runtime.v1.ImageService/ImageFsInfo
	May 01 03:54:40 default-k8s-diff-port-715118 crio[726]: time="2024-05-01 03:54:40.498848588Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714535680498815156,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133261,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9a8b17c3-d093-41cf-abf9-b7f6964c69ec name=/runtime.v1.ImageService/ImageFsInfo
	May 01 03:54:40 default-k8s-diff-port-715118 crio[726]: time="2024-05-01 03:54:40.500106036Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2f95ae12-db5f-4202-a0f5-1c8e152d7ad6 name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:54:40 default-k8s-diff-port-715118 crio[726]: time="2024-05-01 03:54:40.500199169Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2f95ae12-db5f-4202-a0f5-1c8e152d7ad6 name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:54:40 default-k8s-diff-port-715118 crio[726]: time="2024-05-01 03:54:40.500702175Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3d8ba3db0459896edb75b12157ddbf8810613153a6df76d1e4eb406b8f8b6e62,PodSandboxId:edb63b349a081379b6835b92a992c1b2eed12182273639db600a4b3f0b998243,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714535136228055114,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: debb3a59-143a-46d3-87da-c2403e264861,},Annotations:map[string]string{io.kubernetes.container.hash: 16db7a36,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52024cf28376a62381031eca8bea22e44266b9d223f6d5e99cf52755f6f9fa39,PodSandboxId:9f4f9990c585ec803f12bc5ab6e947b4d96f77913e0fe564009d8aeacfbfd70c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714535135125317856,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bg755,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 884d489a-bc1e-442c-8e00-4616f983d3e9,},Annotations:map[string]string{io.kubernetes.container.hash: 5f29ea52,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63bdbf22a7bd7a8493dba1bed9368968feb9bff095b2e32ce5d7867b3f9959c1,PodSandboxId:8cc39213f143b0273112f6f4b226ce4cf187dce36a1c8ab70af09d9314915f48,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714535135082897572,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mp6f5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 4c8550d0-0029-48f1-a892-1800f6639c75,},Annotations:map[string]string{io.kubernetes.container.hash: 60529517,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11308e8bbf31d7b87ceb42faf1dbf32e184d440a9ff0a0138c0aadd47365b83a,PodSandboxId:8f2b32a0f8500606b42c0b7e0e7f154d5e02360b0b774d4044971ebf2fbbb5cb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING
,CreatedAt:1714535134099600477,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2knrp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf1406ff-8a6e-49bb-b180-1e72f4b54fbf,},Annotations:map[string]string{io.kubernetes.container.hash: 5e978535,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ec59f96dc5ca807a780b0898a1ca13ae038a3e83e43df7eef31296e6f297120,PodSandboxId:c9df82a07399452bc24414ebb686eb25279999abd813c3a1bf5b1964ffe6a39a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:171453511463300084
1,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-715118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6cf4f2377aeb7600128ff5c542633ad8,},Annotations:map[string]string{io.kubernetes.container.hash: 96fecfa7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e6542cbc796b648420284c6f298ed9fd813087e54aa092fe7efe6fa2afcecac,PodSandboxId:6bcfeffdf6dc222f0fa4c1489ac1111337f2a5f443f90be27dccdb8dd88e0189,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714535114600435459,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-715118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 292d2020dce8f2017946cc5de9055d9a,},Annotations:map[string]string{io.kubernetes.container.hash: e71e301a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2997ad24c9a671bec035780acc282ba18cf87b144bd77e595a59b06414d29f34,PodSandboxId:1ccda42299061a6f842aa6c71b6980f3047de6f3a4ba1a3cd0b3e30d3f578d36,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714535114563119381,Labels:map[string]string{io.kube
rnetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-715118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: beb6d90258e2ad028130bb1ec0b8d9f6,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec5e0a63d43c5edbf108ba506af6763ae952fa85072a3ceba633eccb0fd4c710,PodSandboxId:9aa295a830c06bc2d5fc7eb2cec630a61f167f47b3afec6d2ed81a9efaf9cb95,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714535114467974977,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-715118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 204b55e4a7dda2d8362d806ee3a56174,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2f95ae12-db5f-4202-a0f5-1c8e152d7ad6 name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:54:40 default-k8s-diff-port-715118 crio[726]: time="2024-05-01 03:54:40.533132798Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=927ae803-25f5-4b18-89f9-95e6905f278f name=/runtime.v1.RuntimeService/ListPodSandbox
	May 01 03:54:40 default-k8s-diff-port-715118 crio[726]: time="2024-05-01 03:54:40.533432484Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:edb63b349a081379b6835b92a992c1b2eed12182273639db600a4b3f0b998243,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:debb3a59-143a-46d3-87da-c2403e264861,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714535136098308929,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: debb3a59-143a-46d3-87da-c2403e264861,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespac
e\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-05-01T03:45:35.488358365Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8df7e185ce8e32c5511e5bb4ceada737bbd26c0b2e2ef5f71291f9afac2e9fbc,Metadata:&PodSandboxMetadata{Name:metrics-server-569cc877fc-xwxx9,Uid:a66f5df4-355c-47f0-8b6e-da29e1c4394e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714535135943999119,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-569cc877fc-xwxx9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a66f5df4-355c-47f0-8b6e-d
a29e1c4394e,k8s-app: metrics-server,pod-template-hash: 569cc877fc,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-01T03:45:35.636901623Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9f4f9990c585ec803f12bc5ab6e947b4d96f77913e0fe564009d8aeacfbfd70c,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-bg755,Uid:884d489a-bc1e-442c-8e00-4616f983d3e9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714535134196638213,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-bg755,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 884d489a-bc1e-442c-8e00-4616f983d3e9,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-01T03:45:33.884413365Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8cc39213f143b0273112f6f4b226ce4cf187dce36a1c8ab70af09d9314915f48,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-mp6f5,Uid:4c8550d0
-0029-48f1-a892-1800f6639c75,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714535134115366998,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-mp6f5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c8550d0-0029-48f1-a892-1800f6639c75,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-01T03:45:33.805851732Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8f2b32a0f8500606b42c0b7e0e7f154d5e02360b0b774d4044971ebf2fbbb5cb,Metadata:&PodSandboxMetadata{Name:kube-proxy-2knrp,Uid:cf1406ff-8a6e-49bb-b180-1e72f4b54fbf,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714535133763212894,Labels:map[string]string{controller-revision-hash: 79cf874c65,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-2knrp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf1406ff-8a6e-49bb-b180-1e72f4b54fbf,k8s-app: kube-pro
xy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-01T03:45:33.448574400Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1ccda42299061a6f842aa6c71b6980f3047de6f3a4ba1a3cd0b3e30d3f578d36,Metadata:&PodSandboxMetadata{Name:kube-scheduler-default-k8s-diff-port-715118,Uid:beb6d90258e2ad028130bb1ec0b8d9f6,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714535114303125918,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-715118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: beb6d90258e2ad028130bb1ec0b8d9f6,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: beb6d90258e2ad028130bb1ec0b8d9f6,kubernetes.io/config.seen: 2024-05-01T03:45:13.811895081Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9aa295a830c06bc2d5fc7eb2cec630a61f167f47b3afec6d2ed81a9efaf9cb95,Metadata:&PodSandb
oxMetadata{Name:kube-controller-manager-default-k8s-diff-port-715118,Uid:204b55e4a7dda2d8362d806ee3a56174,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714535114286427881,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-715118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 204b55e4a7dda2d8362d806ee3a56174,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 204b55e4a7dda2d8362d806ee3a56174,kubernetes.io/config.seen: 2024-05-01T03:45:13.811894267Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c9df82a07399452bc24414ebb686eb25279999abd813c3a1bf5b1964ffe6a39a,Metadata:&PodSandboxMetadata{Name:kube-apiserver-default-k8s-diff-port-715118,Uid:6cf4f2377aeb7600128ff5c542633ad8,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714535114281426271,Labels:map[string]string{component: kube-apiserver,io.kubernete
s.container.name: POD,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-715118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6cf4f2377aeb7600128ff5c542633ad8,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.72.158:8444,kubernetes.io/config.hash: 6cf4f2377aeb7600128ff5c542633ad8,kubernetes.io/config.seen: 2024-05-01T03:45:13.811892954Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:6bcfeffdf6dc222f0fa4c1489ac1111337f2a5f443f90be27dccdb8dd88e0189,Metadata:&PodSandboxMetadata{Name:etcd-default-k8s-diff-port-715118,Uid:292d2020dce8f2017946cc5de9055d9a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714535114269369685,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-default-k8s-diff-port-715118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 292d2020dce8f2017946cc5de9055d9a,tier: control-plane,},Annotat
ions:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.72.158:2379,kubernetes.io/config.hash: 292d2020dce8f2017946cc5de9055d9a,kubernetes.io/config.seen: 2024-05-01T03:45:13.811889372Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=927ae803-25f5-4b18-89f9-95e6905f278f name=/runtime.v1.RuntimeService/ListPodSandbox
	May 01 03:54:40 default-k8s-diff-port-715118 crio[726]: time="2024-05-01 03:54:40.534512893Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2619cfe9-2189-4dc9-8169-1168d356f2ce name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:54:40 default-k8s-diff-port-715118 crio[726]: time="2024-05-01 03:54:40.534575597Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2619cfe9-2189-4dc9-8169-1168d356f2ce name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:54:40 default-k8s-diff-port-715118 crio[726]: time="2024-05-01 03:54:40.534754376Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3d8ba3db0459896edb75b12157ddbf8810613153a6df76d1e4eb406b8f8b6e62,PodSandboxId:edb63b349a081379b6835b92a992c1b2eed12182273639db600a4b3f0b998243,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714535136228055114,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: debb3a59-143a-46d3-87da-c2403e264861,},Annotations:map[string]string{io.kubernetes.container.hash: 16db7a36,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52024cf28376a62381031eca8bea22e44266b9d223f6d5e99cf52755f6f9fa39,PodSandboxId:9f4f9990c585ec803f12bc5ab6e947b4d96f77913e0fe564009d8aeacfbfd70c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714535135125317856,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bg755,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 884d489a-bc1e-442c-8e00-4616f983d3e9,},Annotations:map[string]string{io.kubernetes.container.hash: 5f29ea52,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63bdbf22a7bd7a8493dba1bed9368968feb9bff095b2e32ce5d7867b3f9959c1,PodSandboxId:8cc39213f143b0273112f6f4b226ce4cf187dce36a1c8ab70af09d9314915f48,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714535135082897572,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mp6f5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 4c8550d0-0029-48f1-a892-1800f6639c75,},Annotations:map[string]string{io.kubernetes.container.hash: 60529517,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11308e8bbf31d7b87ceb42faf1dbf32e184d440a9ff0a0138c0aadd47365b83a,PodSandboxId:8f2b32a0f8500606b42c0b7e0e7f154d5e02360b0b774d4044971ebf2fbbb5cb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING
,CreatedAt:1714535134099600477,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2knrp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf1406ff-8a6e-49bb-b180-1e72f4b54fbf,},Annotations:map[string]string{io.kubernetes.container.hash: 5e978535,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ec59f96dc5ca807a780b0898a1ca13ae038a3e83e43df7eef31296e6f297120,PodSandboxId:c9df82a07399452bc24414ebb686eb25279999abd813c3a1bf5b1964ffe6a39a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:171453511463300084
1,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-715118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6cf4f2377aeb7600128ff5c542633ad8,},Annotations:map[string]string{io.kubernetes.container.hash: 96fecfa7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e6542cbc796b648420284c6f298ed9fd813087e54aa092fe7efe6fa2afcecac,PodSandboxId:6bcfeffdf6dc222f0fa4c1489ac1111337f2a5f443f90be27dccdb8dd88e0189,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714535114600435459,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-715118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 292d2020dce8f2017946cc5de9055d9a,},Annotations:map[string]string{io.kubernetes.container.hash: e71e301a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2997ad24c9a671bec035780acc282ba18cf87b144bd77e595a59b06414d29f34,PodSandboxId:1ccda42299061a6f842aa6c71b6980f3047de6f3a4ba1a3cd0b3e30d3f578d36,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714535114563119381,Labels:map[string]string{io.kube
rnetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-715118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: beb6d90258e2ad028130bb1ec0b8d9f6,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec5e0a63d43c5edbf108ba506af6763ae952fa85072a3ceba633eccb0fd4c710,PodSandboxId:9aa295a830c06bc2d5fc7eb2cec630a61f167f47b3afec6d2ed81a9efaf9cb95,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714535114467974977,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-715118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 204b55e4a7dda2d8362d806ee3a56174,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2619cfe9-2189-4dc9-8169-1168d356f2ce name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:54:40 default-k8s-diff-port-715118 crio[726]: time="2024-05-01 03:54:40.552789544Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=04c86138-0837-4ead-8116-80f50ff956e1 name=/runtime.v1.RuntimeService/Version
	May 01 03:54:40 default-k8s-diff-port-715118 crio[726]: time="2024-05-01 03:54:40.552873090Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=04c86138-0837-4ead-8116-80f50ff956e1 name=/runtime.v1.RuntimeService/Version
	May 01 03:54:40 default-k8s-diff-port-715118 crio[726]: time="2024-05-01 03:54:40.554626713Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c936f49d-d142-485e-a461-762715a20730 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 03:54:40 default-k8s-diff-port-715118 crio[726]: time="2024-05-01 03:54:40.555046045Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714535680555025046,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133261,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c936f49d-d142-485e-a461-762715a20730 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 03:54:40 default-k8s-diff-port-715118 crio[726]: time="2024-05-01 03:54:40.555830242Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=54ab51df-3ab6-4932-91a2-c7fb30e1de9d name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:54:40 default-k8s-diff-port-715118 crio[726]: time="2024-05-01 03:54:40.555885928Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=54ab51df-3ab6-4932-91a2-c7fb30e1de9d name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:54:40 default-k8s-diff-port-715118 crio[726]: time="2024-05-01 03:54:40.556062181Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3d8ba3db0459896edb75b12157ddbf8810613153a6df76d1e4eb406b8f8b6e62,PodSandboxId:edb63b349a081379b6835b92a992c1b2eed12182273639db600a4b3f0b998243,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714535136228055114,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: debb3a59-143a-46d3-87da-c2403e264861,},Annotations:map[string]string{io.kubernetes.container.hash: 16db7a36,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52024cf28376a62381031eca8bea22e44266b9d223f6d5e99cf52755f6f9fa39,PodSandboxId:9f4f9990c585ec803f12bc5ab6e947b4d96f77913e0fe564009d8aeacfbfd70c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714535135125317856,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bg755,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 884d489a-bc1e-442c-8e00-4616f983d3e9,},Annotations:map[string]string{io.kubernetes.container.hash: 5f29ea52,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63bdbf22a7bd7a8493dba1bed9368968feb9bff095b2e32ce5d7867b3f9959c1,PodSandboxId:8cc39213f143b0273112f6f4b226ce4cf187dce36a1c8ab70af09d9314915f48,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714535135082897572,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mp6f5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 4c8550d0-0029-48f1-a892-1800f6639c75,},Annotations:map[string]string{io.kubernetes.container.hash: 60529517,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11308e8bbf31d7b87ceb42faf1dbf32e184d440a9ff0a0138c0aadd47365b83a,PodSandboxId:8f2b32a0f8500606b42c0b7e0e7f154d5e02360b0b774d4044971ebf2fbbb5cb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING
,CreatedAt:1714535134099600477,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2knrp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf1406ff-8a6e-49bb-b180-1e72f4b54fbf,},Annotations:map[string]string{io.kubernetes.container.hash: 5e978535,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ec59f96dc5ca807a780b0898a1ca13ae038a3e83e43df7eef31296e6f297120,PodSandboxId:c9df82a07399452bc24414ebb686eb25279999abd813c3a1bf5b1964ffe6a39a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:171453511463300084
1,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-715118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6cf4f2377aeb7600128ff5c542633ad8,},Annotations:map[string]string{io.kubernetes.container.hash: 96fecfa7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e6542cbc796b648420284c6f298ed9fd813087e54aa092fe7efe6fa2afcecac,PodSandboxId:6bcfeffdf6dc222f0fa4c1489ac1111337f2a5f443f90be27dccdb8dd88e0189,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714535114600435459,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-715118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 292d2020dce8f2017946cc5de9055d9a,},Annotations:map[string]string{io.kubernetes.container.hash: e71e301a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2997ad24c9a671bec035780acc282ba18cf87b144bd77e595a59b06414d29f34,PodSandboxId:1ccda42299061a6f842aa6c71b6980f3047de6f3a4ba1a3cd0b3e30d3f578d36,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714535114563119381,Labels:map[string]string{io.kube
rnetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-715118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: beb6d90258e2ad028130bb1ec0b8d9f6,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec5e0a63d43c5edbf108ba506af6763ae952fa85072a3ceba633eccb0fd4c710,PodSandboxId:9aa295a830c06bc2d5fc7eb2cec630a61f167f47b3afec6d2ed81a9efaf9cb95,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714535114467974977,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-715118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 204b55e4a7dda2d8362d806ee3a56174,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=54ab51df-3ab6-4932-91a2-c7fb30e1de9d name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:54:40 default-k8s-diff-port-715118 crio[726]: time="2024-05-01 03:54:40.596228650Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=38975e8f-d426-44a4-876d-21eab9a04f14 name=/runtime.v1.RuntimeService/Version
	May 01 03:54:40 default-k8s-diff-port-715118 crio[726]: time="2024-05-01 03:54:40.596309132Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=38975e8f-d426-44a4-876d-21eab9a04f14 name=/runtime.v1.RuntimeService/Version
	May 01 03:54:40 default-k8s-diff-port-715118 crio[726]: time="2024-05-01 03:54:40.598323091Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cd26b179-776a-48ff-9b64-02dff498e3ca name=/runtime.v1.ImageService/ImageFsInfo
	May 01 03:54:40 default-k8s-diff-port-715118 crio[726]: time="2024-05-01 03:54:40.599036599Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714535680599012152,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133261,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cd26b179-776a-48ff-9b64-02dff498e3ca name=/runtime.v1.ImageService/ImageFsInfo
	May 01 03:54:40 default-k8s-diff-port-715118 crio[726]: time="2024-05-01 03:54:40.600144371Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e37b8651-91d3-4c70-ba4e-740196941aae name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:54:40 default-k8s-diff-port-715118 crio[726]: time="2024-05-01 03:54:40.600305115Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e37b8651-91d3-4c70-ba4e-740196941aae name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:54:40 default-k8s-diff-port-715118 crio[726]: time="2024-05-01 03:54:40.600568089Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3d8ba3db0459896edb75b12157ddbf8810613153a6df76d1e4eb406b8f8b6e62,PodSandboxId:edb63b349a081379b6835b92a992c1b2eed12182273639db600a4b3f0b998243,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714535136228055114,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: debb3a59-143a-46d3-87da-c2403e264861,},Annotations:map[string]string{io.kubernetes.container.hash: 16db7a36,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52024cf28376a62381031eca8bea22e44266b9d223f6d5e99cf52755f6f9fa39,PodSandboxId:9f4f9990c585ec803f12bc5ab6e947b4d96f77913e0fe564009d8aeacfbfd70c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714535135125317856,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bg755,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 884d489a-bc1e-442c-8e00-4616f983d3e9,},Annotations:map[string]string{io.kubernetes.container.hash: 5f29ea52,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63bdbf22a7bd7a8493dba1bed9368968feb9bff095b2e32ce5d7867b3f9959c1,PodSandboxId:8cc39213f143b0273112f6f4b226ce4cf187dce36a1c8ab70af09d9314915f48,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714535135082897572,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mp6f5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 4c8550d0-0029-48f1-a892-1800f6639c75,},Annotations:map[string]string{io.kubernetes.container.hash: 60529517,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11308e8bbf31d7b87ceb42faf1dbf32e184d440a9ff0a0138c0aadd47365b83a,PodSandboxId:8f2b32a0f8500606b42c0b7e0e7f154d5e02360b0b774d4044971ebf2fbbb5cb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING
,CreatedAt:1714535134099600477,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2knrp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf1406ff-8a6e-49bb-b180-1e72f4b54fbf,},Annotations:map[string]string{io.kubernetes.container.hash: 5e978535,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ec59f96dc5ca807a780b0898a1ca13ae038a3e83e43df7eef31296e6f297120,PodSandboxId:c9df82a07399452bc24414ebb686eb25279999abd813c3a1bf5b1964ffe6a39a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:171453511463300084
1,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-715118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6cf4f2377aeb7600128ff5c542633ad8,},Annotations:map[string]string{io.kubernetes.container.hash: 96fecfa7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e6542cbc796b648420284c6f298ed9fd813087e54aa092fe7efe6fa2afcecac,PodSandboxId:6bcfeffdf6dc222f0fa4c1489ac1111337f2a5f443f90be27dccdb8dd88e0189,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714535114600435459,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-715118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 292d2020dce8f2017946cc5de9055d9a,},Annotations:map[string]string{io.kubernetes.container.hash: e71e301a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2997ad24c9a671bec035780acc282ba18cf87b144bd77e595a59b06414d29f34,PodSandboxId:1ccda42299061a6f842aa6c71b6980f3047de6f3a4ba1a3cd0b3e30d3f578d36,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714535114563119381,Labels:map[string]string{io.kube
rnetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-715118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: beb6d90258e2ad028130bb1ec0b8d9f6,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec5e0a63d43c5edbf108ba506af6763ae952fa85072a3ceba633eccb0fd4c710,PodSandboxId:9aa295a830c06bc2d5fc7eb2cec630a61f167f47b3afec6d2ed81a9efaf9cb95,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714535114467974977,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-715118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 204b55e4a7dda2d8362d806ee3a56174,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e37b8651-91d3-4c70-ba4e-740196941aae name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3d8ba3db04598       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   edb63b349a081       storage-provisioner
	52024cf28376a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   9f4f9990c585e       coredns-7db6d8ff4d-bg755
	63bdbf22a7bd7       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   8cc39213f143b       coredns-7db6d8ff4d-mp6f5
	11308e8bbf31d       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b   9 minutes ago       Running             kube-proxy                0                   8f2b32a0f8500       kube-proxy-2knrp
	4ec59f96dc5ca       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0   9 minutes ago       Running             kube-apiserver            2                   c9df82a073994       kube-apiserver-default-k8s-diff-port-715118
	8e6542cbc796b       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   9 minutes ago       Running             etcd                      2                   6bcfeffdf6dc2       etcd-default-k8s-diff-port-715118
	2997ad24c9a67       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced   9 minutes ago       Running             kube-scheduler            2                   1ccda42299061       kube-scheduler-default-k8s-diff-port-715118
	ec5e0a63d43c5       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b   9 minutes ago       Running             kube-controller-manager   2                   9aa295a830c06       kube-controller-manager-default-k8s-diff-port-715118
	
	
	==> coredns [52024cf28376a62381031eca8bea22e44266b9d223f6d5e99cf52755f6f9fa39] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [63bdbf22a7bd7a8493dba1bed9368968feb9bff095b2e32ce5d7867b3f9959c1] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-715118
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-715118
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e
	                    minikube.k8s.io/name=default-k8s-diff-port-715118
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_01T03_45_20_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 May 2024 03:45:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-715118
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 May 2024 03:54:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 May 2024 03:50:47 +0000   Wed, 01 May 2024 03:45:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 May 2024 03:50:47 +0000   Wed, 01 May 2024 03:45:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 May 2024 03:50:47 +0000   Wed, 01 May 2024 03:45:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 May 2024 03:50:47 +0000   Wed, 01 May 2024 03:45:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.158
	  Hostname:    default-k8s-diff-port-715118
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ca78afd83edb42498001e582216e9753
	  System UUID:                ca78afd8-3edb-4249-8001-e582216e9753
	  Boot ID:                    f24916e9-fc2a-4f3d-a80f-63bee0b9a0aa
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-bg755                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m7s
	  kube-system                 coredns-7db6d8ff4d-mp6f5                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m7s
	  kube-system                 etcd-default-k8s-diff-port-715118                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m20s
	  kube-system                 kube-apiserver-default-k8s-diff-port-715118             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-715118    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 kube-proxy-2knrp                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m7s
	  kube-system                 kube-scheduler-default-k8s-diff-port-715118             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m22s
	  kube-system                 metrics-server-569cc877fc-xwxx9                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m5s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m6s   kube-proxy       
	  Normal  Starting                 9m20s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m20s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m20s  kubelet          Node default-k8s-diff-port-715118 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m20s  kubelet          Node default-k8s-diff-port-715118 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m20s  kubelet          Node default-k8s-diff-port-715118 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m8s   node-controller  Node default-k8s-diff-port-715118 event: Registered Node default-k8s-diff-port-715118 in Controller
	
	
	==> dmesg <==
	[  +0.053880] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.045654] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[May 1 03:40] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.471209] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.570433] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.189641] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.134501] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +0.229895] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +0.135829] systemd-fstab-generator[679]: Ignoring "noauto" option for root device
	[  +0.341667] systemd-fstab-generator[710]: Ignoring "noauto" option for root device
	[  +5.336904] systemd-fstab-generator[807]: Ignoring "noauto" option for root device
	[  +0.061899] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.158285] systemd-fstab-generator[939]: Ignoring "noauto" option for root device
	[  +5.594307] kauditd_printk_skb: 97 callbacks suppressed
	[  +7.351619] kauditd_printk_skb: 50 callbacks suppressed
	[  +6.685649] kauditd_printk_skb: 27 callbacks suppressed
	[May 1 03:45] kauditd_printk_skb: 9 callbacks suppressed
	[  +1.607913] systemd-fstab-generator[3603]: Ignoring "noauto" option for root device
	[  +4.523756] kauditd_printk_skb: 53 callbacks suppressed
	[  +2.035180] systemd-fstab-generator[3925]: Ignoring "noauto" option for root device
	[ +13.932253] systemd-fstab-generator[4145]: Ignoring "noauto" option for root device
	[  +0.130974] kauditd_printk_skb: 14 callbacks suppressed
	[May 1 03:46] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [8e6542cbc796b648420284c6f298ed9fd813087e54aa092fe7efe6fa2afcecac] <==
	{"level":"info","ts":"2024-05-01T03:45:15.071226Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"244d86dcb1337571 switched to configuration voters=(2615895240995992945)"}
	{"level":"info","ts":"2024-05-01T03:45:15.072224Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"c08228541f5dd967","local-member-id":"244d86dcb1337571","added-peer-id":"244d86dcb1337571","added-peer-peer-urls":["https://192.168.72.158:2380"]}
	{"level":"info","ts":"2024-05-01T03:45:15.07191Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-05-01T03:45:15.071939Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.72.158:2380"}
	{"level":"info","ts":"2024-05-01T03:45:15.076256Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.72.158:2380"}
	{"level":"info","ts":"2024-05-01T03:45:15.075192Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"244d86dcb1337571","initial-advertise-peer-urls":["https://192.168.72.158:2380"],"listen-peer-urls":["https://192.168.72.158:2380"],"advertise-client-urls":["https://192.168.72.158:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.158:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-05-01T03:45:15.07521Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-05-01T03:45:15.700765Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"244d86dcb1337571 is starting a new election at term 1"}
	{"level":"info","ts":"2024-05-01T03:45:15.701153Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"244d86dcb1337571 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-05-01T03:45:15.701437Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"244d86dcb1337571 received MsgPreVoteResp from 244d86dcb1337571 at term 1"}
	{"level":"info","ts":"2024-05-01T03:45:15.701705Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"244d86dcb1337571 became candidate at term 2"}
	{"level":"info","ts":"2024-05-01T03:45:15.701886Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"244d86dcb1337571 received MsgVoteResp from 244d86dcb1337571 at term 2"}
	{"level":"info","ts":"2024-05-01T03:45:15.703604Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"244d86dcb1337571 became leader at term 2"}
	{"level":"info","ts":"2024-05-01T03:45:15.703885Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 244d86dcb1337571 elected leader 244d86dcb1337571 at term 2"}
	{"level":"info","ts":"2024-05-01T03:45:15.713572Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-01T03:45:15.725206Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"244d86dcb1337571","local-member-attributes":"{Name:default-k8s-diff-port-715118 ClientURLs:[https://192.168.72.158:2379]}","request-path":"/0/members/244d86dcb1337571/attributes","cluster-id":"c08228541f5dd967","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-01T03:45:15.725412Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-01T03:45:15.726542Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-01T03:45:15.743519Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-01T03:45:15.743622Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-01T03:45:15.761766Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-05-01T03:45:15.761929Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"c08228541f5dd967","local-member-id":"244d86dcb1337571","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-01T03:45:15.762033Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-01T03:45:15.762087Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-01T03:45:15.76212Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.158:2379"}
	
	
	==> kernel <==
	 03:54:41 up 14 min,  0 users,  load average: 0.34, 0.27, 0.21
	Linux default-k8s-diff-port-715118 5.10.207 #1 SMP Tue Apr 30 22:38:43 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [4ec59f96dc5ca807a780b0898a1ca13ae038a3e83e43df7eef31296e6f297120] <==
	I0501 03:48:36.548981       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0501 03:50:17.426155       1 handler_proxy.go:93] no RequestInfo found in the context
	E0501 03:50:17.426316       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0501 03:50:18.426736       1 handler_proxy.go:93] no RequestInfo found in the context
	E0501 03:50:18.426931       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0501 03:50:18.426988       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0501 03:50:18.426754       1 handler_proxy.go:93] no RequestInfo found in the context
	E0501 03:50:18.427123       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0501 03:50:18.428550       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0501 03:51:18.427954       1 handler_proxy.go:93] no RequestInfo found in the context
	E0501 03:51:18.428085       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0501 03:51:18.428113       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0501 03:51:18.429628       1 handler_proxy.go:93] no RequestInfo found in the context
	E0501 03:51:18.429737       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0501 03:51:18.429761       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0501 03:53:18.428990       1 handler_proxy.go:93] no RequestInfo found in the context
	E0501 03:53:18.429304       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0501 03:53:18.429356       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0501 03:53:18.430268       1 handler_proxy.go:93] no RequestInfo found in the context
	E0501 03:53:18.430340       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0501 03:53:18.430357       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [ec5e0a63d43c5edbf108ba506af6763ae952fa85072a3ceba633eccb0fd4c710] <==
	I0501 03:49:03.365440       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0501 03:49:32.820201       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0501 03:49:33.373767       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0501 03:50:02.826775       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0501 03:50:03.383357       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0501 03:50:32.835419       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0501 03:50:33.392374       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0501 03:51:02.841129       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0501 03:51:03.401551       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0501 03:51:31.201802       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="429.268µs"
	E0501 03:51:32.846197       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0501 03:51:33.410442       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0501 03:51:43.198955       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="111.305µs"
	E0501 03:52:02.851677       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0501 03:52:03.421550       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0501 03:52:32.857807       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0501 03:52:33.430579       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0501 03:53:02.863431       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0501 03:53:03.438647       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0501 03:53:32.869022       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0501 03:53:33.448336       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0501 03:54:02.874838       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0501 03:54:03.457930       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0501 03:54:32.882161       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0501 03:54:33.470593       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [11308e8bbf31d7b87ceb42faf1dbf32e184d440a9ff0a0138c0aadd47365b83a] <==
	I0501 03:45:34.508086       1 server_linux.go:69] "Using iptables proxy"
	I0501 03:45:34.538174       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.72.158"]
	I0501 03:45:34.695855       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0501 03:45:34.695917       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0501 03:45:34.695936       1 server_linux.go:165] "Using iptables Proxier"
	I0501 03:45:34.724663       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0501 03:45:34.724968       1 server.go:872] "Version info" version="v1.30.0"
	I0501 03:45:34.725010       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 03:45:34.726059       1 config.go:192] "Starting service config controller"
	I0501 03:45:34.726103       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0501 03:45:34.726140       1 config.go:101] "Starting endpoint slice config controller"
	I0501 03:45:34.726171       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0501 03:45:34.730943       1 config.go:319] "Starting node config controller"
	I0501 03:45:34.730985       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0501 03:45:34.827445       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0501 03:45:34.827540       1 shared_informer.go:320] Caches are synced for service config
	I0501 03:45:34.837410       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [2997ad24c9a671bec035780acc282ba18cf87b144bd77e595a59b06414d29f34] <==
	W0501 03:45:18.303447       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0501 03:45:18.303640       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0501 03:45:18.313080       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0501 03:45:18.313133       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0501 03:45:18.404035       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0501 03:45:18.404093       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0501 03:45:18.540939       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0501 03:45:18.541043       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0501 03:45:18.572743       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0501 03:45:18.572982       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0501 03:45:18.594732       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0501 03:45:18.595047       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0501 03:45:18.596546       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0501 03:45:18.597152       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0501 03:45:18.611656       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0501 03:45:18.611730       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0501 03:45:18.707290       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0501 03:45:18.707778       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0501 03:45:18.708142       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0501 03:45:18.708215       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0501 03:45:18.734199       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0501 03:45:18.734284       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0501 03:45:19.011905       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0501 03:45:19.011959       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0501 03:45:21.457931       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 01 03:52:20 default-k8s-diff-port-715118 kubelet[3932]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 01 03:52:20 default-k8s-diff-port-715118 kubelet[3932]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 01 03:52:20 default-k8s-diff-port-715118 kubelet[3932]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 01 03:52:20 default-k8s-diff-port-715118 kubelet[3932]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 01 03:52:25 default-k8s-diff-port-715118 kubelet[3932]: E0501 03:52:25.184076    3932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-xwxx9" podUID="a66f5df4-355c-47f0-8b6e-da29e1c4394e"
	May 01 03:52:39 default-k8s-diff-port-715118 kubelet[3932]: E0501 03:52:39.183925    3932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-xwxx9" podUID="a66f5df4-355c-47f0-8b6e-da29e1c4394e"
	May 01 03:52:54 default-k8s-diff-port-715118 kubelet[3932]: E0501 03:52:54.184970    3932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-xwxx9" podUID="a66f5df4-355c-47f0-8b6e-da29e1c4394e"
	May 01 03:53:09 default-k8s-diff-port-715118 kubelet[3932]: E0501 03:53:09.183154    3932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-xwxx9" podUID="a66f5df4-355c-47f0-8b6e-da29e1c4394e"
	May 01 03:53:20 default-k8s-diff-port-715118 kubelet[3932]: E0501 03:53:20.231824    3932 iptables.go:577] "Could not set up iptables canary" err=<
	May 01 03:53:20 default-k8s-diff-port-715118 kubelet[3932]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 01 03:53:20 default-k8s-diff-port-715118 kubelet[3932]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 01 03:53:20 default-k8s-diff-port-715118 kubelet[3932]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 01 03:53:20 default-k8s-diff-port-715118 kubelet[3932]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 01 03:53:22 default-k8s-diff-port-715118 kubelet[3932]: E0501 03:53:22.183134    3932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-xwxx9" podUID="a66f5df4-355c-47f0-8b6e-da29e1c4394e"
	May 01 03:53:37 default-k8s-diff-port-715118 kubelet[3932]: E0501 03:53:37.183955    3932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-xwxx9" podUID="a66f5df4-355c-47f0-8b6e-da29e1c4394e"
	May 01 03:53:48 default-k8s-diff-port-715118 kubelet[3932]: E0501 03:53:48.186086    3932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-xwxx9" podUID="a66f5df4-355c-47f0-8b6e-da29e1c4394e"
	May 01 03:54:02 default-k8s-diff-port-715118 kubelet[3932]: E0501 03:54:02.185792    3932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-xwxx9" podUID="a66f5df4-355c-47f0-8b6e-da29e1c4394e"
	May 01 03:54:15 default-k8s-diff-port-715118 kubelet[3932]: E0501 03:54:15.184738    3932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-xwxx9" podUID="a66f5df4-355c-47f0-8b6e-da29e1c4394e"
	May 01 03:54:20 default-k8s-diff-port-715118 kubelet[3932]: E0501 03:54:20.231443    3932 iptables.go:577] "Could not set up iptables canary" err=<
	May 01 03:54:20 default-k8s-diff-port-715118 kubelet[3932]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 01 03:54:20 default-k8s-diff-port-715118 kubelet[3932]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 01 03:54:20 default-k8s-diff-port-715118 kubelet[3932]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 01 03:54:20 default-k8s-diff-port-715118 kubelet[3932]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 01 03:54:26 default-k8s-diff-port-715118 kubelet[3932]: E0501 03:54:26.183149    3932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-xwxx9" podUID="a66f5df4-355c-47f0-8b6e-da29e1c4394e"
	May 01 03:54:37 default-k8s-diff-port-715118 kubelet[3932]: E0501 03:54:37.183359    3932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-xwxx9" podUID="a66f5df4-355c-47f0-8b6e-da29e1c4394e"
	
	
	==> storage-provisioner [3d8ba3db0459896edb75b12157ddbf8810613153a6df76d1e4eb406b8f8b6e62] <==
	I0501 03:45:36.325513       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0501 03:45:36.337266       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0501 03:45:36.337545       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0501 03:45:36.349200       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0501 03:45:36.350211       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-715118_c675c8ff-db06-4458-ad4e-38e4966957bd!
	I0501 03:45:36.349957       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c297f778-6158-476d-8a08-666ad6d4f2da", APIVersion:"v1", ResourceVersion:"412", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-715118_c675c8ff-db06-4458-ad4e-38e4966957bd became leader
	I0501 03:45:36.451639       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-715118_c675c8ff-db06-4458-ad4e-38e4966957bd!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-715118 -n default-k8s-diff-port-715118
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-715118 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-xwxx9
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-715118 describe pod metrics-server-569cc877fc-xwxx9
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-715118 describe pod metrics-server-569cc877fc-xwxx9: exit status 1 (64.249841ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-xwxx9" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-715118 describe pod metrics-server-569cc877fc-xwxx9: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.41s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.37s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-892672 -n no-preload-892672
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-05-01 03:55:38.587994494 +0000 UTC m=+6513.693910789
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
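For reference, the condition this test polls for can be re-checked by hand against the same cluster. A minimal sketch, assuming the kubectl context carries the profile name (as the other kubectl invocations in this report do):

	kubectl --context no-preload-892672 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	kubectl --context no-preload-892672 -n kubernetes-dashboard wait --for=condition=ready pod -l k8s-app=kubernetes-dashboard --timeout=90s

If the first command returns no matching pods, the dashboard addon deployment never came up after the stop/start cycle, which is consistent with the context-deadline failure reported above.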
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-892672 -n no-preload-892672
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-892672 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-892672 logs -n 25: (2.168605952s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p cert-options-582976                                 | cert-options-582976          | jenkins | v1.33.0 | 01 May 24 03:30 UTC | 01 May 24 03:30 UTC |
	| delete  | -p pause-542495                                        | pause-542495                 | jenkins | v1.33.0 | 01 May 24 03:30 UTC | 01 May 24 03:30 UTC |
	| start   | -p no-preload-892672                                   | no-preload-892672            | jenkins | v1.33.0 | 01 May 24 03:30 UTC | 01 May 24 03:32 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-277128                                  | embed-certs-277128           | jenkins | v1.33.0 | 01 May 24 03:30 UTC | 01 May 24 03:32 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-046243                           | kubernetes-upgrade-046243    | jenkins | v1.33.0 | 01 May 24 03:30 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-046243                           | kubernetes-upgrade-046243    | jenkins | v1.33.0 | 01 May 24 03:30 UTC | 01 May 24 03:32 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-046243                           | kubernetes-upgrade-046243    | jenkins | v1.33.0 | 01 May 24 03:32 UTC | 01 May 24 03:32 UTC |
	| delete  | -p                                                     | disable-driver-mounts-483221 | jenkins | v1.33.0 | 01 May 24 03:32 UTC | 01 May 24 03:32 UTC |
	|         | disable-driver-mounts-483221                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-715118 | jenkins | v1.33.0 | 01 May 24 03:32 UTC | 01 May 24 03:33 UTC |
	|         | default-k8s-diff-port-715118                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-892672             | no-preload-892672            | jenkins | v1.33.0 | 01 May 24 03:32 UTC | 01 May 24 03:32 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-892672                                   | no-preload-892672            | jenkins | v1.33.0 | 01 May 24 03:32 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-277128            | embed-certs-277128           | jenkins | v1.33.0 | 01 May 24 03:32 UTC | 01 May 24 03:32 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-277128                                  | embed-certs-277128           | jenkins | v1.33.0 | 01 May 24 03:32 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-715118  | default-k8s-diff-port-715118 | jenkins | v1.33.0 | 01 May 24 03:33 UTC | 01 May 24 03:33 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-715118 | jenkins | v1.33.0 | 01 May 24 03:33 UTC |                     |
	|         | default-k8s-diff-port-715118                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-892672                  | no-preload-892672            | jenkins | v1.33.0 | 01 May 24 03:34 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-277128                 | embed-certs-277128           | jenkins | v1.33.0 | 01 May 24 03:34 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-892672                                   | no-preload-892672            | jenkins | v1.33.0 | 01 May 24 03:34 UTC | 01 May 24 03:46 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-503971        | old-k8s-version-503971       | jenkins | v1.33.0 | 01 May 24 03:34 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-277128                                  | embed-certs-277128           | jenkins | v1.33.0 | 01 May 24 03:34 UTC | 01 May 24 03:44 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-715118       | default-k8s-diff-port-715118 | jenkins | v1.33.0 | 01 May 24 03:35 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-715118 | jenkins | v1.33.0 | 01 May 24 03:35 UTC | 01 May 24 03:45 UTC |
	|         | default-k8s-diff-port-715118                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-503971                              | old-k8s-version-503971       | jenkins | v1.33.0 | 01 May 24 03:36 UTC | 01 May 24 03:36 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-503971             | old-k8s-version-503971       | jenkins | v1.33.0 | 01 May 24 03:36 UTC | 01 May 24 03:36 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-503971                              | old-k8s-version-503971       | jenkins | v1.33.0 | 01 May 24 03:36 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/01 03:36:41
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0501 03:36:41.470152   69580 out.go:291] Setting OutFile to fd 1 ...
	I0501 03:36:41.470256   69580 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 03:36:41.470264   69580 out.go:304] Setting ErrFile to fd 2...
	I0501 03:36:41.470268   69580 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 03:36:41.470484   69580 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18779-13391/.minikube/bin
	I0501 03:36:41.470989   69580 out.go:298] Setting JSON to false
	I0501 03:36:41.471856   69580 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8345,"bootTime":1714526257,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0501 03:36:41.471911   69580 start.go:139] virtualization: kvm guest
	I0501 03:36:41.473901   69580 out.go:177] * [old-k8s-version-503971] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0501 03:36:41.474994   69580 out.go:177]   - MINIKUBE_LOCATION=18779
	I0501 03:36:41.475003   69580 notify.go:220] Checking for updates...
	I0501 03:36:41.477150   69580 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0501 03:36:41.478394   69580 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18779-13391/kubeconfig
	I0501 03:36:41.479462   69580 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18779-13391/.minikube
	I0501 03:36:41.480507   69580 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0501 03:36:41.481543   69580 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0501 03:36:41.482907   69580 config.go:182] Loaded profile config "old-k8s-version-503971": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0501 03:36:41.483279   69580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:36:41.483311   69580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:36:41.497758   69580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40255
	I0501 03:36:41.498090   69580 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:36:41.498591   69580 main.go:141] libmachine: Using API Version  1
	I0501 03:36:41.498616   69580 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:36:41.498891   69580 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:36:41.499055   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .DriverName
	I0501 03:36:41.500675   69580 out.go:177] * Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	I0501 03:36:41.501716   69580 driver.go:392] Setting default libvirt URI to qemu:///system
	I0501 03:36:41.501974   69580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:36:41.502024   69580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:36:41.515991   69580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40591
	I0501 03:36:41.516392   69580 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:36:41.516826   69580 main.go:141] libmachine: Using API Version  1
	I0501 03:36:41.516846   69580 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:36:41.517120   69580 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:36:41.517281   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .DriverName
	I0501 03:36:41.551130   69580 out.go:177] * Using the kvm2 driver based on existing profile
	I0501 03:36:41.552244   69580 start.go:297] selected driver: kvm2
	I0501 03:36:41.552253   69580 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-503971 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-503971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.104 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 03:36:41.552369   69580 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0501 03:36:41.553004   69580 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0501 03:36:41.553071   69580 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18779-13391/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0501 03:36:41.567362   69580 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0501 03:36:41.567736   69580 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0501 03:36:41.567815   69580 cni.go:84] Creating CNI manager for ""
	I0501 03:36:41.567832   69580 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0501 03:36:41.567881   69580 start.go:340] cluster config:
	{Name:old-k8s-version-503971 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-503971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.104 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 03:36:41.568012   69580 iso.go:125] acquiring lock: {Name:mkcd2496aadb29931e179193d707c731bfda98b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0501 03:36:41.569791   69580 out.go:177] * Starting "old-k8s-version-503971" primary control-plane node in "old-k8s-version-503971" cluster
	I0501 03:36:38.886755   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:36:41.571352   69580 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0501 03:36:41.571389   69580 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0501 03:36:41.571408   69580 cache.go:56] Caching tarball of preloaded images
	I0501 03:36:41.571478   69580 preload.go:173] Found /home/jenkins/minikube-integration/18779-13391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0501 03:36:41.571490   69580 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0501 03:36:41.571588   69580 profile.go:143] Saving config to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/config.json ...
	I0501 03:36:41.571775   69580 start.go:360] acquireMachinesLock for old-k8s-version-503971: {Name:mkc4548e5afe1d4b0a833f6c522103562b0cefff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0501 03:36:44.966689   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:36:48.038769   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:36:54.118675   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:36:57.190716   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:37:03.270653   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:37:06.342693   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:37:12.422726   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:37:15.494702   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:37:21.574646   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:37:24.646711   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:37:30.726724   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:37:33.798628   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:37:39.878657   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:37:42.950647   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:37:49.030699   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:37:52.102665   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:37:58.182647   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:38:01.254620   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:38:07.334707   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:38:10.406670   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:38:16.486684   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:38:19.558714   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:38:25.638642   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:38:28.710687   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:38:34.790659   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:38:37.862651   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:38:43.942639   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:38:47.014729   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:38:53.094674   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:38:56.166684   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:39:02.246662   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:39:05.318633   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:39:11.398705   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:39:14.470640   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:39:20.550642   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:39:23.622701   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
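The run of "connect: no route to host" lines above is libmachine polling the no-preload VM's SSH port (192.168.39.144:22) until the guest becomes routable again. A minimal Go sketch of that polling loop, assuming a fixed 3-second interval and a 5-minute deadline (the real cadence and timeout may differ):

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForTCP polls addr until a TCP connection succeeds or the deadline passes.
func waitForTCP(addr string, interval, deadline time.Duration) error {
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		conn, err := net.DialTimeout("tcp", addr, 10*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		fmt.Printf("dial %s: %v; retrying in %s\n", addr, err, interval)
		time.Sleep(interval)
	}
	return fmt.Errorf("%s not reachable within %s", addr, deadline)
}

func main() {
	if err := waitForTCP("192.168.39.144:22", 3*time.Second, 5*time.Minute); err != nil {
		fmt.Println(err)
	}
}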
	I0501 03:39:32.707273   68864 start.go:364] duration metric: took 4m38.787656406s to acquireMachinesLock for "embed-certs-277128"
	I0501 03:39:32.707327   68864 start.go:96] Skipping create...Using existing machine configuration
	I0501 03:39:32.707336   68864 fix.go:54] fixHost starting: 
	I0501 03:39:32.707655   68864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:39:32.707697   68864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:39:32.722689   68864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35015
	I0501 03:39:32.723061   68864 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:39:32.723536   68864 main.go:141] libmachine: Using API Version  1
	I0501 03:39:32.723557   68864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:39:32.723848   68864 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:39:32.724041   68864 main.go:141] libmachine: (embed-certs-277128) Calling .DriverName
	I0501 03:39:32.724164   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetState
	I0501 03:39:32.725542   68864 fix.go:112] recreateIfNeeded on embed-certs-277128: state=Stopped err=<nil>
	I0501 03:39:32.725569   68864 main.go:141] libmachine: (embed-certs-277128) Calling .DriverName
	W0501 03:39:32.725714   68864 fix.go:138] unexpected machine state, will restart: <nil>
	I0501 03:39:32.727403   68864 out.go:177] * Restarting existing kvm2 VM for "embed-certs-277128" ...
	I0501 03:39:29.702654   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:39:32.704906   68640 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0501 03:39:32.704940   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetMachineName
	I0501 03:39:32.705254   68640 buildroot.go:166] provisioning hostname "no-preload-892672"
	I0501 03:39:32.705278   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetMachineName
	I0501 03:39:32.705499   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:39:32.707128   68640 machine.go:97] duration metric: took 4m44.649178925s to provisionDockerMachine
	I0501 03:39:32.707171   68640 fix.go:56] duration metric: took 4m44.67002247s for fixHost
	I0501 03:39:32.707178   68640 start.go:83] releasing machines lock for "no-preload-892672", held for 4m44.670048235s
	W0501 03:39:32.707201   68640 start.go:713] error starting host: provision: host is not running
	W0501 03:39:32.707293   68640 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0501 03:39:32.707305   68640 start.go:728] Will try again in 5 seconds ...
	I0501 03:39:32.728616   68864 main.go:141] libmachine: (embed-certs-277128) Calling .Start
	I0501 03:39:32.728768   68864 main.go:141] libmachine: (embed-certs-277128) Ensuring networks are active...
	I0501 03:39:32.729434   68864 main.go:141] libmachine: (embed-certs-277128) Ensuring network default is active
	I0501 03:39:32.729789   68864 main.go:141] libmachine: (embed-certs-277128) Ensuring network mk-embed-certs-277128 is active
	I0501 03:39:32.730218   68864 main.go:141] libmachine: (embed-certs-277128) Getting domain xml...
	I0501 03:39:32.730972   68864 main.go:141] libmachine: (embed-certs-277128) Creating domain...
	I0501 03:39:37.711605   68640 start.go:360] acquireMachinesLock for no-preload-892672: {Name:mkc4548e5afe1d4b0a833f6c522103562b0cefff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
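acquireMachinesLock serialises machine operations across profiles, which is why embed-certs-277128 reports above that it waited 4m38s for the lock; the spec in the log retries every 500ms for up to 13m. A toy, in-process version of that acquire-with-delay-and-timeout shape (the real lock is shared across processes via a lock file; this is only illustrative):

package main

import (
	"errors"
	"fmt"
	"sync"
	"time"
)

// lockTable tracks which machine names are currently held.
type lockTable struct {
	mu   sync.Mutex
	held map[string]bool
}

// tryAcquire polls until the named lock is free or the timeout expires,
// sleeping delay between attempts.
func (t *lockTable) tryAcquire(name string, delay, timeout time.Duration) error {
	stop := time.Now().Add(timeout)
	for time.Now().Before(stop) {
		t.mu.Lock()
		if !t.held[name] {
			t.held[name] = true
			t.mu.Unlock()
			return nil
		}
		t.mu.Unlock()
		time.Sleep(delay)
	}
	return errors.New("timed out acquiring lock for " + name)
}

func main() {
	t := &lockTable{held: map[string]bool{}}
	if err := t.tryAcquire("no-preload-892672", 500*time.Millisecond, 13*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("lock acquired")
}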
	I0501 03:39:33.914124   68864 main.go:141] libmachine: (embed-certs-277128) Waiting to get IP...
	I0501 03:39:33.915022   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:33.915411   68864 main.go:141] libmachine: (embed-certs-277128) DBG | unable to find current IP address of domain embed-certs-277128 in network mk-embed-certs-277128
	I0501 03:39:33.915473   68864 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:39:33.915391   70171 retry.go:31] will retry after 278.418743ms: waiting for machine to come up
	I0501 03:39:34.195933   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:34.196470   68864 main.go:141] libmachine: (embed-certs-277128) DBG | unable to find current IP address of domain embed-certs-277128 in network mk-embed-certs-277128
	I0501 03:39:34.196497   68864 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:39:34.196417   70171 retry.go:31] will retry after 375.593174ms: waiting for machine to come up
	I0501 03:39:34.574178   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:34.574666   68864 main.go:141] libmachine: (embed-certs-277128) DBG | unable to find current IP address of domain embed-certs-277128 in network mk-embed-certs-277128
	I0501 03:39:34.574689   68864 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:39:34.574617   70171 retry.go:31] will retry after 377.853045ms: waiting for machine to come up
	I0501 03:39:34.954022   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:34.954436   68864 main.go:141] libmachine: (embed-certs-277128) DBG | unable to find current IP address of domain embed-certs-277128 in network mk-embed-certs-277128
	I0501 03:39:34.954465   68864 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:39:34.954375   70171 retry.go:31] will retry after 374.024178ms: waiting for machine to come up
	I0501 03:39:35.330087   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:35.330514   68864 main.go:141] libmachine: (embed-certs-277128) DBG | unable to find current IP address of domain embed-certs-277128 in network mk-embed-certs-277128
	I0501 03:39:35.330545   68864 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:39:35.330478   70171 retry.go:31] will retry after 488.296666ms: waiting for machine to come up
	I0501 03:39:35.820177   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:35.820664   68864 main.go:141] libmachine: (embed-certs-277128) DBG | unable to find current IP address of domain embed-certs-277128 in network mk-embed-certs-277128
	I0501 03:39:35.820692   68864 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:39:35.820629   70171 retry.go:31] will retry after 665.825717ms: waiting for machine to come up
	I0501 03:39:36.488492   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:36.488910   68864 main.go:141] libmachine: (embed-certs-277128) DBG | unable to find current IP address of domain embed-certs-277128 in network mk-embed-certs-277128
	I0501 03:39:36.488941   68864 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:39:36.488860   70171 retry.go:31] will retry after 1.04269192s: waiting for machine to come up
	I0501 03:39:37.532622   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:37.533006   68864 main.go:141] libmachine: (embed-certs-277128) DBG | unable to find current IP address of domain embed-certs-277128 in network mk-embed-certs-277128
	I0501 03:39:37.533032   68864 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:39:37.532968   70171 retry.go:31] will retry after 1.348239565s: waiting for machine to come up
	I0501 03:39:38.882895   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:38.883364   68864 main.go:141] libmachine: (embed-certs-277128) DBG | unable to find current IP address of domain embed-certs-277128 in network mk-embed-certs-277128
	I0501 03:39:38.883396   68864 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:39:38.883301   70171 retry.go:31] will retry after 1.718495999s: waiting for machine to come up
	I0501 03:39:40.604329   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:40.604760   68864 main.go:141] libmachine: (embed-certs-277128) DBG | unable to find current IP address of domain embed-certs-277128 in network mk-embed-certs-277128
	I0501 03:39:40.604791   68864 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:39:40.604703   70171 retry.go:31] will retry after 2.237478005s: waiting for machine to come up
	I0501 03:39:42.843398   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:42.843920   68864 main.go:141] libmachine: (embed-certs-277128) DBG | unable to find current IP address of domain embed-certs-277128 in network mk-embed-certs-277128
	I0501 03:39:42.843949   68864 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:39:42.843869   70171 retry.go:31] will retry after 2.618059388s: waiting for machine to come up
	I0501 03:39:45.465576   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:45.465968   68864 main.go:141] libmachine: (embed-certs-277128) DBG | unable to find current IP address of domain embed-certs-277128 in network mk-embed-certs-277128
	I0501 03:39:45.465992   68864 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:39:45.465928   70171 retry.go:31] will retry after 2.895120972s: waiting for machine to come up
	I0501 03:39:48.362239   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:48.362651   68864 main.go:141] libmachine: (embed-certs-277128) DBG | unable to find current IP address of domain embed-certs-277128 in network mk-embed-certs-277128
	I0501 03:39:48.362683   68864 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:39:48.362617   70171 retry.go:31] will retry after 2.857441112s: waiting for machine to come up
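While the restarted embed-certs VM boots, the driver repeatedly asks libvirt for a DHCP lease, sleeping a jittered, slowly growing interval between attempts, as the "will retry after ..." lines show. A sketch of that wait-for-IP loop with an assumed backoff policy and a stubbed lookup function:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP keeps calling lookup until it returns an address, growing the
// sleep between attempts much like the retry intervals logged above.
func waitForIP(lookup func() (string, error), deadline time.Duration) (string, error) {
	stop := time.Now().Add(deadline)
	wait := 300 * time.Millisecond
	for time.Now().Before(stop) {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		jittered := wait + time.Duration(rand.Int63n(int64(wait/2)))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", jittered)
		time.Sleep(jittered)
		wait += wait / 3 // grow the base interval; the real policy may differ
	}
	return "", errors.New("machine never reported an IP address")
}

func main() {
	ip, err := waitForIP(func() (string, error) {
		return "", errors.New("unable to find current IP address") // stub lookup
	}, 5*time.Second)
	fmt.Println(ip, err)
}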
	I0501 03:39:52.791989   69237 start.go:364] duration metric: took 4m2.036138912s to acquireMachinesLock for "default-k8s-diff-port-715118"
	I0501 03:39:52.792059   69237 start.go:96] Skipping create...Using existing machine configuration
	I0501 03:39:52.792071   69237 fix.go:54] fixHost starting: 
	I0501 03:39:52.792454   69237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:39:52.792492   69237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:39:52.809707   69237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46419
	I0501 03:39:52.810075   69237 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:39:52.810544   69237 main.go:141] libmachine: Using API Version  1
	I0501 03:39:52.810564   69237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:39:52.810881   69237 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:39:52.811067   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .DriverName
	I0501 03:39:52.811217   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetState
	I0501 03:39:52.812787   69237 fix.go:112] recreateIfNeeded on default-k8s-diff-port-715118: state=Stopped err=<nil>
	I0501 03:39:52.812820   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .DriverName
	W0501 03:39:52.812969   69237 fix.go:138] unexpected machine state, will restart: <nil>
	I0501 03:39:52.815136   69237 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-715118" ...
	I0501 03:39:51.223450   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.223938   68864 main.go:141] libmachine: (embed-certs-277128) Found IP for machine: 192.168.50.218
	I0501 03:39:51.223965   68864 main.go:141] libmachine: (embed-certs-277128) Reserving static IP address...
	I0501 03:39:51.223982   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has current primary IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.224375   68864 main.go:141] libmachine: (embed-certs-277128) Reserved static IP address: 192.168.50.218
	I0501 03:39:51.224440   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "embed-certs-277128", mac: "52:54:00:96:11:7d", ip: "192.168.50.218"} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:51.224454   68864 main.go:141] libmachine: (embed-certs-277128) Waiting for SSH to be available...
	I0501 03:39:51.224491   68864 main.go:141] libmachine: (embed-certs-277128) DBG | skip adding static IP to network mk-embed-certs-277128 - found existing host DHCP lease matching {name: "embed-certs-277128", mac: "52:54:00:96:11:7d", ip: "192.168.50.218"}
	I0501 03:39:51.224507   68864 main.go:141] libmachine: (embed-certs-277128) DBG | Getting to WaitForSSH function...
	I0501 03:39:51.226437   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.226733   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:51.226764   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.226863   68864 main.go:141] libmachine: (embed-certs-277128) DBG | Using SSH client type: external
	I0501 03:39:51.226886   68864 main.go:141] libmachine: (embed-certs-277128) DBG | Using SSH private key: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/embed-certs-277128/id_rsa (-rw-------)
	I0501 03:39:51.226917   68864 main.go:141] libmachine: (embed-certs-277128) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.218 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18779-13391/.minikube/machines/embed-certs-277128/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0501 03:39:51.226930   68864 main.go:141] libmachine: (embed-certs-277128) DBG | About to run SSH command:
	I0501 03:39:51.226941   68864 main.go:141] libmachine: (embed-certs-277128) DBG | exit 0
	I0501 03:39:51.354225   68864 main.go:141] libmachine: (embed-certs-277128) DBG | SSH cmd err, output: <nil>: 
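WaitForSSH above shells out to the system ssh client and runs `exit 0` until the command succeeds. A hedged sketch of that reachability probe, reusing the option list and key path from the log; error handling is simplified:

package main

import (
	"fmt"
	"os/exec"
)

// sshAlive returns true once `ssh ... exit 0` succeeds against the guest.
func sshAlive(user, host, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-F", "/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		fmt.Sprintf("%s@%s", user, host),
		"exit 0")
	return cmd.Run() == nil
}

func main() {
	ok := sshAlive("docker", "192.168.50.218",
		"/home/jenkins/minikube-integration/18779-13391/.minikube/machines/embed-certs-277128/id_rsa")
	fmt.Println("ssh available:", ok)
}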
	I0501 03:39:51.354641   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetConfigRaw
	I0501 03:39:51.355337   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetIP
	I0501 03:39:51.357934   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.358265   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:51.358302   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.358584   68864 profile.go:143] Saving config to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/embed-certs-277128/config.json ...
	I0501 03:39:51.358753   68864 machine.go:94] provisionDockerMachine start ...
	I0501 03:39:51.358771   68864 main.go:141] libmachine: (embed-certs-277128) Calling .DriverName
	I0501 03:39:51.358940   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHHostname
	I0501 03:39:51.361202   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.361564   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:51.361600   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.361711   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHPort
	I0501 03:39:51.361884   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:39:51.362054   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:39:51.362170   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHUsername
	I0501 03:39:51.362344   68864 main.go:141] libmachine: Using SSH client type: native
	I0501 03:39:51.362572   68864 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.218 22 <nil> <nil>}
	I0501 03:39:51.362586   68864 main.go:141] libmachine: About to run SSH command:
	hostname
	I0501 03:39:51.467448   68864 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0501 03:39:51.467480   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetMachineName
	I0501 03:39:51.467740   68864 buildroot.go:166] provisioning hostname "embed-certs-277128"
	I0501 03:39:51.467772   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetMachineName
	I0501 03:39:51.467953   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHHostname
	I0501 03:39:51.470653   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.471022   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:51.471044   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.471159   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHPort
	I0501 03:39:51.471341   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:39:51.471482   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:39:51.471590   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHUsername
	I0501 03:39:51.471729   68864 main.go:141] libmachine: Using SSH client type: native
	I0501 03:39:51.471913   68864 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.218 22 <nil> <nil>}
	I0501 03:39:51.471934   68864 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-277128 && echo "embed-certs-277128" | sudo tee /etc/hostname
	I0501 03:39:51.594372   68864 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-277128
	
	I0501 03:39:51.594422   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHHostname
	I0501 03:39:51.596978   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.597307   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:51.597334   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.597495   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHPort
	I0501 03:39:51.597710   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:39:51.597865   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:39:51.597971   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHUsername
	I0501 03:39:51.598097   68864 main.go:141] libmachine: Using SSH client type: native
	I0501 03:39:51.598250   68864 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.218 22 <nil> <nil>}
	I0501 03:39:51.598271   68864 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-277128' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-277128/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-277128' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0501 03:39:51.712791   68864 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0501 03:39:51.712825   68864 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18779-13391/.minikube CaCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18779-13391/.minikube}
	I0501 03:39:51.712850   68864 buildroot.go:174] setting up certificates
	I0501 03:39:51.712860   68864 provision.go:84] configureAuth start
	I0501 03:39:51.712869   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetMachineName
	I0501 03:39:51.713158   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetIP
	I0501 03:39:51.715577   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.715885   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:51.715918   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.716040   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHHostname
	I0501 03:39:51.718057   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.718342   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:51.718367   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.718550   68864 provision.go:143] copyHostCerts
	I0501 03:39:51.718612   68864 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem, removing ...
	I0501 03:39:51.718622   68864 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem
	I0501 03:39:51.718685   68864 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem (1078 bytes)
	I0501 03:39:51.718790   68864 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem, removing ...
	I0501 03:39:51.718798   68864 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem
	I0501 03:39:51.718823   68864 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem (1123 bytes)
	I0501 03:39:51.718881   68864 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem, removing ...
	I0501 03:39:51.718888   68864 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem
	I0501 03:39:51.718907   68864 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem (1675 bytes)
	I0501 03:39:51.718957   68864 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem org=jenkins.embed-certs-277128 san=[127.0.0.1 192.168.50.218 embed-certs-277128 localhost minikube]
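provision.go then generates a server certificate whose subject alternative names cover 127.0.0.1, the VM IP, the machine name, localhost and minikube, signed by the minikube CA. A self-contained crypto/x509 sketch of issuing such a certificate; it creates a throwaway CA in memory instead of loading ca.pem/ca-key.pem, and error handling is elided for brevity:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA (minikube reuses ca.pem/ca-key.pem from its cert store instead).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the SANs listed in the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-277128"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"embed-certs-277128", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.218")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}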
	I0501 03:39:52.100402   68864 provision.go:177] copyRemoteCerts
	I0501 03:39:52.100459   68864 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0501 03:39:52.100494   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHHostname
	I0501 03:39:52.103133   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:52.103363   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:52.103391   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:52.103522   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHPort
	I0501 03:39:52.103694   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:39:52.103790   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHUsername
	I0501 03:39:52.103874   68864 sshutil.go:53] new ssh client: &{IP:192.168.50.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/embed-certs-277128/id_rsa Username:docker}
	I0501 03:39:52.186017   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0501 03:39:52.211959   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0501 03:39:52.237362   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0501 03:39:52.264036   68864 provision.go:87] duration metric: took 551.163591ms to configureAuth
	I0501 03:39:52.264060   68864 buildroot.go:189] setting minikube options for container-runtime
	I0501 03:39:52.264220   68864 config.go:182] Loaded profile config "embed-certs-277128": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 03:39:52.264290   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHHostname
	I0501 03:39:52.266809   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:52.267117   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:52.267140   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:52.267336   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHPort
	I0501 03:39:52.267529   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:39:52.267713   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:39:52.267863   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHUsername
	I0501 03:39:52.268096   68864 main.go:141] libmachine: Using SSH client type: native
	I0501 03:39:52.268273   68864 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.218 22 <nil> <nil>}
	I0501 03:39:52.268290   68864 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0501 03:39:52.543539   68864 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0501 03:39:52.543569   68864 machine.go:97] duration metric: took 1.184800934s to provisionDockerMachine
	I0501 03:39:52.543585   68864 start.go:293] postStartSetup for "embed-certs-277128" (driver="kvm2")
	I0501 03:39:52.543600   68864 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0501 03:39:52.543621   68864 main.go:141] libmachine: (embed-certs-277128) Calling .DriverName
	I0501 03:39:52.543974   68864 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0501 03:39:52.544007   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHHostname
	I0501 03:39:52.546566   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:52.546918   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:52.546955   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:52.547108   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHPort
	I0501 03:39:52.547310   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:39:52.547480   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHUsername
	I0501 03:39:52.547622   68864 sshutil.go:53] new ssh client: &{IP:192.168.50.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/embed-certs-277128/id_rsa Username:docker}
	I0501 03:39:52.636313   68864 ssh_runner.go:195] Run: cat /etc/os-release
	I0501 03:39:52.641408   68864 info.go:137] Remote host: Buildroot 2023.02.9
	I0501 03:39:52.641435   68864 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13391/.minikube/addons for local assets ...
	I0501 03:39:52.641514   68864 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13391/.minikube/files for local assets ...
	I0501 03:39:52.641598   68864 filesync.go:149] local asset: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem -> 207242.pem in /etc/ssl/certs
	I0501 03:39:52.641708   68864 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0501 03:39:52.653421   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem --> /etc/ssl/certs/207242.pem (1708 bytes)
	I0501 03:39:52.681796   68864 start.go:296] duration metric: took 138.197388ms for postStartSetup
	I0501 03:39:52.681840   68864 fix.go:56] duration metric: took 19.974504059s for fixHost
	I0501 03:39:52.681866   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHHostname
	I0501 03:39:52.684189   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:52.684447   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:52.684475   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:52.684691   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHPort
	I0501 03:39:52.684901   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:39:52.685077   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:39:52.685226   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHUsername
	I0501 03:39:52.685393   68864 main.go:141] libmachine: Using SSH client type: native
	I0501 03:39:52.685556   68864 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.218 22 <nil> <nil>}
	I0501 03:39:52.685568   68864 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0501 03:39:52.791802   68864 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714534792.758254619
	
	I0501 03:39:52.791830   68864 fix.go:216] guest clock: 1714534792.758254619
	I0501 03:39:52.791841   68864 fix.go:229] Guest: 2024-05-01 03:39:52.758254619 +0000 UTC Remote: 2024-05-01 03:39:52.681844878 +0000 UTC m=+298.906990848 (delta=76.409741ms)
	I0501 03:39:52.791886   68864 fix.go:200] guest clock delta is within tolerance: 76.409741ms
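The guest clock is read over SSH (effectively `date +%s.%N`; the %!s(MISSING) artifacts in the log come from the format directives being echoed without arguments) and compared with the host clock, and a resync only happens when the delta falls outside tolerance. A sketch of that comparison with an assumed ±2s tolerance (the real threshold may differ):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses the guest's "seconds.nanoseconds" reply (9-digit fractional
// part assumed) and reports how far it drifts from the local host clock.
func clockDelta(guestReply string) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(guestReply), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, _ = strconv.ParseInt(parts[1], 10, 64)
	}
	guest := time.Unix(sec, nsec)
	return time.Since(guest), nil
}

func main() {
	delta, _ := clockDelta("1714534792.758254619")
	const tolerance = 2 * time.Second // assumed tolerance
	if delta < -tolerance || delta > tolerance {
		fmt.Printf("guest clock delta %s outside tolerance, would resync\n", delta)
	} else {
		fmt.Printf("guest clock delta %s within tolerance\n", delta)
	}
}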
	I0501 03:39:52.791892   68864 start.go:83] releasing machines lock for "embed-certs-277128", held for 20.08458366s
	I0501 03:39:52.791918   68864 main.go:141] libmachine: (embed-certs-277128) Calling .DriverName
	I0501 03:39:52.792188   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetIP
	I0501 03:39:52.794820   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:52.795217   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:52.795256   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:52.795427   68864 main.go:141] libmachine: (embed-certs-277128) Calling .DriverName
	I0501 03:39:52.795971   68864 main.go:141] libmachine: (embed-certs-277128) Calling .DriverName
	I0501 03:39:52.796142   68864 main.go:141] libmachine: (embed-certs-277128) Calling .DriverName
	I0501 03:39:52.796235   68864 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0501 03:39:52.796285   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHHostname
	I0501 03:39:52.796324   68864 ssh_runner.go:195] Run: cat /version.json
	I0501 03:39:52.796346   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHHostname
	I0501 03:39:52.799128   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:52.799153   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:52.799536   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:52.799570   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:52.799617   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:52.799647   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:52.799779   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHPort
	I0501 03:39:52.799878   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHPort
	I0501 03:39:52.799961   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:39:52.800048   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:39:52.800117   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHUsername
	I0501 03:39:52.800189   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHUsername
	I0501 03:39:52.800243   68864 sshutil.go:53] new ssh client: &{IP:192.168.50.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/embed-certs-277128/id_rsa Username:docker}
	I0501 03:39:52.800299   68864 sshutil.go:53] new ssh client: &{IP:192.168.50.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/embed-certs-277128/id_rsa Username:docker}
	I0501 03:39:52.901147   68864 ssh_runner.go:195] Run: systemctl --version
	I0501 03:39:52.908399   68864 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0501 03:39:53.065012   68864 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0501 03:39:53.073635   68864 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0501 03:39:53.073724   68864 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0501 03:39:53.096146   68864 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0501 03:39:53.096179   68864 start.go:494] detecting cgroup driver to use...
	I0501 03:39:53.096253   68864 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0501 03:39:53.118525   68864 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 03:39:53.136238   68864 docker.go:217] disabling cri-docker service (if available) ...
	I0501 03:39:53.136301   68864 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0501 03:39:53.152535   68864 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0501 03:39:53.171415   68864 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0501 03:39:53.297831   68864 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0501 03:39:53.479469   68864 docker.go:233] disabling docker service ...
	I0501 03:39:53.479552   68864 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0501 03:39:53.497271   68864 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0501 03:39:53.512645   68864 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0501 03:39:53.658448   68864 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0501 03:39:53.787528   68864 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0501 03:39:53.804078   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 03:39:53.836146   68864 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0501 03:39:53.836206   68864 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:39:53.853846   68864 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0501 03:39:53.853915   68864 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:39:53.866319   68864 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:39:53.878410   68864 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:39:53.890304   68864 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0501 03:39:53.903821   68864 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:39:53.916750   68864 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:39:53.938933   68864 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:39:53.952103   68864 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0501 03:39:53.964833   68864 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0501 03:39:53.964893   68864 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0501 03:39:53.983039   68864 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0501 03:39:53.995830   68864 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:39:54.156748   68864 ssh_runner.go:195] Run: sudo systemctl restart crio
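The CRI-O preparation above also tolerates a missing bridge-netfilter sysctl: when the probe fails because br_netfilter is not loaded, the module is loaded and IPv4 forwarding is enabled before systemd is reloaded and the runtime restarted. A local sketch of that fallback; minikube runs these commands on the guest through its ssh_runner, while this example uses os/exec directly and needs root:

package main

import (
	"fmt"
	"os/exec"
)

// run executes a shell command and returns its error, echoing what it ran.
func run(cmdline string) error {
	fmt.Println("+", cmdline)
	return exec.Command("sh", "-c", cmdline).Run()
}

func main() {
	// Mirror the fallback in the log: if the bridge netfilter sysctl is missing,
	// load br_netfilter, then make sure IPv4 forwarding is on.
	if err := run("sudo sysctl net.bridge.bridge-nf-call-iptables"); err != nil {
		fmt.Println("sysctl failed (module probably not loaded):", err)
		if err := run("sudo modprobe br_netfilter"); err != nil {
			fmt.Println("modprobe br_netfilter failed:", err)
		}
	}
	if err := run(`sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"`); err != nil {
		fmt.Println("enabling ip_forward failed:", err)
	}
}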
	I0501 03:39:54.306973   68864 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0501 03:39:54.307051   68864 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0501 03:39:54.313515   68864 start.go:562] Will wait 60s for crictl version
	I0501 03:39:54.313569   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:39:54.317943   68864 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0501 03:39:54.356360   68864 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0501 03:39:54.356437   68864 ssh_runner.go:195] Run: crio --version
	I0501 03:39:54.391717   68864 ssh_runner.go:195] Run: crio --version
	I0501 03:39:54.428403   68864 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0501 03:39:52.816428   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .Start
	I0501 03:39:52.816592   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Ensuring networks are active...
	I0501 03:39:52.817317   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Ensuring network default is active
	I0501 03:39:52.817668   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Ensuring network mk-default-k8s-diff-port-715118 is active
	I0501 03:39:52.818040   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Getting domain xml...
	I0501 03:39:52.818777   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Creating domain...
	I0501 03:39:54.069624   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting to get IP...
	I0501 03:39:54.070436   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:39:54.070855   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | unable to find current IP address of domain default-k8s-diff-port-715118 in network mk-default-k8s-diff-port-715118
	I0501 03:39:54.070891   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | I0501 03:39:54.070820   70304 retry.go:31] will retry after 260.072623ms: waiting for machine to come up
	I0501 03:39:54.332646   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:39:54.333077   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | unable to find current IP address of domain default-k8s-diff-port-715118 in network mk-default-k8s-diff-port-715118
	I0501 03:39:54.333115   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | I0501 03:39:54.333047   70304 retry.go:31] will retry after 270.897102ms: waiting for machine to come up
	I0501 03:39:54.605705   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:39:54.606102   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | unable to find current IP address of domain default-k8s-diff-port-715118 in network mk-default-k8s-diff-port-715118
	I0501 03:39:54.606155   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | I0501 03:39:54.606070   70304 retry.go:31] will retry after 417.613249ms: waiting for machine to come up
	I0501 03:39:55.025827   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:39:55.026340   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | unable to find current IP address of domain default-k8s-diff-port-715118 in network mk-default-k8s-diff-port-715118
	I0501 03:39:55.026371   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | I0501 03:39:55.026291   70304 retry.go:31] will retry after 428.515161ms: waiting for machine to come up
	I0501 03:39:55.456828   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:39:55.457443   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | unable to find current IP address of domain default-k8s-diff-port-715118 in network mk-default-k8s-diff-port-715118
	I0501 03:39:55.457480   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | I0501 03:39:55.457405   70304 retry.go:31] will retry after 701.294363ms: waiting for machine to come up
	I0501 03:39:54.429689   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetIP
	I0501 03:39:54.432488   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:54.432817   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:54.432858   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:54.433039   68864 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0501 03:39:54.437866   68864 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0501 03:39:54.451509   68864 kubeadm.go:877] updating cluster {Name:embed-certs-277128 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:embed-certs-277128 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.218 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0501 03:39:54.451615   68864 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0501 03:39:54.451665   68864 ssh_runner.go:195] Run: sudo crictl images --output json
	I0501 03:39:54.494304   68864 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0501 03:39:54.494379   68864 ssh_runner.go:195] Run: which lz4
	I0501 03:39:54.499090   68864 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0501 03:39:54.503970   68864 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0501 03:39:54.503992   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0501 03:39:56.216407   68864 crio.go:462] duration metric: took 1.717351739s to copy over tarball
	I0501 03:39:56.216488   68864 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0501 03:39:58.703133   68864 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.48661051s)
	I0501 03:39:58.703161   68864 crio.go:469] duration metric: took 2.486721448s to extract the tarball
	I0501 03:39:58.703171   68864 ssh_runner.go:146] rm: /preloaded.tar.lz4
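The preload flow above is: stat the target, scp the ~394 MB image tarball to /preloaded.tar.lz4, extract it into /var with lz4, then delete the tarball. A sketch of the extraction step using the exact tar flags shown in the log; minikube runs this over SSH inside the VM rather than locally:

```go
package main

import (
	"fmt"
	"os/exec"
)

// extractPreload runs the same tar invocation the log shows for unpacking the
// preloaded image tarball into /var.
func extractPreload(tarball string) error {
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("tar failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	fmt.Println(extractPreload("/preloaded.tar.lz4"))
}
```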
	I0501 03:39:58.751431   68864 ssh_runner.go:195] Run: sudo crictl images --output json
	I0501 03:39:58.800353   68864 crio.go:514] all images are preloaded for cri-o runtime.
	I0501 03:39:58.800379   68864 cache_images.go:84] Images are preloaded, skipping loading
	I0501 03:39:58.800389   68864 kubeadm.go:928] updating node { 192.168.50.218 8443 v1.30.0 crio true true} ...
	I0501 03:39:58.800516   68864 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-277128 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.218
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:embed-certs-277128 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0501 03:39:58.800598   68864 ssh_runner.go:195] Run: crio config
	I0501 03:39:56.159966   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:39:56.160373   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | unable to find current IP address of domain default-k8s-diff-port-715118 in network mk-default-k8s-diff-port-715118
	I0501 03:39:56.160404   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | I0501 03:39:56.160334   70304 retry.go:31] will retry after 774.079459ms: waiting for machine to come up
	I0501 03:39:56.936654   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:39:56.937201   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | unable to find current IP address of domain default-k8s-diff-port-715118 in network mk-default-k8s-diff-port-715118
	I0501 03:39:56.937232   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | I0501 03:39:56.937161   70304 retry.go:31] will retry after 877.420181ms: waiting for machine to come up
	I0501 03:39:57.816002   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:39:57.816467   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | unable to find current IP address of domain default-k8s-diff-port-715118 in network mk-default-k8s-diff-port-715118
	I0501 03:39:57.816501   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | I0501 03:39:57.816425   70304 retry.go:31] will retry after 1.477997343s: waiting for machine to come up
	I0501 03:39:59.296533   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:39:59.296970   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | unable to find current IP address of domain default-k8s-diff-port-715118 in network mk-default-k8s-diff-port-715118
	I0501 03:39:59.296995   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | I0501 03:39:59.296922   70304 retry.go:31] will retry after 1.199617253s: waiting for machine to come up
	I0501 03:40:00.498388   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:00.498817   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | unable to find current IP address of domain default-k8s-diff-port-715118 in network mk-default-k8s-diff-port-715118
	I0501 03:40:00.498845   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | I0501 03:40:00.498770   70304 retry.go:31] will retry after 2.227608697s: waiting for machine to come up
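Interleaved with the embed-certs bring-up, the default-k8s-diff-port-715118 machine is still waiting for its DHCP lease, so libmachine keeps retrying with growing delays. A minimal sketch of that retry-with-backoff shape, assuming a simple doubling delay plus random jitter; the actual policy lives in minikube's retry.go and is not reproduced here:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff calls fn until it succeeds or attempts are exhausted,
// sleeping a growing, jittered delay in between, the same shape as the
// "will retry after ..." lines above (the exact delays here are illustrative).
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	delay := base
	for i := 0; i < attempts; i++ {
		if err := fn(); err == nil {
			return nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay *= 2
	}
	return errors.New("machine never came up")
}

func main() {
	_ = retryWithBackoff(4, 500*time.Millisecond, func() error {
		return errors.New("unable to find current IP address")
	})
}
```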
	I0501 03:39:58.855600   68864 cni.go:84] Creating CNI manager for ""
	I0501 03:39:58.855630   68864 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0501 03:39:58.855650   68864 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0501 03:39:58.855686   68864 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.218 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-277128 NodeName:embed-certs-277128 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.218"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.218 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0501 03:39:58.855826   68864 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.218
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-277128"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.218
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.218"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
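The generated kubeadm config above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A quick structural sanity check, sketched here with gopkg.in/yaml.v3, is to decode the stream and print each document's apiVersion and kind; the path matches the one the log writes, everything else is illustrative:

```go
package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

// listKubeadmDocs decodes the multi-document kubeadm config and prints each
// document's apiVersion and kind.
func listKubeadmDocs(path string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			return nil
		} else if err != nil {
			return err
		}
		fmt.Printf("%s / %s\n", doc.APIVersion, doc.Kind)
	}
}

func main() {
	if err := listKubeadmDocs("/var/tmp/minikube/kubeadm.yaml.new"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```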
	
	I0501 03:39:58.855890   68864 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0501 03:39:58.868074   68864 binaries.go:44] Found k8s binaries, skipping transfer
	I0501 03:39:58.868145   68864 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0501 03:39:58.879324   68864 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0501 03:39:58.897572   68864 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0501 03:39:58.918416   68864 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0501 03:39:58.940317   68864 ssh_runner.go:195] Run: grep 192.168.50.218	control-plane.minikube.internal$ /etc/hosts
	I0501 03:39:58.944398   68864 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.218	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0501 03:39:58.959372   68864 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:39:59.094172   68864 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 03:39:59.113612   68864 certs.go:68] Setting up /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/embed-certs-277128 for IP: 192.168.50.218
	I0501 03:39:59.113653   68864 certs.go:194] generating shared ca certs ...
	I0501 03:39:59.113669   68864 certs.go:226] acquiring lock for ca certs: {Name:mk85a75d14c00780d4bf09cd9bdaf0f96f1b8fce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:39:59.113863   68864 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key
	I0501 03:39:59.113919   68864 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key
	I0501 03:39:59.113931   68864 certs.go:256] generating profile certs ...
	I0501 03:39:59.114044   68864 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/embed-certs-277128/client.key
	I0501 03:39:59.114117   68864 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/embed-certs-277128/apiserver.key.65584253
	I0501 03:39:59.114166   68864 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/embed-certs-277128/proxy-client.key
	I0501 03:39:59.114325   68864 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem (1338 bytes)
	W0501 03:39:59.114369   68864 certs.go:480] ignoring /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724_empty.pem, impossibly tiny 0 bytes
	I0501 03:39:59.114383   68864 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem (1679 bytes)
	I0501 03:39:59.114430   68864 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem (1078 bytes)
	I0501 03:39:59.114466   68864 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem (1123 bytes)
	I0501 03:39:59.114497   68864 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem (1675 bytes)
	I0501 03:39:59.114550   68864 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem (1708 bytes)
	I0501 03:39:59.115448   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0501 03:39:59.155890   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0501 03:39:59.209160   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0501 03:39:59.251552   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0501 03:39:59.288100   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/embed-certs-277128/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0501 03:39:59.325437   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/embed-certs-277128/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0501 03:39:59.352593   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/embed-certs-277128/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0501 03:39:59.378992   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/embed-certs-277128/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0501 03:39:59.405517   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem --> /usr/share/ca-certificates/20724.pem (1338 bytes)
	I0501 03:39:59.431253   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem --> /usr/share/ca-certificates/207242.pem (1708 bytes)
	I0501 03:39:59.457155   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0501 03:39:59.483696   68864 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0501 03:39:59.502758   68864 ssh_runner.go:195] Run: openssl version
	I0501 03:39:59.509307   68864 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20724.pem && ln -fs /usr/share/ca-certificates/20724.pem /etc/ssl/certs/20724.pem"
	I0501 03:39:59.521438   68864 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20724.pem
	I0501 03:39:59.526658   68864 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  1 02:20 /usr/share/ca-certificates/20724.pem
	I0501 03:39:59.526706   68864 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20724.pem
	I0501 03:39:59.533201   68864 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20724.pem /etc/ssl/certs/51391683.0"
	I0501 03:39:59.546837   68864 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/207242.pem && ln -fs /usr/share/ca-certificates/207242.pem /etc/ssl/certs/207242.pem"
	I0501 03:39:59.560612   68864 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/207242.pem
	I0501 03:39:59.565545   68864 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  1 02:20 /usr/share/ca-certificates/207242.pem
	I0501 03:39:59.565589   68864 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/207242.pem
	I0501 03:39:59.571737   68864 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/207242.pem /etc/ssl/certs/3ec20f2e.0"
	I0501 03:39:59.584602   68864 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0501 03:39:59.599088   68864 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:39:59.604230   68864 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  1 02:08 /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:39:59.604296   68864 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:39:59.610536   68864 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
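The block above copies each CA bundle under /usr/share/ca-certificates and then symlinks /etc/ssl/certs/<subject-hash>.0 to it, which is how OpenSSL-based clients on the VM locate trusted roots. A small sketch of that pattern, shelling out to the same openssl x509 -hash invocation shown in the log (paths illustrative):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkCACert asks openssl for the certificate's subject hash and symlinks
// /etc/ssl/certs/<hash>.0 to it, the "openssl x509 -hash" + "ln -fs" pattern
// the log runs for each CA bundle.
func linkCACert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	_ = os.Remove(link) // behave like ln -fs: replace any stale link
	return os.Symlink(certPath, link)
}

func main() {
	fmt.Println(linkCACert("/usr/share/ca-certificates/minikubeCA.pem"))
}
```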
	I0501 03:39:59.624810   68864 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0501 03:39:59.629692   68864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0501 03:39:59.636209   68864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0501 03:39:59.642907   68864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0501 03:39:59.649491   68864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0501 03:39:59.655702   68864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0501 03:39:59.661884   68864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
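Each openssl x509 -checkend 86400 call above asks whether a certificate expires within the next 24 hours; a non-zero exit would trigger regeneration. The same check in Go with crypto/x509, as a sketch (the path is one of those probed in the log):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the question "openssl x509 -noout -checkend 86400" answers above.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	fmt.Println(expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour))
}
```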
	I0501 03:39:59.668075   68864 kubeadm.go:391] StartCluster: {Name:embed-certs-277128 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.0 ClusterName:embed-certs-277128 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.218 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 03:39:59.668209   68864 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0501 03:39:59.668255   68864 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0501 03:39:59.712172   68864 cri.go:89] found id: ""
	I0501 03:39:59.712262   68864 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0501 03:39:59.723825   68864 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0501 03:39:59.723848   68864 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0501 03:39:59.723854   68864 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0501 03:39:59.723890   68864 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0501 03:39:59.735188   68864 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0501 03:39:59.736670   68864 kubeconfig.go:125] found "embed-certs-277128" server: "https://192.168.50.218:8443"
	I0501 03:39:59.739665   68864 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0501 03:39:59.750292   68864 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.218
	I0501 03:39:59.750329   68864 kubeadm.go:1154] stopping kube-system containers ...
	I0501 03:39:59.750339   68864 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0501 03:39:59.750388   68864 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0501 03:39:59.791334   68864 cri.go:89] found id: ""
	I0501 03:39:59.791436   68864 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0501 03:39:59.809162   68864 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0501 03:39:59.820979   68864 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0501 03:39:59.821013   68864 kubeadm.go:156] found existing configuration files:
	
	I0501 03:39:59.821072   68864 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0501 03:39:59.832368   68864 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0501 03:39:59.832443   68864 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0501 03:39:59.843920   68864 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0501 03:39:59.855489   68864 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0501 03:39:59.855562   68864 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0501 03:39:59.867337   68864 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0501 03:39:59.878582   68864 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0501 03:39:59.878659   68864 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0501 03:39:59.890049   68864 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0501 03:39:59.901054   68864 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0501 03:39:59.901110   68864 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0501 03:39:59.912900   68864 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0501 03:39:59.925358   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:00.065105   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:00.861756   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:01.089790   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:01.158944   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:01.249842   68864 api_server.go:52] waiting for apiserver process to appear ...
	I0501 03:40:01.250063   68864 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:01.750273   68864 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:02.250155   68864 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:02.291774   68864 api_server.go:72] duration metric: took 1.041932793s to wait for apiserver process to appear ...
	I0501 03:40:02.291807   68864 api_server.go:88] waiting for apiserver healthz status ...
	I0501 03:40:02.291831   68864 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I0501 03:40:02.292377   68864 api_server.go:269] stopped: https://192.168.50.218:8443/healthz: Get "https://192.168.50.218:8443/healthz": dial tcp 192.168.50.218:8443: connect: connection refused
	I0501 03:40:02.792584   68864 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I0501 03:40:02.727799   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:02.728314   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | unable to find current IP address of domain default-k8s-diff-port-715118 in network mk-default-k8s-diff-port-715118
	I0501 03:40:02.728347   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | I0501 03:40:02.728270   70304 retry.go:31] will retry after 1.844071576s: waiting for machine to come up
	I0501 03:40:04.574870   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:04.575326   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | unable to find current IP address of domain default-k8s-diff-port-715118 in network mk-default-k8s-diff-port-715118
	I0501 03:40:04.575349   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | I0501 03:40:04.575278   70304 retry.go:31] will retry after 2.989286916s: waiting for machine to come up
	I0501 03:40:04.843311   68864 api_server.go:279] https://192.168.50.218:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0501 03:40:04.843360   68864 api_server.go:103] status: https://192.168.50.218:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0501 03:40:04.843377   68864 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I0501 03:40:04.899616   68864 api_server.go:279] https://192.168.50.218:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0501 03:40:04.899655   68864 api_server.go:103] status: https://192.168.50.218:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0501 03:40:05.292097   68864 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I0501 03:40:05.300803   68864 api_server.go:279] https://192.168.50.218:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:40:05.300843   68864 api_server.go:103] status: https://192.168.50.218:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:40:05.792151   68864 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I0501 03:40:05.797124   68864 api_server.go:279] https://192.168.50.218:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:40:05.797158   68864 api_server.go:103] status: https://192.168.50.218:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:40:06.292821   68864 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I0501 03:40:06.297912   68864 api_server.go:279] https://192.168.50.218:8443/healthz returned 200:
	ok
	I0501 03:40:06.305165   68864 api_server.go:141] control plane version: v1.30.0
	I0501 03:40:06.305199   68864 api_server.go:131] duration metric: took 4.013383351s to wait for apiserver health ...
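The healthz wait above keeps polling through the 403s (the anonymous probe is rejected) and the 500s (poststarthooks such as rbac/bootstrap-roles are still failing) until the endpoint returns 200. A minimal polling sketch; TLS verification is skipped only to keep it short, whereas the real client authenticates against the cluster CA:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver's /healthz endpoint until it returns 200
// or the deadline passes, treating 403/500 responses as "not ready yet".
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if resp, err := client.Get(url); err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // "healthz returned 200: ok"
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s never became healthy", url)
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.50.218:8443/healthz", 4*time.Minute))
}
```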
	I0501 03:40:06.305211   68864 cni.go:84] Creating CNI manager for ""
	I0501 03:40:06.305220   68864 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0501 03:40:06.306925   68864 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0501 03:40:06.308450   68864 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0501 03:40:06.325186   68864 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0501 03:40:06.380997   68864 system_pods.go:43] waiting for kube-system pods to appear ...
	I0501 03:40:06.394134   68864 system_pods.go:59] 8 kube-system pods found
	I0501 03:40:06.394178   68864 system_pods.go:61] "coredns-7db6d8ff4d-sjplt" [6701ee8e-0630-4332-b01c-26741ed3a7b7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0501 03:40:06.394191   68864 system_pods.go:61] "etcd-embed-certs-277128" [744d7481-dd80-4435-90da-685fc16a76a4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0501 03:40:06.394206   68864 system_pods.go:61] "kube-apiserver-embed-certs-277128" [2f1d0f09-b270-4cdc-af3c-e17e3a244bbf] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0501 03:40:06.394215   68864 system_pods.go:61] "kube-controller-manager-embed-certs-277128" [729aa138-dc0d-4616-97d4-1592453971b8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0501 03:40:06.394222   68864 system_pods.go:61] "kube-proxy-phx7x" [56c0381e-c140-4f69-bbe4-09d393db8b23] Running
	I0501 03:40:06.394232   68864 system_pods.go:61] "kube-scheduler-embed-certs-277128" [cc56e9bc-7f21-4f57-a65a-73a10ffd7145] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0501 03:40:06.394253   68864 system_pods.go:61] "metrics-server-569cc877fc-p8j59" [f8ad6c24-dd5d-4515-9052-c9aca7412b55] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0501 03:40:06.394258   68864 system_pods.go:61] "storage-provisioner" [785be666-58d5-4b9d-92fd-bcacdbdebeb2] Running
	I0501 03:40:06.394273   68864 system_pods.go:74] duration metric: took 13.25246ms to wait for pod list to return data ...
	I0501 03:40:06.394293   68864 node_conditions.go:102] verifying NodePressure condition ...
	I0501 03:40:06.399912   68864 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 03:40:06.399950   68864 node_conditions.go:123] node cpu capacity is 2
	I0501 03:40:06.399974   68864 node_conditions.go:105] duration metric: took 5.664461ms to run NodePressure ...
	I0501 03:40:06.399996   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:06.675573   68864 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0501 03:40:06.680567   68864 kubeadm.go:733] kubelet initialised
	I0501 03:40:06.680591   68864 kubeadm.go:734] duration metric: took 4.987942ms waiting for restarted kubelet to initialise ...
	I0501 03:40:06.680598   68864 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 03:40:06.687295   68864 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-sjplt" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:06.692224   68864 pod_ready.go:97] node "embed-certs-277128" hosting pod "coredns-7db6d8ff4d-sjplt" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:06.692248   68864 pod_ready.go:81] duration metric: took 4.930388ms for pod "coredns-7db6d8ff4d-sjplt" in "kube-system" namespace to be "Ready" ...
	E0501 03:40:06.692258   68864 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-277128" hosting pod "coredns-7db6d8ff4d-sjplt" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:06.692266   68864 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:06.699559   68864 pod_ready.go:97] node "embed-certs-277128" hosting pod "etcd-embed-certs-277128" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:06.699591   68864 pod_ready.go:81] duration metric: took 7.309622ms for pod "etcd-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	E0501 03:40:06.699602   68864 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-277128" hosting pod "etcd-embed-certs-277128" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:06.699613   68864 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:06.705459   68864 pod_ready.go:97] node "embed-certs-277128" hosting pod "kube-apiserver-embed-certs-277128" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:06.705485   68864 pod_ready.go:81] duration metric: took 5.86335ms for pod "kube-apiserver-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	E0501 03:40:06.705497   68864 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-277128" hosting pod "kube-apiserver-embed-certs-277128" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:06.705504   68864 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:06.786157   68864 pod_ready.go:97] node "embed-certs-277128" hosting pod "kube-controller-manager-embed-certs-277128" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:06.786186   68864 pod_ready.go:81] duration metric: took 80.673223ms for pod "kube-controller-manager-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	E0501 03:40:06.786198   68864 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-277128" hosting pod "kube-controller-manager-embed-certs-277128" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:06.786205   68864 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-phx7x" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:07.184262   68864 pod_ready.go:97] node "embed-certs-277128" hosting pod "kube-proxy-phx7x" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:07.184297   68864 pod_ready.go:81] duration metric: took 398.081204ms for pod "kube-proxy-phx7x" in "kube-system" namespace to be "Ready" ...
	E0501 03:40:07.184309   68864 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-277128" hosting pod "kube-proxy-phx7x" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:07.184319   68864 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:07.584569   68864 pod_ready.go:97] node "embed-certs-277128" hosting pod "kube-scheduler-embed-certs-277128" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:07.584607   68864 pod_ready.go:81] duration metric: took 400.279023ms for pod "kube-scheduler-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	E0501 03:40:07.584620   68864 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-277128" hosting pod "kube-scheduler-embed-certs-277128" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:07.584630   68864 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:07.984376   68864 pod_ready.go:97] node "embed-certs-277128" hosting pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:07.984408   68864 pod_ready.go:81] duration metric: took 399.766342ms for pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace to be "Ready" ...
	E0501 03:40:07.984419   68864 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-277128" hosting pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:07.984428   68864 pod_ready.go:38] duration metric: took 1.303821777s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
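Every system pod in the extra wait above is skipped for the same reason: the node itself still reports Ready=False after the restart, so pod readiness is not evaluated yet. A sketch of the two condition checks involved, using the k8s.io/api/core/v1 types; the helper names are illustrative:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// nodeReady mirrors the gate behind the "has status Ready:False" messages:
// pod readiness is only waited on once the hosting node reports Ready=True.
func nodeReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	n := &corev1.Node{Status: corev1.NodeStatus{Conditions: []corev1.NodeCondition{
		{Type: corev1.NodeReady, Status: corev1.ConditionFalse},
	}}}
	fmt.Println("node ready:", nodeReady(n)) // false, as in the log above
}
```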
	I0501 03:40:07.984448   68864 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0501 03:40:08.000370   68864 ops.go:34] apiserver oom_adj: -16
	I0501 03:40:08.000391   68864 kubeadm.go:591] duration metric: took 8.276531687s to restartPrimaryControlPlane
	I0501 03:40:08.000401   68864 kubeadm.go:393] duration metric: took 8.332343707s to StartCluster
	I0501 03:40:08.000416   68864 settings.go:142] acquiring lock: {Name:mkcfe08bcadd45d99c00ed1eaf4e9c226a489fe2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:40:08.000482   68864 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18779-13391/kubeconfig
	I0501 03:40:08.002013   68864 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/kubeconfig: {Name:mk5d1131557f885800877622151c0915337cb23a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:40:08.002343   68864 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.218 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0501 03:40:08.004301   68864 out.go:177] * Verifying Kubernetes components...
	I0501 03:40:08.002423   68864 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0501 03:40:08.002582   68864 config.go:182] Loaded profile config "embed-certs-277128": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 03:40:08.005608   68864 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-277128"
	I0501 03:40:08.005624   68864 addons.go:69] Setting metrics-server=true in profile "embed-certs-277128"
	I0501 03:40:08.005658   68864 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-277128"
	W0501 03:40:08.005670   68864 addons.go:243] addon storage-provisioner should already be in state true
	I0501 03:40:08.005609   68864 addons.go:69] Setting default-storageclass=true in profile "embed-certs-277128"
	I0501 03:40:08.005785   68864 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-277128"
	I0501 03:40:08.005659   68864 addons.go:234] Setting addon metrics-server=true in "embed-certs-277128"
	W0501 03:40:08.005819   68864 addons.go:243] addon metrics-server should already be in state true
	I0501 03:40:08.005851   68864 host.go:66] Checking if "embed-certs-277128" exists ...
	I0501 03:40:08.005613   68864 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:40:08.005695   68864 host.go:66] Checking if "embed-certs-277128" exists ...
	I0501 03:40:08.006230   68864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:40:08.006258   68864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:40:08.006291   68864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:40:08.006310   68864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:40:08.006326   68864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:40:08.006378   68864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:40:08.021231   68864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35583
	I0501 03:40:08.021276   68864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44971
	I0501 03:40:08.021621   68864 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:40:08.021673   68864 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:40:08.022126   68864 main.go:141] libmachine: Using API Version  1
	I0501 03:40:08.022146   68864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:40:08.022353   68864 main.go:141] libmachine: Using API Version  1
	I0501 03:40:08.022390   68864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:40:08.022537   68864 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:40:08.022730   68864 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:40:08.022904   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetState
	I0501 03:40:08.023118   68864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:40:08.023165   68864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:40:08.024792   68864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33047
	I0501 03:40:08.025226   68864 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:40:08.025734   68864 main.go:141] libmachine: Using API Version  1
	I0501 03:40:08.025761   68864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:40:08.026090   68864 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:40:08.026569   68864 addons.go:234] Setting addon default-storageclass=true in "embed-certs-277128"
	W0501 03:40:08.026593   68864 addons.go:243] addon default-storageclass should already be in state true
	I0501 03:40:08.026620   68864 host.go:66] Checking if "embed-certs-277128" exists ...
	I0501 03:40:08.026696   68864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:40:08.026730   68864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:40:08.026977   68864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:40:08.027033   68864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:40:08.039119   68864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42975
	I0501 03:40:08.039585   68864 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:40:08.040083   68864 main.go:141] libmachine: Using API Version  1
	I0501 03:40:08.040106   68864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:40:08.040419   68864 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:40:08.040599   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetState
	I0501 03:40:08.042228   68864 main.go:141] libmachine: (embed-certs-277128) Calling .DriverName
	I0501 03:40:08.044289   68864 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 03:40:08.045766   68864 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0501 03:40:08.045787   68864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0501 03:40:08.045804   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHHostname
	I0501 03:40:08.043677   68864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37141
	I0501 03:40:08.045633   68864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41653
	I0501 03:40:08.046247   68864 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:40:08.046326   68864 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:40:08.046989   68864 main.go:141] libmachine: Using API Version  1
	I0501 03:40:08.047012   68864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:40:08.047196   68864 main.go:141] libmachine: Using API Version  1
	I0501 03:40:08.047216   68864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:40:08.047279   68864 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:40:08.047403   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetState
	I0501 03:40:08.047515   68864 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:40:08.048047   68864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:40:08.048081   68864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:40:08.049225   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:40:08.049623   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:40:08.049649   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:40:08.049773   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHPort
	I0501 03:40:08.049915   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:40:08.050096   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHUsername
	I0501 03:40:08.050165   68864 main.go:141] libmachine: (embed-certs-277128) Calling .DriverName
	I0501 03:40:08.050297   68864 sshutil.go:53] new ssh client: &{IP:192.168.50.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/embed-certs-277128/id_rsa Username:docker}
	I0501 03:40:08.052006   68864 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0501 03:40:08.053365   68864 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0501 03:40:08.053380   68864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0501 03:40:08.053394   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHHostname
	I0501 03:40:08.056360   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:40:08.056752   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:40:08.056782   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:40:08.056892   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHPort
	I0501 03:40:08.057074   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:40:08.057215   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHUsername
	I0501 03:40:08.057334   68864 sshutil.go:53] new ssh client: &{IP:192.168.50.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/embed-certs-277128/id_rsa Username:docker}
	I0501 03:40:08.064476   68864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43029
	I0501 03:40:08.064882   68864 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:40:08.065323   68864 main.go:141] libmachine: Using API Version  1
	I0501 03:40:08.065352   68864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:40:08.065696   68864 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:40:08.065895   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetState
	I0501 03:40:08.067420   68864 main.go:141] libmachine: (embed-certs-277128) Calling .DriverName
	I0501 03:40:08.067740   68864 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0501 03:40:08.067762   68864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0501 03:40:08.067774   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHHostname
	I0501 03:40:08.070587   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:40:08.071043   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:40:08.071073   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:40:08.071225   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHPort
	I0501 03:40:08.071401   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:40:08.071554   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHUsername
	I0501 03:40:08.071688   68864 sshutil.go:53] new ssh client: &{IP:192.168.50.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/embed-certs-277128/id_rsa Username:docker}
	I0501 03:40:08.204158   68864 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 03:40:08.229990   68864 node_ready.go:35] waiting up to 6m0s for node "embed-certs-277128" to be "Ready" ...
	I0501 03:40:08.289511   68864 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0501 03:40:08.289535   68864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0501 03:40:08.301855   68864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0501 03:40:08.311966   68864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0501 03:40:08.330943   68864 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0501 03:40:08.330973   68864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0501 03:40:08.384842   68864 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0501 03:40:08.384867   68864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0501 03:40:08.445206   68864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
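The metrics-server addon above is applied as a single kubectl apply over the four manifests that were just copied into /etc/kubernetes/addons on the guest. A minimal Go sketch of that pattern, shelling out to kubectl with an explicit KUBECONFIG; the bare "kubectl" binary name and the error handling are illustrative assumptions, not minikube's actual addon code:

// applyaddons.go - illustrative sketch only; assumes kubectl is on PATH and
// the manifest paths below exist where this runs.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	manifests := []string{
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	}

	// Build "kubectl apply -f a.yaml -f b.yaml ..." once, mirroring the
	// single apply seen in the log.
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}

	cmd := exec.Command("kubectl", args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig") // path taken from the log
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Fprintln(os.Stderr, "apply failed:", err)
		os.Exit(1)
	}
}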
	I0501 03:40:09.434390   68864 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.122391479s)
	I0501 03:40:09.434458   68864 main.go:141] libmachine: Making call to close driver server
	I0501 03:40:09.434471   68864 main.go:141] libmachine: (embed-certs-277128) Calling .Close
	I0501 03:40:09.434518   68864 main.go:141] libmachine: Making call to close driver server
	I0501 03:40:09.434541   68864 main.go:141] libmachine: (embed-certs-277128) Calling .Close
	I0501 03:40:09.434567   68864 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.132680542s)
	I0501 03:40:09.434595   68864 main.go:141] libmachine: Making call to close driver server
	I0501 03:40:09.434604   68864 main.go:141] libmachine: (embed-certs-277128) Calling .Close
	I0501 03:40:09.434833   68864 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:40:09.434859   68864 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:40:09.434870   68864 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:40:09.434872   68864 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:40:09.434881   68864 main.go:141] libmachine: Making call to close driver server
	I0501 03:40:09.434882   68864 main.go:141] libmachine: Making call to close driver server
	I0501 03:40:09.434889   68864 main.go:141] libmachine: (embed-certs-277128) Calling .Close
	I0501 03:40:09.434890   68864 main.go:141] libmachine: (embed-certs-277128) Calling .Close
	I0501 03:40:09.434936   68864 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:40:09.434949   68864 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:40:09.434967   68864 main.go:141] libmachine: Making call to close driver server
	I0501 03:40:09.434994   68864 main.go:141] libmachine: (embed-certs-277128) Calling .Close
	I0501 03:40:09.434832   68864 main.go:141] libmachine: (embed-certs-277128) DBG | Closing plugin on server side
	I0501 03:40:09.435072   68864 main.go:141] libmachine: (embed-certs-277128) DBG | Closing plugin on server side
	I0501 03:40:09.437116   68864 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:40:09.437138   68864 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:40:09.437146   68864 main.go:141] libmachine: (embed-certs-277128) DBG | Closing plugin on server side
	I0501 03:40:09.437179   68864 main.go:141] libmachine: (embed-certs-277128) DBG | Closing plugin on server side
	I0501 03:40:09.437194   68864 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:40:09.437215   68864 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:40:09.437297   68864 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:40:09.437342   68864 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:40:09.437359   68864 addons.go:470] Verifying addon metrics-server=true in "embed-certs-277128"
	I0501 03:40:09.445787   68864 main.go:141] libmachine: Making call to close driver server
	I0501 03:40:09.445817   68864 main.go:141] libmachine: (embed-certs-277128) Calling .Close
	I0501 03:40:09.446053   68864 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:40:09.446090   68864 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:40:09.446112   68864 main.go:141] libmachine: (embed-certs-277128) DBG | Closing plugin on server side
	I0501 03:40:09.448129   68864 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I0501 03:40:07.567551   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:07.567914   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | unable to find current IP address of domain default-k8s-diff-port-715118 in network mk-default-k8s-diff-port-715118
	I0501 03:40:07.567948   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | I0501 03:40:07.567860   70304 retry.go:31] will retry after 4.440791777s: waiting for machine to come up
	I0501 03:40:13.516002   69580 start.go:364] duration metric: took 3m31.9441828s to acquireMachinesLock for "old-k8s-version-503971"
	I0501 03:40:13.516087   69580 start.go:96] Skipping create...Using existing machine configuration
	I0501 03:40:13.516100   69580 fix.go:54] fixHost starting: 
	I0501 03:40:13.516559   69580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:40:13.516601   69580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:40:13.537158   69580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38535
	I0501 03:40:13.537631   69580 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:40:13.538169   69580 main.go:141] libmachine: Using API Version  1
	I0501 03:40:13.538197   69580 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:40:13.538570   69580 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:40:13.538769   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .DriverName
	I0501 03:40:13.538958   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetState
	I0501 03:40:13.540454   69580 fix.go:112] recreateIfNeeded on old-k8s-version-503971: state=Stopped err=<nil>
	I0501 03:40:13.540486   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .DriverName
	W0501 03:40:13.540787   69580 fix.go:138] unexpected machine state, will restart: <nil>
	I0501 03:40:13.542670   69580 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-503971" ...
	I0501 03:40:09.449483   68864 addons.go:505] duration metric: took 1.447068548s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass]
	I0501 03:40:10.233650   68864 node_ready.go:53] node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:12.234270   68864 node_ready.go:53] node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:12.011886   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.012305   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Found IP for machine: 192.168.72.158
	I0501 03:40:12.012335   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has current primary IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.012347   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Reserving static IP address...
	I0501 03:40:12.012759   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-715118", mac: "52:54:00:87:12:31", ip: "192.168.72.158"} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:12.012796   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | skip adding static IP to network mk-default-k8s-diff-port-715118 - found existing host DHCP lease matching {name: "default-k8s-diff-port-715118", mac: "52:54:00:87:12:31", ip: "192.168.72.158"}
	I0501 03:40:12.012809   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Reserved static IP address: 192.168.72.158
	I0501 03:40:12.012828   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for SSH to be available...
	I0501 03:40:12.012835   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | Getting to WaitForSSH function...
	I0501 03:40:12.014719   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.015044   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:12.015080   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.015193   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | Using SSH client type: external
	I0501 03:40:12.015220   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | Using SSH private key: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/default-k8s-diff-port-715118/id_rsa (-rw-------)
	I0501 03:40:12.015269   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.158 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18779-13391/.minikube/machines/default-k8s-diff-port-715118/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0501 03:40:12.015280   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | About to run SSH command:
	I0501 03:40:12.015289   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | exit 0
	I0501 03:40:12.138881   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | SSH cmd err, output: <nil>: 
	I0501 03:40:12.139286   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetConfigRaw
	I0501 03:40:12.140056   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetIP
	I0501 03:40:12.142869   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.143322   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:12.143353   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.143662   69237 profile.go:143] Saving config to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/default-k8s-diff-port-715118/config.json ...
	I0501 03:40:12.143858   69237 machine.go:94] provisionDockerMachine start ...
	I0501 03:40:12.143876   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .DriverName
	I0501 03:40:12.144117   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHHostname
	I0501 03:40:12.146145   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.146535   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:12.146563   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.146712   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHPort
	I0501 03:40:12.146889   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:40:12.147021   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:40:12.147130   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHUsername
	I0501 03:40:12.147310   69237 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:12.147558   69237 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.158 22 <nil> <nil>}
	I0501 03:40:12.147574   69237 main.go:141] libmachine: About to run SSH command:
	hostname
	I0501 03:40:12.251357   69237 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0501 03:40:12.251387   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetMachineName
	I0501 03:40:12.251629   69237 buildroot.go:166] provisioning hostname "default-k8s-diff-port-715118"
	I0501 03:40:12.251658   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetMachineName
	I0501 03:40:12.251862   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHHostname
	I0501 03:40:12.254582   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.254892   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:12.254924   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.255073   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHPort
	I0501 03:40:12.255276   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:40:12.255435   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:40:12.255575   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHUsername
	I0501 03:40:12.255744   69237 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:12.255905   69237 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.158 22 <nil> <nil>}
	I0501 03:40:12.255917   69237 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-715118 && echo "default-k8s-diff-port-715118" | sudo tee /etc/hostname
	I0501 03:40:12.377588   69237 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-715118
	
	I0501 03:40:12.377628   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHHostname
	I0501 03:40:12.380627   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.380927   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:12.380958   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.381155   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHPort
	I0501 03:40:12.381372   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:40:12.381550   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:40:12.381723   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHUsername
	I0501 03:40:12.381907   69237 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:12.382148   69237 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.158 22 <nil> <nil>}
	I0501 03:40:12.382170   69237 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-715118' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-715118/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-715118' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0501 03:40:12.494424   69237 main.go:141] libmachine: SSH cmd err, output: <nil>: 
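The hostname command and the /etc/hosts snippet above are executed on the guest over SSH with key-based auth. The following Go sketch shows one way to run the same command with the golang.org/x/crypto/ssh package; the address, user, and key path are taken from the log, and the disabled host-key check mirrors the StrictHostKeyChecking=no option shown earlier, purely for illustration:

// sshprovision.go - rough sketch, not minikube's provisioner.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/18779-13391/.minikube/machines/default-k8s-diff-port-715118/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}

	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no in the log
	}
	client, err := ssh.Dial("tcp", "192.168.72.158:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()

	// Same idea as the provisioning step: set the hostname and persist it.
	out, err := sess.CombinedOutput(`sudo hostname default-k8s-diff-port-715118 && echo "default-k8s-diff-port-715118" | sudo tee /etc/hostname`)
	fmt.Printf("%s err=%v\n", out, err)
}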
	I0501 03:40:12.494454   69237 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18779-13391/.minikube CaCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18779-13391/.minikube}
	I0501 03:40:12.494484   69237 buildroot.go:174] setting up certificates
	I0501 03:40:12.494493   69237 provision.go:84] configureAuth start
	I0501 03:40:12.494504   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetMachineName
	I0501 03:40:12.494786   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetIP
	I0501 03:40:12.497309   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.497584   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:12.497616   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.497746   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHHostname
	I0501 03:40:12.500010   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.500302   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:12.500322   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.500449   69237 provision.go:143] copyHostCerts
	I0501 03:40:12.500505   69237 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem, removing ...
	I0501 03:40:12.500524   69237 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem
	I0501 03:40:12.500598   69237 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem (1078 bytes)
	I0501 03:40:12.500759   69237 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem, removing ...
	I0501 03:40:12.500772   69237 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem
	I0501 03:40:12.500815   69237 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem (1123 bytes)
	I0501 03:40:12.500891   69237 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem, removing ...
	I0501 03:40:12.500900   69237 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem
	I0501 03:40:12.500925   69237 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem (1675 bytes)
	I0501 03:40:12.500991   69237 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-715118 san=[127.0.0.1 192.168.72.158 default-k8s-diff-port-715118 localhost minikube]
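The provision step above generates a server certificate whose subject alternative names cover 127.0.0.1, the machine IP, the profile name, localhost, and minikube. As a rough illustration of that idea (not minikube's provision.go, which signs with the machine CA rather than self-signing), a self-signed certificate with the same SAN list can be produced with crypto/x509:

// servercert.go - self-signed stand-in, for illustration only.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-715118"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SAN list mirroring the log line above.
		DNSNames:    []string{"default-k8s-diff-port-715118", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.158")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	out, err := os.Create("server.pem")
	if err != nil {
		panic(err)
	}
	defer out.Close()
	if err := pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		panic(err)
	}
}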
	I0501 03:40:12.779037   69237 provision.go:177] copyRemoteCerts
	I0501 03:40:12.779104   69237 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0501 03:40:12.779139   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHHostname
	I0501 03:40:12.781800   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.782159   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:12.782195   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.782356   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHPort
	I0501 03:40:12.782655   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:40:12.782812   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHUsername
	I0501 03:40:12.782946   69237 sshutil.go:53] new ssh client: &{IP:192.168.72.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/default-k8s-diff-port-715118/id_rsa Username:docker}
	I0501 03:40:12.867622   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0501 03:40:12.897105   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0501 03:40:12.926675   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0501 03:40:12.955373   69237 provision.go:87] duration metric: took 460.865556ms to configureAuth
	I0501 03:40:12.955405   69237 buildroot.go:189] setting minikube options for container-runtime
	I0501 03:40:12.955606   69237 config.go:182] Loaded profile config "default-k8s-diff-port-715118": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 03:40:12.955700   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHHostname
	I0501 03:40:12.958286   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.958632   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:12.958670   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.958800   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHPort
	I0501 03:40:12.959007   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:40:12.959225   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:40:12.959374   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHUsername
	I0501 03:40:12.959554   69237 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:12.959729   69237 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.158 22 <nil> <nil>}
	I0501 03:40:12.959748   69237 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0501 03:40:13.253328   69237 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0501 03:40:13.253356   69237 machine.go:97] duration metric: took 1.109484866s to provisionDockerMachine
	I0501 03:40:13.253371   69237 start.go:293] postStartSetup for "default-k8s-diff-port-715118" (driver="kvm2")
	I0501 03:40:13.253385   69237 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0501 03:40:13.253405   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .DriverName
	I0501 03:40:13.253753   69237 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0501 03:40:13.253790   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHHostname
	I0501 03:40:13.256734   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:13.257187   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:13.257214   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:13.257345   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHPort
	I0501 03:40:13.257547   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:40:13.257708   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHUsername
	I0501 03:40:13.257856   69237 sshutil.go:53] new ssh client: &{IP:192.168.72.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/default-k8s-diff-port-715118/id_rsa Username:docker}
	I0501 03:40:13.353373   69237 ssh_runner.go:195] Run: cat /etc/os-release
	I0501 03:40:13.359653   69237 info.go:137] Remote host: Buildroot 2023.02.9
	I0501 03:40:13.359679   69237 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13391/.minikube/addons for local assets ...
	I0501 03:40:13.359747   69237 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13391/.minikube/files for local assets ...
	I0501 03:40:13.359854   69237 filesync.go:149] local asset: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem -> 207242.pem in /etc/ssl/certs
	I0501 03:40:13.359964   69237 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0501 03:40:13.370608   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem --> /etc/ssl/certs/207242.pem (1708 bytes)
	I0501 03:40:13.402903   69237 start.go:296] duration metric: took 149.518346ms for postStartSetup
	I0501 03:40:13.402946   69237 fix.go:56] duration metric: took 20.610871873s for fixHost
	I0501 03:40:13.402967   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHHostname
	I0501 03:40:13.406324   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:13.406762   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:13.406792   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:13.407028   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHPort
	I0501 03:40:13.407274   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:40:13.407505   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:40:13.407645   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHUsername
	I0501 03:40:13.407831   69237 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:13.408034   69237 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.158 22 <nil> <nil>}
	I0501 03:40:13.408045   69237 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0501 03:40:13.515775   69237 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714534813.490981768
	
	I0501 03:40:13.515814   69237 fix.go:216] guest clock: 1714534813.490981768
	I0501 03:40:13.515852   69237 fix.go:229] Guest: 2024-05-01 03:40:13.490981768 +0000 UTC Remote: 2024-05-01 03:40:13.402950224 +0000 UTC m=+262.796298359 (delta=88.031544ms)
	I0501 03:40:13.515884   69237 fix.go:200] guest clock delta is within tolerance: 88.031544ms
	I0501 03:40:13.515891   69237 start.go:83] releasing machines lock for "default-k8s-diff-port-715118", held for 20.723857967s
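The guest-clock check above compares the output of date +%s.%N on the guest against the local wall clock and accepts the machine when the difference stays within a tolerance. A small worked example with the two timestamps from the log; the one-second tolerance is an assumption for illustration, the real threshold lives in minikube's fix.go:

// clockdelta.go - recomputes the 88.031544ms delta reported above.
package main

import (
	"fmt"
	"time"
)

func main() {
	// Guest and host timestamps as reported in the log.
	guest := time.Unix(1714534813, 490981768)                          // from `date +%s.%N` on the guest
	host := time.Date(2024, time.May, 1, 3, 40, 13, 402950224, time.UTC) // "Remote" wall clock in the log

	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = time.Second // assumed value, not minikube's constant
	fmt.Printf("delta=%v within tolerance=%v: %v\n", delta, tolerance, delta <= tolerance)
}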
	I0501 03:40:13.515976   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .DriverName
	I0501 03:40:13.516272   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetIP
	I0501 03:40:13.519627   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:13.520098   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:13.520128   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:13.520304   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .DriverName
	I0501 03:40:13.520922   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .DriverName
	I0501 03:40:13.521122   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .DriverName
	I0501 03:40:13.521212   69237 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0501 03:40:13.521292   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHHostname
	I0501 03:40:13.521355   69237 ssh_runner.go:195] Run: cat /version.json
	I0501 03:40:13.521387   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHHostname
	I0501 03:40:13.524292   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:13.524328   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:13.524612   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:13.524672   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:13.524819   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:13.524948   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:13.524989   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHPort
	I0501 03:40:13.525033   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHPort
	I0501 03:40:13.525171   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:40:13.525196   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:40:13.525306   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHUsername
	I0501 03:40:13.525401   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHUsername
	I0501 03:40:13.525490   69237 sshutil.go:53] new ssh client: &{IP:192.168.72.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/default-k8s-diff-port-715118/id_rsa Username:docker}
	I0501 03:40:13.525553   69237 sshutil.go:53] new ssh client: &{IP:192.168.72.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/default-k8s-diff-port-715118/id_rsa Username:docker}
	I0501 03:40:13.628623   69237 ssh_runner.go:195] Run: systemctl --version
	I0501 03:40:13.636013   69237 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0501 03:40:13.787414   69237 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0501 03:40:13.795777   69237 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0501 03:40:13.795867   69237 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0501 03:40:13.822287   69237 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0501 03:40:13.822326   69237 start.go:494] detecting cgroup driver to use...
	I0501 03:40:13.822507   69237 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0501 03:40:13.841310   69237 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 03:40:13.857574   69237 docker.go:217] disabling cri-docker service (if available) ...
	I0501 03:40:13.857645   69237 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0501 03:40:13.872903   69237 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0501 03:40:13.889032   69237 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0501 03:40:14.020563   69237 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0501 03:40:14.222615   69237 docker.go:233] disabling docker service ...
	I0501 03:40:14.222691   69237 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0501 03:40:14.245841   69237 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0501 03:40:14.261001   69237 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0501 03:40:14.385943   69237 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0501 03:40:14.516899   69237 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0501 03:40:14.545138   69237 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 03:40:14.570308   69237 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0501 03:40:14.570373   69237 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:14.586460   69237 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0501 03:40:14.586535   69237 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:14.598947   69237 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:14.617581   69237 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:14.630097   69237 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0501 03:40:14.642379   69237 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:14.653723   69237 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:14.674508   69237 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
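The run of sed commands above rewrites the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf: it pins the pause image to registry.k8s.io/pause:3.9, switches cgroup_manager to cgroupfs, resets conmon_cgroup to "pod", and opens unprivileged ports via default_sysctls. A hedged Go sketch of the same kind of regexp-based in-place edit, covering only the first two substitutions for brevity:

// crioconf.go - illustrative rewrite of the drop-in; the path is from the
// log, the rule set is a subset of what the sed commands do.
package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}

	// Mirror the sed substitutions: force the pause image and the cgroupfs
	// cgroup manager regardless of the previous values.
	rules := []struct{ re, repl string }{
		{`(?m)^.*pause_image = .*$`, `pause_image = "registry.k8s.io/pause:3.9"`},
		{`(?m)^.*cgroup_manager = .*$`, `cgroup_manager = "cgroupfs"`},
	}
	for _, r := range rules {
		data = regexp.MustCompile(r.re).ReplaceAll(data, []byte(r.repl))
	}

	if err := os.WriteFile(path, data, 0o644); err != nil {
		panic(err)
	}
}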
	I0501 03:40:14.685890   69237 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0501 03:40:14.696560   69237 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0501 03:40:14.696614   69237 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0501 03:40:14.713050   69237 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0501 03:40:14.723466   69237 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:40:14.884910   69237 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0501 03:40:15.030618   69237 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0501 03:40:15.030689   69237 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0501 03:40:15.036403   69237 start.go:562] Will wait 60s for crictl version
	I0501 03:40:15.036470   69237 ssh_runner.go:195] Run: which crictl
	I0501 03:40:15.040924   69237 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0501 03:40:15.082944   69237 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0501 03:40:15.083037   69237 ssh_runner.go:195] Run: crio --version
	I0501 03:40:15.123492   69237 ssh_runner.go:195] Run: crio --version
	I0501 03:40:15.160739   69237 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0501 03:40:15.162026   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetIP
	I0501 03:40:15.164966   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:15.165378   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:15.165417   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:15.165621   69237 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0501 03:40:15.171717   69237 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0501 03:40:15.190203   69237 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-715118 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-715118 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.158 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0501 03:40:15.190359   69237 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0501 03:40:15.190439   69237 ssh_runner.go:195] Run: sudo crictl images --output json
	I0501 03:40:15.240549   69237 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0501 03:40:15.240606   69237 ssh_runner.go:195] Run: which lz4
	I0501 03:40:15.246523   69237 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0501 03:40:15.253094   69237 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0501 03:40:15.253139   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0501 03:40:13.544100   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .Start
	I0501 03:40:13.544328   69580 main.go:141] libmachine: (old-k8s-version-503971) Ensuring networks are active...
	I0501 03:40:13.545238   69580 main.go:141] libmachine: (old-k8s-version-503971) Ensuring network default is active
	I0501 03:40:13.545621   69580 main.go:141] libmachine: (old-k8s-version-503971) Ensuring network mk-old-k8s-version-503971 is active
	I0501 03:40:13.546072   69580 main.go:141] libmachine: (old-k8s-version-503971) Getting domain xml...
	I0501 03:40:13.546928   69580 main.go:141] libmachine: (old-k8s-version-503971) Creating domain...
	I0501 03:40:14.858558   69580 main.go:141] libmachine: (old-k8s-version-503971) Waiting to get IP...
	I0501 03:40:14.859690   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:14.860108   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:40:14.860215   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:40:14.860103   70499 retry.go:31] will retry after 294.057322ms: waiting for machine to come up
	I0501 03:40:15.155490   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:15.155922   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:40:15.155954   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:40:15.155870   70499 retry.go:31] will retry after 281.238966ms: waiting for machine to come up
	I0501 03:40:15.439196   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:15.439735   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:40:15.439783   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:40:15.439697   70499 retry.go:31] will retry after 429.353689ms: waiting for machine to come up
	I0501 03:40:15.871266   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:15.871947   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:40:15.871970   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:40:15.871895   70499 retry.go:31] will retry after 478.685219ms: waiting for machine to come up
	I0501 03:40:16.352661   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:16.353125   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:40:16.353161   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:40:16.353087   70499 retry.go:31] will retry after 642.905156ms: waiting for machine to come up
	I0501 03:40:14.235378   68864 node_ready.go:53] node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:15.735465   68864 node_ready.go:49] node "embed-certs-277128" has status "Ready":"True"
	I0501 03:40:15.735494   68864 node_ready.go:38] duration metric: took 7.50546727s for node "embed-certs-277128" to be "Ready" ...
	I0501 03:40:15.735503   68864 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 03:40:15.743215   68864 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-sjplt" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:17.752821   68864 pod_ready.go:102] pod "coredns-7db6d8ff4d-sjplt" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:17.121023   69237 crio.go:462] duration metric: took 1.874524806s to copy over tarball
	I0501 03:40:17.121097   69237 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0501 03:40:19.792970   69237 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.671840765s)
	I0501 03:40:19.793004   69237 crio.go:469] duration metric: took 2.67194801s to extract the tarball
	I0501 03:40:19.793014   69237 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0501 03:40:19.834845   69237 ssh_runner.go:195] Run: sudo crictl images --output json
	I0501 03:40:19.896841   69237 crio.go:514] all images are preloaded for cri-o runtime.
	I0501 03:40:19.896881   69237 cache_images.go:84] Images are preloaded, skipping loading
	I0501 03:40:19.896892   69237 kubeadm.go:928] updating node { 192.168.72.158 8444 v1.30.0 crio true true} ...
	I0501 03:40:19.897027   69237 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-715118 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.158
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-715118 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0501 03:40:19.897113   69237 ssh_runner.go:195] Run: crio config
	I0501 03:40:19.953925   69237 cni.go:84] Creating CNI manager for ""
	I0501 03:40:19.953956   69237 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0501 03:40:19.953971   69237 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0501 03:40:19.953991   69237 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.158 APIServerPort:8444 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-715118 NodeName:default-k8s-diff-port-715118 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.158"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.158 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0501 03:40:19.954133   69237 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.158
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-715118"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.158
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.158"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0501 03:40:19.954198   69237 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0501 03:40:19.967632   69237 binaries.go:44] Found k8s binaries, skipping transfer
	I0501 03:40:19.967708   69237 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0501 03:40:19.984161   69237 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0501 03:40:20.006540   69237 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0501 03:40:20.029218   69237 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0501 03:40:20.051612   69237 ssh_runner.go:195] Run: grep 192.168.72.158	control-plane.minikube.internal$ /etc/hosts
	I0501 03:40:20.056502   69237 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.158	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0501 03:40:20.071665   69237 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:40:20.194289   69237 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 03:40:20.215402   69237 certs.go:68] Setting up /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/default-k8s-diff-port-715118 for IP: 192.168.72.158
	I0501 03:40:20.215440   69237 certs.go:194] generating shared ca certs ...
	I0501 03:40:20.215471   69237 certs.go:226] acquiring lock for ca certs: {Name:mk85a75d14c00780d4bf09cd9bdaf0f96f1b8fce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:40:20.215698   69237 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key
	I0501 03:40:20.215769   69237 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key
	I0501 03:40:20.215785   69237 certs.go:256] generating profile certs ...
	I0501 03:40:20.215922   69237 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/default-k8s-diff-port-715118/client.key
	I0501 03:40:20.216023   69237 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/default-k8s-diff-port-715118/apiserver.key.91bc3872
	I0501 03:40:20.216094   69237 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/default-k8s-diff-port-715118/proxy-client.key
	I0501 03:40:20.216275   69237 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem (1338 bytes)
	W0501 03:40:20.216321   69237 certs.go:480] ignoring /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724_empty.pem, impossibly tiny 0 bytes
	I0501 03:40:20.216337   69237 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem (1679 bytes)
	I0501 03:40:20.216375   69237 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem (1078 bytes)
	I0501 03:40:20.216439   69237 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem (1123 bytes)
	I0501 03:40:20.216472   69237 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem (1675 bytes)
	I0501 03:40:20.216560   69237 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem (1708 bytes)
	I0501 03:40:20.217306   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0501 03:40:20.256162   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0501 03:40:20.293643   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0501 03:40:20.329175   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0501 03:40:20.367715   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/default-k8s-diff-port-715118/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0501 03:40:20.400024   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/default-k8s-diff-port-715118/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0501 03:40:20.428636   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/default-k8s-diff-port-715118/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0501 03:40:20.458689   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/default-k8s-diff-port-715118/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0501 03:40:20.487619   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem --> /usr/share/ca-certificates/207242.pem (1708 bytes)
	I0501 03:40:20.518140   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0501 03:40:20.547794   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem --> /usr/share/ca-certificates/20724.pem (1338 bytes)
	I0501 03:40:20.580453   69237 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0501 03:40:20.605211   69237 ssh_runner.go:195] Run: openssl version
	I0501 03:40:20.612269   69237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/207242.pem && ln -fs /usr/share/ca-certificates/207242.pem /etc/ssl/certs/207242.pem"
	I0501 03:40:20.626575   69237 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/207242.pem
	I0501 03:40:20.632370   69237 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  1 02:20 /usr/share/ca-certificates/207242.pem
	I0501 03:40:20.632439   69237 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/207242.pem
	I0501 03:40:20.639563   69237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/207242.pem /etc/ssl/certs/3ec20f2e.0"
	I0501 03:40:16.997533   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:16.998034   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:40:16.998076   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:40:16.997984   70499 retry.go:31] will retry after 596.56948ms: waiting for machine to come up
	I0501 03:40:17.596671   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:17.597182   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:40:17.597207   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:40:17.597132   70499 retry.go:31] will retry after 770.742109ms: waiting for machine to come up
	I0501 03:40:18.369337   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:18.369833   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:40:18.369864   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:40:18.369780   70499 retry.go:31] will retry after 1.382502808s: waiting for machine to come up
	I0501 03:40:19.753936   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:19.754419   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:40:19.754458   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:40:19.754363   70499 retry.go:31] will retry after 1.344792989s: waiting for machine to come up
	I0501 03:40:21.101047   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:21.101474   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:40:21.101514   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:40:21.101442   70499 retry.go:31] will retry after 1.636964906s: waiting for machine to come up
	I0501 03:40:20.252239   68864 pod_ready.go:102] pod "coredns-7db6d8ff4d-sjplt" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:22.751407   68864 pod_ready.go:92] pod "coredns-7db6d8ff4d-sjplt" in "kube-system" namespace has status "Ready":"True"
	I0501 03:40:22.751431   68864 pod_ready.go:81] duration metric: took 7.008190087s for pod "coredns-7db6d8ff4d-sjplt" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:22.751442   68864 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:22.757104   68864 pod_ready.go:92] pod "etcd-embed-certs-277128" in "kube-system" namespace has status "Ready":"True"
	I0501 03:40:22.757124   68864 pod_ready.go:81] duration metric: took 5.677117ms for pod "etcd-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:22.757141   68864 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:22.763083   68864 pod_ready.go:92] pod "kube-apiserver-embed-certs-277128" in "kube-system" namespace has status "Ready":"True"
	I0501 03:40:22.763107   68864 pod_ready.go:81] duration metric: took 5.958961ms for pod "kube-apiserver-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:22.763119   68864 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:22.768163   68864 pod_ready.go:92] pod "kube-controller-manager-embed-certs-277128" in "kube-system" namespace has status "Ready":"True"
	I0501 03:40:22.768182   68864 pod_ready.go:81] duration metric: took 5.055934ms for pod "kube-controller-manager-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:22.768193   68864 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-phx7x" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:22.772478   68864 pod_ready.go:92] pod "kube-proxy-phx7x" in "kube-system" namespace has status "Ready":"True"
	I0501 03:40:22.772497   68864 pod_ready.go:81] duration metric: took 4.297358ms for pod "kube-proxy-phx7x" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:22.772505   68864 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:23.149692   68864 pod_ready.go:92] pod "kube-scheduler-embed-certs-277128" in "kube-system" namespace has status "Ready":"True"
	I0501 03:40:23.149726   68864 pod_ready.go:81] duration metric: took 377.213314ms for pod "kube-scheduler-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:23.149741   68864 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:20.653202   69237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0501 03:40:20.878582   69237 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:40:20.884671   69237 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  1 02:08 /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:40:20.884755   69237 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:40:20.891633   69237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0501 03:40:20.906032   69237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20724.pem && ln -fs /usr/share/ca-certificates/20724.pem /etc/ssl/certs/20724.pem"
	I0501 03:40:20.924491   69237 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20724.pem
	I0501 03:40:20.931346   69237 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  1 02:20 /usr/share/ca-certificates/20724.pem
	I0501 03:40:20.931421   69237 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20724.pem
	I0501 03:40:20.937830   69237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20724.pem /etc/ssl/certs/51391683.0"
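	The /etc/ssl/certs symlink names in the Run lines above (3ec20f2e.0, b5213941.0 and 51391683.0) follow OpenSSL's subject-hash convention: the value printed by "openssl x509 -hash -noout" is the name under which the TLS stack looks up a trusted CA, with a ".0" suffix. A minimal shell sketch of the same pattern, using a placeholder certificate path instead of the ones from this run:
	
	# compute the subject hash OpenSSL uses for CA lookup, then expose the
	# certificate under /etc/ssl/certs/<hash>.0 so it is picked up as trusted
	CERT=/usr/share/ca-certificates/example.pem   # placeholder path
	HASH=$(openssl x509 -hash -noout -in "$CERT")
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"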
	I0501 03:40:20.951239   69237 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0501 03:40:20.956883   69237 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0501 03:40:20.964048   69237 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0501 03:40:20.971156   69237 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0501 03:40:20.978243   69237 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0501 03:40:20.985183   69237 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0501 03:40:20.991709   69237 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0501 03:40:20.998390   69237 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-715118 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.0 ClusterName:default-k8s-diff-port-715118 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.158 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 03:40:20.998509   69237 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0501 03:40:20.998558   69237 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0501 03:40:21.051469   69237 cri.go:89] found id: ""
	I0501 03:40:21.051575   69237 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0501 03:40:21.063280   69237 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0501 03:40:21.063301   69237 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0501 03:40:21.063307   69237 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0501 03:40:21.063381   69237 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0501 03:40:21.077380   69237 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0501 03:40:21.078445   69237 kubeconfig.go:125] found "default-k8s-diff-port-715118" server: "https://192.168.72.158:8444"
	I0501 03:40:21.080872   69237 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0501 03:40:21.095004   69237 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.158
	I0501 03:40:21.095045   69237 kubeadm.go:1154] stopping kube-system containers ...
	I0501 03:40:21.095059   69237 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0501 03:40:21.095123   69237 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0501 03:40:21.151629   69237 cri.go:89] found id: ""
	I0501 03:40:21.151711   69237 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0501 03:40:21.177077   69237 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0501 03:40:21.192057   69237 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0501 03:40:21.192087   69237 kubeadm.go:156] found existing configuration files:
	
	I0501 03:40:21.192146   69237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0501 03:40:21.206784   69237 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0501 03:40:21.206870   69237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0501 03:40:21.221942   69237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0501 03:40:21.236442   69237 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0501 03:40:21.236516   69237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0501 03:40:21.251285   69237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0501 03:40:21.265997   69237 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0501 03:40:21.266049   69237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0501 03:40:21.281137   69237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0501 03:40:21.297713   69237 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0501 03:40:21.297783   69237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0501 03:40:21.314264   69237 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0501 03:40:21.328605   69237 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:21.478475   69237 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:22.161692   69237 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:22.432136   69237 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:22.514744   69237 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:22.597689   69237 api_server.go:52] waiting for apiserver process to appear ...
	I0501 03:40:22.597770   69237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:23.098146   69237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:23.597831   69237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:23.629375   69237 api_server.go:72] duration metric: took 1.031684055s to wait for apiserver process to appear ...
	I0501 03:40:23.629462   69237 api_server.go:88] waiting for apiserver healthz status ...
	I0501 03:40:23.629500   69237 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8444/healthz ...
	I0501 03:40:23.630045   69237 api_server.go:269] stopped: https://192.168.72.158:8444/healthz: Get "https://192.168.72.158:8444/healthz": dial tcp 192.168.72.158:8444: connect: connection refused
	I0501 03:40:24.129831   69237 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8444/healthz ...
	I0501 03:40:22.740241   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:22.740692   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:40:22.740722   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:40:22.740656   70499 retry.go:31] will retry after 1.899831455s: waiting for machine to come up
	I0501 03:40:24.642609   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:24.643075   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:40:24.643104   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:40:24.643019   70499 retry.go:31] will retry after 3.503333894s: waiting for machine to come up
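	The retry lines above come from the KVM driver waiting for the freshly started old-k8s-version-503971 domain to obtain an address; each attempt backs off a little longer, with jitter. A rough manual equivalent, using the network name and MAC address from this log, is to watch the libvirt DHCP lease table until the entry appears:
	
	# wait until the domain's MAC shows up in the network's DHCP lease table
	while ! sudo virsh net-dhcp-leases mk-old-k8s-version-503971 | grep -q 52:54:00:7d:68:83; do
	  sleep 2
	done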
	I0501 03:40:25.157335   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:27.160083   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:27.091079   69237 api_server.go:279] https://192.168.72.158:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0501 03:40:27.091134   69237 api_server.go:103] status: https://192.168.72.158:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0501 03:40:27.091152   69237 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8444/healthz ...
	I0501 03:40:27.163481   69237 api_server.go:279] https://192.168.72.158:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:40:27.163509   69237 api_server.go:103] status: https://192.168.72.158:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:40:27.163522   69237 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8444/healthz ...
	I0501 03:40:27.175097   69237 api_server.go:279] https://192.168.72.158:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:40:27.175129   69237 api_server.go:103] status: https://192.168.72.158:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:40:27.629613   69237 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8444/healthz ...
	I0501 03:40:27.637166   69237 api_server.go:279] https://192.168.72.158:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:40:27.637202   69237 api_server.go:103] status: https://192.168.72.158:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:40:28.130467   69237 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8444/healthz ...
	I0501 03:40:28.148799   69237 api_server.go:279] https://192.168.72.158:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:40:28.148823   69237 api_server.go:103] status: https://192.168.72.158:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:40:28.630500   69237 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8444/healthz ...
	I0501 03:40:28.642856   69237 api_server.go:279] https://192.168.72.158:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:40:28.642890   69237 api_server.go:103] status: https://192.168.72.158:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:40:29.130453   69237 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8444/healthz ...
	I0501 03:40:29.137783   69237 api_server.go:279] https://192.168.72.158:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:40:29.137819   69237 api_server.go:103] status: https://192.168.72.158:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:40:29.630448   69237 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8444/healthz ...
	I0501 03:40:29.634736   69237 api_server.go:279] https://192.168.72.158:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:40:29.634764   69237 api_server.go:103] status: https://192.168.72.158:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:40:30.130371   69237 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8444/healthz ...
	I0501 03:40:30.134727   69237 api_server.go:279] https://192.168.72.158:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:40:30.134755   69237 api_server.go:103] status: https://192.168.72.158:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:40:30.630555   69237 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8444/healthz ...
	I0501 03:40:30.637025   69237 api_server.go:279] https://192.168.72.158:8444/healthz returned 200:
	ok
	I0501 03:40:30.644179   69237 api_server.go:141] control plane version: v1.30.0
	I0501 03:40:30.644209   69237 api_server.go:131] duration metric: took 7.014727807s to wait for apiserver health ...
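The block above is minikube's api_server.go polling https://192.168.72.158:8444/healthz every ~500ms: each 500 response lists the per-component checks, with only poststarthook/apiservice-discovery-controller still failing, until the endpoint finally returns 200 ("ok") and the wait completes after ~7s. Below is a minimal sketch of that kind of polling loop, not minikube's actual implementation; the URL, interval and timeout are illustrative values taken from or inferred from the log.

// Minimal sketch of an apiserver /healthz polling loop like the one the
// api_server.go messages above describe. Endpoint, interval and timeout are
// illustrative assumptions, not minikube's exact values.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	// A readiness probe against the apiserver's self-signed serving cert
	// typically skips TLS verification.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // "ok" — all poststarthooks have completed
			}
			// A 500 body lists each failing check, e.g. the
			// poststarthook/apiservice-discovery-controller lines above.
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.158:8444/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
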
	I0501 03:40:30.644217   69237 cni.go:84] Creating CNI manager for ""
	I0501 03:40:30.644223   69237 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0501 03:40:30.646018   69237 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0501 03:40:30.647222   69237 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0501 03:40:28.148102   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:28.148506   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:40:28.148547   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:40:28.148463   70499 retry.go:31] will retry after 4.150508159s: waiting for machine to come up
	I0501 03:40:33.783990   68640 start.go:364] duration metric: took 56.072338201s to acquireMachinesLock for "no-preload-892672"
	I0501 03:40:33.784047   68640 start.go:96] Skipping create...Using existing machine configuration
	I0501 03:40:33.784056   68640 fix.go:54] fixHost starting: 
	I0501 03:40:33.784468   68640 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:40:33.784504   68640 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:40:33.801460   68640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37745
	I0501 03:40:33.802023   68640 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:40:33.802634   68640 main.go:141] libmachine: Using API Version  1
	I0501 03:40:33.802669   68640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:40:33.803062   68640 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:40:33.803262   68640 main.go:141] libmachine: (no-preload-892672) Calling .DriverName
	I0501 03:40:33.803379   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetState
	I0501 03:40:33.805241   68640 fix.go:112] recreateIfNeeded on no-preload-892672: state=Stopped err=<nil>
	I0501 03:40:33.805266   68640 main.go:141] libmachine: (no-preload-892672) Calling .DriverName
	W0501 03:40:33.805452   68640 fix.go:138] unexpected machine state, will restart: <nil>
	I0501 03:40:33.807020   68640 out.go:177] * Restarting existing kvm2 VM for "no-preload-892672" ...
	I0501 03:40:29.656911   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:32.158119   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:32.303427   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.303804   69580 main.go:141] libmachine: (old-k8s-version-503971) Found IP for machine: 192.168.61.104
	I0501 03:40:32.303837   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has current primary IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.303851   69580 main.go:141] libmachine: (old-k8s-version-503971) Reserving static IP address...
	I0501 03:40:32.304254   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "old-k8s-version-503971", mac: "52:54:00:7d:68:83", ip: "192.168.61.104"} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:32.304286   69580 main.go:141] libmachine: (old-k8s-version-503971) Reserved static IP address: 192.168.61.104
	I0501 03:40:32.304305   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | skip adding static IP to network mk-old-k8s-version-503971 - found existing host DHCP lease matching {name: "old-k8s-version-503971", mac: "52:54:00:7d:68:83", ip: "192.168.61.104"}
	I0501 03:40:32.304323   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | Getting to WaitForSSH function...
	I0501 03:40:32.304337   69580 main.go:141] libmachine: (old-k8s-version-503971) Waiting for SSH to be available...
	I0501 03:40:32.306619   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.306972   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:32.307011   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.307114   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | Using SSH client type: external
	I0501 03:40:32.307138   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | Using SSH private key: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/old-k8s-version-503971/id_rsa (-rw-------)
	I0501 03:40:32.307174   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.104 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18779-13391/.minikube/machines/old-k8s-version-503971/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0501 03:40:32.307188   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | About to run SSH command:
	I0501 03:40:32.307224   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | exit 0
	I0501 03:40:32.438508   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | SSH cmd err, output: <nil>: 
	I0501 03:40:32.438882   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetConfigRaw
	I0501 03:40:32.439452   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetIP
	I0501 03:40:32.441984   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.442342   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:32.442369   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.442668   69580 profile.go:143] Saving config to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/config.json ...
	I0501 03:40:32.442875   69580 machine.go:94] provisionDockerMachine start ...
	I0501 03:40:32.442897   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .DriverName
	I0501 03:40:32.443077   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHHostname
	I0501 03:40:32.445129   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.445442   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:32.445480   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.445628   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHPort
	I0501 03:40:32.445806   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:32.445974   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:32.446122   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHUsername
	I0501 03:40:32.446314   69580 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:32.446548   69580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.104 22 <nil> <nil>}
	I0501 03:40:32.446564   69580 main.go:141] libmachine: About to run SSH command:
	hostname
	I0501 03:40:32.559346   69580 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0501 03:40:32.559379   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetMachineName
	I0501 03:40:32.559630   69580 buildroot.go:166] provisioning hostname "old-k8s-version-503971"
	I0501 03:40:32.559654   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetMachineName
	I0501 03:40:32.559832   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHHostname
	I0501 03:40:32.562176   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.562547   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:32.562582   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.562716   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHPort
	I0501 03:40:32.562892   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:32.563019   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:32.563161   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHUsername
	I0501 03:40:32.563332   69580 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:32.563545   69580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.104 22 <nil> <nil>}
	I0501 03:40:32.563564   69580 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-503971 && echo "old-k8s-version-503971" | sudo tee /etc/hostname
	I0501 03:40:32.699918   69580 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-503971
	
	I0501 03:40:32.699961   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHHostname
	I0501 03:40:32.702721   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.703134   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:32.703158   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.703361   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHPort
	I0501 03:40:32.703547   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:32.703744   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:32.703881   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHUsername
	I0501 03:40:32.704037   69580 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:32.704199   69580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.104 22 <nil> <nil>}
	I0501 03:40:32.704215   69580 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-503971' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-503971/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-503971' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0501 03:40:32.830277   69580 main.go:141] libmachine: SSH cmd err, output: <nil>: 
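The SSH command just above makes sure the guest's /etc/hosts maps 127.0.1.1 to the new hostname, either by rewriting an existing 127.0.1.1 line or by appending one. The sketch below performs the same edit directly in Go; it is purely illustrative, with the path and hostname copied from the log.

// Illustrative Go equivalent of the /etc/hosts edit performed over SSH above.
package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(hostname) + `$`).Match(data) {
		return nil // hostname already resolves locally, nothing to do
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	var out string
	if loopback.Match(data) {
		// Rewrite the existing 127.0.1.1 line, like the sed branch above.
		out = loopback.ReplaceAllString(string(data), "127.0.1.1 "+hostname)
	} else {
		// Otherwise append a new entry, like the tee -a branch above.
		out = strings.TrimRight(string(data), "\n") + "\n127.0.1.1 " + hostname + "\n"
	}
	return os.WriteFile(path, []byte(out), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "old-k8s-version-503971"); err != nil {
		fmt.Println("hosts update failed:", err)
	}
}
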
	I0501 03:40:32.830307   69580 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18779-13391/.minikube CaCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18779-13391/.minikube}
	I0501 03:40:32.830323   69580 buildroot.go:174] setting up certificates
	I0501 03:40:32.830331   69580 provision.go:84] configureAuth start
	I0501 03:40:32.830340   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetMachineName
	I0501 03:40:32.830629   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetIP
	I0501 03:40:32.833575   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.833887   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:32.833932   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.834070   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHHostname
	I0501 03:40:32.836309   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.836664   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:32.836691   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.836824   69580 provision.go:143] copyHostCerts
	I0501 03:40:32.836885   69580 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem, removing ...
	I0501 03:40:32.836895   69580 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem
	I0501 03:40:32.836945   69580 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem (1078 bytes)
	I0501 03:40:32.837046   69580 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem, removing ...
	I0501 03:40:32.837054   69580 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem
	I0501 03:40:32.837072   69580 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem (1123 bytes)
	I0501 03:40:32.837129   69580 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem, removing ...
	I0501 03:40:32.837136   69580 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem
	I0501 03:40:32.837152   69580 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem (1675 bytes)
	I0501 03:40:32.837202   69580 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-503971 san=[127.0.0.1 192.168.61.104 localhost minikube old-k8s-version-503971]
	I0501 03:40:33.047948   69580 provision.go:177] copyRemoteCerts
	I0501 03:40:33.048004   69580 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0501 03:40:33.048030   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHHostname
	I0501 03:40:33.050591   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.050975   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:33.051012   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.051142   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHPort
	I0501 03:40:33.051310   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:33.051465   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHUsername
	I0501 03:40:33.051574   69580 sshutil.go:53] new ssh client: &{IP:192.168.61.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/old-k8s-version-503971/id_rsa Username:docker}
	I0501 03:40:33.143991   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0501 03:40:33.175494   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0501 03:40:33.204770   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0501 03:40:33.232728   69580 provision.go:87] duration metric: took 402.386279ms to configureAuth
	I0501 03:40:33.232756   69580 buildroot.go:189] setting minikube options for container-runtime
	I0501 03:40:33.232962   69580 config.go:182] Loaded profile config "old-k8s-version-503971": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0501 03:40:33.233051   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHHostname
	I0501 03:40:33.235656   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.236006   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:33.236038   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.236162   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHPort
	I0501 03:40:33.236339   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:33.236484   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:33.236633   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHUsername
	I0501 03:40:33.236817   69580 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:33.236980   69580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.104 22 <nil> <nil>}
	I0501 03:40:33.236997   69580 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0501 03:40:33.526370   69580 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0501 03:40:33.526419   69580 machine.go:97] duration metric: took 1.083510254s to provisionDockerMachine
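The "%!s(MISSING)" in the logged command appears to be an artifact of how the runner echoes its printf format string before substitution; the command's output confirms the effective result is a one-line /etc/sysconfig/crio.minikube drop-in followed by a crio restart. A rough local equivalent, assuming root and systemd:

// Sketch of the sysconfig drop-in written above; the insecure-registry CIDR
// is the value shown in the command output.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	content := "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n"
	if err := os.MkdirAll("/etc/sysconfig", 0755); err != nil {
		fmt.Println(err)
		return
	}
	if err := os.WriteFile("/etc/sysconfig/crio.minikube", []byte(content), 0644); err != nil {
		fmt.Println(err)
		return
	}
	// Restart CRI-O so it picks up the new options (requires systemd + root).
	if out, err := exec.Command("sudo", "systemctl", "restart", "crio").CombinedOutput(); err != nil {
		fmt.Printf("restart failed: %v\n%s", err, out)
	}
}
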
	I0501 03:40:33.526432   69580 start.go:293] postStartSetup for "old-k8s-version-503971" (driver="kvm2")
	I0501 03:40:33.526443   69580 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0501 03:40:33.526470   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .DriverName
	I0501 03:40:33.526788   69580 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0501 03:40:33.526831   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHHostname
	I0501 03:40:33.529815   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.530209   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:33.530268   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.530364   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHPort
	I0501 03:40:33.530559   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:33.530741   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHUsername
	I0501 03:40:33.530909   69580 sshutil.go:53] new ssh client: &{IP:192.168.61.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/old-k8s-version-503971/id_rsa Username:docker}
	I0501 03:40:33.620224   69580 ssh_runner.go:195] Run: cat /etc/os-release
	I0501 03:40:33.625417   69580 info.go:137] Remote host: Buildroot 2023.02.9
	I0501 03:40:33.625447   69580 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13391/.minikube/addons for local assets ...
	I0501 03:40:33.625511   69580 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13391/.minikube/files for local assets ...
	I0501 03:40:33.625594   69580 filesync.go:149] local asset: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem -> 207242.pem in /etc/ssl/certs
	I0501 03:40:33.625691   69580 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0501 03:40:33.637311   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem --> /etc/ssl/certs/207242.pem (1708 bytes)
	I0501 03:40:33.666707   69580 start.go:296] duration metric: took 140.263297ms for postStartSetup
	I0501 03:40:33.666740   69580 fix.go:56] duration metric: took 20.150640355s for fixHost
	I0501 03:40:33.666758   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHHostname
	I0501 03:40:33.669394   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.669822   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:33.669852   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.669963   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHPort
	I0501 03:40:33.670213   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:33.670388   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:33.670589   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHUsername
	I0501 03:40:33.670794   69580 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:33.670972   69580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.104 22 <nil> <nil>}
	I0501 03:40:33.670984   69580 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0501 03:40:33.783810   69580 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714534833.728910946
	
	I0501 03:40:33.783839   69580 fix.go:216] guest clock: 1714534833.728910946
	I0501 03:40:33.783850   69580 fix.go:229] Guest: 2024-05-01 03:40:33.728910946 +0000 UTC Remote: 2024-05-01 03:40:33.666743363 +0000 UTC m=+232.246108464 (delta=62.167583ms)
	I0501 03:40:33.783893   69580 fix.go:200] guest clock delta is within tolerance: 62.167583ms
	I0501 03:40:33.783903   69580 start.go:83] releasing machines lock for "old-k8s-version-503971", held for 20.267840723s
	I0501 03:40:33.783933   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .DriverName
	I0501 03:40:33.784203   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetIP
	I0501 03:40:33.786846   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.787202   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:33.787230   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.787385   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .DriverName
	I0501 03:40:33.787837   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .DriverName
	I0501 03:40:33.787997   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .DriverName
	I0501 03:40:33.788085   69580 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0501 03:40:33.788126   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHHostname
	I0501 03:40:33.788252   69580 ssh_runner.go:195] Run: cat /version.json
	I0501 03:40:33.788279   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHHostname
	I0501 03:40:33.790748   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.791086   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.791118   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:33.791142   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.791435   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHPort
	I0501 03:40:33.791491   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:33.791532   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.791618   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:33.791740   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHPort
	I0501 03:40:33.791815   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHUsername
	I0501 03:40:33.791937   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:33.792014   69580 sshutil.go:53] new ssh client: &{IP:192.168.61.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/old-k8s-version-503971/id_rsa Username:docker}
	I0501 03:40:33.792069   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHUsername
	I0501 03:40:33.792206   69580 sshutil.go:53] new ssh client: &{IP:192.168.61.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/old-k8s-version-503971/id_rsa Username:docker}
	I0501 03:40:33.876242   69580 ssh_runner.go:195] Run: systemctl --version
	I0501 03:40:33.901692   69580 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0501 03:40:34.056758   69580 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0501 03:40:34.065070   69580 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0501 03:40:34.065156   69580 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0501 03:40:34.085337   69580 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
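The find/mv step above sidelines any pre-existing bridge or podman CNI configs in /etc/cni/net.d by renaming them to *.mk_disabled, so they cannot conflict with the config minikube will install. An illustrative Go version of that rename, matching the same filename patterns:

// Sketch of the CNI-config disabling step logged above; patterns come from
// the find command, everything else is illustrative.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	dir := "/etc/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println(err)
		return
	}
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				fmt.Println("rename failed:", err)
				continue
			}
			fmt.Println("disabled", src)
		}
	}
}
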
	I0501 03:40:34.085364   69580 start.go:494] detecting cgroup driver to use...
	I0501 03:40:34.085432   69580 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0501 03:40:34.102723   69580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 03:40:34.118792   69580 docker.go:217] disabling cri-docker service (if available) ...
	I0501 03:40:34.118847   69580 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0501 03:40:34.133978   69580 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0501 03:40:34.153890   69580 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0501 03:40:34.283815   69580 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0501 03:40:34.475851   69580 docker.go:233] disabling docker service ...
	I0501 03:40:34.475926   69580 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0501 03:40:34.500769   69580 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0501 03:40:34.517315   69580 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0501 03:40:34.674322   69580 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0501 03:40:34.833281   69580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0501 03:40:34.852610   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 03:40:34.879434   69580 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0501 03:40:34.879517   69580 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:34.892197   69580 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0501 03:40:34.892269   69580 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:34.904437   69580 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:34.919950   69580 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:34.933772   69580 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0501 03:40:34.947563   69580 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0501 03:40:34.965724   69580 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0501 03:40:34.965795   69580 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0501 03:40:34.984251   69580 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0501 03:40:34.997050   69580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:40:35.155852   69580 ssh_runner.go:195] Run: sudo systemctl restart crio
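Condensed, the CRI-O preparation above is: point crictl at the crio socket, pin the pause image, switch the cgroup driver to cgroupfs (with conmon in the "pod" cgroup), load br_netfilter, enable IP forwarding, then restart crio. The sketch below replays those shell commands (taken from the log) via os/exec; running it for real requires root, and in minikube they go over SSH instead.

// Condensed sketch of the CRI-O preparation steps logged above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	steps := []string{
		`printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml`,
		`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo modprobe br_netfilter`,
		`sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'`,
		`sudo systemctl daemon-reload && sudo systemctl restart crio`,
	}
	for _, s := range steps {
		if out, err := exec.Command("sh", "-c", s).CombinedOutput(); err != nil {
			fmt.Printf("step failed: %s\n%v\n%s", s, err, out)
			return
		}
	}
	fmt.Println("crio reconfigured and restarted")
}
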
	I0501 03:40:35.362090   69580 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0501 03:40:35.362164   69580 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0501 03:40:35.368621   69580 start.go:562] Will wait 60s for crictl version
	I0501 03:40:35.368701   69580 ssh_runner.go:195] Run: which crictl
	I0501 03:40:35.373792   69580 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0501 03:40:35.436905   69580 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0501 03:40:35.437018   69580 ssh_runner.go:195] Run: crio --version
	I0501 03:40:35.485130   69580 ssh_runner.go:195] Run: crio --version
	I0501 03:40:35.528700   69580 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0501 03:40:30.661395   69237 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0501 03:40:30.682810   69237 system_pods.go:43] waiting for kube-system pods to appear ...
	I0501 03:40:30.694277   69237 system_pods.go:59] 8 kube-system pods found
	I0501 03:40:30.694326   69237 system_pods.go:61] "coredns-7db6d8ff4d-9r7dt" [75d43a25-d309-427e-befc-7f1851b90d8c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0501 03:40:30.694343   69237 system_pods.go:61] "etcd-default-k8s-diff-port-715118" [21f6a4cd-f662-4865-9208-83959f0a6782] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0501 03:40:30.694354   69237 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-715118" [4dc3e45e-a5d8-480f-a8e8-763ecab0976b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0501 03:40:30.694369   69237 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-715118" [340580a3-040e-48fc-b89c-36a4f6fccfc2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0501 03:40:30.694376   69237 system_pods.go:61] "kube-proxy-vg7ts" [e55f3363-178c-427a-819d-0dc94c3116f3] Running
	I0501 03:40:30.694388   69237 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-715118" [b850fc4a-da6b-4714-98bb-e36e185880dd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0501 03:40:30.694417   69237 system_pods.go:61] "metrics-server-569cc877fc-2btjj" [9b8ff94d-9e59-46d4-ac6d-7accca8b3552] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0501 03:40:30.694427   69237 system_pods.go:61] "storage-provisioner" [d44a3cf1-c8a5-4a20-8dd6-b854680b33b9] Running
	I0501 03:40:30.694435   69237 system_pods.go:74] duration metric: took 11.599113ms to wait for pod list to return data ...
	I0501 03:40:30.694449   69237 node_conditions.go:102] verifying NodePressure condition ...
	I0501 03:40:30.697795   69237 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 03:40:30.697825   69237 node_conditions.go:123] node cpu capacity is 2
	I0501 03:40:30.697838   69237 node_conditions.go:105] duration metric: took 3.383507ms to run NodePressure ...
	I0501 03:40:30.697858   69237 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:30.978827   69237 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0501 03:40:30.984628   69237 kubeadm.go:733] kubelet initialised
	I0501 03:40:30.984650   69237 kubeadm.go:734] duration metric: took 5.799905ms waiting for restarted kubelet to initialise ...
	I0501 03:40:30.984656   69237 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 03:40:30.992354   69237 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-9r7dt" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:30.999663   69237 pod_ready.go:97] node "default-k8s-diff-port-715118" hosting pod "coredns-7db6d8ff4d-9r7dt" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-715118" has status "Ready":"False"
	I0501 03:40:30.999690   69237 pod_ready.go:81] duration metric: took 7.312969ms for pod "coredns-7db6d8ff4d-9r7dt" in "kube-system" namespace to be "Ready" ...
	E0501 03:40:30.999700   69237 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-715118" hosting pod "coredns-7db6d8ff4d-9r7dt" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-715118" has status "Ready":"False"
	I0501 03:40:30.999706   69237 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:31.006163   69237 pod_ready.go:97] node "default-k8s-diff-port-715118" hosting pod "etcd-default-k8s-diff-port-715118" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-715118" has status "Ready":"False"
	I0501 03:40:31.006187   69237 pod_ready.go:81] duration metric: took 6.471262ms for pod "etcd-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	E0501 03:40:31.006199   69237 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-715118" hosting pod "etcd-default-k8s-diff-port-715118" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-715118" has status "Ready":"False"
	I0501 03:40:31.006208   69237 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:31.011772   69237 pod_ready.go:97] node "default-k8s-diff-port-715118" hosting pod "kube-apiserver-default-k8s-diff-port-715118" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-715118" has status "Ready":"False"
	I0501 03:40:31.011793   69237 pod_ready.go:81] duration metric: took 5.576722ms for pod "kube-apiserver-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	E0501 03:40:31.011803   69237 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-715118" hosting pod "kube-apiserver-default-k8s-diff-port-715118" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-715118" has status "Ready":"False"
	I0501 03:40:31.011810   69237 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:31.086163   69237 pod_ready.go:97] node "default-k8s-diff-port-715118" hosting pod "kube-controller-manager-default-k8s-diff-port-715118" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-715118" has status "Ready":"False"
	I0501 03:40:31.086194   69237 pod_ready.go:81] duration metric: took 74.377197ms for pod "kube-controller-manager-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	E0501 03:40:31.086207   69237 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-715118" hosting pod "kube-controller-manager-default-k8s-diff-port-715118" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-715118" has status "Ready":"False"
	I0501 03:40:31.086214   69237 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-vg7ts" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:31.487056   69237 pod_ready.go:92] pod "kube-proxy-vg7ts" in "kube-system" namespace has status "Ready":"True"
	I0501 03:40:31.487078   69237 pod_ready.go:81] duration metric: took 400.857543ms for pod "kube-proxy-vg7ts" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:31.487088   69237 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:33.502448   69237 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-715118" in "kube-system" namespace has status "Ready":"False"
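The pod_ready.go lines above show the post-restart wait: each system-critical pod gets up to 4 minutes to report Ready, and the wait is skipped (with an error noted) while the hosting node itself is still NotReady. A minimal client-go sketch of the core "is this pod Ready yet" check follows; the kubeconfig path is a placeholder, the namespace and pod name are taken from the log, and minikube's real logic additionally inspects the node.

// Minimal client-go sketch of the "wait for pod Ready" loop reflected above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(),
			"kube-scheduler-default-k8s-diff-port-715118", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}
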
	I0501 03:40:35.530015   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetIP
	I0501 03:40:35.533706   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:35.534178   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:35.534254   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:35.534515   69580 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0501 03:40:35.541542   69580 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0501 03:40:35.563291   69580 kubeadm.go:877] updating cluster {Name:old-k8s-version-503971 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-503971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.104 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0501 03:40:35.563434   69580 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0501 03:40:35.563512   69580 ssh_runner.go:195] Run: sudo crictl images --output json
	I0501 03:40:35.646548   69580 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0501 03:40:35.646635   69580 ssh_runner.go:195] Run: which lz4
	I0501 03:40:35.652824   69580 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0501 03:40:35.660056   69580 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0501 03:40:35.660099   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
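The lines above show the preload cache-miss path: crictl reports that the v1.20.0 images are not preloaded, a stat over SSH confirms /preloaded.tar.lz4 is absent on the guest, and the roughly 473 MB tarball is copied up from the host cache. The following Go sketch only illustrates that decision; the run callback is a hypothetical stand-in for minikube's SSH runner.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // preloadPresent mirrors the existence probe in the log: if `stat` on the
    // tarball exits non-zero, the preload still has to be transferred.
    func preloadPresent(run func(cmd string) error) bool {
    	return run(`stat -c "%s %y" /preloaded.tar.lz4`) == nil
    }

    func main() {
    	// Local stand-in for the remote runner used in the log.
    	local := func(cmd string) error { return exec.Command("/bin/sh", "-c", cmd).Run() }
    	if !preloadPresent(local) {
    		fmt.Println("preload missing: would scp the cached preloaded-images tarball to /preloaded.tar.lz4")
    	}
    }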
	I0501 03:40:33.808828   68640 main.go:141] libmachine: (no-preload-892672) Calling .Start
	I0501 03:40:33.809083   68640 main.go:141] libmachine: (no-preload-892672) Ensuring networks are active...
	I0501 03:40:33.809829   68640 main.go:141] libmachine: (no-preload-892672) Ensuring network default is active
	I0501 03:40:33.810166   68640 main.go:141] libmachine: (no-preload-892672) Ensuring network mk-no-preload-892672 is active
	I0501 03:40:33.810632   68640 main.go:141] libmachine: (no-preload-892672) Getting domain xml...
	I0501 03:40:33.811386   68640 main.go:141] libmachine: (no-preload-892672) Creating domain...
	I0501 03:40:35.133886   68640 main.go:141] libmachine: (no-preload-892672) Waiting to get IP...
	I0501 03:40:35.134756   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:35.135216   68640 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:40:35.135280   68640 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:40:35.135178   70664 retry.go:31] will retry after 275.796908ms: waiting for machine to come up
	I0501 03:40:35.412670   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:35.413206   68640 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:40:35.413232   68640 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:40:35.413162   70664 retry.go:31] will retry after 326.173381ms: waiting for machine to come up
	I0501 03:40:35.740734   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:35.741314   68640 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:40:35.741342   68640 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:40:35.741260   70664 retry.go:31] will retry after 476.50915ms: waiting for machine to come up
	I0501 03:40:36.219908   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:36.220440   68640 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:40:36.220473   68640 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:40:36.220399   70664 retry.go:31] will retry after 377.277784ms: waiting for machine to come up
	I0501 03:40:36.598936   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:36.599391   68640 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:40:36.599417   68640 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:40:36.599348   70664 retry.go:31] will retry after 587.166276ms: waiting for machine to come up
	I0501 03:40:37.188757   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:37.189406   68640 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:40:37.189441   68640 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:40:37.189311   70664 retry.go:31] will retry after 801.958256ms: waiting for machine to come up
	I0501 03:40:34.658104   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:36.660517   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:35.998453   69237 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-715118" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:38.495088   69237 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-715118" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:39.004175   69237 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-715118" in "kube-system" namespace has status "Ready":"True"
	I0501 03:40:39.004198   69237 pod_ready.go:81] duration metric: took 7.517103824s for pod "kube-scheduler-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:39.004209   69237 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:37.870306   69580 crio.go:462] duration metric: took 2.217531377s to copy over tarball
	I0501 03:40:37.870393   69580 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0501 03:40:37.992669   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:37.993052   68640 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:40:37.993080   68640 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:40:37.993016   70664 retry.go:31] will retry after 1.085029482s: waiting for machine to come up
	I0501 03:40:39.079315   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:39.079739   68640 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:40:39.079779   68640 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:40:39.079682   70664 retry.go:31] will retry after 1.140448202s: waiting for machine to come up
	I0501 03:40:40.221645   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:40.222165   68640 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:40:40.222192   68640 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:40:40.222103   70664 retry.go:31] will retry after 1.434247869s: waiting for machine to come up
	I0501 03:40:41.658447   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:41.659034   68640 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:40:41.659072   68640 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:40:41.659003   70664 retry.go:31] will retry after 1.759453732s: waiting for machine to come up
	I0501 03:40:39.157834   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:41.164729   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:43.658248   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:41.014770   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:43.513038   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:45.516821   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:41.534681   69580 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.664236925s)
	I0501 03:40:41.599216   69580 crio.go:469] duration metric: took 3.72886857s to extract the tarball
	I0501 03:40:41.599238   69580 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0501 03:40:41.649221   69580 ssh_runner.go:195] Run: sudo crictl images --output json
	I0501 03:40:41.697169   69580 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0501 03:40:41.697198   69580 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0501 03:40:41.697302   69580 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0501 03:40:41.697346   69580 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0501 03:40:41.697367   69580 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0501 03:40:41.697352   69580 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0501 03:40:41.697375   69580 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0501 03:40:41.697329   69580 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0501 03:40:41.697275   69580 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 03:40:41.697329   69580 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0501 03:40:41.698950   69580 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0501 03:40:41.699010   69580 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0501 03:40:41.699114   69580 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0501 03:40:41.699251   69580 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 03:40:41.699292   69580 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0501 03:40:41.699020   69580 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0501 03:40:41.699550   69580 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0501 03:40:41.699715   69580 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0501 03:40:41.830042   69580 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0501 03:40:41.881770   69580 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0501 03:40:41.881834   69580 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0501 03:40:41.881896   69580 ssh_runner.go:195] Run: which crictl
	I0501 03:40:41.887083   69580 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0501 03:40:41.894597   69580 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0501 03:40:41.935993   69580 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0501 03:40:41.937339   69580 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0501 03:40:41.961728   69580 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0501 03:40:41.961778   69580 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0501 03:40:41.961827   69580 ssh_runner.go:195] Run: which crictl
	I0501 03:40:42.004327   69580 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0501 03:40:42.004395   69580 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0501 03:40:42.004435   69580 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0501 03:40:42.004493   69580 ssh_runner.go:195] Run: which crictl
	I0501 03:40:42.053743   69580 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0501 03:40:42.055914   69580 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0501 03:40:42.056267   69580 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0501 03:40:42.056610   69580 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0501 03:40:42.060229   69580 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0501 03:40:42.070489   69580 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0501 03:40:42.127829   69580 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0501 03:40:42.127880   69580 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0501 03:40:42.127927   69580 ssh_runner.go:195] Run: which crictl
	I0501 03:40:42.201731   69580 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0501 03:40:42.201783   69580 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0501 03:40:42.201814   69580 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0501 03:40:42.201842   69580 ssh_runner.go:195] Run: which crictl
	I0501 03:40:42.211112   69580 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0501 03:40:42.211163   69580 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0501 03:40:42.211227   69580 ssh_runner.go:195] Run: which crictl
	I0501 03:40:42.217794   69580 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0501 03:40:42.217835   69580 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0501 03:40:42.217873   69580 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0501 03:40:42.217917   69580 ssh_runner.go:195] Run: which crictl
	I0501 03:40:42.217873   69580 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0501 03:40:42.220250   69580 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0501 03:40:42.274880   69580 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0501 03:40:42.294354   69580 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0501 03:40:42.294436   69580 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0501 03:40:42.305191   69580 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0501 03:40:42.342502   69580 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0501 03:40:42.560474   69580 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 03:40:42.712970   69580 cache_images.go:92] duration metric: took 1.015752585s to LoadCachedImages
	W0501 03:40:42.713057   69580 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
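The block above is the LoadCachedImages pass: each required v1.20.0 image is inspected with podman, any image whose ID does not match the pinned hash is marked "needs transfer", removed with crictl, and queued to be loaded from the local cache directory, and the pass ultimately fails because the cached kube-scheduler tarball is missing on the host. The sketch below only illustrates the per-image "needs transfer" check; inspectID is a hypothetical helper, and the IDs are truncated examples.

    package main

    import "fmt"

    // needsTransfer reports whether an image must be reloaded: either it is
    // absent from the runtime or its ID differs from the expected pinned hash.
    func needsTransfer(inspectID func(image string) (string, error), image, wantID string) bool {
    	id, err := inspectID(image)
    	return err != nil || id != wantID
    }

    func main() {
    	// Hypothetical runtime state: pause image present, scheduler image missing.
    	state := map[string]string{"registry.k8s.io/pause:3.2": "80d28bedfe5d"}
    	inspect := func(img string) (string, error) {
    		if id, ok := state[img]; ok {
    			return id, nil
    		}
    		return "", fmt.Errorf("no such image: %s", img)
    	}
    	for img, want := range map[string]string{
    		"registry.k8s.io/kube-scheduler:v1.20.0": "3138b6e3d471",
    		"registry.k8s.io/pause:3.2":              "80d28bedfe5d",
    	} {
    		if needsTransfer(inspect, img, want) {
    			fmt.Println(img, "needs transfer: would crictl rmi it and load from the cache dir")
    		}
    	}
    }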
	I0501 03:40:42.713074   69580 kubeadm.go:928] updating node { 192.168.61.104 8443 v1.20.0 crio true true} ...
	I0501 03:40:42.713227   69580 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-503971 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.104
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-503971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0501 03:40:42.713323   69580 ssh_runner.go:195] Run: crio config
	I0501 03:40:42.771354   69580 cni.go:84] Creating CNI manager for ""
	I0501 03:40:42.771384   69580 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0501 03:40:42.771403   69580 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0501 03:40:42.771428   69580 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.104 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-503971 NodeName:old-k8s-version-503971 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.104"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.104 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0501 03:40:42.771644   69580 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.104
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-503971"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.104
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.104"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0501 03:40:42.771722   69580 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0501 03:40:42.784978   69580 binaries.go:44] Found k8s binaries, skipping transfer
	I0501 03:40:42.785057   69580 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0501 03:40:42.800945   69580 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0501 03:40:42.824293   69580 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0501 03:40:42.845949   69580 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0501 03:40:42.867390   69580 ssh_runner.go:195] Run: grep 192.168.61.104	control-plane.minikube.internal$ /etc/hosts
	I0501 03:40:42.872038   69580 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.104	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0501 03:40:42.890213   69580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:40:43.041533   69580 ssh_runner.go:195] Run: sudo systemctl start kubelet
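The lines just above copy the generated kubelet drop-in, unit file, and kubeadm.yaml onto the guest, add the control-plane.minikube.internal hosts entry, then reload systemd and start kubelet. A dry-run Go sketch of that final restart step follows; the run callback is a hypothetical stand-in for the SSH runner, not minikube's real code.

    package main

    import "fmt"

    // restartKubelet replays the tail of the block above: refresh systemd's view
    // of the freshly copied unit files, then start the kubelet service.
    func restartKubelet(run func(cmd string) error) error {
    	for _, cmd := range []string{"sudo systemctl daemon-reload", "sudo systemctl start kubelet"} {
    		if err := run(cmd); err != nil {
    			return fmt.Errorf("%q failed: %w", cmd, err)
    		}
    	}
    	return nil
    }

    func main() {
    	// Dry-run stand-in for the SSH runner: print instead of executing.
    	dry := func(cmd string) error { fmt.Println("would run:", cmd); return nil }
    	if err := restartKubelet(dry); err != nil {
    		fmt.Println(err)
    	}
    }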
	I0501 03:40:43.070048   69580 certs.go:68] Setting up /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971 for IP: 192.168.61.104
	I0501 03:40:43.070075   69580 certs.go:194] generating shared ca certs ...
	I0501 03:40:43.070097   69580 certs.go:226] acquiring lock for ca certs: {Name:mk85a75d14c00780d4bf09cd9bdaf0f96f1b8fce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:40:43.070315   69580 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key
	I0501 03:40:43.070388   69580 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key
	I0501 03:40:43.070419   69580 certs.go:256] generating profile certs ...
	I0501 03:40:43.070558   69580 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/client.key
	I0501 03:40:43.070631   69580 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/apiserver.key.760b883a
	I0501 03:40:43.070670   69580 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/proxy-client.key
	I0501 03:40:43.070804   69580 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem (1338 bytes)
	W0501 03:40:43.070852   69580 certs.go:480] ignoring /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724_empty.pem, impossibly tiny 0 bytes
	I0501 03:40:43.070865   69580 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem (1679 bytes)
	I0501 03:40:43.070914   69580 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem (1078 bytes)
	I0501 03:40:43.070955   69580 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem (1123 bytes)
	I0501 03:40:43.070985   69580 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem (1675 bytes)
	I0501 03:40:43.071044   69580 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem (1708 bytes)
	I0501 03:40:43.071869   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0501 03:40:43.110078   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0501 03:40:43.164382   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0501 03:40:43.197775   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0501 03:40:43.230575   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0501 03:40:43.260059   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0501 03:40:43.288704   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0501 03:40:43.315417   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0501 03:40:43.363440   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0501 03:40:43.396043   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem --> /usr/share/ca-certificates/20724.pem (1338 bytes)
	I0501 03:40:43.425997   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem --> /usr/share/ca-certificates/207242.pem (1708 bytes)
	I0501 03:40:43.456927   69580 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0501 03:40:43.478177   69580 ssh_runner.go:195] Run: openssl version
	I0501 03:40:43.484513   69580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0501 03:40:43.497230   69580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:40:43.504025   69580 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  1 02:08 /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:40:43.504112   69580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:40:43.513309   69580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0501 03:40:43.528592   69580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20724.pem && ln -fs /usr/share/ca-certificates/20724.pem /etc/ssl/certs/20724.pem"
	I0501 03:40:43.544560   69580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20724.pem
	I0501 03:40:43.550975   69580 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  1 02:20 /usr/share/ca-certificates/20724.pem
	I0501 03:40:43.551047   69580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20724.pem
	I0501 03:40:43.559214   69580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20724.pem /etc/ssl/certs/51391683.0"
	I0501 03:40:43.575362   69580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/207242.pem && ln -fs /usr/share/ca-certificates/207242.pem /etc/ssl/certs/207242.pem"
	I0501 03:40:43.587848   69580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/207242.pem
	I0501 03:40:43.593131   69580 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  1 02:20 /usr/share/ca-certificates/207242.pem
	I0501 03:40:43.593183   69580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/207242.pem
	I0501 03:40:43.600365   69580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/207242.pem /etc/ssl/certs/3ec20f2e.0"
	I0501 03:40:43.613912   69580 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0501 03:40:43.619576   69580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0501 03:40:43.628551   69580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0501 03:40:43.637418   69580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0501 03:40:43.645060   69580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0501 03:40:43.654105   69580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0501 03:40:43.663501   69580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
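Each `openssl x509 -checkend 86400` call above asks whether a control-plane certificate will still be valid 24 hours from now, presumably so expiring certificates can be regenerated before the restart proceeds. The same check expressed with Go's crypto/x509, as a rough equivalent (the certificate path is copied from the log and exists only on the CI host):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the first certificate in a PEM file expires
    // before now+window, matching `openssl x509 -checkend <seconds>`.
    func expiresWithin(path string, window time.Duration) (bool, error) {
    	raw, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(raw)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	fmt.Println("expires within 24h:", soon, "err:", err)
    }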
	I0501 03:40:43.670855   69580 kubeadm.go:391] StartCluster: {Name:old-k8s-version-503971 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-503971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.104 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 03:40:43.670937   69580 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0501 03:40:43.670982   69580 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0501 03:40:43.720350   69580 cri.go:89] found id: ""
	I0501 03:40:43.720419   69580 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0501 03:40:43.732518   69580 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0501 03:40:43.732544   69580 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0501 03:40:43.732552   69580 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0501 03:40:43.732612   69580 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0501 03:40:43.743804   69580 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0501 03:40:43.745071   69580 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-503971" does not appear in /home/jenkins/minikube-integration/18779-13391/kubeconfig
	I0501 03:40:43.745785   69580 kubeconfig.go:62] /home/jenkins/minikube-integration/18779-13391/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-503971" cluster setting kubeconfig missing "old-k8s-version-503971" context setting]
	I0501 03:40:43.747054   69580 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/kubeconfig: {Name:mk5d1131557f885800877622151c0915337cb23a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:40:43.748989   69580 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0501 03:40:43.760349   69580 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.104
	I0501 03:40:43.760389   69580 kubeadm.go:1154] stopping kube-system containers ...
	I0501 03:40:43.760403   69580 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0501 03:40:43.760473   69580 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0501 03:40:43.804745   69580 cri.go:89] found id: ""
	I0501 03:40:43.804841   69580 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0501 03:40:43.825960   69580 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0501 03:40:43.838038   69580 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0501 03:40:43.838062   69580 kubeadm.go:156] found existing configuration files:
	
	I0501 03:40:43.838115   69580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0501 03:40:43.849075   69580 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0501 03:40:43.849164   69580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0501 03:40:43.860634   69580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0501 03:40:43.871244   69580 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0501 03:40:43.871313   69580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0501 03:40:43.882184   69580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0501 03:40:43.893193   69580 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0501 03:40:43.893254   69580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0501 03:40:43.904257   69580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0501 03:40:43.915414   69580 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0501 03:40:43.915492   69580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0501 03:40:43.927372   69580 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0501 03:40:43.939117   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:44.098502   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:45.150125   69580 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.051581029s)
	I0501 03:40:45.150161   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:45.443307   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:45.563369   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:45.678620   69580 api_server.go:52] waiting for apiserver process to appear ...
	I0501 03:40:45.678731   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:46.179675   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
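After the kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd), the log settles into a roughly 500 ms poll of `sudo pgrep -xnf kube-apiserver.*minikube.*`, waiting for the apiserver process to appear. A minimal polling sketch in that spirit; the probe below is a hypothetical stand-in for the remote pgrep.

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // waitForProcess polls check every interval until it succeeds or the deadline
    // passes, which is all the repeated pgrep lines above amount to.
    func waitForProcess(check func() bool, interval, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if check() {
    			return nil
    		}
    		time.Sleep(interval)
    	}
    	return errors.New("timed out waiting for apiserver process")
    }

    func main() {
    	start := time.Now()
    	// Hypothetical probe: pretend the process shows up after ~2s.
    	probe := func() bool { return time.Since(start) > 2*time.Second }
    	fmt.Println(waitForProcess(probe, 500*time.Millisecond, 10*time.Second))
    }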
	I0501 03:40:43.419480   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:43.419952   68640 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:40:43.419980   68640 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:40:43.419907   70664 retry.go:31] will retry after 2.329320519s: waiting for machine to come up
	I0501 03:40:45.751405   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:45.751871   68640 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:40:45.751902   68640 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:40:45.751822   70664 retry.go:31] will retry after 3.262804058s: waiting for machine to come up
	I0501 03:40:45.659845   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:48.157145   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:48.013520   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:50.514729   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
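The pod_ready lines keep re-checking the Ready condition of the metrics-server pods until it flips to True or the 4-minute budget runs out. Below is a client-go sketch of that single readiness check; the kubeconfig path is a placeholder, and the pod/namespace names are simply the ones visible in the log.

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // podReady returns true when the pod's Ready condition is True, which is what
    // each pod_ready.go:102 line above is re-checking.
    func podReady(clientset kubernetes.Interface, namespace, name string) (bool, error) {
    	pod, err := clientset.CoreV1().Pods(namespace).Get(context.Background(), name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, cond := range pod.Status.Conditions {
    		if cond.Type == corev1.PodReady {
    			return cond.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	// Placeholder kubeconfig path; the test harness builds its client from the profile instead.
    	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	ready, err := podReady(kubernetes.NewForConfigOrDie(config), "kube-system", "metrics-server-569cc877fc-2btjj")
    	fmt.Println("ready:", ready, "err:", err)
    }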
	I0501 03:40:46.679449   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:47.179179   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:47.678890   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:48.179190   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:48.679276   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:49.179698   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:49.679121   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:50.179723   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:50.679603   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:51.179094   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:49.016460   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:49.016856   68640 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:40:49.016878   68640 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:40:49.016826   70664 retry.go:31] will retry after 3.440852681s: waiting for machine to come up
	I0501 03:40:52.461349   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:52.461771   68640 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:40:52.461800   68640 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:40:52.461722   70664 retry.go:31] will retry after 4.871322728s: waiting for machine to come up
	I0501 03:40:50.157703   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:52.655677   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:53.011851   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:55.510458   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:51.679850   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:52.179568   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:52.679100   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:53.179470   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:53.679115   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:54.178815   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:54.679769   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:55.179576   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:55.678864   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:56.179617   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:57.335855   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.336228   68640 main.go:141] libmachine: (no-preload-892672) Found IP for machine: 192.168.39.144
	I0501 03:40:57.336263   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has current primary IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.336281   68640 main.go:141] libmachine: (no-preload-892672) Reserving static IP address...
	I0501 03:40:57.336629   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "no-preload-892672", mac: "52:54:00:c7:6d:9a", ip: "192.168.39.144"} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:40:57.336649   68640 main.go:141] libmachine: (no-preload-892672) DBG | skip adding static IP to network mk-no-preload-892672 - found existing host DHCP lease matching {name: "no-preload-892672", mac: "52:54:00:c7:6d:9a", ip: "192.168.39.144"}
	I0501 03:40:57.336661   68640 main.go:141] libmachine: (no-preload-892672) Reserved static IP address: 192.168.39.144
	I0501 03:40:57.336671   68640 main.go:141] libmachine: (no-preload-892672) Waiting for SSH to be available...
	I0501 03:40:57.336680   68640 main.go:141] libmachine: (no-preload-892672) DBG | Getting to WaitForSSH function...
	I0501 03:40:57.338862   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.339135   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:40:57.339163   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.339268   68640 main.go:141] libmachine: (no-preload-892672) DBG | Using SSH client type: external
	I0501 03:40:57.339296   68640 main.go:141] libmachine: (no-preload-892672) DBG | Using SSH private key: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/no-preload-892672/id_rsa (-rw-------)
	I0501 03:40:57.339328   68640 main.go:141] libmachine: (no-preload-892672) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.144 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18779-13391/.minikube/machines/no-preload-892672/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0501 03:40:57.339341   68640 main.go:141] libmachine: (no-preload-892672) DBG | About to run SSH command:
	I0501 03:40:57.339370   68640 main.go:141] libmachine: (no-preload-892672) DBG | exit 0
	I0501 03:40:57.466775   68640 main.go:141] libmachine: (no-preload-892672) DBG | SSH cmd err, output: <nil>: 
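The WaitForSSH block above shells out to the external ssh client with host-key checking disabled and the machine's id_rsa key, and treats a clean `exit 0` as proof that SSH is reachable. A rough os/exec sketch of that probe; the user, host, and key path are copied from the log, the option list is trimmed, and this is not the exact command libmachine assembles.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // sshAlive runs `exit 0` on the target host through the system ssh binary,
    // mirroring the WaitForSSH probe above. Any error means SSH is not ready yet.
    func sshAlive(user, host, keyPath string) error {
    	args := []string{
    		"-o", "StrictHostKeyChecking=no",
    		"-o", "UserKnownHostsFile=/dev/null",
    		"-o", "ConnectTimeout=10",
    		"-o", "IdentitiesOnly=yes",
    		"-i", keyPath,
    		fmt.Sprintf("%s@%s", user, host),
    		"exit 0",
    	}
    	return exec.Command("ssh", args...).Run()
    }

    func main() {
    	// Values copied from the log; the key path only exists on the CI host.
    	err := sshAlive("docker", "192.168.39.144", "/home/jenkins/minikube-integration/18779-13391/.minikube/machines/no-preload-892672/id_rsa")
    	fmt.Println("ssh probe error:", err)
    }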
	I0501 03:40:57.467183   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetConfigRaw
	I0501 03:40:57.467890   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetIP
	I0501 03:40:57.470097   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.470527   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:40:57.470555   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.470767   68640 profile.go:143] Saving config to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/no-preload-892672/config.json ...
	I0501 03:40:57.470929   68640 machine.go:94] provisionDockerMachine start ...
	I0501 03:40:57.470950   68640 main.go:141] libmachine: (no-preload-892672) Calling .DriverName
	I0501 03:40:57.471177   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:40:57.473301   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.473599   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:40:57.473626   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.473724   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHPort
	I0501 03:40:57.473863   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:40:57.474032   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:40:57.474181   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHUsername
	I0501 03:40:57.474337   68640 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:57.474545   68640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I0501 03:40:57.474558   68640 main.go:141] libmachine: About to run SSH command:
	hostname
	I0501 03:40:57.591733   68640 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0501 03:40:57.591766   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetMachineName
	I0501 03:40:57.592016   68640 buildroot.go:166] provisioning hostname "no-preload-892672"
	I0501 03:40:57.592048   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetMachineName
	I0501 03:40:57.592308   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:40:57.595192   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.595593   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:40:57.595618   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.595697   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHPort
	I0501 03:40:57.595891   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:40:57.596041   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:40:57.596192   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHUsername
	I0501 03:40:57.596376   68640 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:57.596544   68640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I0501 03:40:57.596559   68640 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-892672 && echo "no-preload-892672" | sudo tee /etc/hostname
	I0501 03:40:57.727738   68640 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-892672
	
	I0501 03:40:57.727770   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:40:57.730673   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.731033   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:40:57.731066   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.731202   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHPort
	I0501 03:40:57.731383   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:40:57.731577   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:40:57.731744   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHUsername
	I0501 03:40:57.731936   68640 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:57.732155   68640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I0501 03:40:57.732173   68640 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-892672' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-892672/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-892672' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0501 03:40:57.857465   68640 main.go:141] libmachine: SSH cmd err, output: <nil>: 
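
The script above is minikube's idempotent /etc/hosts fix-up: only when no line already ends in the new hostname does it either rewrite an existing 127.0.1.1 entry or append one. A minimal Go sketch that builds the same script for an arbitrary hostname (the buildHostsCmd helper name is hypothetical and purely illustrative, not the provisioner's actual code):

	package main

	import "fmt"

	// buildHostsCmd returns the idempotent /etc/hosts fix-up seen in the log:
	// if no line already ends with the hostname, either rewrite an existing
	// 127.0.1.1 entry in place or append a new one.
	func buildHostsCmd(hostname string) string {
		script := "if ! grep -xq '.*\\s%[1]s' /etc/hosts; then\n" +
			"  if grep -xq '127.0.1.1\\s.*' /etc/hosts; then\n" +
			"    sudo sed -i 's/^127.0.1.1\\s.*/127.0.1.1 %[1]s/g' /etc/hosts;\n" +
			"  else\n" +
			"    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;\n" +
			"  fi\n" +
			"fi"
		return fmt.Sprintf(script, hostname)
	}

	func main() {
		fmt.Println(buildHostsCmd("no-preload-892672"))
	}
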
	I0501 03:40:57.857492   68640 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18779-13391/.minikube CaCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18779-13391/.minikube}
	I0501 03:40:57.857515   68640 buildroot.go:174] setting up certificates
	I0501 03:40:57.857524   68640 provision.go:84] configureAuth start
	I0501 03:40:57.857532   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetMachineName
	I0501 03:40:57.857791   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetIP
	I0501 03:40:57.860530   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.860881   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:40:57.860911   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.861035   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:40:57.863122   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.863445   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:40:57.863472   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.863565   68640 provision.go:143] copyHostCerts
	I0501 03:40:57.863614   68640 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem, removing ...
	I0501 03:40:57.863624   68640 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem
	I0501 03:40:57.863689   68640 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem (1078 bytes)
	I0501 03:40:57.863802   68640 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem, removing ...
	I0501 03:40:57.863814   68640 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem
	I0501 03:40:57.863843   68640 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem (1123 bytes)
	I0501 03:40:57.863928   68640 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem, removing ...
	I0501 03:40:57.863938   68640 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem
	I0501 03:40:57.863962   68640 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem (1675 bytes)
	I0501 03:40:57.864040   68640 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem org=jenkins.no-preload-892672 san=[127.0.0.1 192.168.39.144 localhost minikube no-preload-892672]
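
The machine's server certificate is regenerated with the SAN list shown above (loopback, the VM's DHCP address, and the machine names), so both IP- and name-based access verify against it. A rough standard-library sketch of how such a SAN set maps onto an x509 template, assuming Go's crypto/x509 (illustrative only, not the provision.go implementation):

	package main

	import (
		"crypto/x509"
		"crypto/x509/pkix"
		"math/big"
		"net"
		"time"
	)

	// serverCertTemplate builds an x509 template whose SANs cover the IPs and
	// DNS names clients may use to reach the machine, mirroring the san=[...]
	// list in the log above.
	func serverCertTemplate(org string, sans []string) *x509.Certificate {
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{org}},
			NotBefore:    time.Now().Add(-time.Hour),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		for _, san := range sans {
			if ip := net.ParseIP(san); ip != nil {
				tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
			} else {
				tmpl.DNSNames = append(tmpl.DNSNames, san)
			}
		}
		return tmpl
	}

	func main() {
		_ = serverCertTemplate("jenkins.no-preload-892672",
			[]string{"127.0.0.1", "192.168.39.144", "localhost", "minikube", "no-preload-892672"})
	}
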
	I0501 03:40:54.658003   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:56.658041   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:58.125270   68640 provision.go:177] copyRemoteCerts
	I0501 03:40:58.125321   68640 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0501 03:40:58.125342   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:40:58.127890   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:58.128299   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:40:58.128330   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:58.128469   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHPort
	I0501 03:40:58.128645   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:40:58.128809   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHUsername
	I0501 03:40:58.128941   68640 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/no-preload-892672/id_rsa Username:docker}
	I0501 03:40:58.222112   68640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0501 03:40:58.249760   68640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0501 03:40:58.277574   68640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0501 03:40:58.304971   68640 provision.go:87] duration metric: took 447.420479ms to configureAuth
	I0501 03:40:58.305017   68640 buildroot.go:189] setting minikube options for container-runtime
	I0501 03:40:58.305270   68640 config.go:182] Loaded profile config "no-preload-892672": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 03:40:58.305434   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:40:58.308098   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:58.308487   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:40:58.308528   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:58.308658   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHPort
	I0501 03:40:58.308857   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:40:58.309025   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:40:58.309173   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHUsername
	I0501 03:40:58.309354   68640 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:58.309510   68640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I0501 03:40:58.309526   68640 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0501 03:40:58.609833   68640 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0501 03:40:58.609859   68640 machine.go:97] duration metric: took 1.138916322s to provisionDockerMachine
	I0501 03:40:58.609873   68640 start.go:293] postStartSetup for "no-preload-892672" (driver="kvm2")
	I0501 03:40:58.609885   68640 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0501 03:40:58.609905   68640 main.go:141] libmachine: (no-preload-892672) Calling .DriverName
	I0501 03:40:58.610271   68640 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0501 03:40:58.610307   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:40:58.612954   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:58.613308   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:40:58.613322   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:58.613485   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHPort
	I0501 03:40:58.613683   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:40:58.613871   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHUsername
	I0501 03:40:58.614005   68640 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/no-preload-892672/id_rsa Username:docker}
	I0501 03:40:58.702752   68640 ssh_runner.go:195] Run: cat /etc/os-release
	I0501 03:40:58.707441   68640 info.go:137] Remote host: Buildroot 2023.02.9
	I0501 03:40:58.707468   68640 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13391/.minikube/addons for local assets ...
	I0501 03:40:58.707577   68640 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13391/.minikube/files for local assets ...
	I0501 03:40:58.707646   68640 filesync.go:149] local asset: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem -> 207242.pem in /etc/ssl/certs
	I0501 03:40:58.707728   68640 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0501 03:40:58.718247   68640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem --> /etc/ssl/certs/207242.pem (1708 bytes)
	I0501 03:40:58.745184   68640 start.go:296] duration metric: took 135.29943ms for postStartSetup
	I0501 03:40:58.745218   68640 fix.go:56] duration metric: took 24.96116093s for fixHost
	I0501 03:40:58.745236   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:40:58.747809   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:58.748228   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:40:58.748261   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:58.748380   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHPort
	I0501 03:40:58.748591   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:40:58.748747   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:40:58.748870   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHUsername
	I0501 03:40:58.749049   68640 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:58.749262   68640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I0501 03:40:58.749275   68640 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0501 03:40:58.867651   68640 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714534858.808639015
	
	I0501 03:40:58.867676   68640 fix.go:216] guest clock: 1714534858.808639015
	I0501 03:40:58.867686   68640 fix.go:229] Guest: 2024-05-01 03:40:58.808639015 +0000 UTC Remote: 2024-05-01 03:40:58.745221709 +0000 UTC m=+370.854832040 (delta=63.417306ms)
	I0501 03:40:58.867735   68640 fix.go:200] guest clock delta is within tolerance: 63.417306ms
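
fix.go reads the guest clock over SSH (the date output above) and only resynchronizes it when the host/guest delta exceeds a tolerance; here 63.417306ms is accepted. A small sketch of that comparison using the two timestamps from the log; the one-second tolerance is an assumption for illustration, since the log does not state the threshold:

	package main

	import (
		"fmt"
		"time"
	)

	// clockWithinTolerance reports whether the guest/host clock delta is small
	// enough to skip resynchronizing the guest clock.
	func clockWithinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta, delta <= tolerance
	}

	func main() {
		host := time.Unix(0, 1714534858745221709)  // Remote timestamp from the log
		guest := time.Unix(0, 1714534858808639015) // Guest timestamp from the log
		delta, ok := clockWithinTolerance(guest, host, time.Second)
		fmt.Printf("delta=%v within tolerance=%v\n", delta, ok) // delta=63.417306ms
	}
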
	I0501 03:40:58.867746   68640 start.go:83] releasing machines lock for "no-preload-892672", held for 25.083724737s
	I0501 03:40:58.867770   68640 main.go:141] libmachine: (no-preload-892672) Calling .DriverName
	I0501 03:40:58.868053   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetIP
	I0501 03:40:58.871193   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:58.871618   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:40:58.871664   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:58.871815   68640 main.go:141] libmachine: (no-preload-892672) Calling .DriverName
	I0501 03:40:58.872441   68640 main.go:141] libmachine: (no-preload-892672) Calling .DriverName
	I0501 03:40:58.872665   68640 main.go:141] libmachine: (no-preload-892672) Calling .DriverName
	I0501 03:40:58.872750   68640 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0501 03:40:58.872787   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:40:58.872918   68640 ssh_runner.go:195] Run: cat /version.json
	I0501 03:40:58.872946   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:40:58.875797   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:58.875976   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:58.876230   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:40:58.876341   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:58.876377   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHPort
	I0501 03:40:58.876502   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:40:58.876539   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:58.876587   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:40:58.876756   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHUsername
	I0501 03:40:58.876894   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHPort
	I0501 03:40:58.876969   68640 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/no-preload-892672/id_rsa Username:docker}
	I0501 03:40:58.877057   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:40:58.877246   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHUsername
	I0501 03:40:58.877424   68640 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/no-preload-892672/id_rsa Username:docker}
	I0501 03:40:58.983384   68640 ssh_runner.go:195] Run: systemctl --version
	I0501 03:40:58.991625   68640 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0501 03:40:59.143916   68640 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0501 03:40:59.151065   68640 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0501 03:40:59.151124   68640 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0501 03:40:59.168741   68640 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0501 03:40:59.168763   68640 start.go:494] detecting cgroup driver to use...
	I0501 03:40:59.168825   68640 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0501 03:40:59.188524   68640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 03:40:59.205602   68640 docker.go:217] disabling cri-docker service (if available) ...
	I0501 03:40:59.205668   68640 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0501 03:40:59.221173   68640 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0501 03:40:59.236546   68640 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0501 03:40:59.364199   68640 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0501 03:40:59.533188   68640 docker.go:233] disabling docker service ...
	I0501 03:40:59.533266   68640 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0501 03:40:59.549488   68640 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0501 03:40:59.562910   68640 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0501 03:40:59.705451   68640 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0501 03:40:59.843226   68640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0501 03:40:59.858878   68640 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 03:40:59.882729   68640 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0501 03:40:59.882808   68640 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:59.895678   68640 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0501 03:40:59.895763   68640 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:59.908439   68640 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:59.921319   68640 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:59.934643   68640 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0501 03:40:59.947416   68640 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:59.959887   68640 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:59.981849   68640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
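
The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf so CRI-O uses the registry.k8s.io/pause:3.9 pause image, the cgroupfs cgroup manager, a "pod" conmon cgroup, and a default_sysctls entry for net.ipv4.ip_unprivileged_port_start=0. A sketch of the first two substitutions done in-process with Go regexps (illustrative only; the real flow shells out to sed exactly as logged):

	package main

	import (
		"fmt"
		"regexp"
	)

	// applyCrioOverrides mirrors two of the sed edits from the log: force the
	// pause image and the cgroupfs cgroup manager in a crio drop-in config.
	func applyCrioOverrides(conf string) string {
		pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
		cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
		conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
		conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
		return conf
	}

	func main() {
		in := "pause_image = \"registry.k8s.io/pause:3.8\"\ncgroup_manager = \"systemd\"\n"
		fmt.Print(applyCrioOverrides(in))
	}
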
	I0501 03:40:59.994646   68640 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0501 03:41:00.006059   68640 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0501 03:41:00.006133   68640 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0501 03:41:00.024850   68640 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0501 03:41:00.036834   68640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:41:00.161283   68640 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0501 03:41:00.312304   68640 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0501 03:41:00.312375   68640 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0501 03:41:00.317980   68640 start.go:562] Will wait 60s for crictl version
	I0501 03:41:00.318043   68640 ssh_runner.go:195] Run: which crictl
	I0501 03:41:00.322780   68640 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0501 03:41:00.362830   68640 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0501 03:41:00.362920   68640 ssh_runner.go:195] Run: crio --version
	I0501 03:41:00.399715   68640 ssh_runner.go:195] Run: crio --version
	I0501 03:41:00.432510   68640 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0501 03:40:57.511719   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:00.013693   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:56.679034   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:57.179062   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:57.679579   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:58.179221   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:58.679728   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:59.178851   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:59.679647   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:00.179397   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:00.678839   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:01.179679   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:00.433777   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetIP
	I0501 03:41:00.436557   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:41:00.436892   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:41:00.436920   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:41:00.437124   68640 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0501 03:41:00.441861   68640 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0501 03:41:00.455315   68640 kubeadm.go:877] updating cluster {Name:no-preload-892672 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:no-preload-892672 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.144 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0501 03:41:00.455417   68640 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0501 03:41:00.455462   68640 ssh_runner.go:195] Run: sudo crictl images --output json
	I0501 03:41:00.496394   68640 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0501 03:41:00.496422   68640 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.0 registry.k8s.io/kube-controller-manager:v1.30.0 registry.k8s.io/kube-scheduler:v1.30.0 registry.k8s.io/kube-proxy:v1.30.0 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0501 03:41:00.496508   68640 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 03:41:00.496532   68640 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0501 03:41:00.496551   68640 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0
	I0501 03:41:00.496581   68640 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0501 03:41:00.496679   68640 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0
	I0501 03:41:00.496701   68640 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0501 03:41:00.496736   68640 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0501 03:41:00.496529   68640 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0
	I0501 03:41:00.498207   68640 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0
	I0501 03:41:00.498227   68640 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0501 03:41:00.498246   68640 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0
	I0501 03:41:00.498250   68640 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0
	I0501 03:41:00.498270   68640 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0501 03:41:00.498254   68640 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0501 03:41:00.498298   68640 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0501 03:41:00.498477   68640 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 03:41:00.617430   68640 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.0
	I0501 03:41:00.621346   68640 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.0
	I0501 03:41:00.622759   68640 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0501 03:41:00.628313   68640 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.0
	I0501 03:41:00.629087   68640 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0501 03:41:00.633625   68640 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0501 03:41:00.652130   68640 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.0
	I0501 03:41:00.722500   68640 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.0" does not exist at hash "c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0" in container runtime
	I0501 03:41:00.722554   68640 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.0
	I0501 03:41:00.722623   68640 ssh_runner.go:195] Run: which crictl
	I0501 03:41:00.796476   68640 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.0" does not exist at hash "259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced" in container runtime
	I0501 03:41:00.796530   68640 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.0
	I0501 03:41:00.796580   68640 ssh_runner.go:195] Run: which crictl
	I0501 03:41:00.944235   68640 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.0" does not exist at hash "c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b" in container runtime
	I0501 03:41:00.944262   68640 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0501 03:41:00.944289   68640 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0501 03:41:00.944297   68640 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0501 03:41:00.944305   68640 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0501 03:41:00.944325   68640 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0501 03:41:00.944344   68640 ssh_runner.go:195] Run: which crictl
	I0501 03:41:00.944357   68640 ssh_runner.go:195] Run: which crictl
	I0501 03:41:00.944398   68640 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.0" needs transfer: "registry.k8s.io/kube-proxy:v1.30.0" does not exist at hash "a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b" in container runtime
	I0501 03:41:00.944348   68640 ssh_runner.go:195] Run: which crictl
	I0501 03:41:00.944434   68640 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.0
	I0501 03:41:00.944422   68640 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0
	I0501 03:41:00.944452   68640 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0
	I0501 03:41:00.944464   68640 ssh_runner.go:195] Run: which crictl
	I0501 03:41:00.998765   68640 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.0
	I0501 03:41:00.998791   68640 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0
	I0501 03:41:00.998846   68640 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0501 03:41:00.998891   68640 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0501 03:41:01.017469   68640 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.0
	I0501 03:41:01.017494   68640 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0
	I0501 03:41:01.017584   68640 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0501 03:41:01.018040   68640 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0501 03:41:01.105445   68640 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0501 03:41:01.105517   68640 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0
	I0501 03:41:01.105560   68640 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0
	I0501 03:41:01.105583   68640 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.0 (exists)
	I0501 03:41:01.105595   68640 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0501 03:41:01.105635   68640 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0501 03:41:01.105645   68640 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0501 03:41:01.105734   68640 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.0 (exists)
	I0501 03:41:01.105814   68640 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0
	I0501 03:41:01.105888   68640 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.0
	I0501 03:41:01.120943   68640 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0501 03:41:01.121044   68640 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0501 03:41:01.127975   68640 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0501 03:41:01.359381   68640 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 03:40:59.156924   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:01.659307   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:03.661498   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:02.511652   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:05.011220   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:01.679527   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:02.179435   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:02.679626   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:03.179351   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:03.679618   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:04.179426   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:04.678853   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:05.179143   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:05.679065   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:06.179513   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:04.315680   68640 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.0: (3.210016587s)
	I0501 03:41:04.315725   68640 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.0 (exists)
	I0501 03:41:04.315756   68640 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.0: (3.209843913s)
	I0501 03:41:04.315784   68640 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1: (3.194721173s)
	I0501 03:41:04.315799   68640 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0: (3.210139611s)
	I0501 03:41:04.315812   68640 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.0 (exists)
	I0501 03:41:04.315813   68640 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0501 03:41:04.315813   68640 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0 from cache
	I0501 03:41:04.315844   68640 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.956432506s)
	I0501 03:41:04.315859   68640 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0501 03:41:04.315902   68640 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0501 03:41:04.315905   68640 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0501 03:41:04.315927   68640 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 03:41:04.315962   68640 ssh_runner.go:195] Run: which crictl
	I0501 03:41:05.691351   68640 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0: (1.375419764s)
	I0501 03:41:05.691394   68640 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0 from cache
	I0501 03:41:05.691418   68640 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0501 03:41:05.691467   68640 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0501 03:41:05.691477   68640 ssh_runner.go:235] Completed: which crictl: (1.375499162s)
	I0501 03:41:05.691529   68640 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 03:41:06.159381   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:08.659756   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:07.012126   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:09.511459   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:06.679246   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:07.179629   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:07.679601   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:08.179634   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:08.679603   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:09.179675   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:09.678837   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:10.178860   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:10.679638   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:11.179802   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:09.757005   68640 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (4.065509843s)
	I0501 03:41:09.757044   68640 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0501 03:41:09.757079   68640 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0501 03:41:09.757093   68640 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (4.065539206s)
	I0501 03:41:09.757137   68640 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0501 03:41:09.757158   68640 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0501 03:41:09.757222   68640 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0501 03:41:12.125691   68640 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0: (2.368504788s)
	I0501 03:41:12.125729   68640 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0 from cache
	I0501 03:41:12.125726   68640 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.368475622s)
	I0501 03:41:12.125755   68640 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0501 03:41:12.125754   68640 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.0
	I0501 03:41:12.125817   68640 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0
	I0501 03:41:11.157019   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:13.157632   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:11.513027   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:14.013463   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:11.679355   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:12.178847   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:12.679660   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:13.179641   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:13.678808   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:14.178955   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:14.679651   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:15.179623   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:15.678862   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:16.179775   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:14.315765   68640 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0: (2.18991878s)
	I0501 03:41:14.315791   68640 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0 from cache
	I0501 03:41:14.315835   68640 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0501 03:41:14.315911   68640 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0501 03:41:16.401221   68640 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.085281928s)
	I0501 03:41:16.401261   68640 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0501 03:41:16.401291   68640 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0501 03:41:16.401335   68640 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0501 03:41:17.152926   68640 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0501 03:41:17.152969   68640 cache_images.go:123] Successfully loaded all cached images
	I0501 03:41:17.152976   68640 cache_images.go:92] duration metric: took 16.656540612s to LoadCachedImages
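
The block above is the no-preload path: because no preloaded tarball matches v1.30.0, each required image is inspected in the runtime, removed when its expected hash is absent, and loaded from the local cache with podman load. A condensed sketch of that per-image step using the same CLI commands seen in the log (error handling simplified; this is not the actual cache_images.go code):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// loadCachedImage removes a stale image from the runtime and loads the
	// cached tarball with podman, as the log above does for each image.
	func loadCachedImage(image, tarball string) error {
		// Ignore the rmi error: the image may simply not be present yet.
		_ = exec.Command("sudo", "/usr/bin/crictl", "rmi", image).Run()
		if out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput(); err != nil {
			return fmt.Errorf("podman load %s: %v: %s", tarball, err, out)
		}
		return nil
	}

	func main() {
		err := loadCachedImage("registry.k8s.io/kube-apiserver:v1.30.0",
			"/var/lib/minikube/images/kube-apiserver_v1.30.0")
		fmt.Println("load result:", err)
	}
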
	I0501 03:41:17.152989   68640 kubeadm.go:928] updating node { 192.168.39.144 8443 v1.30.0 crio true true} ...
	I0501 03:41:17.153119   68640 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-892672 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.144
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:no-preload-892672 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0501 03:41:17.153241   68640 ssh_runner.go:195] Run: crio config
	I0501 03:41:17.207153   68640 cni.go:84] Creating CNI manager for ""
	I0501 03:41:17.207181   68640 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0501 03:41:17.207196   68640 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0501 03:41:17.207225   68640 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.144 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-892672 NodeName:no-preload-892672 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.144"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.144 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0501 03:41:17.207407   68640 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.144
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-892672"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.144
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.144"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0501 03:41:17.207488   68640 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0501 03:41:17.221033   68640 binaries.go:44] Found k8s binaries, skipping transfer
	I0501 03:41:17.221099   68640 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0501 03:41:17.232766   68640 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0501 03:41:17.252543   68640 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0501 03:41:17.272030   68640 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0501 03:41:17.291541   68640 ssh_runner.go:195] Run: grep 192.168.39.144	control-plane.minikube.internal$ /etc/hosts
	I0501 03:41:17.295801   68640 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.144	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0501 03:41:17.309880   68640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:41:17.432917   68640 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 03:41:17.452381   68640 certs.go:68] Setting up /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/no-preload-892672 for IP: 192.168.39.144
	I0501 03:41:17.452406   68640 certs.go:194] generating shared ca certs ...
	I0501 03:41:17.452425   68640 certs.go:226] acquiring lock for ca certs: {Name:mk85a75d14c00780d4bf09cd9bdaf0f96f1b8fce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:41:17.452606   68640 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key
	I0501 03:41:17.452655   68640 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key
	I0501 03:41:17.452669   68640 certs.go:256] generating profile certs ...
	I0501 03:41:17.452746   68640 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/no-preload-892672/client.key
	I0501 03:41:17.452809   68640 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/no-preload-892672/apiserver.key.3644a8af
	I0501 03:41:17.452848   68640 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/no-preload-892672/proxy-client.key
	I0501 03:41:17.452963   68640 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem (1338 bytes)
	W0501 03:41:17.453007   68640 certs.go:480] ignoring /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724_empty.pem, impossibly tiny 0 bytes
	I0501 03:41:17.453021   68640 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem (1679 bytes)
	I0501 03:41:17.453050   68640 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem (1078 bytes)
	I0501 03:41:17.453083   68640 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem (1123 bytes)
	I0501 03:41:17.453116   68640 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem (1675 bytes)
	I0501 03:41:17.453166   68640 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem (1708 bytes)
	I0501 03:41:17.453767   68640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0501 03:41:17.490616   68640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0501 03:41:17.545217   68640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0501 03:41:17.576908   68640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0501 03:41:17.607371   68640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/no-preload-892672/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0501 03:41:17.657675   68640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/no-preload-892672/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0501 03:41:17.684681   68640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/no-preload-892672/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0501 03:41:17.716319   68640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/no-preload-892672/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0501 03:41:17.745731   68640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0501 03:41:17.770939   68640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem --> /usr/share/ca-certificates/20724.pem (1338 bytes)
	I0501 03:41:17.796366   68640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem --> /usr/share/ca-certificates/207242.pem (1708 bytes)
	I0501 03:41:17.823301   68640 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0501 03:41:17.841496   68640 ssh_runner.go:195] Run: openssl version
	I0501 03:41:17.848026   68640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0501 03:41:17.860734   68640 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:41:17.865978   68640 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  1 02:08 /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:41:17.866037   68640 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:41:17.872644   68640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0501 03:41:17.886241   68640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20724.pem && ln -fs /usr/share/ca-certificates/20724.pem /etc/ssl/certs/20724.pem"
	I0501 03:41:17.899619   68640 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20724.pem
	I0501 03:41:17.904664   68640 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  1 02:20 /usr/share/ca-certificates/20724.pem
	I0501 03:41:17.904701   68640 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20724.pem
	I0501 03:41:17.910799   68640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20724.pem /etc/ssl/certs/51391683.0"
	I0501 03:41:17.923007   68640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/207242.pem && ln -fs /usr/share/ca-certificates/207242.pem /etc/ssl/certs/207242.pem"
	I0501 03:41:15.657403   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:18.156777   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:16.511834   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:18.512735   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:20.513144   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:16.679614   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:17.179604   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:17.679100   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:18.179166   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:18.679202   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:19.179631   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:19.679583   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:20.179584   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:20.679493   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:21.178945   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:17.935647   68640 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/207242.pem
	I0501 03:41:17.942147   68640 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  1 02:20 /usr/share/ca-certificates/207242.pem
	I0501 03:41:17.942187   68640 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/207242.pem
	I0501 03:41:17.948468   68640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/207242.pem /etc/ssl/certs/3ec20f2e.0"
	I0501 03:41:17.962737   68640 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0501 03:41:17.968953   68640 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0501 03:41:17.975849   68640 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0501 03:41:17.982324   68640 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0501 03:41:17.988930   68640 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0501 03:41:17.995221   68640 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0501 03:41:18.001868   68640 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
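The "openssl x509 -noout -in <cert> -checkend 86400" runs above verify that each control-plane certificate remains valid for at least another 24 hours before the cluster restart proceeds. A minimal Go equivalent of that check, using only the standard library and a hypothetical certificate path, might look like this:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path expires
// within the given duration, mirroring `openssl x509 -checkend`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block found in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// Hypothetical path; the test checks several certs under /var/lib/minikube/certs.
	expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", expiring)
}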
	I0501 03:41:18.008701   68640 kubeadm.go:391] StartCluster: {Name:no-preload-892672 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.0 ClusterName:no-preload-892672 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.144 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 03:41:18.008831   68640 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0501 03:41:18.008893   68640 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0501 03:41:18.056939   68640 cri.go:89] found id: ""
	I0501 03:41:18.057005   68640 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0501 03:41:18.070898   68640 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0501 03:41:18.070921   68640 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0501 03:41:18.070926   68640 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0501 03:41:18.070968   68640 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0501 03:41:18.083907   68640 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0501 03:41:18.085116   68640 kubeconfig.go:125] found "no-preload-892672" server: "https://192.168.39.144:8443"
	I0501 03:41:18.088582   68640 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0501 03:41:18.101426   68640 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.144
	I0501 03:41:18.101471   68640 kubeadm.go:1154] stopping kube-system containers ...
	I0501 03:41:18.101493   68640 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0501 03:41:18.101543   68640 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0501 03:41:18.153129   68640 cri.go:89] found id: ""
	I0501 03:41:18.153193   68640 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0501 03:41:18.173100   68640 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0501 03:41:18.188443   68640 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0501 03:41:18.188463   68640 kubeadm.go:156] found existing configuration files:
	
	I0501 03:41:18.188509   68640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0501 03:41:18.202153   68640 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0501 03:41:18.202204   68640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0501 03:41:18.215390   68640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0501 03:41:18.227339   68640 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0501 03:41:18.227404   68640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0501 03:41:18.239160   68640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0501 03:41:18.251992   68640 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0501 03:41:18.252053   68640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0501 03:41:18.265088   68640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0501 03:41:18.277922   68640 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0501 03:41:18.277983   68640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0501 03:41:18.291307   68640 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0501 03:41:18.304879   68640 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:41:18.417921   68640 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:41:19.350848   68640 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:41:19.586348   68640 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:41:19.761056   68640 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:41:19.867315   68640 api_server.go:52] waiting for apiserver process to appear ...
	I0501 03:41:19.867435   68640 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:20.368520   68640 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:20.868444   68640 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:20.913411   68640 api_server.go:72] duration metric: took 1.046095165s to wait for apiserver process to appear ...
	I0501 03:41:20.913444   68640 api_server.go:88] waiting for apiserver healthz status ...
	I0501 03:41:20.913469   68640 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0501 03:41:20.914000   68640 api_server.go:269] stopped: https://192.168.39.144:8443/healthz: Get "https://192.168.39.144:8443/healthz": dial tcp 192.168.39.144:8443: connect: connection refused
	I0501 03:41:21.414544   68640 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0501 03:41:20.658333   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:23.157298   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:23.011395   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:25.012164   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:21.678785   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:22.179435   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:22.679615   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:23.179610   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:23.679473   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:24.179613   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:24.679672   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:25.179400   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:25.679793   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:26.179809   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:24.166756   68640 api_server.go:279] https://192.168.39.144:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0501 03:41:24.166786   68640 api_server.go:103] status: https://192.168.39.144:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0501 03:41:24.166807   68640 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0501 03:41:24.205679   68640 api_server.go:279] https://192.168.39.144:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0501 03:41:24.205713   68640 api_server.go:103] status: https://192.168.39.144:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0501 03:41:24.414055   68640 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0501 03:41:24.420468   68640 api_server.go:279] https://192.168.39.144:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:41:24.420502   68640 api_server.go:103] status: https://192.168.39.144:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:41:24.914021   68640 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0501 03:41:24.919717   68640 api_server.go:279] https://192.168.39.144:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:41:24.919754   68640 api_server.go:103] status: https://192.168.39.144:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:41:25.414015   68640 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0501 03:41:25.422149   68640 api_server.go:279] https://192.168.39.144:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:41:25.422180   68640 api_server.go:103] status: https://192.168.39.144:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:41:25.913751   68640 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0501 03:41:25.917839   68640 api_server.go:279] https://192.168.39.144:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:41:25.917865   68640 api_server.go:103] status: https://192.168.39.144:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:41:26.414458   68640 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0501 03:41:26.419346   68640 api_server.go:279] https://192.168.39.144:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:41:26.419367   68640 api_server.go:103] status: https://192.168.39.144:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:41:26.913912   68640 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0501 03:41:26.918504   68640 api_server.go:279] https://192.168.39.144:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:41:26.918537   68640 api_server.go:103] status: https://192.168.39.144:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:41:27.413693   68640 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0501 03:41:27.421752   68640 api_server.go:279] https://192.168.39.144:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:41:27.421776   68640 api_server.go:103] status: https://192.168.39.144:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:41:27.913582   68640 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0501 03:41:27.918116   68640 api_server.go:279] https://192.168.39.144:8443/healthz returned 200:
	ok
	I0501 03:41:27.927764   68640 api_server.go:141] control plane version: v1.30.0
	I0501 03:41:27.927790   68640 api_server.go:131] duration metric: took 7.014339409s to wait for apiserver health ...
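The 403/500/200 sequence above is the test polling https://192.168.39.144:8443/healthz roughly every 500ms until the restarted apiserver reports ok. A stripped-down sketch of that kind of retry loop (not minikube's api_server.go; TLS verification is skipped here purely for illustration, whereas the real code trusts the cluster CA) is:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the deadline passes.
func waitForHealthz(url string, timeout time.Duration) error {
	// InsecureSkipVerify only because this is a throwaway sketch.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			// Until bootstrap post-start hooks finish, healthz keeps returning 500.
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.39.144:8443/healthz", 2*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("apiserver healthy")
}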
	I0501 03:41:27.927799   68640 cni.go:84] Creating CNI manager for ""
	I0501 03:41:27.927805   68640 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0501 03:41:27.929889   68640 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0501 03:41:27.931210   68640 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0501 03:41:25.158177   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:27.656879   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:27.511692   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:30.010468   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:26.679430   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:27.179043   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:27.678801   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:28.179629   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:28.679111   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:29.179599   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:29.679624   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:30.179585   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:30.679442   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:31.179530   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:27.945852   68640 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0501 03:41:27.968311   68640 system_pods.go:43] waiting for kube-system pods to appear ...
	I0501 03:41:27.981571   68640 system_pods.go:59] 8 kube-system pods found
	I0501 03:41:27.981609   68640 system_pods.go:61] "coredns-7db6d8ff4d-v8bqq" [bf389521-9f19-4f2b-83a5-6d469c7ce0fe] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0501 03:41:27.981615   68640 system_pods.go:61] "etcd-no-preload-892672" [108fce6d-03f3-4bb9-a410-a58c58e8f186] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0501 03:41:27.981621   68640 system_pods.go:61] "kube-apiserver-no-preload-892672" [a18b7242-1865-4a67-aab6-c6cc19552326] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0501 03:41:27.981629   68640 system_pods.go:61] "kube-controller-manager-no-preload-892672" [318d39e1-5265-42e5-a3d5-4408b7b73542] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0501 03:41:27.981636   68640 system_pods.go:61] "kube-proxy-dwvdl" [f7a97598-aaa1-4df5-8d6a-8f6286568ad6] Running
	I0501 03:41:27.981642   68640 system_pods.go:61] "kube-scheduler-no-preload-892672" [cbf1c183-16df-42c8-b1c8-b9adf3c25a7a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0501 03:41:27.981647   68640 system_pods.go:61] "metrics-server-569cc877fc-k8jnl" [1dd0fb29-4d90-41c8-9de2-d163eeb0247b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0501 03:41:27.981651   68640 system_pods.go:61] "storage-provisioner" [fc703ab1-f14b-4766-8ee2-a43477d3df21] Running
	I0501 03:41:27.981657   68640 system_pods.go:74] duration metric: took 13.322893ms to wait for pod list to return data ...
	I0501 03:41:27.981667   68640 node_conditions.go:102] verifying NodePressure condition ...
	I0501 03:41:27.985896   68640 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 03:41:27.985931   68640 node_conditions.go:123] node cpu capacity is 2
	I0501 03:41:27.985944   68640 node_conditions.go:105] duration metric: took 4.271726ms to run NodePressure ...
	I0501 03:41:27.985966   68640 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:41:28.269675   68640 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0501 03:41:28.276487   68640 kubeadm.go:733] kubelet initialised
	I0501 03:41:28.276512   68640 kubeadm.go:734] duration metric: took 6.808875ms waiting for restarted kubelet to initialise ...
	I0501 03:41:28.276522   68640 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 03:41:28.287109   68640 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-v8bqq" in "kube-system" namespace to be "Ready" ...
	I0501 03:41:28.297143   68640 pod_ready.go:97] node "no-preload-892672" hosting pod "coredns-7db6d8ff4d-v8bqq" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-892672" has status "Ready":"False"
	I0501 03:41:28.297185   68640 pod_ready.go:81] duration metric: took 10.040841ms for pod "coredns-7db6d8ff4d-v8bqq" in "kube-system" namespace to be "Ready" ...
	E0501 03:41:28.297198   68640 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-892672" hosting pod "coredns-7db6d8ff4d-v8bqq" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-892672" has status "Ready":"False"
	I0501 03:41:28.297206   68640 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	I0501 03:41:28.307648   68640 pod_ready.go:97] node "no-preload-892672" hosting pod "etcd-no-preload-892672" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-892672" has status "Ready":"False"
	I0501 03:41:28.307682   68640 pod_ready.go:81] duration metric: took 10.464199ms for pod "etcd-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	E0501 03:41:28.307695   68640 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-892672" hosting pod "etcd-no-preload-892672" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-892672" has status "Ready":"False"
	I0501 03:41:28.307707   68640 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	I0501 03:41:30.319652   68640 pod_ready.go:102] pod "kube-apiserver-no-preload-892672" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:32.821375   68640 pod_ready.go:102] pod "kube-apiserver-no-preload-892672" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:29.657167   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:32.157549   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:32.012009   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:34.511543   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:31.679423   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:32.179628   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:32.679456   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:33.179336   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:33.679221   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:34.178900   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:34.679236   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:35.179595   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:35.679520   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:36.179639   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:35.317202   68640 pod_ready.go:102] pod "kube-apiserver-no-preload-892672" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:37.318125   68640 pod_ready.go:92] pod "kube-apiserver-no-preload-892672" in "kube-system" namespace has status "Ready":"True"
	I0501 03:41:37.318157   68640 pod_ready.go:81] duration metric: took 9.010440772s for pod "kube-apiserver-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	I0501 03:41:37.318170   68640 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	I0501 03:41:37.327390   68640 pod_ready.go:92] pod "kube-controller-manager-no-preload-892672" in "kube-system" namespace has status "Ready":"True"
	I0501 03:41:37.327412   68640 pod_ready.go:81] duration metric: took 9.233689ms for pod "kube-controller-manager-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	I0501 03:41:37.327425   68640 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-dwvdl" in "kube-system" namespace to be "Ready" ...
	I0501 03:41:37.333971   68640 pod_ready.go:92] pod "kube-proxy-dwvdl" in "kube-system" namespace has status "Ready":"True"
	I0501 03:41:37.333994   68640 pod_ready.go:81] duration metric: took 6.561014ms for pod "kube-proxy-dwvdl" in "kube-system" namespace to be "Ready" ...
	I0501 03:41:37.334006   68640 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	I0501 03:41:37.338637   68640 pod_ready.go:92] pod "kube-scheduler-no-preload-892672" in "kube-system" namespace has status "Ready":"True"
	I0501 03:41:37.338657   68640 pod_ready.go:81] duration metric: took 4.644395ms for pod "kube-scheduler-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	I0501 03:41:37.338665   68640 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace to be "Ready" ...
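The pod_ready.go entries above record minikube polling each control-plane pod in the kube-system namespace for a Ready condition, with a 4m0s per-pod timeout and a duration metric logged once the condition is met. As a rough illustration only (assumed kubeconfig path, namespace and pod name; this is not minikube's actual pod_ready.go), a client-go polling loop of that shape looks like:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumed kubeconfig location; adjust for the cluster under test.
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Poll for up to 4 minutes, mirroring the "waiting up to 4m0s" entries above.
	start := time.Now()
	for time.Since(start) < 4*time.Minute {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-no-preload-892672", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Printf("pod Ready after %s\n", time.Since(start))
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}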
	I0501 03:41:34.657958   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:36.658191   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:36.512234   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:39.012636   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:36.678883   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:37.179198   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:37.679101   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:38.179088   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:38.679354   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:39.179163   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:39.678809   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:40.179768   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:40.679046   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:41.179618   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:39.346054   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:41.346434   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:39.157142   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:41.656902   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:41.510939   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:43.511571   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:45.511959   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:41.679751   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:42.178848   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:42.679525   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:43.179706   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:43.679665   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:44.179053   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:44.679615   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:45.178830   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:45.679547   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:41:45.679620   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:41:45.718568   69580 cri.go:89] found id: ""
	I0501 03:41:45.718597   69580 logs.go:276] 0 containers: []
	W0501 03:41:45.718611   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:41:45.718619   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:41:45.718678   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:41:45.755572   69580 cri.go:89] found id: ""
	I0501 03:41:45.755596   69580 logs.go:276] 0 containers: []
	W0501 03:41:45.755604   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:41:45.755609   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:41:45.755654   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:41:45.793411   69580 cri.go:89] found id: ""
	I0501 03:41:45.793440   69580 logs.go:276] 0 containers: []
	W0501 03:41:45.793450   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:41:45.793458   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:41:45.793526   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:41:45.834547   69580 cri.go:89] found id: ""
	I0501 03:41:45.834572   69580 logs.go:276] 0 containers: []
	W0501 03:41:45.834579   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:41:45.834585   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:41:45.834668   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:41:45.873293   69580 cri.go:89] found id: ""
	I0501 03:41:45.873321   69580 logs.go:276] 0 containers: []
	W0501 03:41:45.873332   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:41:45.873348   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:41:45.873411   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:41:45.911703   69580 cri.go:89] found id: ""
	I0501 03:41:45.911734   69580 logs.go:276] 0 containers: []
	W0501 03:41:45.911745   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:41:45.911766   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:41:45.911826   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:41:45.949577   69580 cri.go:89] found id: ""
	I0501 03:41:45.949602   69580 logs.go:276] 0 containers: []
	W0501 03:41:45.949610   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:41:45.949616   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:41:45.949666   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:41:45.986174   69580 cri.go:89] found id: ""
	I0501 03:41:45.986199   69580 logs.go:276] 0 containers: []
	W0501 03:41:45.986207   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:41:45.986216   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:41:45.986228   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:41:46.041028   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:41:46.041064   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:41:46.057097   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:41:46.057126   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:41:46.195021   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:41:46.195042   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:41:46.195055   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:41:46.261153   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:41:46.261197   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
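The cri.go/logs.go cycle above shows a control plane that is still down: each crictl scan for kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet and kubernetes-dashboard finds no containers, so minikube falls back to gathering kubelet, dmesg, describe-nodes, CRI-O and container-status logs (describe nodes fails because nothing is listening on localhost:8443). As a rough, assumed sketch only (not minikube's ssh_runner, which executes these commands over SSH on the guest), the same kind of crictl scan driven locally from Go:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs runs `sudo crictl ps -a --quiet --name=<name>` and returns the
// container IDs it prints, one per line (an empty slice when none are found).
// Requires crictl on PATH and passwordless sudo on the target host.
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	// The same component names the log cycle above scans for.
	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"} {
		ids, err := listContainerIDs(name)
		if err != nil {
			fmt.Printf("scan for %q failed: %v\n", name, err)
			continue
		}
		if len(ids) == 0 {
			fmt.Printf("no container found matching %q\n", name)
			continue
		}
		fmt.Printf("%q containers: %v\n", name, ids)
	}
}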
	I0501 03:41:43.845096   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:45.845950   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:47.849620   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:44.157041   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:46.158028   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:48.658062   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:48.011975   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:50.512345   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:48.809274   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:48.824295   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:41:48.824369   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:41:48.869945   69580 cri.go:89] found id: ""
	I0501 03:41:48.869975   69580 logs.go:276] 0 containers: []
	W0501 03:41:48.869985   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:41:48.869993   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:41:48.870053   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:41:48.918088   69580 cri.go:89] found id: ""
	I0501 03:41:48.918113   69580 logs.go:276] 0 containers: []
	W0501 03:41:48.918122   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:41:48.918131   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:41:48.918190   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:41:48.958102   69580 cri.go:89] found id: ""
	I0501 03:41:48.958132   69580 logs.go:276] 0 containers: []
	W0501 03:41:48.958143   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:41:48.958149   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:41:48.958207   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:41:48.997163   69580 cri.go:89] found id: ""
	I0501 03:41:48.997194   69580 logs.go:276] 0 containers: []
	W0501 03:41:48.997211   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:41:48.997218   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:41:48.997284   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:41:49.040132   69580 cri.go:89] found id: ""
	I0501 03:41:49.040156   69580 logs.go:276] 0 containers: []
	W0501 03:41:49.040164   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:41:49.040170   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:41:49.040228   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:41:49.079680   69580 cri.go:89] found id: ""
	I0501 03:41:49.079712   69580 logs.go:276] 0 containers: []
	W0501 03:41:49.079724   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:41:49.079732   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:41:49.079790   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:41:49.120577   69580 cri.go:89] found id: ""
	I0501 03:41:49.120610   69580 logs.go:276] 0 containers: []
	W0501 03:41:49.120623   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:41:49.120630   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:41:49.120700   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:41:49.167098   69580 cri.go:89] found id: ""
	I0501 03:41:49.167123   69580 logs.go:276] 0 containers: []
	W0501 03:41:49.167133   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:41:49.167141   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:41:49.167152   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:41:49.242834   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:41:49.242868   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:41:49.264011   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:41:49.264033   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:41:49.367711   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:41:49.367739   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:41:49.367764   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:41:49.441925   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:41:49.441964   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:41:50.346009   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:52.346333   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:51.156287   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:53.657588   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:53.010720   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:55.012329   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:51.986536   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:52.001651   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:41:52.001734   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:41:52.039550   69580 cri.go:89] found id: ""
	I0501 03:41:52.039571   69580 logs.go:276] 0 containers: []
	W0501 03:41:52.039579   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:41:52.039584   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:41:52.039636   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:41:52.082870   69580 cri.go:89] found id: ""
	I0501 03:41:52.082892   69580 logs.go:276] 0 containers: []
	W0501 03:41:52.082900   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:41:52.082905   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:41:52.082949   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:41:52.126970   69580 cri.go:89] found id: ""
	I0501 03:41:52.126996   69580 logs.go:276] 0 containers: []
	W0501 03:41:52.127009   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:41:52.127014   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:41:52.127076   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:41:52.169735   69580 cri.go:89] found id: ""
	I0501 03:41:52.169761   69580 logs.go:276] 0 containers: []
	W0501 03:41:52.169769   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:41:52.169774   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:41:52.169826   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:41:52.207356   69580 cri.go:89] found id: ""
	I0501 03:41:52.207392   69580 logs.go:276] 0 containers: []
	W0501 03:41:52.207404   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:41:52.207412   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:41:52.207472   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:41:52.250074   69580 cri.go:89] found id: ""
	I0501 03:41:52.250102   69580 logs.go:276] 0 containers: []
	W0501 03:41:52.250113   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:41:52.250121   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:41:52.250180   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:41:52.290525   69580 cri.go:89] found id: ""
	I0501 03:41:52.290550   69580 logs.go:276] 0 containers: []
	W0501 03:41:52.290558   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:41:52.290564   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:41:52.290610   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:41:52.336058   69580 cri.go:89] found id: ""
	I0501 03:41:52.336084   69580 logs.go:276] 0 containers: []
	W0501 03:41:52.336092   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:41:52.336103   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:41:52.336118   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:41:52.392738   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:41:52.392773   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:41:52.408475   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:41:52.408503   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:41:52.493567   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:41:52.493594   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:41:52.493608   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:41:52.566550   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:41:52.566583   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:41:55.117129   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:55.134840   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:41:55.134918   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:41:55.193990   69580 cri.go:89] found id: ""
	I0501 03:41:55.194019   69580 logs.go:276] 0 containers: []
	W0501 03:41:55.194029   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:41:55.194038   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:41:55.194100   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:41:55.261710   69580 cri.go:89] found id: ""
	I0501 03:41:55.261743   69580 logs.go:276] 0 containers: []
	W0501 03:41:55.261754   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:41:55.261761   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:41:55.261823   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:41:55.302432   69580 cri.go:89] found id: ""
	I0501 03:41:55.302468   69580 logs.go:276] 0 containers: []
	W0501 03:41:55.302480   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:41:55.302488   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:41:55.302550   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:41:55.346029   69580 cri.go:89] found id: ""
	I0501 03:41:55.346058   69580 logs.go:276] 0 containers: []
	W0501 03:41:55.346067   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:41:55.346073   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:41:55.346117   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:41:55.393206   69580 cri.go:89] found id: ""
	I0501 03:41:55.393229   69580 logs.go:276] 0 containers: []
	W0501 03:41:55.393236   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:41:55.393242   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:41:55.393295   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:41:55.437908   69580 cri.go:89] found id: ""
	I0501 03:41:55.437940   69580 logs.go:276] 0 containers: []
	W0501 03:41:55.437952   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:41:55.437960   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:41:55.438020   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:41:55.480439   69580 cri.go:89] found id: ""
	I0501 03:41:55.480468   69580 logs.go:276] 0 containers: []
	W0501 03:41:55.480480   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:41:55.480488   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:41:55.480589   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:41:55.524782   69580 cri.go:89] found id: ""
	I0501 03:41:55.524811   69580 logs.go:276] 0 containers: []
	W0501 03:41:55.524819   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:41:55.524828   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:41:55.524840   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:41:55.604337   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:41:55.604373   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:41:55.649427   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:41:55.649455   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:41:55.707928   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:41:55.707976   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:41:55.723289   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:41:55.723316   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:41:55.805146   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:41:54.347203   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:56.847806   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:55.658387   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:58.156886   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:57.511280   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:59.511460   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:58.306145   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:58.322207   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:41:58.322280   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:41:58.370291   69580 cri.go:89] found id: ""
	I0501 03:41:58.370319   69580 logs.go:276] 0 containers: []
	W0501 03:41:58.370331   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:41:58.370338   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:41:58.370417   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:41:58.421230   69580 cri.go:89] found id: ""
	I0501 03:41:58.421256   69580 logs.go:276] 0 containers: []
	W0501 03:41:58.421264   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:41:58.421270   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:41:58.421317   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:41:58.463694   69580 cri.go:89] found id: ""
	I0501 03:41:58.463724   69580 logs.go:276] 0 containers: []
	W0501 03:41:58.463735   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:41:58.463743   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:41:58.463797   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:41:58.507756   69580 cri.go:89] found id: ""
	I0501 03:41:58.507785   69580 logs.go:276] 0 containers: []
	W0501 03:41:58.507791   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:41:58.507797   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:41:58.507870   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:41:58.554852   69580 cri.go:89] found id: ""
	I0501 03:41:58.554884   69580 logs.go:276] 0 containers: []
	W0501 03:41:58.554895   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:41:58.554903   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:41:58.554969   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:41:58.602467   69580 cri.go:89] found id: ""
	I0501 03:41:58.602495   69580 logs.go:276] 0 containers: []
	W0501 03:41:58.602505   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:41:58.602511   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:41:58.602561   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:41:58.652718   69580 cri.go:89] found id: ""
	I0501 03:41:58.652749   69580 logs.go:276] 0 containers: []
	W0501 03:41:58.652759   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:41:58.652766   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:41:58.652837   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:41:58.694351   69580 cri.go:89] found id: ""
	I0501 03:41:58.694377   69580 logs.go:276] 0 containers: []
	W0501 03:41:58.694385   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:41:58.694393   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:41:58.694434   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:41:58.779878   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:41:58.779911   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:41:58.826733   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:41:58.826768   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:41:58.883808   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:41:58.883842   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:41:58.900463   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:41:58.900495   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:41:58.991346   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:41:59.345807   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:01.846099   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:00.157131   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:02.157204   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:01.511711   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:03.512536   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:01.492396   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:01.508620   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:01.508756   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:01.555669   69580 cri.go:89] found id: ""
	I0501 03:42:01.555696   69580 logs.go:276] 0 containers: []
	W0501 03:42:01.555712   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:01.555720   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:01.555782   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:01.597591   69580 cri.go:89] found id: ""
	I0501 03:42:01.597615   69580 logs.go:276] 0 containers: []
	W0501 03:42:01.597626   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:01.597635   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:01.597693   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:01.636259   69580 cri.go:89] found id: ""
	I0501 03:42:01.636286   69580 logs.go:276] 0 containers: []
	W0501 03:42:01.636297   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:01.636305   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:01.636361   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:01.684531   69580 cri.go:89] found id: ""
	I0501 03:42:01.684562   69580 logs.go:276] 0 containers: []
	W0501 03:42:01.684572   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:01.684579   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:01.684647   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:01.725591   69580 cri.go:89] found id: ""
	I0501 03:42:01.725621   69580 logs.go:276] 0 containers: []
	W0501 03:42:01.725628   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:01.725652   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:01.725718   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:01.767868   69580 cri.go:89] found id: ""
	I0501 03:42:01.767901   69580 logs.go:276] 0 containers: []
	W0501 03:42:01.767910   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:01.767917   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:01.767977   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:01.817590   69580 cri.go:89] found id: ""
	I0501 03:42:01.817618   69580 logs.go:276] 0 containers: []
	W0501 03:42:01.817629   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:01.817637   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:01.817697   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:01.863549   69580 cri.go:89] found id: ""
	I0501 03:42:01.863576   69580 logs.go:276] 0 containers: []
	W0501 03:42:01.863586   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:01.863595   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:01.863607   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:01.879134   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:01.879162   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:01.967015   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:01.967043   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:01.967059   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:02.051576   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:02.051614   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:02.095614   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:02.095644   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:04.652974   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:04.671018   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:04.671103   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:04.712392   69580 cri.go:89] found id: ""
	I0501 03:42:04.712425   69580 logs.go:276] 0 containers: []
	W0501 03:42:04.712435   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:04.712442   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:04.712503   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:04.756854   69580 cri.go:89] found id: ""
	I0501 03:42:04.756881   69580 logs.go:276] 0 containers: []
	W0501 03:42:04.756893   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:04.756900   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:04.756962   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:04.797665   69580 cri.go:89] found id: ""
	I0501 03:42:04.797694   69580 logs.go:276] 0 containers: []
	W0501 03:42:04.797703   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:04.797709   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:04.797756   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:04.838441   69580 cri.go:89] found id: ""
	I0501 03:42:04.838472   69580 logs.go:276] 0 containers: []
	W0501 03:42:04.838483   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:04.838491   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:04.838556   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:04.879905   69580 cri.go:89] found id: ""
	I0501 03:42:04.879935   69580 logs.go:276] 0 containers: []
	W0501 03:42:04.879945   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:04.879952   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:04.880012   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:04.924759   69580 cri.go:89] found id: ""
	I0501 03:42:04.924792   69580 logs.go:276] 0 containers: []
	W0501 03:42:04.924804   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:04.924813   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:04.924879   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:04.965638   69580 cri.go:89] found id: ""
	I0501 03:42:04.965663   69580 logs.go:276] 0 containers: []
	W0501 03:42:04.965670   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:04.965676   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:04.965721   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:05.013127   69580 cri.go:89] found id: ""
	I0501 03:42:05.013153   69580 logs.go:276] 0 containers: []
	W0501 03:42:05.013163   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:05.013173   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:05.013185   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:05.108388   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:05.108409   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:05.108422   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:05.198239   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:05.198281   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:05.241042   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:05.241076   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:05.299017   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:05.299069   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:04.345910   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:06.346830   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:04.657438   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:06.657707   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:06.011511   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:08.016548   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:10.510503   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:07.815458   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:07.832047   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:07.832125   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:07.882950   69580 cri.go:89] found id: ""
	I0501 03:42:07.882985   69580 logs.go:276] 0 containers: []
	W0501 03:42:07.882996   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:07.883002   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:07.883051   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:07.928086   69580 cri.go:89] found id: ""
	I0501 03:42:07.928111   69580 logs.go:276] 0 containers: []
	W0501 03:42:07.928119   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:07.928124   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:07.928177   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:07.976216   69580 cri.go:89] found id: ""
	I0501 03:42:07.976250   69580 logs.go:276] 0 containers: []
	W0501 03:42:07.976268   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:07.976274   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:07.976331   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:08.019903   69580 cri.go:89] found id: ""
	I0501 03:42:08.019932   69580 logs.go:276] 0 containers: []
	W0501 03:42:08.019943   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:08.019951   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:08.020009   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:08.075980   69580 cri.go:89] found id: ""
	I0501 03:42:08.076004   69580 logs.go:276] 0 containers: []
	W0501 03:42:08.076012   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:08.076018   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:08.076065   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:08.114849   69580 cri.go:89] found id: ""
	I0501 03:42:08.114881   69580 logs.go:276] 0 containers: []
	W0501 03:42:08.114891   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:08.114897   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:08.114955   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:08.159427   69580 cri.go:89] found id: ""
	I0501 03:42:08.159457   69580 logs.go:276] 0 containers: []
	W0501 03:42:08.159468   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:08.159476   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:08.159543   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:08.200117   69580 cri.go:89] found id: ""
	I0501 03:42:08.200151   69580 logs.go:276] 0 containers: []
	W0501 03:42:08.200163   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:08.200182   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:08.200197   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:08.281926   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:08.281972   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:08.331393   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:08.331429   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:08.386758   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:08.386793   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:08.402551   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:08.402581   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:08.489678   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:10.990653   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:11.007879   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:11.007958   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:11.049842   69580 cri.go:89] found id: ""
	I0501 03:42:11.049867   69580 logs.go:276] 0 containers: []
	W0501 03:42:11.049879   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:11.049885   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:11.049933   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:11.091946   69580 cri.go:89] found id: ""
	I0501 03:42:11.091980   69580 logs.go:276] 0 containers: []
	W0501 03:42:11.091992   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:11.092000   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:11.092079   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:11.140100   69580 cri.go:89] found id: ""
	I0501 03:42:11.140129   69580 logs.go:276] 0 containers: []
	W0501 03:42:11.140138   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:11.140144   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:11.140207   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:11.182796   69580 cri.go:89] found id: ""
	I0501 03:42:11.182821   69580 logs.go:276] 0 containers: []
	W0501 03:42:11.182832   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:11.182838   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:11.182896   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:11.222985   69580 cri.go:89] found id: ""
	I0501 03:42:11.223016   69580 logs.go:276] 0 containers: []
	W0501 03:42:11.223027   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:11.223033   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:11.223114   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:11.265793   69580 cri.go:89] found id: ""
	I0501 03:42:11.265818   69580 logs.go:276] 0 containers: []
	W0501 03:42:11.265830   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:11.265838   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:11.265913   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:11.309886   69580 cri.go:89] found id: ""
	I0501 03:42:11.309912   69580 logs.go:276] 0 containers: []
	W0501 03:42:11.309924   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:11.309931   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:11.309989   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:11.357757   69580 cri.go:89] found id: ""
	I0501 03:42:11.357791   69580 logs.go:276] 0 containers: []
	W0501 03:42:11.357803   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:11.357823   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:11.357839   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:11.412668   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:11.412704   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:11.428380   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:11.428422   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0501 03:42:08.347511   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:10.846691   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:09.156632   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:11.158047   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:13.657603   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:12.512713   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:15.011382   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	W0501 03:42:11.521898   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:11.521924   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:11.521940   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:11.607081   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:11.607116   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:14.153054   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:14.173046   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:14.173150   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:14.219583   69580 cri.go:89] found id: ""
	I0501 03:42:14.219605   69580 logs.go:276] 0 containers: []
	W0501 03:42:14.219613   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:14.219619   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:14.219664   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:14.260316   69580 cri.go:89] found id: ""
	I0501 03:42:14.260349   69580 logs.go:276] 0 containers: []
	W0501 03:42:14.260357   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:14.260366   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:14.260420   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:14.305049   69580 cri.go:89] found id: ""
	I0501 03:42:14.305085   69580 logs.go:276] 0 containers: []
	W0501 03:42:14.305109   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:14.305117   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:14.305198   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:14.359589   69580 cri.go:89] found id: ""
	I0501 03:42:14.359614   69580 logs.go:276] 0 containers: []
	W0501 03:42:14.359622   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:14.359628   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:14.359672   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:14.403867   69580 cri.go:89] found id: ""
	I0501 03:42:14.403895   69580 logs.go:276] 0 containers: []
	W0501 03:42:14.403904   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:14.403910   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:14.403987   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:14.446626   69580 cri.go:89] found id: ""
	I0501 03:42:14.446655   69580 logs.go:276] 0 containers: []
	W0501 03:42:14.446675   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:14.446683   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:14.446754   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:14.490983   69580 cri.go:89] found id: ""
	I0501 03:42:14.491016   69580 logs.go:276] 0 containers: []
	W0501 03:42:14.491028   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:14.491036   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:14.491117   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:14.534180   69580 cri.go:89] found id: ""
	I0501 03:42:14.534205   69580 logs.go:276] 0 containers: []
	W0501 03:42:14.534213   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:14.534221   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:14.534236   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:14.621433   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:14.621491   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:14.680265   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:14.680310   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:14.738943   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:14.738983   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:14.754145   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:14.754176   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:14.839974   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
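
Every "describe nodes" attempt in this dump fails the same way: kubectl is refused at localhost:8443 because no kube-apiserver container exists yet, so nothing is listening on the secure port. A minimal probe of that precondition, using the same endpoint seen in the stderr above (a hypothetical helper, not part of the test harness):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Same address kubectl is being refused on in the log above.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// Corresponds to "The connection to the server localhost:8443 was refused".
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8443; describe nodes should get past the connection stage")
}
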
	I0501 03:42:13.347081   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:15.847072   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:17.847749   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:16.157433   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:18.158120   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:17.017276   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:19.514339   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:17.340948   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:17.360007   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:17.360068   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:17.403201   69580 cri.go:89] found id: ""
	I0501 03:42:17.403231   69580 logs.go:276] 0 containers: []
	W0501 03:42:17.403239   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:17.403245   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:17.403301   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:17.442940   69580 cri.go:89] found id: ""
	I0501 03:42:17.442966   69580 logs.go:276] 0 containers: []
	W0501 03:42:17.442975   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:17.442981   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:17.443038   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:17.487219   69580 cri.go:89] found id: ""
	I0501 03:42:17.487248   69580 logs.go:276] 0 containers: []
	W0501 03:42:17.487259   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:17.487267   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:17.487324   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:17.528551   69580 cri.go:89] found id: ""
	I0501 03:42:17.528583   69580 logs.go:276] 0 containers: []
	W0501 03:42:17.528593   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:17.528601   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:17.528668   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:17.577005   69580 cri.go:89] found id: ""
	I0501 03:42:17.577041   69580 logs.go:276] 0 containers: []
	W0501 03:42:17.577052   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:17.577061   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:17.577132   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:17.618924   69580 cri.go:89] found id: ""
	I0501 03:42:17.618949   69580 logs.go:276] 0 containers: []
	W0501 03:42:17.618957   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:17.618963   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:17.619022   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:17.660487   69580 cri.go:89] found id: ""
	I0501 03:42:17.660514   69580 logs.go:276] 0 containers: []
	W0501 03:42:17.660525   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:17.660532   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:17.660592   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:17.701342   69580 cri.go:89] found id: ""
	I0501 03:42:17.701370   69580 logs.go:276] 0 containers: []
	W0501 03:42:17.701378   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:17.701387   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:17.701400   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:17.757034   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:17.757069   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:17.772955   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:17.772984   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:17.888062   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:17.888088   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:17.888101   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:17.969274   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:17.969312   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:20.521053   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:20.536065   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:20.536141   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:20.577937   69580 cri.go:89] found id: ""
	I0501 03:42:20.577967   69580 logs.go:276] 0 containers: []
	W0501 03:42:20.577977   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:20.577986   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:20.578055   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:20.626690   69580 cri.go:89] found id: ""
	I0501 03:42:20.626714   69580 logs.go:276] 0 containers: []
	W0501 03:42:20.626722   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:20.626728   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:20.626809   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:20.670849   69580 cri.go:89] found id: ""
	I0501 03:42:20.670872   69580 logs.go:276] 0 containers: []
	W0501 03:42:20.670881   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:20.670886   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:20.670946   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:20.711481   69580 cri.go:89] found id: ""
	I0501 03:42:20.711511   69580 logs.go:276] 0 containers: []
	W0501 03:42:20.711522   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:20.711531   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:20.711596   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:20.753413   69580 cri.go:89] found id: ""
	I0501 03:42:20.753443   69580 logs.go:276] 0 containers: []
	W0501 03:42:20.753452   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:20.753459   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:20.753536   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:20.791424   69580 cri.go:89] found id: ""
	I0501 03:42:20.791452   69580 logs.go:276] 0 containers: []
	W0501 03:42:20.791461   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:20.791466   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:20.791526   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:20.833718   69580 cri.go:89] found id: ""
	I0501 03:42:20.833740   69580 logs.go:276] 0 containers: []
	W0501 03:42:20.833748   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:20.833752   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:20.833799   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:20.879788   69580 cri.go:89] found id: ""
	I0501 03:42:20.879818   69580 logs.go:276] 0 containers: []
	W0501 03:42:20.879828   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:20.879839   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:20.879855   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:20.895266   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:20.895304   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:20.976429   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:20.976452   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:20.976465   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:21.063573   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:21.063611   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:21.113510   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:21.113543   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:20.346735   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:22.347096   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:20.658642   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:22.659841   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:22.011045   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:24.012756   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
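
The interleaved pod_ready.go lines come from the other test processes (68640, 68864, 69237), each polling its metrics-server pod every couple of seconds and logging that the Ready condition is still False. A sketch of that kind of readiness poll using client-go, with the namespace and one pod name taken from the log and everything else (kubeconfig path, interval, loop shape) assumed; this is not minikube's pod_ready.go:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True,
// i.e. the opposite of the `"Ready":"False"` status logged above.
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // assumes ~/.kube/config
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "metrics-server-569cc877fc-2btjj", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		fmt.Println(`pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"`)
		time.Sleep(2 * time.Second) // cadence roughly matching the timestamps above
	}
}
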
	I0501 03:42:23.672203   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:23.687849   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:23.687946   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:23.731428   69580 cri.go:89] found id: ""
	I0501 03:42:23.731455   69580 logs.go:276] 0 containers: []
	W0501 03:42:23.731467   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:23.731473   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:23.731534   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:23.772219   69580 cri.go:89] found id: ""
	I0501 03:42:23.772248   69580 logs.go:276] 0 containers: []
	W0501 03:42:23.772259   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:23.772266   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:23.772369   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:23.837203   69580 cri.go:89] found id: ""
	I0501 03:42:23.837235   69580 logs.go:276] 0 containers: []
	W0501 03:42:23.837247   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:23.837255   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:23.837317   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:23.884681   69580 cri.go:89] found id: ""
	I0501 03:42:23.884709   69580 logs.go:276] 0 containers: []
	W0501 03:42:23.884716   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:23.884722   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:23.884783   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:23.927544   69580 cri.go:89] found id: ""
	I0501 03:42:23.927576   69580 logs.go:276] 0 containers: []
	W0501 03:42:23.927584   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:23.927590   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:23.927652   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:23.970428   69580 cri.go:89] found id: ""
	I0501 03:42:23.970457   69580 logs.go:276] 0 containers: []
	W0501 03:42:23.970467   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:23.970476   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:23.970541   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:24.010545   69580 cri.go:89] found id: ""
	I0501 03:42:24.010573   69580 logs.go:276] 0 containers: []
	W0501 03:42:24.010583   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:24.010593   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:24.010653   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:24.053547   69580 cri.go:89] found id: ""
	I0501 03:42:24.053574   69580 logs.go:276] 0 containers: []
	W0501 03:42:24.053582   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:24.053591   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:24.053602   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:24.108416   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:24.108452   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:24.124052   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:24.124083   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:24.209024   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:24.209048   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:24.209063   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:24.291644   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:24.291693   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:24.846439   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:26.846750   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:25.157009   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:27.657022   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:26.510679   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:28.511049   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:30.511542   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:26.840623   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:26.856231   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:26.856320   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:26.897988   69580 cri.go:89] found id: ""
	I0501 03:42:26.898022   69580 logs.go:276] 0 containers: []
	W0501 03:42:26.898033   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:26.898041   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:26.898109   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:26.937608   69580 cri.go:89] found id: ""
	I0501 03:42:26.937638   69580 logs.go:276] 0 containers: []
	W0501 03:42:26.937660   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:26.937668   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:26.937731   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:26.979799   69580 cri.go:89] found id: ""
	I0501 03:42:26.979836   69580 logs.go:276] 0 containers: []
	W0501 03:42:26.979847   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:26.979854   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:26.979922   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:27.018863   69580 cri.go:89] found id: ""
	I0501 03:42:27.018896   69580 logs.go:276] 0 containers: []
	W0501 03:42:27.018903   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:27.018909   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:27.018959   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:27.057864   69580 cri.go:89] found id: ""
	I0501 03:42:27.057893   69580 logs.go:276] 0 containers: []
	W0501 03:42:27.057904   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:27.057912   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:27.057982   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:27.102909   69580 cri.go:89] found id: ""
	I0501 03:42:27.102939   69580 logs.go:276] 0 containers: []
	W0501 03:42:27.102950   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:27.102958   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:27.103019   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:27.148292   69580 cri.go:89] found id: ""
	I0501 03:42:27.148326   69580 logs.go:276] 0 containers: []
	W0501 03:42:27.148336   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:27.148344   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:27.148407   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:27.197557   69580 cri.go:89] found id: ""
	I0501 03:42:27.197581   69580 logs.go:276] 0 containers: []
	W0501 03:42:27.197588   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:27.197596   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:27.197609   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:27.281768   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:27.281793   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:27.281806   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:27.361496   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:27.361528   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:27.407640   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:27.407675   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:27.472533   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:27.472576   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
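
Each "Gathering logs for ..." pass runs the same four shell commands seen in the Run: lines above: kubelet and CRI-O via journalctl, a filtered dmesg, and a crictl/docker container listing. A local sketch that replays those exact commands in order, assuming bash, journalctl and crictl are present on the node (the runner itself is illustrative, not minikube's logs.go):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Commands copied verbatim from the Run: lines above.
	steps := []struct{ name, cmd string }{
		{"kubelet", "sudo journalctl -u kubelet -n 400"},
		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
		{"CRI-O", "sudo journalctl -u crio -n 400"},
		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
	}
	for _, s := range steps {
		out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
		fmt.Printf("== %s logs (err=%v) ==\n%s\n", s.name, err, out)
	}
}
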
	I0501 03:42:29.987773   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:30.003511   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:30.003619   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:30.049330   69580 cri.go:89] found id: ""
	I0501 03:42:30.049363   69580 logs.go:276] 0 containers: []
	W0501 03:42:30.049377   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:30.049384   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:30.049439   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:30.088521   69580 cri.go:89] found id: ""
	I0501 03:42:30.088549   69580 logs.go:276] 0 containers: []
	W0501 03:42:30.088560   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:30.088568   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:30.088624   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:30.132731   69580 cri.go:89] found id: ""
	I0501 03:42:30.132765   69580 logs.go:276] 0 containers: []
	W0501 03:42:30.132777   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:30.132784   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:30.132847   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:30.178601   69580 cri.go:89] found id: ""
	I0501 03:42:30.178639   69580 logs.go:276] 0 containers: []
	W0501 03:42:30.178648   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:30.178656   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:30.178714   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:30.230523   69580 cri.go:89] found id: ""
	I0501 03:42:30.230549   69580 logs.go:276] 0 containers: []
	W0501 03:42:30.230561   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:30.230569   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:30.230632   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:30.289234   69580 cri.go:89] found id: ""
	I0501 03:42:30.289262   69580 logs.go:276] 0 containers: []
	W0501 03:42:30.289270   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:30.289277   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:30.289342   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:30.332596   69580 cri.go:89] found id: ""
	I0501 03:42:30.332627   69580 logs.go:276] 0 containers: []
	W0501 03:42:30.332637   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:30.332644   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:30.332710   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:30.383871   69580 cri.go:89] found id: ""
	I0501 03:42:30.383901   69580 logs.go:276] 0 containers: []
	W0501 03:42:30.383908   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:30.383917   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:30.383929   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:30.464382   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:30.464404   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:30.464417   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:30.550604   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:30.550637   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:30.594927   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:30.594959   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:30.648392   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:30.648426   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:28.847271   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:31.345865   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:29.657316   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:31.657435   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:32.511887   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:35.011677   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:33.167591   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:33.183804   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:33.183874   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:33.223501   69580 cri.go:89] found id: ""
	I0501 03:42:33.223525   69580 logs.go:276] 0 containers: []
	W0501 03:42:33.223532   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:33.223539   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:33.223600   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:33.268674   69580 cri.go:89] found id: ""
	I0501 03:42:33.268705   69580 logs.go:276] 0 containers: []
	W0501 03:42:33.268741   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:33.268749   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:33.268807   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:33.310613   69580 cri.go:89] found id: ""
	I0501 03:42:33.310655   69580 logs.go:276] 0 containers: []
	W0501 03:42:33.310666   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:33.310674   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:33.310737   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:33.353156   69580 cri.go:89] found id: ""
	I0501 03:42:33.353177   69580 logs.go:276] 0 containers: []
	W0501 03:42:33.353184   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:33.353189   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:33.353237   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:33.389702   69580 cri.go:89] found id: ""
	I0501 03:42:33.389730   69580 logs.go:276] 0 containers: []
	W0501 03:42:33.389743   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:33.389751   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:33.389817   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:33.431244   69580 cri.go:89] found id: ""
	I0501 03:42:33.431275   69580 logs.go:276] 0 containers: []
	W0501 03:42:33.431290   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:33.431298   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:33.431384   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:33.472382   69580 cri.go:89] found id: ""
	I0501 03:42:33.472412   69580 logs.go:276] 0 containers: []
	W0501 03:42:33.472423   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:33.472431   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:33.472519   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:33.517042   69580 cri.go:89] found id: ""
	I0501 03:42:33.517064   69580 logs.go:276] 0 containers: []
	W0501 03:42:33.517071   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:33.517079   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:33.517091   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:33.573343   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:33.573372   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:33.588932   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:33.588963   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:33.674060   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:33.674090   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:33.674106   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:33.756635   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:33.756684   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
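
Each cycle in this dump opens with `sudo pgrep -xnf kube-apiserver.*minikube.*`: the restart path keeps checking whether a kube-apiserver process has appeared before re-probing containers and re-collecting logs, at the roughly 3-second cadence visible in the timestamps. A minimal sketch of that wait loop, with the pgrep pattern taken from the log and the interval and overall deadline assumed:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning mirrors the logged check:
//   sudo pgrep -xnf kube-apiserver.*minikube.*
// pgrep exits 0 only when at least one matching process exists.
func apiserverRunning() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	deadline := time.Now().Add(5 * time.Minute) // the deadline is an assumption, not taken from the log
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			fmt.Println("kube-apiserver process found")
			return
		}
		fmt.Println("kube-apiserver not running yet; probing containers and gathering logs again")
		time.Sleep(3 * time.Second) // cadence roughly matching the log timestamps
	}
	fmt.Println("gave up waiting for kube-apiserver")
}
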
	I0501 03:42:36.300909   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:36.320407   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:36.320474   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:36.367236   69580 cri.go:89] found id: ""
	I0501 03:42:36.367261   69580 logs.go:276] 0 containers: []
	W0501 03:42:36.367269   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:36.367274   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:36.367335   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:36.406440   69580 cri.go:89] found id: ""
	I0501 03:42:36.406471   69580 logs.go:276] 0 containers: []
	W0501 03:42:36.406482   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:36.406489   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:36.406552   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:36.443931   69580 cri.go:89] found id: ""
	I0501 03:42:36.443957   69580 logs.go:276] 0 containers: []
	W0501 03:42:36.443964   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:36.443969   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:36.444024   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:33.844832   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:35.845476   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:37.846291   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:34.156976   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:36.657001   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:38.657056   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:37.510534   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:39.511335   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:36.486169   69580 cri.go:89] found id: ""
	I0501 03:42:36.486200   69580 logs.go:276] 0 containers: []
	W0501 03:42:36.486213   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:36.486220   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:36.486276   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:36.532211   69580 cri.go:89] found id: ""
	I0501 03:42:36.532237   69580 logs.go:276] 0 containers: []
	W0501 03:42:36.532246   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:36.532251   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:36.532311   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:36.571889   69580 cri.go:89] found id: ""
	I0501 03:42:36.571921   69580 logs.go:276] 0 containers: []
	W0501 03:42:36.571933   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:36.571940   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:36.572000   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:36.612126   69580 cri.go:89] found id: ""
	I0501 03:42:36.612159   69580 logs.go:276] 0 containers: []
	W0501 03:42:36.612170   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:36.612177   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:36.612238   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:36.654067   69580 cri.go:89] found id: ""
	I0501 03:42:36.654096   69580 logs.go:276] 0 containers: []
	W0501 03:42:36.654106   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:36.654117   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:36.654129   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:36.740205   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:36.740226   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:36.740237   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:36.821403   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:36.821437   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:36.874829   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:36.874867   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:36.928312   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:36.928342   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:39.444598   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:39.460086   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:39.460151   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:39.500833   69580 cri.go:89] found id: ""
	I0501 03:42:39.500859   69580 logs.go:276] 0 containers: []
	W0501 03:42:39.500870   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:39.500879   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:39.500936   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:39.544212   69580 cri.go:89] found id: ""
	I0501 03:42:39.544238   69580 logs.go:276] 0 containers: []
	W0501 03:42:39.544248   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:39.544260   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:39.544326   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:39.582167   69580 cri.go:89] found id: ""
	I0501 03:42:39.582200   69580 logs.go:276] 0 containers: []
	W0501 03:42:39.582218   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:39.582231   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:39.582296   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:39.624811   69580 cri.go:89] found id: ""
	I0501 03:42:39.624837   69580 logs.go:276] 0 containers: []
	W0501 03:42:39.624848   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:39.624855   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:39.624913   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:39.666001   69580 cri.go:89] found id: ""
	I0501 03:42:39.666030   69580 logs.go:276] 0 containers: []
	W0501 03:42:39.666041   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:39.666048   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:39.666111   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:39.708790   69580 cri.go:89] found id: ""
	I0501 03:42:39.708820   69580 logs.go:276] 0 containers: []
	W0501 03:42:39.708831   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:39.708839   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:39.708896   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:39.750585   69580 cri.go:89] found id: ""
	I0501 03:42:39.750609   69580 logs.go:276] 0 containers: []
	W0501 03:42:39.750617   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:39.750622   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:39.750670   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:39.798576   69580 cri.go:89] found id: ""
	I0501 03:42:39.798612   69580 logs.go:276] 0 containers: []
	W0501 03:42:39.798624   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:39.798636   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:39.798651   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:39.891759   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:39.891782   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:39.891797   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:39.974419   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:39.974462   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:40.020700   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:40.020728   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:40.073946   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:40.073980   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:40.345975   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:42.350579   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:40.657403   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:42.658271   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:41.511780   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:43.512428   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:42.590933   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:42.606044   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:42.606120   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:42.653074   69580 cri.go:89] found id: ""
	I0501 03:42:42.653104   69580 logs.go:276] 0 containers: []
	W0501 03:42:42.653115   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:42.653123   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:42.653195   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:42.693770   69580 cri.go:89] found id: ""
	I0501 03:42:42.693809   69580 logs.go:276] 0 containers: []
	W0501 03:42:42.693821   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:42.693829   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:42.693885   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:42.739087   69580 cri.go:89] found id: ""
	I0501 03:42:42.739115   69580 logs.go:276] 0 containers: []
	W0501 03:42:42.739125   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:42.739133   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:42.739196   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:42.779831   69580 cri.go:89] found id: ""
	I0501 03:42:42.779863   69580 logs.go:276] 0 containers: []
	W0501 03:42:42.779876   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:42.779885   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:42.779950   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:42.826759   69580 cri.go:89] found id: ""
	I0501 03:42:42.826791   69580 logs.go:276] 0 containers: []
	W0501 03:42:42.826799   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:42.826804   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:42.826854   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:42.872602   69580 cri.go:89] found id: ""
	I0501 03:42:42.872629   69580 logs.go:276] 0 containers: []
	W0501 03:42:42.872640   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:42.872648   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:42.872707   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:42.913833   69580 cri.go:89] found id: ""
	I0501 03:42:42.913862   69580 logs.go:276] 0 containers: []
	W0501 03:42:42.913872   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:42.913879   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:42.913936   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:42.953629   69580 cri.go:89] found id: ""
	I0501 03:42:42.953657   69580 logs.go:276] 0 containers: []
	W0501 03:42:42.953667   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:42.953679   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:42.953695   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:42.968420   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:42.968447   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:43.046840   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:43.046874   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:43.046898   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:43.135453   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:43.135492   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:43.184103   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:43.184141   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
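The cycle above probes each control-plane component (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard) by asking crictl for matching container IDs, and every probe comes back empty. The sketch below is a minimal local reconstruction of that probe using os/exec; it is illustrative only (minikube runs the identical crictl command over its ssh_runner), and the helper name listContainers is made up for this example.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers mirrors the probe in the log: ask crictl for the IDs of
// containers whose name matches a component; an empty result is what the log
// reports as `found id: ""` / `0 containers`.
func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := listContainers(c)
		if err != nil {
			fmt.Printf("probe %s failed: %v\n", c, err)
			continue
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids) // 0 containers, as in the log above
	}
}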
	I0501 03:42:45.738246   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:45.753193   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:45.753258   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:45.791191   69580 cri.go:89] found id: ""
	I0501 03:42:45.791216   69580 logs.go:276] 0 containers: []
	W0501 03:42:45.791224   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:45.791236   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:45.791285   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:45.831935   69580 cri.go:89] found id: ""
	I0501 03:42:45.831967   69580 logs.go:276] 0 containers: []
	W0501 03:42:45.831978   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:45.831986   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:45.832041   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:45.869492   69580 cri.go:89] found id: ""
	I0501 03:42:45.869517   69580 logs.go:276] 0 containers: []
	W0501 03:42:45.869529   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:45.869536   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:45.869593   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:45.910642   69580 cri.go:89] found id: ""
	I0501 03:42:45.910672   69580 logs.go:276] 0 containers: []
	W0501 03:42:45.910682   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:45.910691   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:45.910754   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:45.951489   69580 cri.go:89] found id: ""
	I0501 03:42:45.951518   69580 logs.go:276] 0 containers: []
	W0501 03:42:45.951528   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:45.951535   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:45.951582   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:45.991388   69580 cri.go:89] found id: ""
	I0501 03:42:45.991410   69580 logs.go:276] 0 containers: []
	W0501 03:42:45.991418   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:45.991423   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:45.991467   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:46.036524   69580 cri.go:89] found id: ""
	I0501 03:42:46.036546   69580 logs.go:276] 0 containers: []
	W0501 03:42:46.036553   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:46.036560   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:46.036622   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:46.087472   69580 cri.go:89] found id: ""
	I0501 03:42:46.087495   69580 logs.go:276] 0 containers: []
	W0501 03:42:46.087504   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:46.087513   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:46.087526   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:46.101283   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:46.101314   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:46.176459   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:46.176491   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:46.176506   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:46.261921   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:46.261956   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:46.309879   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:46.309910   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:44.846042   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:47.349023   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:44.658318   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:47.155780   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:46.011347   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:48.511156   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:50.512175   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:48.867064   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:48.884082   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:48.884192   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:48.929681   69580 cri.go:89] found id: ""
	I0501 03:42:48.929708   69580 logs.go:276] 0 containers: []
	W0501 03:42:48.929716   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:48.929722   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:48.929789   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:48.977850   69580 cri.go:89] found id: ""
	I0501 03:42:48.977882   69580 logs.go:276] 0 containers: []
	W0501 03:42:48.977894   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:48.977901   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:48.977962   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:49.022590   69580 cri.go:89] found id: ""
	I0501 03:42:49.022619   69580 logs.go:276] 0 containers: []
	W0501 03:42:49.022629   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:49.022637   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:49.022706   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:49.064092   69580 cri.go:89] found id: ""
	I0501 03:42:49.064122   69580 logs.go:276] 0 containers: []
	W0501 03:42:49.064143   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:49.064152   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:49.064220   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:49.103962   69580 cri.go:89] found id: ""
	I0501 03:42:49.103990   69580 logs.go:276] 0 containers: []
	W0501 03:42:49.104002   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:49.104009   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:49.104070   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:49.144566   69580 cri.go:89] found id: ""
	I0501 03:42:49.144596   69580 logs.go:276] 0 containers: []
	W0501 03:42:49.144604   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:49.144610   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:49.144669   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:49.183110   69580 cri.go:89] found id: ""
	I0501 03:42:49.183141   69580 logs.go:276] 0 containers: []
	W0501 03:42:49.183161   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:49.183166   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:49.183239   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:49.225865   69580 cri.go:89] found id: ""
	I0501 03:42:49.225890   69580 logs.go:276] 0 containers: []
	W0501 03:42:49.225902   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:49.225912   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:49.225926   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:49.312967   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:49.313005   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:49.361171   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:49.361206   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:49.418731   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:49.418780   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:49.436976   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:49.437007   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:49.517994   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
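Every "describe nodes" attempt above fails the same way: kubectl cannot reach the apiserver on localhost:8443, which is consistent with the empty kube-apiserver probe earlier in the cycle. A quick way to see the same symptom outside the test harness is a plain TCP dial against that port; the snippet below is only an illustrative check, not how minikube itself decides the apiserver is down.

package main

import (
	"fmt"
	"net"
	"time"
)

// Dial the endpoint kubectl is being refused on. With no kube-apiserver
// container running, this fails with "connection refused", matching the
// stderr blocks in the log above.
func main() {
	conn, err := net.DialTimeout("tcp", "127.0.0.1:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is open")
}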
	I0501 03:42:49.848517   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:52.346908   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:49.160713   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:51.656444   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:53.659040   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:53.011092   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:55.011811   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:52.018675   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:52.033946   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:52.034022   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:52.081433   69580 cri.go:89] found id: ""
	I0501 03:42:52.081465   69580 logs.go:276] 0 containers: []
	W0501 03:42:52.081477   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:52.081485   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:52.081544   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:52.123914   69580 cri.go:89] found id: ""
	I0501 03:42:52.123947   69580 logs.go:276] 0 containers: []
	W0501 03:42:52.123958   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:52.123966   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:52.124023   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:52.164000   69580 cri.go:89] found id: ""
	I0501 03:42:52.164020   69580 logs.go:276] 0 containers: []
	W0501 03:42:52.164027   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:52.164033   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:52.164086   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:52.205984   69580 cri.go:89] found id: ""
	I0501 03:42:52.206011   69580 logs.go:276] 0 containers: []
	W0501 03:42:52.206023   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:52.206031   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:52.206096   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:52.252743   69580 cri.go:89] found id: ""
	I0501 03:42:52.252766   69580 logs.go:276] 0 containers: []
	W0501 03:42:52.252774   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:52.252779   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:52.252839   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:52.296814   69580 cri.go:89] found id: ""
	I0501 03:42:52.296838   69580 logs.go:276] 0 containers: []
	W0501 03:42:52.296856   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:52.296864   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:52.296928   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:52.335996   69580 cri.go:89] found id: ""
	I0501 03:42:52.336023   69580 logs.go:276] 0 containers: []
	W0501 03:42:52.336034   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:52.336042   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:52.336105   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:52.377470   69580 cri.go:89] found id: ""
	I0501 03:42:52.377498   69580 logs.go:276] 0 containers: []
	W0501 03:42:52.377513   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:52.377524   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:52.377540   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:52.432644   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:52.432680   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:52.447518   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:52.447552   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:52.530967   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:52.530992   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:52.531005   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:52.612280   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:52.612327   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:55.170134   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:55.185252   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:55.185328   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:55.227741   69580 cri.go:89] found id: ""
	I0501 03:42:55.227764   69580 logs.go:276] 0 containers: []
	W0501 03:42:55.227771   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:55.227777   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:55.227820   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:55.270796   69580 cri.go:89] found id: ""
	I0501 03:42:55.270823   69580 logs.go:276] 0 containers: []
	W0501 03:42:55.270834   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:55.270840   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:55.270898   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:55.312146   69580 cri.go:89] found id: ""
	I0501 03:42:55.312171   69580 logs.go:276] 0 containers: []
	W0501 03:42:55.312180   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:55.312190   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:55.312236   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:55.354410   69580 cri.go:89] found id: ""
	I0501 03:42:55.354436   69580 logs.go:276] 0 containers: []
	W0501 03:42:55.354445   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:55.354450   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:55.354509   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:55.393550   69580 cri.go:89] found id: ""
	I0501 03:42:55.393580   69580 logs.go:276] 0 containers: []
	W0501 03:42:55.393589   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:55.393594   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:55.393651   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:55.431468   69580 cri.go:89] found id: ""
	I0501 03:42:55.431497   69580 logs.go:276] 0 containers: []
	W0501 03:42:55.431507   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:55.431514   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:55.431566   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:55.470491   69580 cri.go:89] found id: ""
	I0501 03:42:55.470513   69580 logs.go:276] 0 containers: []
	W0501 03:42:55.470520   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:55.470526   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:55.470571   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:55.509849   69580 cri.go:89] found id: ""
	I0501 03:42:55.509875   69580 logs.go:276] 0 containers: []
	W0501 03:42:55.509885   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:55.509894   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:55.509909   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:55.566680   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:55.566762   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:55.584392   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:55.584423   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:55.663090   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:55.663116   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:55.663131   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:55.741459   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:55.741494   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:54.846549   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:56.848989   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:56.156918   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:58.157016   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:57.012980   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:59.513719   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
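The interleaved pod_ready.go lines come from the other parallel clusters, which keep reporting that their metrics-server pods have condition Ready set to False. The sketch below shows one way to poll that condition with kubectl until a deadline; it is a hypothetical helper for illustration (minikube's own check is client-go based), and the "minikube" context name is a placeholder.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// podReady reads the pod's Ready condition via kubectl jsonpath.
// Context name is a placeholder; the pod and namespace are the ones the
// log above is waiting on.
func podReady(ctx, ns, name string) (bool, error) {
	out, err := exec.Command("kubectl", "--context", ctx, "-n", ns, "get", "pod", name,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "True", nil
}

func main() {
	deadline := time.Now().Add(5 * time.Minute) // illustrative timeout
	for time.Now().Before(deadline) {
		ok, err := podReady("minikube", "kube-system", "metrics-server-569cc877fc-k8jnl")
		if err == nil && ok {
			fmt.Println("pod is Ready")
			return
		}
		fmt.Println(`pod has status "Ready":"False"`) // same shape as the log lines above
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for Ready")
}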
	I0501 03:42:58.294435   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:58.310204   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:58.310267   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:58.350292   69580 cri.go:89] found id: ""
	I0501 03:42:58.350322   69580 logs.go:276] 0 containers: []
	W0501 03:42:58.350334   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:58.350343   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:58.350431   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:58.395998   69580 cri.go:89] found id: ""
	I0501 03:42:58.396029   69580 logs.go:276] 0 containers: []
	W0501 03:42:58.396041   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:58.396049   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:58.396131   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:58.434371   69580 cri.go:89] found id: ""
	I0501 03:42:58.434414   69580 logs.go:276] 0 containers: []
	W0501 03:42:58.434427   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:58.434434   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:58.434493   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:58.473457   69580 cri.go:89] found id: ""
	I0501 03:42:58.473489   69580 logs.go:276] 0 containers: []
	W0501 03:42:58.473499   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:58.473507   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:58.473572   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:58.515172   69580 cri.go:89] found id: ""
	I0501 03:42:58.515201   69580 logs.go:276] 0 containers: []
	W0501 03:42:58.515212   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:58.515221   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:58.515291   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:58.560305   69580 cri.go:89] found id: ""
	I0501 03:42:58.560333   69580 logs.go:276] 0 containers: []
	W0501 03:42:58.560341   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:58.560348   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:58.560407   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:58.617980   69580 cri.go:89] found id: ""
	I0501 03:42:58.618005   69580 logs.go:276] 0 containers: []
	W0501 03:42:58.618013   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:58.618019   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:58.618080   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:58.659800   69580 cri.go:89] found id: ""
	I0501 03:42:58.659827   69580 logs.go:276] 0 containers: []
	W0501 03:42:58.659838   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:58.659848   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:58.659862   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:58.718134   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:58.718169   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:58.733972   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:58.734001   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:58.813055   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:58.813082   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:58.813099   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:58.897293   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:58.897331   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:01.442980   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:01.459602   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:01.459687   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:58.849599   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:01.346264   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:00.157322   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:02.657002   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:02.012753   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:04.510896   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:01.502817   69580 cri.go:89] found id: ""
	I0501 03:43:01.502848   69580 logs.go:276] 0 containers: []
	W0501 03:43:01.502857   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:01.502863   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:01.502924   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:01.547251   69580 cri.go:89] found id: ""
	I0501 03:43:01.547289   69580 logs.go:276] 0 containers: []
	W0501 03:43:01.547301   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:01.547308   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:01.547376   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:01.590179   69580 cri.go:89] found id: ""
	I0501 03:43:01.590211   69580 logs.go:276] 0 containers: []
	W0501 03:43:01.590221   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:01.590228   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:01.590296   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:01.628772   69580 cri.go:89] found id: ""
	I0501 03:43:01.628814   69580 logs.go:276] 0 containers: []
	W0501 03:43:01.628826   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:01.628834   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:01.628893   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:01.677414   69580 cri.go:89] found id: ""
	I0501 03:43:01.677440   69580 logs.go:276] 0 containers: []
	W0501 03:43:01.677448   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:01.677453   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:01.677500   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:01.723107   69580 cri.go:89] found id: ""
	I0501 03:43:01.723139   69580 logs.go:276] 0 containers: []
	W0501 03:43:01.723152   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:01.723160   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:01.723225   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:01.771846   69580 cri.go:89] found id: ""
	I0501 03:43:01.771873   69580 logs.go:276] 0 containers: []
	W0501 03:43:01.771883   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:01.771890   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:01.771952   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:01.818145   69580 cri.go:89] found id: ""
	I0501 03:43:01.818179   69580 logs.go:276] 0 containers: []
	W0501 03:43:01.818191   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:01.818202   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:01.818218   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:01.881502   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:01.881546   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:01.897580   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:01.897614   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:01.981959   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:01.981980   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:01.981996   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:02.066228   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:02.066269   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:04.609855   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:04.626885   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:04.626962   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:04.668248   69580 cri.go:89] found id: ""
	I0501 03:43:04.668277   69580 logs.go:276] 0 containers: []
	W0501 03:43:04.668290   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:04.668298   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:04.668364   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:04.711032   69580 cri.go:89] found id: ""
	I0501 03:43:04.711057   69580 logs.go:276] 0 containers: []
	W0501 03:43:04.711068   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:04.711076   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:04.711136   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:04.754197   69580 cri.go:89] found id: ""
	I0501 03:43:04.754232   69580 logs.go:276] 0 containers: []
	W0501 03:43:04.754241   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:04.754248   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:04.754317   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:04.801062   69580 cri.go:89] found id: ""
	I0501 03:43:04.801089   69580 logs.go:276] 0 containers: []
	W0501 03:43:04.801097   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:04.801103   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:04.801163   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:04.849425   69580 cri.go:89] found id: ""
	I0501 03:43:04.849454   69580 logs.go:276] 0 containers: []
	W0501 03:43:04.849465   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:04.849473   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:04.849536   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:04.892555   69580 cri.go:89] found id: ""
	I0501 03:43:04.892589   69580 logs.go:276] 0 containers: []
	W0501 03:43:04.892597   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:04.892603   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:04.892661   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:04.934101   69580 cri.go:89] found id: ""
	I0501 03:43:04.934129   69580 logs.go:276] 0 containers: []
	W0501 03:43:04.934137   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:04.934142   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:04.934191   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:04.985720   69580 cri.go:89] found id: ""
	I0501 03:43:04.985747   69580 logs.go:276] 0 containers: []
	W0501 03:43:04.985760   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:04.985773   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:04.985789   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:05.060634   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:05.060692   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:05.082007   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:05.082036   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:05.164613   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:05.164636   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:05.164652   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:05.244064   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:05.244103   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:03.845495   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:06.346757   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:05.157929   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:07.657094   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:06.511168   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:08.511512   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:10.511984   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:07.793867   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:07.811161   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:07.811236   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:07.850738   69580 cri.go:89] found id: ""
	I0501 03:43:07.850765   69580 logs.go:276] 0 containers: []
	W0501 03:43:07.850775   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:07.850782   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:07.850841   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:07.892434   69580 cri.go:89] found id: ""
	I0501 03:43:07.892466   69580 logs.go:276] 0 containers: []
	W0501 03:43:07.892476   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:07.892483   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:07.892543   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:07.934093   69580 cri.go:89] found id: ""
	I0501 03:43:07.934122   69580 logs.go:276] 0 containers: []
	W0501 03:43:07.934133   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:07.934141   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:07.934200   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:07.976165   69580 cri.go:89] found id: ""
	I0501 03:43:07.976196   69580 logs.go:276] 0 containers: []
	W0501 03:43:07.976205   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:07.976216   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:07.976278   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:08.016925   69580 cri.go:89] found id: ""
	I0501 03:43:08.016956   69580 logs.go:276] 0 containers: []
	W0501 03:43:08.016968   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:08.016975   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:08.017038   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:08.063385   69580 cri.go:89] found id: ""
	I0501 03:43:08.063438   69580 logs.go:276] 0 containers: []
	W0501 03:43:08.063454   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:08.063465   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:08.063551   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:08.103586   69580 cri.go:89] found id: ""
	I0501 03:43:08.103610   69580 logs.go:276] 0 containers: []
	W0501 03:43:08.103618   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:08.103628   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:08.103672   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:08.142564   69580 cri.go:89] found id: ""
	I0501 03:43:08.142594   69580 logs.go:276] 0 containers: []
	W0501 03:43:08.142605   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:08.142617   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:08.142635   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:08.231532   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:08.231556   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:08.231571   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:08.311009   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:08.311053   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:08.357841   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:08.357877   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:08.409577   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:08.409610   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
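Stepping back, the whole section is one outer loop repeating every few seconds: look for a kube-apiserver process with pgrep, probe the CRI containers, and when nothing is found, gather kubelet, dmesg, "describe nodes", CRI-O, and container-status logs before trying again. The code below is a schematic reconstruction of that outer loop under those assumptions, not minikube's actual implementation; the ten-minute deadline is illustrative.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverUp runs the same probe the log shows at the start of each cycle.
// pgrep exits non-zero when no process matches, so a nil error means found.
func apiserverUp() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	deadline := time.Now().Add(10 * time.Minute)
	for time.Now().Before(deadline) {
		if apiserverUp() {
			fmt.Println("kube-apiserver process found")
			return
		}
		// In the real run, each failed probe is followed by the
		// kubelet/dmesg/describe-nodes/CRI-O log gathering seen above.
		time.Sleep(3 * time.Second)
	}
	fmt.Println("gave up waiting for kube-apiserver")
}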
	I0501 03:43:10.924898   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:10.941525   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:10.941591   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:11.009214   69580 cri.go:89] found id: ""
	I0501 03:43:11.009238   69580 logs.go:276] 0 containers: []
	W0501 03:43:11.009247   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:11.009255   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:11.009316   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:11.072233   69580 cri.go:89] found id: ""
	I0501 03:43:11.072259   69580 logs.go:276] 0 containers: []
	W0501 03:43:11.072267   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:11.072273   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:11.072327   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:11.111662   69580 cri.go:89] found id: ""
	I0501 03:43:11.111691   69580 logs.go:276] 0 containers: []
	W0501 03:43:11.111701   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:11.111708   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:11.111765   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:11.151540   69580 cri.go:89] found id: ""
	I0501 03:43:11.151570   69580 logs.go:276] 0 containers: []
	W0501 03:43:11.151580   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:11.151594   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:11.151656   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:11.194030   69580 cri.go:89] found id: ""
	I0501 03:43:11.194064   69580 logs.go:276] 0 containers: []
	W0501 03:43:11.194076   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:11.194083   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:11.194146   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:11.233010   69580 cri.go:89] found id: ""
	I0501 03:43:11.233045   69580 logs.go:276] 0 containers: []
	W0501 03:43:11.233056   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:11.233063   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:11.233117   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:11.270979   69580 cri.go:89] found id: ""
	I0501 03:43:11.271009   69580 logs.go:276] 0 containers: []
	W0501 03:43:11.271019   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:11.271026   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:11.271088   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:11.312338   69580 cri.go:89] found id: ""
	I0501 03:43:11.312369   69580 logs.go:276] 0 containers: []
	W0501 03:43:11.312381   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:11.312393   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:11.312408   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:11.364273   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:11.364307   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:11.418603   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:11.418634   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:11.433409   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:11.433438   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0501 03:43:08.349537   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:10.845566   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:12.846699   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:10.157910   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:12.657859   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:12.512669   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:15.013314   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	W0501 03:43:11.511243   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:11.511265   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:11.511280   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:14.089834   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:14.104337   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:14.104419   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:14.148799   69580 cri.go:89] found id: ""
	I0501 03:43:14.148826   69580 logs.go:276] 0 containers: []
	W0501 03:43:14.148833   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:14.148839   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:14.148904   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:14.191330   69580 cri.go:89] found id: ""
	I0501 03:43:14.191366   69580 logs.go:276] 0 containers: []
	W0501 03:43:14.191378   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:14.191386   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:14.191448   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:14.245978   69580 cri.go:89] found id: ""
	I0501 03:43:14.246010   69580 logs.go:276] 0 containers: []
	W0501 03:43:14.246018   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:14.246024   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:14.246093   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:14.287188   69580 cri.go:89] found id: ""
	I0501 03:43:14.287215   69580 logs.go:276] 0 containers: []
	W0501 03:43:14.287223   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:14.287228   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:14.287276   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:14.328060   69580 cri.go:89] found id: ""
	I0501 03:43:14.328093   69580 logs.go:276] 0 containers: []
	W0501 03:43:14.328104   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:14.328113   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:14.328179   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:14.370734   69580 cri.go:89] found id: ""
	I0501 03:43:14.370765   69580 logs.go:276] 0 containers: []
	W0501 03:43:14.370776   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:14.370783   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:14.370837   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:14.414690   69580 cri.go:89] found id: ""
	I0501 03:43:14.414713   69580 logs.go:276] 0 containers: []
	W0501 03:43:14.414721   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:14.414726   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:14.414790   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:14.459030   69580 cri.go:89] found id: ""
	I0501 03:43:14.459060   69580 logs.go:276] 0 containers: []
	W0501 03:43:14.459072   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:14.459083   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:14.459098   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:14.519728   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:14.519761   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:14.535841   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:14.535871   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:14.615203   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:14.615231   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:14.615249   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:14.707677   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:14.707725   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:15.345927   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:17.846732   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:14.657956   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:17.156935   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:17.512424   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:20.012471   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:17.254918   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:17.270643   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:17.270698   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:17.310692   69580 cri.go:89] found id: ""
	I0501 03:43:17.310724   69580 logs.go:276] 0 containers: []
	W0501 03:43:17.310732   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:17.310739   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:17.310806   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:17.349932   69580 cri.go:89] found id: ""
	I0501 03:43:17.349959   69580 logs.go:276] 0 containers: []
	W0501 03:43:17.349969   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:17.349976   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:17.350040   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:17.393073   69580 cri.go:89] found id: ""
	I0501 03:43:17.393099   69580 logs.go:276] 0 containers: []
	W0501 03:43:17.393109   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:17.393116   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:17.393176   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:17.429736   69580 cri.go:89] found id: ""
	I0501 03:43:17.429763   69580 logs.go:276] 0 containers: []
	W0501 03:43:17.429773   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:17.429787   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:17.429858   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:17.473052   69580 cri.go:89] found id: ""
	I0501 03:43:17.473085   69580 logs.go:276] 0 containers: []
	W0501 03:43:17.473097   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:17.473105   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:17.473168   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:17.514035   69580 cri.go:89] found id: ""
	I0501 03:43:17.514062   69580 logs.go:276] 0 containers: []
	W0501 03:43:17.514071   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:17.514078   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:17.514126   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:17.553197   69580 cri.go:89] found id: ""
	I0501 03:43:17.553225   69580 logs.go:276] 0 containers: []
	W0501 03:43:17.553234   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:17.553240   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:17.553300   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:17.592170   69580 cri.go:89] found id: ""
	I0501 03:43:17.592192   69580 logs.go:276] 0 containers: []
	W0501 03:43:17.592199   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:17.592208   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:17.592220   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:17.647549   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:17.647584   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:17.663084   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:17.663114   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:17.748357   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:17.748385   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:17.748401   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:17.832453   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:17.832491   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:20.375927   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:20.391840   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:20.391918   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:20.434158   69580 cri.go:89] found id: ""
	I0501 03:43:20.434185   69580 logs.go:276] 0 containers: []
	W0501 03:43:20.434193   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:20.434198   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:20.434254   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:20.477209   69580 cri.go:89] found id: ""
	I0501 03:43:20.477237   69580 logs.go:276] 0 containers: []
	W0501 03:43:20.477253   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:20.477259   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:20.477309   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:20.517227   69580 cri.go:89] found id: ""
	I0501 03:43:20.517260   69580 logs.go:276] 0 containers: []
	W0501 03:43:20.517270   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:20.517282   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:20.517340   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:20.555771   69580 cri.go:89] found id: ""
	I0501 03:43:20.555802   69580 logs.go:276] 0 containers: []
	W0501 03:43:20.555812   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:20.555820   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:20.555866   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:20.598177   69580 cri.go:89] found id: ""
	I0501 03:43:20.598200   69580 logs.go:276] 0 containers: []
	W0501 03:43:20.598213   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:20.598218   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:20.598326   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:20.637336   69580 cri.go:89] found id: ""
	I0501 03:43:20.637364   69580 logs.go:276] 0 containers: []
	W0501 03:43:20.637373   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:20.637378   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:20.637435   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:20.687736   69580 cri.go:89] found id: ""
	I0501 03:43:20.687761   69580 logs.go:276] 0 containers: []
	W0501 03:43:20.687768   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:20.687782   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:20.687840   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:20.726102   69580 cri.go:89] found id: ""
	I0501 03:43:20.726135   69580 logs.go:276] 0 containers: []
	W0501 03:43:20.726143   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:20.726154   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:20.726169   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:20.780874   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:20.780905   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:20.795798   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:20.795836   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:20.882337   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:20.882367   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:20.882381   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:20.962138   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:20.962188   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:20.345887   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:22.346061   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:19.157165   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:21.657358   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:22.015676   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:24.511682   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:23.512174   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:23.528344   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:23.528417   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:23.567182   69580 cri.go:89] found id: ""
	I0501 03:43:23.567212   69580 logs.go:276] 0 containers: []
	W0501 03:43:23.567222   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:23.567230   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:23.567291   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:23.607522   69580 cri.go:89] found id: ""
	I0501 03:43:23.607556   69580 logs.go:276] 0 containers: []
	W0501 03:43:23.607567   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:23.607574   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:23.607637   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:23.650932   69580 cri.go:89] found id: ""
	I0501 03:43:23.650959   69580 logs.go:276] 0 containers: []
	W0501 03:43:23.650970   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:23.650976   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:23.651035   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:23.695392   69580 cri.go:89] found id: ""
	I0501 03:43:23.695419   69580 logs.go:276] 0 containers: []
	W0501 03:43:23.695428   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:23.695436   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:23.695514   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:23.736577   69580 cri.go:89] found id: ""
	I0501 03:43:23.736607   69580 logs.go:276] 0 containers: []
	W0501 03:43:23.736619   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:23.736627   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:23.736685   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:23.776047   69580 cri.go:89] found id: ""
	I0501 03:43:23.776070   69580 logs.go:276] 0 containers: []
	W0501 03:43:23.776077   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:23.776082   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:23.776134   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:23.813896   69580 cri.go:89] found id: ""
	I0501 03:43:23.813934   69580 logs.go:276] 0 containers: []
	W0501 03:43:23.813943   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:23.813949   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:23.813997   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:23.858898   69580 cri.go:89] found id: ""
	I0501 03:43:23.858925   69580 logs.go:276] 0 containers: []
	W0501 03:43:23.858936   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:23.858947   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:23.858964   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:23.901796   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:23.901850   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:23.957009   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:23.957040   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:23.972811   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:23.972839   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:24.055535   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:24.055557   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:24.055576   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:24.845310   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:26.847397   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:24.157453   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:26.661073   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:27.012181   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:29.511387   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:26.640114   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:26.657217   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:26.657285   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:26.701191   69580 cri.go:89] found id: ""
	I0501 03:43:26.701218   69580 logs.go:276] 0 containers: []
	W0501 03:43:26.701227   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:26.701232   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:26.701287   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:26.740710   69580 cri.go:89] found id: ""
	I0501 03:43:26.740737   69580 logs.go:276] 0 containers: []
	W0501 03:43:26.740745   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:26.740750   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:26.740808   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:26.778682   69580 cri.go:89] found id: ""
	I0501 03:43:26.778710   69580 logs.go:276] 0 containers: []
	W0501 03:43:26.778724   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:26.778730   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:26.778789   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:26.822143   69580 cri.go:89] found id: ""
	I0501 03:43:26.822190   69580 logs.go:276] 0 containers: []
	W0501 03:43:26.822201   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:26.822209   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:26.822270   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:26.865938   69580 cri.go:89] found id: ""
	I0501 03:43:26.865976   69580 logs.go:276] 0 containers: []
	W0501 03:43:26.865988   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:26.865996   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:26.866058   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:26.914939   69580 cri.go:89] found id: ""
	I0501 03:43:26.914969   69580 logs.go:276] 0 containers: []
	W0501 03:43:26.914979   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:26.914986   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:26.915043   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:26.961822   69580 cri.go:89] found id: ""
	I0501 03:43:26.961850   69580 logs.go:276] 0 containers: []
	W0501 03:43:26.961860   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:26.961867   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:26.961920   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:27.005985   69580 cri.go:89] found id: ""
	I0501 03:43:27.006012   69580 logs.go:276] 0 containers: []
	W0501 03:43:27.006021   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:27.006032   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:27.006046   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:27.058265   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:27.058303   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:27.076270   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:27.076308   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:27.152627   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:27.152706   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:27.152728   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:27.229638   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:27.229678   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:29.775960   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:29.792849   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:29.792925   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:29.832508   69580 cri.go:89] found id: ""
	I0501 03:43:29.832537   69580 logs.go:276] 0 containers: []
	W0501 03:43:29.832551   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:29.832559   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:29.832617   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:29.873160   69580 cri.go:89] found id: ""
	I0501 03:43:29.873188   69580 logs.go:276] 0 containers: []
	W0501 03:43:29.873199   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:29.873207   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:29.873271   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:29.919431   69580 cri.go:89] found id: ""
	I0501 03:43:29.919459   69580 logs.go:276] 0 containers: []
	W0501 03:43:29.919468   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:29.919474   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:29.919533   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:29.967944   69580 cri.go:89] found id: ""
	I0501 03:43:29.967976   69580 logs.go:276] 0 containers: []
	W0501 03:43:29.967987   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:29.967995   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:29.968060   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:30.011626   69580 cri.go:89] found id: ""
	I0501 03:43:30.011657   69580 logs.go:276] 0 containers: []
	W0501 03:43:30.011669   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:30.011678   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:30.011743   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:30.051998   69580 cri.go:89] found id: ""
	I0501 03:43:30.052020   69580 logs.go:276] 0 containers: []
	W0501 03:43:30.052028   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:30.052034   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:30.052095   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:30.094140   69580 cri.go:89] found id: ""
	I0501 03:43:30.094164   69580 logs.go:276] 0 containers: []
	W0501 03:43:30.094172   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:30.094179   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:30.094253   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:30.132363   69580 cri.go:89] found id: ""
	I0501 03:43:30.132391   69580 logs.go:276] 0 containers: []
	W0501 03:43:30.132399   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:30.132411   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:30.132422   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:30.221368   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:30.221410   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:30.271279   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:30.271317   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:30.325549   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:30.325586   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:30.345337   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:30.345376   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:30.427552   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:29.347108   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:31.846435   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:29.156483   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:31.156871   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:33.157355   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:32.015498   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:34.511190   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:32.928667   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:32.945489   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:32.945557   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:32.989604   69580 cri.go:89] found id: ""
	I0501 03:43:32.989628   69580 logs.go:276] 0 containers: []
	W0501 03:43:32.989636   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:32.989642   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:32.989701   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:33.030862   69580 cri.go:89] found id: ""
	I0501 03:43:33.030892   69580 logs.go:276] 0 containers: []
	W0501 03:43:33.030903   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:33.030912   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:33.030977   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:33.079795   69580 cri.go:89] found id: ""
	I0501 03:43:33.079827   69580 logs.go:276] 0 containers: []
	W0501 03:43:33.079835   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:33.079841   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:33.079898   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:33.120612   69580 cri.go:89] found id: ""
	I0501 03:43:33.120636   69580 logs.go:276] 0 containers: []
	W0501 03:43:33.120644   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:33.120649   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:33.120694   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:33.161824   69580 cri.go:89] found id: ""
	I0501 03:43:33.161851   69580 logs.go:276] 0 containers: []
	W0501 03:43:33.161861   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:33.161868   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:33.161924   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:33.200068   69580 cri.go:89] found id: ""
	I0501 03:43:33.200098   69580 logs.go:276] 0 containers: []
	W0501 03:43:33.200107   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:33.200113   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:33.200175   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:33.239314   69580 cri.go:89] found id: ""
	I0501 03:43:33.239341   69580 logs.go:276] 0 containers: []
	W0501 03:43:33.239351   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:33.239359   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:33.239427   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:33.281381   69580 cri.go:89] found id: ""
	I0501 03:43:33.281408   69580 logs.go:276] 0 containers: []
	W0501 03:43:33.281419   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:33.281431   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:33.281447   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:33.297992   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:33.298047   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:33.383273   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:33.383292   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:33.383303   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:33.465256   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:33.465289   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:33.509593   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:33.509621   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:36.065074   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:36.081361   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:36.081429   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:36.130394   69580 cri.go:89] found id: ""
	I0501 03:43:36.130436   69580 logs.go:276] 0 containers: []
	W0501 03:43:36.130448   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:36.130456   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:36.130524   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:36.171013   69580 cri.go:89] found id: ""
	I0501 03:43:36.171038   69580 logs.go:276] 0 containers: []
	W0501 03:43:36.171046   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:36.171052   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:36.171099   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:36.215372   69580 cri.go:89] found id: ""
	I0501 03:43:36.215411   69580 logs.go:276] 0 containers: []
	W0501 03:43:36.215424   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:36.215431   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:36.215493   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:36.257177   69580 cri.go:89] found id: ""
	I0501 03:43:36.257204   69580 logs.go:276] 0 containers: []
	W0501 03:43:36.257216   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:36.257223   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:36.257293   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:36.299035   69580 cri.go:89] found id: ""
	I0501 03:43:36.299066   69580 logs.go:276] 0 containers: []
	W0501 03:43:36.299085   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:36.299094   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:36.299166   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:36.339060   69580 cri.go:89] found id: ""
	I0501 03:43:36.339087   69580 logs.go:276] 0 containers: []
	W0501 03:43:36.339097   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:36.339105   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:36.339163   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:36.379982   69580 cri.go:89] found id: ""
	I0501 03:43:36.380016   69580 logs.go:276] 0 containers: []
	W0501 03:43:36.380028   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:36.380037   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:36.380100   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:36.419702   69580 cri.go:89] found id: ""
	I0501 03:43:36.419734   69580 logs.go:276] 0 containers: []
	W0501 03:43:36.419746   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:36.419758   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:36.419780   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:33.846499   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:35.846579   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:37.852802   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:35.159724   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:37.657040   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:36.516601   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:39.012001   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:36.472553   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:36.472774   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:36.488402   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:36.488439   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:36.566390   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:36.566433   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:36.566446   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:36.643493   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:36.643527   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:39.199060   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:39.216612   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:39.216695   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:39.262557   69580 cri.go:89] found id: ""
	I0501 03:43:39.262581   69580 logs.go:276] 0 containers: []
	W0501 03:43:39.262589   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:39.262595   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:39.262642   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:39.331051   69580 cri.go:89] found id: ""
	I0501 03:43:39.331076   69580 logs.go:276] 0 containers: []
	W0501 03:43:39.331093   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:39.331098   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:39.331162   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:39.382033   69580 cri.go:89] found id: ""
	I0501 03:43:39.382058   69580 logs.go:276] 0 containers: []
	W0501 03:43:39.382066   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:39.382071   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:39.382122   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:39.424019   69580 cri.go:89] found id: ""
	I0501 03:43:39.424049   69580 logs.go:276] 0 containers: []
	W0501 03:43:39.424058   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:39.424064   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:39.424120   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:39.465787   69580 cri.go:89] found id: ""
	I0501 03:43:39.465833   69580 logs.go:276] 0 containers: []
	W0501 03:43:39.465846   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:39.465855   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:39.465916   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:39.507746   69580 cri.go:89] found id: ""
	I0501 03:43:39.507781   69580 logs.go:276] 0 containers: []
	W0501 03:43:39.507791   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:39.507798   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:39.507861   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:39.550737   69580 cri.go:89] found id: ""
	I0501 03:43:39.550768   69580 logs.go:276] 0 containers: []
	W0501 03:43:39.550775   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:39.550781   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:39.550831   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:39.592279   69580 cri.go:89] found id: ""
	I0501 03:43:39.592329   69580 logs.go:276] 0 containers: []
	W0501 03:43:39.592343   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:39.592356   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:39.592373   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:39.648858   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:39.648896   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:39.665316   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:39.665343   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:39.743611   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:39.743632   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:39.743646   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:39.829285   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:39.829322   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:40.347121   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:42.845466   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:39.657888   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:41.657976   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:41.512061   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:44.017693   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:42.374457   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:42.389944   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:42.390002   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:42.431270   69580 cri.go:89] found id: ""
	I0501 03:43:42.431294   69580 logs.go:276] 0 containers: []
	W0501 03:43:42.431302   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:42.431308   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:42.431366   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:42.470515   69580 cri.go:89] found id: ""
	I0501 03:43:42.470546   69580 logs.go:276] 0 containers: []
	W0501 03:43:42.470558   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:42.470566   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:42.470619   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:42.518472   69580 cri.go:89] found id: ""
	I0501 03:43:42.518494   69580 logs.go:276] 0 containers: []
	W0501 03:43:42.518501   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:42.518506   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:42.518555   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:42.562192   69580 cri.go:89] found id: ""
	I0501 03:43:42.562220   69580 logs.go:276] 0 containers: []
	W0501 03:43:42.562231   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:42.562239   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:42.562300   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:42.599372   69580 cri.go:89] found id: ""
	I0501 03:43:42.599403   69580 logs.go:276] 0 containers: []
	W0501 03:43:42.599414   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:42.599422   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:42.599483   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:42.636738   69580 cri.go:89] found id: ""
	I0501 03:43:42.636766   69580 logs.go:276] 0 containers: []
	W0501 03:43:42.636777   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:42.636786   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:42.636845   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:42.682087   69580 cri.go:89] found id: ""
	I0501 03:43:42.682115   69580 logs.go:276] 0 containers: []
	W0501 03:43:42.682125   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:42.682133   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:42.682198   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:42.724280   69580 cri.go:89] found id: ""
	I0501 03:43:42.724316   69580 logs.go:276] 0 containers: []
	W0501 03:43:42.724328   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:42.724340   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:42.724354   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:42.771667   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:42.771702   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:42.827390   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:42.827428   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:42.843452   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:42.843480   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:42.925544   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:42.925563   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:42.925577   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:45.515104   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:45.529545   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:45.529619   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:45.573451   69580 cri.go:89] found id: ""
	I0501 03:43:45.573475   69580 logs.go:276] 0 containers: []
	W0501 03:43:45.573483   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:45.573489   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:45.573536   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:45.613873   69580 cri.go:89] found id: ""
	I0501 03:43:45.613897   69580 logs.go:276] 0 containers: []
	W0501 03:43:45.613905   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:45.613910   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:45.613954   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:45.660195   69580 cri.go:89] found id: ""
	I0501 03:43:45.660215   69580 logs.go:276] 0 containers: []
	W0501 03:43:45.660221   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:45.660226   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:45.660284   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:45.703539   69580 cri.go:89] found id: ""
	I0501 03:43:45.703566   69580 logs.go:276] 0 containers: []
	W0501 03:43:45.703574   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:45.703580   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:45.703637   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:45.754635   69580 cri.go:89] found id: ""
	I0501 03:43:45.754659   69580 logs.go:276] 0 containers: []
	W0501 03:43:45.754668   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:45.754675   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:45.754738   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:45.800836   69580 cri.go:89] found id: ""
	I0501 03:43:45.800866   69580 logs.go:276] 0 containers: []
	W0501 03:43:45.800884   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:45.800892   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:45.800955   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:45.859057   69580 cri.go:89] found id: ""
	I0501 03:43:45.859084   69580 logs.go:276] 0 containers: []
	W0501 03:43:45.859092   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:45.859098   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:45.859145   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:45.913173   69580 cri.go:89] found id: ""
	I0501 03:43:45.913204   69580 logs.go:276] 0 containers: []
	W0501 03:43:45.913216   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:45.913227   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:45.913243   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:45.930050   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:45.930087   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:46.006047   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:46.006081   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:46.006097   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:46.086630   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:46.086666   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:46.134635   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:46.134660   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
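The block above is one full pass of minikube's log collector: it loops over the expected control-plane components, asks CRI-O for matching containers with "crictl ps -a --quiet --name=<component>", finds none, and then falls back to gathering kubelet, dmesg, "describe nodes", CRI-O, and container-status output. A rough manual equivalent of that container sweep, run from a shell inside the node (for example via "minikube ssh" on the affected profile, whose name is not shown in this excerpt), is the sketch below; it is an illustration of the pattern in the log, not the collector's own code:

	# Sweep the CRI for each expected control-plane component; in this run
	# every query returns an empty ID list because no such containers exist.
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	         kube-controller-manager kindnet kubernetes-dashboard; do
	  sudo crictl ps -a --quiet --name="$c"
	done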
	I0501 03:43:45.347071   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:47.845983   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:44.157143   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:46.157880   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:48.656747   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:46.510981   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:48.512854   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
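The interleaved pod_ready lines come from three other test processes (PIDs 68640, 68864 and 69237) that are concurrently waiting for their metrics-server pods to report Ready; the Ready condition stays False throughout this excerpt. The same condition can be checked by hand with the pod name taken from the log. This is a sketch only: pod_ready.go polls through the Go client, not kubectl.

	# Print the Ready condition of the pod being polled by PID 68640;
	# it prints "False" for as long as the pod is not ready.
	kubectl -n kube-system get pod metrics-server-569cc877fc-k8jnl \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'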
	I0501 03:43:48.690330   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:48.705024   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:48.705093   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:48.750244   69580 cri.go:89] found id: ""
	I0501 03:43:48.750278   69580 logs.go:276] 0 containers: []
	W0501 03:43:48.750299   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:48.750307   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:48.750377   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:48.791231   69580 cri.go:89] found id: ""
	I0501 03:43:48.791264   69580 logs.go:276] 0 containers: []
	W0501 03:43:48.791276   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:48.791283   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:48.791348   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:48.834692   69580 cri.go:89] found id: ""
	I0501 03:43:48.834720   69580 logs.go:276] 0 containers: []
	W0501 03:43:48.834731   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:48.834739   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:48.834809   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:48.877383   69580 cri.go:89] found id: ""
	I0501 03:43:48.877415   69580 logs.go:276] 0 containers: []
	W0501 03:43:48.877424   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:48.877430   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:48.877479   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:48.919728   69580 cri.go:89] found id: ""
	I0501 03:43:48.919756   69580 logs.go:276] 0 containers: []
	W0501 03:43:48.919767   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:48.919775   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:48.919836   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:48.962090   69580 cri.go:89] found id: ""
	I0501 03:43:48.962122   69580 logs.go:276] 0 containers: []
	W0501 03:43:48.962137   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:48.962144   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:48.962205   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:48.998456   69580 cri.go:89] found id: ""
	I0501 03:43:48.998487   69580 logs.go:276] 0 containers: []
	W0501 03:43:48.998498   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:48.998506   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:48.998566   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:49.042591   69580 cri.go:89] found id: ""
	I0501 03:43:49.042623   69580 logs.go:276] 0 containers: []
	W0501 03:43:49.042633   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:49.042645   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:49.042661   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:49.088533   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:49.088571   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:49.145252   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:49.145288   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:49.163093   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:49.163120   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:49.240805   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:49.240831   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:49.240844   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:49.848864   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:52.347128   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:50.656790   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:52.658130   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:51.011713   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:53.510598   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:55.512900   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:51.825530   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:51.839596   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:51.839669   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:51.879493   69580 cri.go:89] found id: ""
	I0501 03:43:51.879516   69580 logs.go:276] 0 containers: []
	W0501 03:43:51.879524   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:51.879530   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:51.879585   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:51.921577   69580 cri.go:89] found id: ""
	I0501 03:43:51.921608   69580 logs.go:276] 0 containers: []
	W0501 03:43:51.921620   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:51.921627   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:51.921693   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:51.961000   69580 cri.go:89] found id: ""
	I0501 03:43:51.961028   69580 logs.go:276] 0 containers: []
	W0501 03:43:51.961037   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:51.961043   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:51.961103   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:52.006087   69580 cri.go:89] found id: ""
	I0501 03:43:52.006118   69580 logs.go:276] 0 containers: []
	W0501 03:43:52.006129   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:52.006137   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:52.006201   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:52.047196   69580 cri.go:89] found id: ""
	I0501 03:43:52.047228   69580 logs.go:276] 0 containers: []
	W0501 03:43:52.047239   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:52.047250   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:52.047319   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:52.086380   69580 cri.go:89] found id: ""
	I0501 03:43:52.086423   69580 logs.go:276] 0 containers: []
	W0501 03:43:52.086434   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:52.086442   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:52.086499   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:52.128824   69580 cri.go:89] found id: ""
	I0501 03:43:52.128851   69580 logs.go:276] 0 containers: []
	W0501 03:43:52.128861   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:52.128868   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:52.128933   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:52.168743   69580 cri.go:89] found id: ""
	I0501 03:43:52.168769   69580 logs.go:276] 0 containers: []
	W0501 03:43:52.168776   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:52.168788   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:52.168802   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:52.184391   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:52.184419   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:52.268330   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:52.268368   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:52.268386   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:52.350556   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:52.350586   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:52.395930   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:52.395967   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:54.952879   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:54.968440   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:54.968517   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:55.008027   69580 cri.go:89] found id: ""
	I0501 03:43:55.008056   69580 logs.go:276] 0 containers: []
	W0501 03:43:55.008067   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:55.008074   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:55.008137   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:55.048848   69580 cri.go:89] found id: ""
	I0501 03:43:55.048869   69580 logs.go:276] 0 containers: []
	W0501 03:43:55.048877   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:55.048882   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:55.048931   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:55.085886   69580 cri.go:89] found id: ""
	I0501 03:43:55.085910   69580 logs.go:276] 0 containers: []
	W0501 03:43:55.085919   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:55.085924   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:55.085971   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:55.119542   69580 cri.go:89] found id: ""
	I0501 03:43:55.119567   69580 logs.go:276] 0 containers: []
	W0501 03:43:55.119574   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:55.119580   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:55.119636   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:55.158327   69580 cri.go:89] found id: ""
	I0501 03:43:55.158357   69580 logs.go:276] 0 containers: []
	W0501 03:43:55.158367   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:55.158374   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:55.158449   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:55.200061   69580 cri.go:89] found id: ""
	I0501 03:43:55.200085   69580 logs.go:276] 0 containers: []
	W0501 03:43:55.200093   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:55.200100   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:55.200146   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:55.239446   69580 cri.go:89] found id: ""
	I0501 03:43:55.239476   69580 logs.go:276] 0 containers: []
	W0501 03:43:55.239487   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:55.239493   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:55.239557   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:55.275593   69580 cri.go:89] found id: ""
	I0501 03:43:55.275623   69580 logs.go:276] 0 containers: []
	W0501 03:43:55.275635   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:55.275646   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:55.275662   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:55.356701   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:55.356724   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:55.356740   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:55.437445   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:55.437483   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:55.489024   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:55.489051   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:55.548083   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:55.548114   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:54.845529   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:57.348771   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:55.158591   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:57.657361   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:58.010099   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:00.010511   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:58.067063   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:58.080485   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:58.080539   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:58.121459   69580 cri.go:89] found id: ""
	I0501 03:43:58.121488   69580 logs.go:276] 0 containers: []
	W0501 03:43:58.121498   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:58.121505   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:58.121562   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:58.161445   69580 cri.go:89] found id: ""
	I0501 03:43:58.161479   69580 logs.go:276] 0 containers: []
	W0501 03:43:58.161489   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:58.161499   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:58.161560   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:58.203216   69580 cri.go:89] found id: ""
	I0501 03:43:58.203238   69580 logs.go:276] 0 containers: []
	W0501 03:43:58.203246   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:58.203251   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:58.203297   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:58.239496   69580 cri.go:89] found id: ""
	I0501 03:43:58.239526   69580 logs.go:276] 0 containers: []
	W0501 03:43:58.239538   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:58.239546   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:58.239605   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:58.280331   69580 cri.go:89] found id: ""
	I0501 03:43:58.280359   69580 logs.go:276] 0 containers: []
	W0501 03:43:58.280370   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:58.280378   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:58.280438   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:58.318604   69580 cri.go:89] found id: ""
	I0501 03:43:58.318634   69580 logs.go:276] 0 containers: []
	W0501 03:43:58.318646   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:58.318653   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:58.318712   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:58.359360   69580 cri.go:89] found id: ""
	I0501 03:43:58.359383   69580 logs.go:276] 0 containers: []
	W0501 03:43:58.359392   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:58.359398   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:58.359446   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:58.401172   69580 cri.go:89] found id: ""
	I0501 03:43:58.401202   69580 logs.go:276] 0 containers: []
	W0501 03:43:58.401211   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:58.401220   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:58.401232   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:58.416877   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:58.416907   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:58.489812   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:58.489835   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:58.489849   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:58.574971   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:58.575004   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:58.619526   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:58.619557   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:01.173759   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:01.187838   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:01.187922   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:01.227322   69580 cri.go:89] found id: ""
	I0501 03:44:01.227355   69580 logs.go:276] 0 containers: []
	W0501 03:44:01.227366   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:01.227372   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:01.227432   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:01.268418   69580 cri.go:89] found id: ""
	I0501 03:44:01.268453   69580 logs.go:276] 0 containers: []
	W0501 03:44:01.268465   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:01.268472   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:01.268530   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:01.314641   69580 cri.go:89] found id: ""
	I0501 03:44:01.314667   69580 logs.go:276] 0 containers: []
	W0501 03:44:01.314675   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:01.314681   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:01.314739   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:01.361237   69580 cri.go:89] found id: ""
	I0501 03:44:01.361272   69580 logs.go:276] 0 containers: []
	W0501 03:44:01.361288   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:01.361294   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:01.361348   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:01.400650   69580 cri.go:89] found id: ""
	I0501 03:44:01.400676   69580 logs.go:276] 0 containers: []
	W0501 03:44:01.400684   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:01.400690   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:01.400739   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:01.447998   69580 cri.go:89] found id: ""
	I0501 03:44:01.448023   69580 logs.go:276] 0 containers: []
	W0501 03:44:01.448032   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:01.448040   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:01.448101   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:59.845726   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:02.345826   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:00.155851   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:02.155998   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:02.010828   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:04.014801   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:01.492172   69580 cri.go:89] found id: ""
	I0501 03:44:01.492199   69580 logs.go:276] 0 containers: []
	W0501 03:44:01.492207   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:01.492213   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:01.492265   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:01.538589   69580 cri.go:89] found id: ""
	I0501 03:44:01.538617   69580 logs.go:276] 0 containers: []
	W0501 03:44:01.538628   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:01.538638   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:01.538653   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:01.592914   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:01.592952   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:01.611706   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:01.611754   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:01.693469   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:01.693488   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:01.693501   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:01.774433   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:01.774470   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:04.321593   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:04.335428   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:04.335497   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:04.378479   69580 cri.go:89] found id: ""
	I0501 03:44:04.378505   69580 logs.go:276] 0 containers: []
	W0501 03:44:04.378516   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:04.378525   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:04.378585   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:04.420025   69580 cri.go:89] found id: ""
	I0501 03:44:04.420050   69580 logs.go:276] 0 containers: []
	W0501 03:44:04.420059   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:04.420065   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:04.420113   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:04.464009   69580 cri.go:89] found id: ""
	I0501 03:44:04.464039   69580 logs.go:276] 0 containers: []
	W0501 03:44:04.464047   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:04.464052   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:04.464113   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:04.502039   69580 cri.go:89] found id: ""
	I0501 03:44:04.502069   69580 logs.go:276] 0 containers: []
	W0501 03:44:04.502081   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:04.502088   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:04.502150   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:04.544566   69580 cri.go:89] found id: ""
	I0501 03:44:04.544593   69580 logs.go:276] 0 containers: []
	W0501 03:44:04.544605   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:04.544614   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:04.544672   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:04.584067   69580 cri.go:89] found id: ""
	I0501 03:44:04.584095   69580 logs.go:276] 0 containers: []
	W0501 03:44:04.584104   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:04.584112   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:04.584174   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:04.625165   69580 cri.go:89] found id: ""
	I0501 03:44:04.625197   69580 logs.go:276] 0 containers: []
	W0501 03:44:04.625210   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:04.625219   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:04.625292   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:04.667796   69580 cri.go:89] found id: ""
	I0501 03:44:04.667830   69580 logs.go:276] 0 containers: []
	W0501 03:44:04.667839   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:04.667850   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:04.667868   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:04.722269   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:04.722303   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:04.738232   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:04.738265   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:04.821551   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:04.821578   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:04.821595   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:04.902575   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:04.902618   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:04.346197   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:06.845552   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:04.157333   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:06.157366   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:08.656837   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:06.513484   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:09.012004   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:07.449793   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:07.466348   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:07.466450   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:07.510325   69580 cri.go:89] found id: ""
	I0501 03:44:07.510352   69580 logs.go:276] 0 containers: []
	W0501 03:44:07.510363   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:07.510371   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:07.510450   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:07.550722   69580 cri.go:89] found id: ""
	I0501 03:44:07.550748   69580 logs.go:276] 0 containers: []
	W0501 03:44:07.550756   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:07.550762   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:07.550810   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:07.589592   69580 cri.go:89] found id: ""
	I0501 03:44:07.589617   69580 logs.go:276] 0 containers: []
	W0501 03:44:07.589625   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:07.589630   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:07.589678   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:07.631628   69580 cri.go:89] found id: ""
	I0501 03:44:07.631655   69580 logs.go:276] 0 containers: []
	W0501 03:44:07.631662   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:07.631668   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:07.631726   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:07.674709   69580 cri.go:89] found id: ""
	I0501 03:44:07.674743   69580 logs.go:276] 0 containers: []
	W0501 03:44:07.674753   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:07.674760   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:07.674811   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:07.714700   69580 cri.go:89] found id: ""
	I0501 03:44:07.714767   69580 logs.go:276] 0 containers: []
	W0501 03:44:07.714788   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:07.714797   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:07.714856   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:07.753440   69580 cri.go:89] found id: ""
	I0501 03:44:07.753467   69580 logs.go:276] 0 containers: []
	W0501 03:44:07.753478   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:07.753485   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:07.753549   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:07.791579   69580 cri.go:89] found id: ""
	I0501 03:44:07.791606   69580 logs.go:276] 0 containers: []
	W0501 03:44:07.791617   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:07.791628   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:07.791644   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:07.845568   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:07.845606   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:07.861861   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:07.861885   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:07.941719   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:07.941743   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:07.941757   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:08.022684   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:08.022720   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:10.575417   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:10.593408   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:10.593468   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:10.641322   69580 cri.go:89] found id: ""
	I0501 03:44:10.641357   69580 logs.go:276] 0 containers: []
	W0501 03:44:10.641370   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:10.641378   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:10.641442   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:10.686330   69580 cri.go:89] found id: ""
	I0501 03:44:10.686358   69580 logs.go:276] 0 containers: []
	W0501 03:44:10.686368   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:10.686377   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:10.686458   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:10.734414   69580 cri.go:89] found id: ""
	I0501 03:44:10.734444   69580 logs.go:276] 0 containers: []
	W0501 03:44:10.734456   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:10.734463   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:10.734527   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:10.776063   69580 cri.go:89] found id: ""
	I0501 03:44:10.776095   69580 logs.go:276] 0 containers: []
	W0501 03:44:10.776106   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:10.776113   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:10.776176   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:10.819035   69580 cri.go:89] found id: ""
	I0501 03:44:10.819065   69580 logs.go:276] 0 containers: []
	W0501 03:44:10.819076   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:10.819084   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:10.819150   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:10.868912   69580 cri.go:89] found id: ""
	I0501 03:44:10.868938   69580 logs.go:276] 0 containers: []
	W0501 03:44:10.868946   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:10.868952   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:10.869000   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:10.910517   69580 cri.go:89] found id: ""
	I0501 03:44:10.910549   69580 logs.go:276] 0 containers: []
	W0501 03:44:10.910572   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:10.910581   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:10.910678   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:10.949267   69580 cri.go:89] found id: ""
	I0501 03:44:10.949297   69580 logs.go:276] 0 containers: []
	W0501 03:44:10.949306   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:10.949314   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:10.949327   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:11.004731   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:11.004779   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:11.022146   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:11.022174   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:11.108992   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:11.109020   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:11.109035   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:11.192571   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:11.192605   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:08.846431   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:11.346295   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:10.657938   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:13.156112   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:11.012040   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:13.512166   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:15.512232   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:13.739336   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:13.758622   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:13.758721   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:13.805395   69580 cri.go:89] found id: ""
	I0501 03:44:13.805423   69580 logs.go:276] 0 containers: []
	W0501 03:44:13.805434   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:13.805442   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:13.805523   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:13.847372   69580 cri.go:89] found id: ""
	I0501 03:44:13.847400   69580 logs.go:276] 0 containers: []
	W0501 03:44:13.847409   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:13.847417   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:13.847474   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:13.891842   69580 cri.go:89] found id: ""
	I0501 03:44:13.891867   69580 logs.go:276] 0 containers: []
	W0501 03:44:13.891874   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:13.891880   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:13.891935   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:13.933382   69580 cri.go:89] found id: ""
	I0501 03:44:13.933411   69580 logs.go:276] 0 containers: []
	W0501 03:44:13.933422   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:13.933430   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:13.933490   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:13.973955   69580 cri.go:89] found id: ""
	I0501 03:44:13.973980   69580 logs.go:276] 0 containers: []
	W0501 03:44:13.973991   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:13.974000   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:13.974053   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:14.015202   69580 cri.go:89] found id: ""
	I0501 03:44:14.015226   69580 logs.go:276] 0 containers: []
	W0501 03:44:14.015234   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:14.015240   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:14.015287   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:14.057441   69580 cri.go:89] found id: ""
	I0501 03:44:14.057471   69580 logs.go:276] 0 containers: []
	W0501 03:44:14.057483   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:14.057491   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:14.057551   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:14.099932   69580 cri.go:89] found id: ""
	I0501 03:44:14.099961   69580 logs.go:276] 0 containers: []
	W0501 03:44:14.099972   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:14.099983   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:14.099996   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:14.160386   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:14.160418   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:14.176880   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:14.176908   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:14.272137   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:14.272155   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:14.272168   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:14.366523   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:14.366571   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:13.349770   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:15.351345   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:17.845182   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:15.156569   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:17.157994   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:17.512836   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:20.012034   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:16.914394   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:16.930976   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:16.931038   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:16.977265   69580 cri.go:89] found id: ""
	I0501 03:44:16.977294   69580 logs.go:276] 0 containers: []
	W0501 03:44:16.977303   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:16.977309   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:16.977363   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:17.015656   69580 cri.go:89] found id: ""
	I0501 03:44:17.015686   69580 logs.go:276] 0 containers: []
	W0501 03:44:17.015694   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:17.015700   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:17.015768   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:17.056079   69580 cri.go:89] found id: ""
	I0501 03:44:17.056111   69580 logs.go:276] 0 containers: []
	W0501 03:44:17.056121   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:17.056129   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:17.056188   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:17.099504   69580 cri.go:89] found id: ""
	I0501 03:44:17.099528   69580 logs.go:276] 0 containers: []
	W0501 03:44:17.099536   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:17.099542   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:17.099606   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:17.141371   69580 cri.go:89] found id: ""
	I0501 03:44:17.141401   69580 logs.go:276] 0 containers: []
	W0501 03:44:17.141410   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:17.141417   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:17.141484   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:17.184143   69580 cri.go:89] found id: ""
	I0501 03:44:17.184167   69580 logs.go:276] 0 containers: []
	W0501 03:44:17.184179   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:17.184193   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:17.184246   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:17.224012   69580 cri.go:89] found id: ""
	I0501 03:44:17.224049   69580 logs.go:276] 0 containers: []
	W0501 03:44:17.224061   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:17.224069   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:17.224136   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:17.268185   69580 cri.go:89] found id: ""
	I0501 03:44:17.268216   69580 logs.go:276] 0 containers: []
	W0501 03:44:17.268224   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:17.268233   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:17.268248   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:17.351342   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:17.351392   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:17.398658   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:17.398689   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:17.452476   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:17.452517   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:17.468734   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:17.468771   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:17.558971   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:20.059342   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:20.075707   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:20.075791   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:20.114436   69580 cri.go:89] found id: ""
	I0501 03:44:20.114472   69580 logs.go:276] 0 containers: []
	W0501 03:44:20.114486   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:20.114495   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:20.114562   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:20.155607   69580 cri.go:89] found id: ""
	I0501 03:44:20.155638   69580 logs.go:276] 0 containers: []
	W0501 03:44:20.155649   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:20.155657   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:20.155715   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:20.198188   69580 cri.go:89] found id: ""
	I0501 03:44:20.198218   69580 logs.go:276] 0 containers: []
	W0501 03:44:20.198227   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:20.198234   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:20.198291   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:20.237183   69580 cri.go:89] found id: ""
	I0501 03:44:20.237213   69580 logs.go:276] 0 containers: []
	W0501 03:44:20.237223   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:20.237232   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:20.237286   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:20.279289   69580 cri.go:89] found id: ""
	I0501 03:44:20.279320   69580 logs.go:276] 0 containers: []
	W0501 03:44:20.279332   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:20.279341   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:20.279409   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:20.334066   69580 cri.go:89] found id: ""
	I0501 03:44:20.334091   69580 logs.go:276] 0 containers: []
	W0501 03:44:20.334112   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:20.334121   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:20.334181   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:20.385740   69580 cri.go:89] found id: ""
	I0501 03:44:20.385775   69580 logs.go:276] 0 containers: []
	W0501 03:44:20.385785   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:20.385796   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:20.385860   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:20.425151   69580 cri.go:89] found id: ""
	I0501 03:44:20.425176   69580 logs.go:276] 0 containers: []
	W0501 03:44:20.425183   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:20.425193   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:20.425214   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:20.472563   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:20.472605   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:20.526589   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:20.526626   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:20.541978   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:20.542013   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:20.619513   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:20.619540   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:20.619555   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:19.846208   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:22.345166   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:19.658986   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:22.156821   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:23.159267   68864 pod_ready.go:81] duration metric: took 4m0.009511824s for pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace to be "Ready" ...
	E0501 03:44:23.159296   68864 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0501 03:44:23.159308   68864 pod_ready.go:38] duration metric: took 4m7.423794373s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 03:44:23.159327   68864 api_server.go:52] waiting for apiserver process to appear ...
	I0501 03:44:23.159362   68864 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:23.159422   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:23.225563   68864 cri.go:89] found id: "a96815c49ac45b34c359d7171b30be5c25cee80fae08d947fc004d479693df00"
	I0501 03:44:23.225590   68864 cri.go:89] found id: ""
	I0501 03:44:23.225607   68864 logs.go:276] 1 containers: [a96815c49ac45b34c359d7171b30be5c25cee80fae08d947fc004d479693df00]
	I0501 03:44:23.225663   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:23.231542   68864 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:23.231598   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:23.290847   68864 cri.go:89] found id: "d109948ffbbddd2e38484d83d2260690afce3c51e16abff0fe8713e844bfdcb3"
	I0501 03:44:23.290871   68864 cri.go:89] found id: ""
	I0501 03:44:23.290878   68864 logs.go:276] 1 containers: [d109948ffbbddd2e38484d83d2260690afce3c51e16abff0fe8713e844bfdcb3]
	I0501 03:44:23.290926   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:23.295697   68864 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:23.295755   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:23.348625   68864 cri.go:89] found id: "e3c74de489af3d78a4b0cf620ed16e1fba7c9ce1c7021a15c5b4bd241b7a934a"
	I0501 03:44:23.348652   68864 cri.go:89] found id: ""
	I0501 03:44:23.348661   68864 logs.go:276] 1 containers: [e3c74de489af3d78a4b0cf620ed16e1fba7c9ce1c7021a15c5b4bd241b7a934a]
	I0501 03:44:23.348717   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:23.355801   68864 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:23.355896   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:23.409428   68864 cri.go:89] found id: "1813f35574f4f0398e7d482fe77c5d7f8490726666a277035bda0a5cff80af6e"
	I0501 03:44:23.409461   68864 cri.go:89] found id: ""
	I0501 03:44:23.409471   68864 logs.go:276] 1 containers: [1813f35574f4f0398e7d482fe77c5d7f8490726666a277035bda0a5cff80af6e]
	I0501 03:44:23.409530   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:23.416480   68864 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:23.416560   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:23.466642   68864 cri.go:89] found id: "94afdb03c382269209154b2e73edbf200662e89960c248ba775a0ef2b8fbe6b1"
	I0501 03:44:23.466672   68864 cri.go:89] found id: ""
	I0501 03:44:23.466681   68864 logs.go:276] 1 containers: [94afdb03c382269209154b2e73edbf200662e89960c248ba775a0ef2b8fbe6b1]
	I0501 03:44:23.466739   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:23.472831   68864 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:23.472906   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:23.524815   68864 cri.go:89] found id: "7e7158f7ff3922e97ba5ee97108acc1d0ba0350dff2f9aa56c2f71519108791c"
	I0501 03:44:23.524841   68864 cri.go:89] found id: ""
	I0501 03:44:23.524850   68864 logs.go:276] 1 containers: [7e7158f7ff3922e97ba5ee97108acc1d0ba0350dff2f9aa56c2f71519108791c]
	I0501 03:44:23.524902   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:23.532092   68864 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:23.532161   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:23.577262   68864 cri.go:89] found id: ""
	I0501 03:44:23.577292   68864 logs.go:276] 0 containers: []
	W0501 03:44:23.577305   68864 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:23.577312   68864 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0501 03:44:23.577374   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0501 03:44:23.623597   68864 cri.go:89] found id: "f9a8d2f0f9453969b410fb7f880f6cf4c7e6e2f3c17a687583906efe4181dd17"
	I0501 03:44:23.623626   68864 cri.go:89] found id: "aaae36261c5ce1accf3bfb5993be5a256a037a640d5f6f30de8384900dda06b2"
	I0501 03:44:23.623632   68864 cri.go:89] found id: ""
	I0501 03:44:23.623640   68864 logs.go:276] 2 containers: [f9a8d2f0f9453969b410fb7f880f6cf4c7e6e2f3c17a687583906efe4181dd17 aaae36261c5ce1accf3bfb5993be5a256a037a640d5f6f30de8384900dda06b2]
	I0501 03:44:23.623702   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:23.630189   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:23.635673   68864 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:23.635694   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:22.012084   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:24.511736   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:23.203031   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:23.219964   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:23.220043   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:23.264287   69580 cri.go:89] found id: ""
	I0501 03:44:23.264315   69580 logs.go:276] 0 containers: []
	W0501 03:44:23.264323   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:23.264328   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:23.264395   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:23.310337   69580 cri.go:89] found id: ""
	I0501 03:44:23.310366   69580 logs.go:276] 0 containers: []
	W0501 03:44:23.310375   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:23.310383   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:23.310461   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:23.364550   69580 cri.go:89] found id: ""
	I0501 03:44:23.364577   69580 logs.go:276] 0 containers: []
	W0501 03:44:23.364588   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:23.364596   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:23.364676   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:23.412620   69580 cri.go:89] found id: ""
	I0501 03:44:23.412647   69580 logs.go:276] 0 containers: []
	W0501 03:44:23.412657   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:23.412665   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:23.412726   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:23.461447   69580 cri.go:89] found id: ""
	I0501 03:44:23.461477   69580 logs.go:276] 0 containers: []
	W0501 03:44:23.461488   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:23.461496   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:23.461558   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:23.514868   69580 cri.go:89] found id: ""
	I0501 03:44:23.514896   69580 logs.go:276] 0 containers: []
	W0501 03:44:23.514915   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:23.514924   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:23.514984   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:23.559171   69580 cri.go:89] found id: ""
	I0501 03:44:23.559200   69580 logs.go:276] 0 containers: []
	W0501 03:44:23.559210   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:23.559218   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:23.559284   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:23.601713   69580 cri.go:89] found id: ""
	I0501 03:44:23.601740   69580 logs.go:276] 0 containers: []
	W0501 03:44:23.601749   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:23.601760   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:23.601772   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:23.656147   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:23.656187   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:23.673507   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:23.673545   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:23.771824   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:23.771846   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:23.771861   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:23.861128   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:23.861161   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:26.406507   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:26.421836   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:26.421894   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:26.462758   69580 cri.go:89] found id: ""
	I0501 03:44:26.462785   69580 logs.go:276] 0 containers: []
	W0501 03:44:26.462796   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:26.462804   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:26.462860   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:24.346534   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:26.847370   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:24.220047   68864 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:24.220087   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:24.279596   68864 logs.go:123] Gathering logs for kube-apiserver [a96815c49ac45b34c359d7171b30be5c25cee80fae08d947fc004d479693df00] ...
	I0501 03:44:24.279633   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a96815c49ac45b34c359d7171b30be5c25cee80fae08d947fc004d479693df00"
	I0501 03:44:24.336092   68864 logs.go:123] Gathering logs for etcd [d109948ffbbddd2e38484d83d2260690afce3c51e16abff0fe8713e844bfdcb3] ...
	I0501 03:44:24.336128   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d109948ffbbddd2e38484d83d2260690afce3c51e16abff0fe8713e844bfdcb3"
	I0501 03:44:24.396117   68864 logs.go:123] Gathering logs for storage-provisioner [f9a8d2f0f9453969b410fb7f880f6cf4c7e6e2f3c17a687583906efe4181dd17] ...
	I0501 03:44:24.396145   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9a8d2f0f9453969b410fb7f880f6cf4c7e6e2f3c17a687583906efe4181dd17"
	I0501 03:44:24.443608   68864 logs.go:123] Gathering logs for storage-provisioner [aaae36261c5ce1accf3bfb5993be5a256a037a640d5f6f30de8384900dda06b2] ...
	I0501 03:44:24.443644   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aaae36261c5ce1accf3bfb5993be5a256a037a640d5f6f30de8384900dda06b2"
	I0501 03:44:24.499533   68864 logs.go:123] Gathering logs for kube-controller-manager [7e7158f7ff3922e97ba5ee97108acc1d0ba0350dff2f9aa56c2f71519108791c] ...
	I0501 03:44:24.499560   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e7158f7ff3922e97ba5ee97108acc1d0ba0350dff2f9aa56c2f71519108791c"
	I0501 03:44:24.562990   68864 logs.go:123] Gathering logs for container status ...
	I0501 03:44:24.563028   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:24.622630   68864 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:24.622671   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:24.641106   68864 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:24.641145   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0501 03:44:24.781170   68864 logs.go:123] Gathering logs for coredns [e3c74de489af3d78a4b0cf620ed16e1fba7c9ce1c7021a15c5b4bd241b7a934a] ...
	I0501 03:44:24.781203   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3c74de489af3d78a4b0cf620ed16e1fba7c9ce1c7021a15c5b4bd241b7a934a"
	I0501 03:44:24.824616   68864 logs.go:123] Gathering logs for kube-scheduler [1813f35574f4f0398e7d482fe77c5d7f8490726666a277035bda0a5cff80af6e] ...
	I0501 03:44:24.824643   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1813f35574f4f0398e7d482fe77c5d7f8490726666a277035bda0a5cff80af6e"
	I0501 03:44:24.871956   68864 logs.go:123] Gathering logs for kube-proxy [94afdb03c382269209154b2e73edbf200662e89960c248ba775a0ef2b8fbe6b1] ...
	I0501 03:44:24.871992   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94afdb03c382269209154b2e73edbf200662e89960c248ba775a0ef2b8fbe6b1"
	I0501 03:44:27.424582   68864 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:27.447490   68864 api_server.go:72] duration metric: took 4m19.445111196s to wait for apiserver process to appear ...
	I0501 03:44:27.447522   68864 api_server.go:88] waiting for apiserver healthz status ...
	I0501 03:44:27.447555   68864 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:27.447601   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:27.494412   68864 cri.go:89] found id: "a96815c49ac45b34c359d7171b30be5c25cee80fae08d947fc004d479693df00"
	I0501 03:44:27.494437   68864 cri.go:89] found id: ""
	I0501 03:44:27.494445   68864 logs.go:276] 1 containers: [a96815c49ac45b34c359d7171b30be5c25cee80fae08d947fc004d479693df00]
	I0501 03:44:27.494490   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:27.503782   68864 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:27.503853   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:27.550991   68864 cri.go:89] found id: "d109948ffbbddd2e38484d83d2260690afce3c51e16abff0fe8713e844bfdcb3"
	I0501 03:44:27.551018   68864 cri.go:89] found id: ""
	I0501 03:44:27.551026   68864 logs.go:276] 1 containers: [d109948ffbbddd2e38484d83d2260690afce3c51e16abff0fe8713e844bfdcb3]
	I0501 03:44:27.551073   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:27.556919   68864 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:27.556983   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:27.606005   68864 cri.go:89] found id: "e3c74de489af3d78a4b0cf620ed16e1fba7c9ce1c7021a15c5b4bd241b7a934a"
	I0501 03:44:27.606033   68864 cri.go:89] found id: ""
	I0501 03:44:27.606042   68864 logs.go:276] 1 containers: [e3c74de489af3d78a4b0cf620ed16e1fba7c9ce1c7021a15c5b4bd241b7a934a]
	I0501 03:44:27.606100   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:27.611639   68864 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:27.611706   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:27.661151   68864 cri.go:89] found id: "1813f35574f4f0398e7d482fe77c5d7f8490726666a277035bda0a5cff80af6e"
	I0501 03:44:27.661172   68864 cri.go:89] found id: ""
	I0501 03:44:27.661179   68864 logs.go:276] 1 containers: [1813f35574f4f0398e7d482fe77c5d7f8490726666a277035bda0a5cff80af6e]
	I0501 03:44:27.661278   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:27.666443   68864 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:27.666514   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:27.712387   68864 cri.go:89] found id: "94afdb03c382269209154b2e73edbf200662e89960c248ba775a0ef2b8fbe6b1"
	I0501 03:44:27.712416   68864 cri.go:89] found id: ""
	I0501 03:44:27.712424   68864 logs.go:276] 1 containers: [94afdb03c382269209154b2e73edbf200662e89960c248ba775a0ef2b8fbe6b1]
	I0501 03:44:27.712480   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:27.717280   68864 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:27.717342   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:27.767124   68864 cri.go:89] found id: "7e7158f7ff3922e97ba5ee97108acc1d0ba0350dff2f9aa56c2f71519108791c"
	I0501 03:44:27.767154   68864 cri.go:89] found id: ""
	I0501 03:44:27.767163   68864 logs.go:276] 1 containers: [7e7158f7ff3922e97ba5ee97108acc1d0ba0350dff2f9aa56c2f71519108791c]
	I0501 03:44:27.767215   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:27.773112   68864 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:27.773183   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:27.829966   68864 cri.go:89] found id: ""
	I0501 03:44:27.829991   68864 logs.go:276] 0 containers: []
	W0501 03:44:27.829999   68864 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:27.830005   68864 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0501 03:44:27.830056   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0501 03:44:27.873391   68864 cri.go:89] found id: "f9a8d2f0f9453969b410fb7f880f6cf4c7e6e2f3c17a687583906efe4181dd17"
	I0501 03:44:27.873415   68864 cri.go:89] found id: "aaae36261c5ce1accf3bfb5993be5a256a037a640d5f6f30de8384900dda06b2"
	I0501 03:44:27.873419   68864 cri.go:89] found id: ""
	I0501 03:44:27.873426   68864 logs.go:276] 2 containers: [f9a8d2f0f9453969b410fb7f880f6cf4c7e6e2f3c17a687583906efe4181dd17 aaae36261c5ce1accf3bfb5993be5a256a037a640d5f6f30de8384900dda06b2]
	I0501 03:44:27.873473   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:27.878537   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:27.883518   68864 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:27.883543   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0501 03:44:28.012337   68864 logs.go:123] Gathering logs for kube-apiserver [a96815c49ac45b34c359d7171b30be5c25cee80fae08d947fc004d479693df00] ...
	I0501 03:44:28.012377   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a96815c49ac45b34c359d7171b30be5c25cee80fae08d947fc004d479693df00"
	I0501 03:44:28.063686   68864 logs.go:123] Gathering logs for etcd [d109948ffbbddd2e38484d83d2260690afce3c51e16abff0fe8713e844bfdcb3] ...
	I0501 03:44:28.063715   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d109948ffbbddd2e38484d83d2260690afce3c51e16abff0fe8713e844bfdcb3"
	I0501 03:44:28.116507   68864 logs.go:123] Gathering logs for storage-provisioner [aaae36261c5ce1accf3bfb5993be5a256a037a640d5f6f30de8384900dda06b2] ...
	I0501 03:44:28.116535   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aaae36261c5ce1accf3bfb5993be5a256a037a640d5f6f30de8384900dda06b2"
	I0501 03:44:28.165593   68864 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:28.165636   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:28.595278   68864 logs.go:123] Gathering logs for container status ...
	I0501 03:44:28.595333   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:28.645790   68864 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:28.645836   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:28.662952   68864 logs.go:123] Gathering logs for coredns [e3c74de489af3d78a4b0cf620ed16e1fba7c9ce1c7021a15c5b4bd241b7a934a] ...
	I0501 03:44:28.662984   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3c74de489af3d78a4b0cf620ed16e1fba7c9ce1c7021a15c5b4bd241b7a934a"
	I0501 03:44:28.710273   68864 logs.go:123] Gathering logs for kube-scheduler [1813f35574f4f0398e7d482fe77c5d7f8490726666a277035bda0a5cff80af6e] ...
	I0501 03:44:28.710302   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1813f35574f4f0398e7d482fe77c5d7f8490726666a277035bda0a5cff80af6e"
	I0501 03:44:28.761838   68864 logs.go:123] Gathering logs for kube-proxy [94afdb03c382269209154b2e73edbf200662e89960c248ba775a0ef2b8fbe6b1] ...
	I0501 03:44:28.761872   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94afdb03c382269209154b2e73edbf200662e89960c248ba775a0ef2b8fbe6b1"
	I0501 03:44:28.810775   68864 logs.go:123] Gathering logs for kube-controller-manager [7e7158f7ff3922e97ba5ee97108acc1d0ba0350dff2f9aa56c2f71519108791c] ...
	I0501 03:44:28.810808   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e7158f7ff3922e97ba5ee97108acc1d0ba0350dff2f9aa56c2f71519108791c"
	I0501 03:44:27.012119   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:29.510651   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:26.505067   69580 cri.go:89] found id: ""
	I0501 03:44:26.505098   69580 logs.go:276] 0 containers: []
	W0501 03:44:26.505110   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:26.505121   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:26.505182   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:26.544672   69580 cri.go:89] found id: ""
	I0501 03:44:26.544699   69580 logs.go:276] 0 containers: []
	W0501 03:44:26.544711   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:26.544717   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:26.544764   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:26.590579   69580 cri.go:89] found id: ""
	I0501 03:44:26.590605   69580 logs.go:276] 0 containers: []
	W0501 03:44:26.590614   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:26.590620   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:26.590670   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:26.637887   69580 cri.go:89] found id: ""
	I0501 03:44:26.637920   69580 logs.go:276] 0 containers: []
	W0501 03:44:26.637930   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:26.637939   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:26.637998   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:26.686778   69580 cri.go:89] found id: ""
	I0501 03:44:26.686807   69580 logs.go:276] 0 containers: []
	W0501 03:44:26.686815   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:26.686821   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:26.686882   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:26.729020   69580 cri.go:89] found id: ""
	I0501 03:44:26.729045   69580 logs.go:276] 0 containers: []
	W0501 03:44:26.729054   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:26.729060   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:26.729124   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:26.769022   69580 cri.go:89] found id: ""
	I0501 03:44:26.769043   69580 logs.go:276] 0 containers: []
	W0501 03:44:26.769051   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:26.769059   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:26.769073   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:26.854985   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:26.855011   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:26.855024   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:26.937031   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:26.937063   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:27.006267   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:27.006301   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:27.080503   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:27.080545   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:29.598176   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:29.614465   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:29.614523   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:29.662384   69580 cri.go:89] found id: ""
	I0501 03:44:29.662421   69580 logs.go:276] 0 containers: []
	W0501 03:44:29.662433   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:29.662439   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:29.662483   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:29.705262   69580 cri.go:89] found id: ""
	I0501 03:44:29.705286   69580 logs.go:276] 0 containers: []
	W0501 03:44:29.705295   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:29.705300   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:29.705345   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:29.752308   69580 cri.go:89] found id: ""
	I0501 03:44:29.752335   69580 logs.go:276] 0 containers: []
	W0501 03:44:29.752343   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:29.752349   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:29.752403   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:29.802702   69580 cri.go:89] found id: ""
	I0501 03:44:29.802729   69580 logs.go:276] 0 containers: []
	W0501 03:44:29.802741   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:29.802749   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:29.802814   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:29.854112   69580 cri.go:89] found id: ""
	I0501 03:44:29.854138   69580 logs.go:276] 0 containers: []
	W0501 03:44:29.854149   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:29.854157   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:29.854217   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:29.898447   69580 cri.go:89] found id: ""
	I0501 03:44:29.898470   69580 logs.go:276] 0 containers: []
	W0501 03:44:29.898480   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:29.898486   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:29.898545   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:29.938832   69580 cri.go:89] found id: ""
	I0501 03:44:29.938862   69580 logs.go:276] 0 containers: []
	W0501 03:44:29.938873   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:29.938881   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:29.938948   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:29.987697   69580 cri.go:89] found id: ""
	I0501 03:44:29.987721   69580 logs.go:276] 0 containers: []
	W0501 03:44:29.987730   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:29.987738   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:29.987753   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:30.042446   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:30.042473   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:30.095358   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:30.095389   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:30.110745   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:30.110782   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:30.190923   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:30.190951   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:30.190965   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:29.346013   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:31.347513   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:28.868838   68864 logs.go:123] Gathering logs for storage-provisioner [f9a8d2f0f9453969b410fb7f880f6cf4c7e6e2f3c17a687583906efe4181dd17] ...
	I0501 03:44:28.868876   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9a8d2f0f9453969b410fb7f880f6cf4c7e6e2f3c17a687583906efe4181dd17"
	I0501 03:44:28.912436   68864 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:28.912474   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:31.469456   68864 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I0501 03:44:31.478498   68864 api_server.go:279] https://192.168.50.218:8443/healthz returned 200:
	ok
	I0501 03:44:31.479838   68864 api_server.go:141] control plane version: v1.30.0
	I0501 03:44:31.479861   68864 api_server.go:131] duration metric: took 4.032331979s to wait for apiserver health ...
	I0501 03:44:31.479869   68864 system_pods.go:43] waiting for kube-system pods to appear ...
	I0501 03:44:31.479889   68864 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:31.479930   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:31.531068   68864 cri.go:89] found id: "a96815c49ac45b34c359d7171b30be5c25cee80fae08d947fc004d479693df00"
	I0501 03:44:31.531088   68864 cri.go:89] found id: ""
	I0501 03:44:31.531095   68864 logs.go:276] 1 containers: [a96815c49ac45b34c359d7171b30be5c25cee80fae08d947fc004d479693df00]
	I0501 03:44:31.531137   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:31.536216   68864 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:31.536292   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:31.584155   68864 cri.go:89] found id: "d109948ffbbddd2e38484d83d2260690afce3c51e16abff0fe8713e844bfdcb3"
	I0501 03:44:31.584183   68864 cri.go:89] found id: ""
	I0501 03:44:31.584194   68864 logs.go:276] 1 containers: [d109948ffbbddd2e38484d83d2260690afce3c51e16abff0fe8713e844bfdcb3]
	I0501 03:44:31.584250   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:31.589466   68864 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:31.589528   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:31.639449   68864 cri.go:89] found id: "e3c74de489af3d78a4b0cf620ed16e1fba7c9ce1c7021a15c5b4bd241b7a934a"
	I0501 03:44:31.639476   68864 cri.go:89] found id: ""
	I0501 03:44:31.639484   68864 logs.go:276] 1 containers: [e3c74de489af3d78a4b0cf620ed16e1fba7c9ce1c7021a15c5b4bd241b7a934a]
	I0501 03:44:31.639535   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:31.644684   68864 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:31.644750   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:31.702095   68864 cri.go:89] found id: "1813f35574f4f0398e7d482fe77c5d7f8490726666a277035bda0a5cff80af6e"
	I0501 03:44:31.702119   68864 cri.go:89] found id: ""
	I0501 03:44:31.702125   68864 logs.go:276] 1 containers: [1813f35574f4f0398e7d482fe77c5d7f8490726666a277035bda0a5cff80af6e]
	I0501 03:44:31.702173   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:31.707443   68864 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:31.707508   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:31.758582   68864 cri.go:89] found id: "94afdb03c382269209154b2e73edbf200662e89960c248ba775a0ef2b8fbe6b1"
	I0501 03:44:31.758603   68864 cri.go:89] found id: ""
	I0501 03:44:31.758610   68864 logs.go:276] 1 containers: [94afdb03c382269209154b2e73edbf200662e89960c248ba775a0ef2b8fbe6b1]
	I0501 03:44:31.758656   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:31.764261   68864 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:31.764325   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:31.813385   68864 cri.go:89] found id: "7e7158f7ff3922e97ba5ee97108acc1d0ba0350dff2f9aa56c2f71519108791c"
	I0501 03:44:31.813407   68864 cri.go:89] found id: ""
	I0501 03:44:31.813414   68864 logs.go:276] 1 containers: [7e7158f7ff3922e97ba5ee97108acc1d0ba0350dff2f9aa56c2f71519108791c]
	I0501 03:44:31.813457   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:31.818289   68864 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:31.818348   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:31.862788   68864 cri.go:89] found id: ""
	I0501 03:44:31.862814   68864 logs.go:276] 0 containers: []
	W0501 03:44:31.862824   68864 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:31.862832   68864 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0501 03:44:31.862890   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0501 03:44:31.912261   68864 cri.go:89] found id: "f9a8d2f0f9453969b410fb7f880f6cf4c7e6e2f3c17a687583906efe4181dd17"
	I0501 03:44:31.912284   68864 cri.go:89] found id: "aaae36261c5ce1accf3bfb5993be5a256a037a640d5f6f30de8384900dda06b2"
	I0501 03:44:31.912298   68864 cri.go:89] found id: ""
	I0501 03:44:31.912312   68864 logs.go:276] 2 containers: [f9a8d2f0f9453969b410fb7f880f6cf4c7e6e2f3c17a687583906efe4181dd17 aaae36261c5ce1accf3bfb5993be5a256a037a640d5f6f30de8384900dda06b2]
	I0501 03:44:31.912367   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:31.917696   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:31.922432   68864 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:31.922450   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:32.332797   68864 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:32.332836   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:32.396177   68864 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:32.396214   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0501 03:44:32.511915   68864 logs.go:123] Gathering logs for etcd [d109948ffbbddd2e38484d83d2260690afce3c51e16abff0fe8713e844bfdcb3] ...
	I0501 03:44:32.511953   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d109948ffbbddd2e38484d83d2260690afce3c51e16abff0fe8713e844bfdcb3"
	I0501 03:44:32.564447   68864 logs.go:123] Gathering logs for kube-proxy [94afdb03c382269209154b2e73edbf200662e89960c248ba775a0ef2b8fbe6b1] ...
	I0501 03:44:32.564475   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94afdb03c382269209154b2e73edbf200662e89960c248ba775a0ef2b8fbe6b1"
	I0501 03:44:32.610196   68864 logs.go:123] Gathering logs for kube-controller-manager [7e7158f7ff3922e97ba5ee97108acc1d0ba0350dff2f9aa56c2f71519108791c] ...
	I0501 03:44:32.610235   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e7158f7ff3922e97ba5ee97108acc1d0ba0350dff2f9aa56c2f71519108791c"
	I0501 03:44:32.665262   68864 logs.go:123] Gathering logs for storage-provisioner [aaae36261c5ce1accf3bfb5993be5a256a037a640d5f6f30de8384900dda06b2] ...
	I0501 03:44:32.665314   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aaae36261c5ce1accf3bfb5993be5a256a037a640d5f6f30de8384900dda06b2"
	I0501 03:44:32.707346   68864 logs.go:123] Gathering logs for container status ...
	I0501 03:44:32.707377   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:32.757693   68864 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:32.757726   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:32.775720   68864 logs.go:123] Gathering logs for kube-apiserver [a96815c49ac45b34c359d7171b30be5c25cee80fae08d947fc004d479693df00] ...
	I0501 03:44:32.775759   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a96815c49ac45b34c359d7171b30be5c25cee80fae08d947fc004d479693df00"
	I0501 03:44:32.831002   68864 logs.go:123] Gathering logs for coredns [e3c74de489af3d78a4b0cf620ed16e1fba7c9ce1c7021a15c5b4bd241b7a934a] ...
	I0501 03:44:32.831039   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3c74de489af3d78a4b0cf620ed16e1fba7c9ce1c7021a15c5b4bd241b7a934a"
	I0501 03:44:32.878365   68864 logs.go:123] Gathering logs for kube-scheduler [1813f35574f4f0398e7d482fe77c5d7f8490726666a277035bda0a5cff80af6e] ...
	I0501 03:44:32.878416   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1813f35574f4f0398e7d482fe77c5d7f8490726666a277035bda0a5cff80af6e"
	I0501 03:44:32.935752   68864 logs.go:123] Gathering logs for storage-provisioner [f9a8d2f0f9453969b410fb7f880f6cf4c7e6e2f3c17a687583906efe4181dd17] ...
	I0501 03:44:32.935791   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9a8d2f0f9453969b410fb7f880f6cf4c7e6e2f3c17a687583906efe4181dd17"
	I0501 03:44:35.492575   68864 system_pods.go:59] 8 kube-system pods found
	I0501 03:44:35.492603   68864 system_pods.go:61] "coredns-7db6d8ff4d-sjplt" [6701ee8e-0630-4332-b01c-26741ed3a7b7] Running
	I0501 03:44:35.492607   68864 system_pods.go:61] "etcd-embed-certs-277128" [744d7481-dd80-4435-90da-685fc16a76a4] Running
	I0501 03:44:35.492612   68864 system_pods.go:61] "kube-apiserver-embed-certs-277128" [2f1d0f09-b270-4cdc-af3c-e17e3a244bbf] Running
	I0501 03:44:35.492616   68864 system_pods.go:61] "kube-controller-manager-embed-certs-277128" [729aa138-dc0d-4616-97d4-1592453971b8] Running
	I0501 03:44:35.492619   68864 system_pods.go:61] "kube-proxy-phx7x" [56c0381e-c140-4f69-bbe4-09d393db8b23] Running
	I0501 03:44:35.492621   68864 system_pods.go:61] "kube-scheduler-embed-certs-277128" [cc56e9bc-7f21-4f57-a65a-73a10ffd7145] Running
	I0501 03:44:35.492627   68864 system_pods.go:61] "metrics-server-569cc877fc-p8j59" [f8ad6c24-dd5d-4515-9052-c9aca7412b55] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0501 03:44:35.492631   68864 system_pods.go:61] "storage-provisioner" [785be666-58d5-4b9d-92fd-bcacdbdebeb2] Running
	I0501 03:44:35.492638   68864 system_pods.go:74] duration metric: took 4.012764043s to wait for pod list to return data ...
	I0501 03:44:35.492644   68864 default_sa.go:34] waiting for default service account to be created ...
	I0501 03:44:35.494580   68864 default_sa.go:45] found service account: "default"
	I0501 03:44:35.494599   68864 default_sa.go:55] duration metric: took 1.949121ms for default service account to be created ...
	I0501 03:44:35.494606   68864 system_pods.go:116] waiting for k8s-apps to be running ...
	I0501 03:44:35.499484   68864 system_pods.go:86] 8 kube-system pods found
	I0501 03:44:35.499507   68864 system_pods.go:89] "coredns-7db6d8ff4d-sjplt" [6701ee8e-0630-4332-b01c-26741ed3a7b7] Running
	I0501 03:44:35.499514   68864 system_pods.go:89] "etcd-embed-certs-277128" [744d7481-dd80-4435-90da-685fc16a76a4] Running
	I0501 03:44:35.499519   68864 system_pods.go:89] "kube-apiserver-embed-certs-277128" [2f1d0f09-b270-4cdc-af3c-e17e3a244bbf] Running
	I0501 03:44:35.499523   68864 system_pods.go:89] "kube-controller-manager-embed-certs-277128" [729aa138-dc0d-4616-97d4-1592453971b8] Running
	I0501 03:44:35.499526   68864 system_pods.go:89] "kube-proxy-phx7x" [56c0381e-c140-4f69-bbe4-09d393db8b23] Running
	I0501 03:44:35.499531   68864 system_pods.go:89] "kube-scheduler-embed-certs-277128" [cc56e9bc-7f21-4f57-a65a-73a10ffd7145] Running
	I0501 03:44:35.499537   68864 system_pods.go:89] "metrics-server-569cc877fc-p8j59" [f8ad6c24-dd5d-4515-9052-c9aca7412b55] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0501 03:44:35.499544   68864 system_pods.go:89] "storage-provisioner" [785be666-58d5-4b9d-92fd-bcacdbdebeb2] Running
	I0501 03:44:35.499550   68864 system_pods.go:126] duration metric: took 4.939659ms to wait for k8s-apps to be running ...
	I0501 03:44:35.499559   68864 system_svc.go:44] waiting for kubelet service to be running ....
	I0501 03:44:35.499599   68864 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 03:44:35.518471   68864 system_svc.go:56] duration metric: took 18.902776ms WaitForService to wait for kubelet
	I0501 03:44:35.518498   68864 kubeadm.go:576] duration metric: took 4m27.516125606s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0501 03:44:35.518521   68864 node_conditions.go:102] verifying NodePressure condition ...
	I0501 03:44:35.521936   68864 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 03:44:35.521956   68864 node_conditions.go:123] node cpu capacity is 2
	I0501 03:44:35.521966   68864 node_conditions.go:105] duration metric: took 3.439997ms to run NodePressure ...
	I0501 03:44:35.521976   68864 start.go:240] waiting for startup goroutines ...
	I0501 03:44:35.521983   68864 start.go:245] waiting for cluster config update ...
	I0501 03:44:35.521994   68864 start.go:254] writing updated cluster config ...
	I0501 03:44:35.522311   68864 ssh_runner.go:195] Run: rm -f paused
	I0501 03:44:35.572130   68864 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0501 03:44:35.573709   68864 out.go:177] * Done! kubectl is now configured to use "embed-certs-277128" cluster and "default" namespace by default
	I0501 03:44:31.512755   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:34.011892   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:32.772208   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:32.791063   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:32.791145   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:32.856883   69580 cri.go:89] found id: ""
	I0501 03:44:32.856909   69580 logs.go:276] 0 containers: []
	W0501 03:44:32.856920   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:32.856927   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:32.856988   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:32.928590   69580 cri.go:89] found id: ""
	I0501 03:44:32.928625   69580 logs.go:276] 0 containers: []
	W0501 03:44:32.928637   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:32.928644   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:32.928707   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:32.978068   69580 cri.go:89] found id: ""
	I0501 03:44:32.978100   69580 logs.go:276] 0 containers: []
	W0501 03:44:32.978113   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:32.978120   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:32.978184   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:33.018873   69580 cri.go:89] found id: ""
	I0501 03:44:33.018897   69580 logs.go:276] 0 containers: []
	W0501 03:44:33.018905   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:33.018911   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:33.018970   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:33.060633   69580 cri.go:89] found id: ""
	I0501 03:44:33.060661   69580 logs.go:276] 0 containers: []
	W0501 03:44:33.060673   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:33.060681   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:33.060735   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:33.099862   69580 cri.go:89] found id: ""
	I0501 03:44:33.099891   69580 logs.go:276] 0 containers: []
	W0501 03:44:33.099900   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:33.099906   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:33.099953   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:33.139137   69580 cri.go:89] found id: ""
	I0501 03:44:33.139163   69580 logs.go:276] 0 containers: []
	W0501 03:44:33.139171   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:33.139177   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:33.139224   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:33.178800   69580 cri.go:89] found id: ""
	I0501 03:44:33.178826   69580 logs.go:276] 0 containers: []
	W0501 03:44:33.178834   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:33.178842   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:33.178856   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:33.233811   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:33.233842   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:33.248931   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:33.248958   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:33.325530   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:33.325551   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:33.325563   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:33.412071   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:33.412103   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
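Each probe cycle above is the same loop: list containers by name through crictl and, when nothing is found, fall back to host-level logs. A condensed sketch of that probe, with the container names taken from the log and crictl/journalctl assumed to be present on the node:

	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
	  ids=$(sudo crictl ps -a --quiet --name="$name")
	  [ -z "$ids" ] && echo "no container matching $name"
	done
	# with no containers to inspect, only host-level logs remain useful
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	sudo crictl ps -a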
	I0501 03:44:35.954706   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:35.970256   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:35.970333   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:36.010417   69580 cri.go:89] found id: ""
	I0501 03:44:36.010443   69580 logs.go:276] 0 containers: []
	W0501 03:44:36.010452   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:36.010459   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:36.010524   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:36.051571   69580 cri.go:89] found id: ""
	I0501 03:44:36.051600   69580 logs.go:276] 0 containers: []
	W0501 03:44:36.051611   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:36.051619   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:36.051683   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:36.092148   69580 cri.go:89] found id: ""
	I0501 03:44:36.092176   69580 logs.go:276] 0 containers: []
	W0501 03:44:36.092185   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:36.092190   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:36.092247   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:36.136243   69580 cri.go:89] found id: ""
	I0501 03:44:36.136282   69580 logs.go:276] 0 containers: []
	W0501 03:44:36.136290   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:36.136296   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:36.136342   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:36.178154   69580 cri.go:89] found id: ""
	I0501 03:44:36.178183   69580 logs.go:276] 0 containers: []
	W0501 03:44:36.178193   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:36.178200   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:36.178264   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:36.217050   69580 cri.go:89] found id: ""
	I0501 03:44:36.217077   69580 logs.go:276] 0 containers: []
	W0501 03:44:36.217089   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:36.217096   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:36.217172   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:36.260438   69580 cri.go:89] found id: ""
	I0501 03:44:36.260470   69580 logs.go:276] 0 containers: []
	W0501 03:44:36.260481   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:36.260488   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:36.260546   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:36.303410   69580 cri.go:89] found id: ""
	I0501 03:44:36.303436   69580 logs.go:276] 0 containers: []
	W0501 03:44:36.303448   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:36.303459   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:36.303475   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:36.390427   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:36.390468   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:36.433631   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:36.433663   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:33.845863   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:35.847896   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:36.012448   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:38.510722   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:39.005005   69237 pod_ready.go:81] duration metric: took 4m0.000783466s for pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace to be "Ready" ...
	E0501 03:44:39.005036   69237 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace to be "Ready" (will not retry!)
	I0501 03:44:39.005057   69237 pod_ready.go:38] duration metric: took 4m8.020392425s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 03:44:39.005089   69237 kubeadm.go:591] duration metric: took 4m17.941775807s to restartPrimaryControlPlane
	W0501 03:44:39.005175   69237 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0501 03:44:39.005208   69237 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0501 03:44:36.486334   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:36.486365   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:36.502145   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:36.502175   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:36.586733   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:39.087607   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:39.102475   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:39.102552   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:39.141916   69580 cri.go:89] found id: ""
	I0501 03:44:39.141947   69580 logs.go:276] 0 containers: []
	W0501 03:44:39.141958   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:39.141964   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:39.142012   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:39.188472   69580 cri.go:89] found id: ""
	I0501 03:44:39.188501   69580 logs.go:276] 0 containers: []
	W0501 03:44:39.188512   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:39.188520   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:39.188582   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:39.243282   69580 cri.go:89] found id: ""
	I0501 03:44:39.243306   69580 logs.go:276] 0 containers: []
	W0501 03:44:39.243313   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:39.243318   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:39.243377   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:39.288254   69580 cri.go:89] found id: ""
	I0501 03:44:39.288284   69580 logs.go:276] 0 containers: []
	W0501 03:44:39.288296   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:39.288304   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:39.288379   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:39.330846   69580 cri.go:89] found id: ""
	I0501 03:44:39.330879   69580 logs.go:276] 0 containers: []
	W0501 03:44:39.330892   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:39.330901   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:39.330969   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:39.377603   69580 cri.go:89] found id: ""
	I0501 03:44:39.377632   69580 logs.go:276] 0 containers: []
	W0501 03:44:39.377642   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:39.377649   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:39.377710   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:39.421545   69580 cri.go:89] found id: ""
	I0501 03:44:39.421574   69580 logs.go:276] 0 containers: []
	W0501 03:44:39.421585   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:39.421594   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:39.421653   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:39.463394   69580 cri.go:89] found id: ""
	I0501 03:44:39.463424   69580 logs.go:276] 0 containers: []
	W0501 03:44:39.463435   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:39.463447   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:39.463464   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:39.552196   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:39.552218   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:39.552229   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:39.648509   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:39.648549   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:39.702829   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:39.702866   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:39.757712   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:39.757746   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:38.347120   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:40.355310   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:42.847346   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:42.273443   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:42.289788   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:42.289856   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:42.336802   69580 cri.go:89] found id: ""
	I0501 03:44:42.336833   69580 logs.go:276] 0 containers: []
	W0501 03:44:42.336846   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:42.336854   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:42.336919   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:42.387973   69580 cri.go:89] found id: ""
	I0501 03:44:42.388017   69580 logs.go:276] 0 containers: []
	W0501 03:44:42.388028   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:42.388036   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:42.388103   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:42.444866   69580 cri.go:89] found id: ""
	I0501 03:44:42.444895   69580 logs.go:276] 0 containers: []
	W0501 03:44:42.444906   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:42.444914   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:42.444987   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:42.493647   69580 cri.go:89] found id: ""
	I0501 03:44:42.493676   69580 logs.go:276] 0 containers: []
	W0501 03:44:42.493686   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:42.493692   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:42.493748   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:42.535046   69580 cri.go:89] found id: ""
	I0501 03:44:42.535075   69580 logs.go:276] 0 containers: []
	W0501 03:44:42.535086   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:42.535093   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:42.535161   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:42.579453   69580 cri.go:89] found id: ""
	I0501 03:44:42.579486   69580 logs.go:276] 0 containers: []
	W0501 03:44:42.579499   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:42.579507   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:42.579568   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:42.621903   69580 cri.go:89] found id: ""
	I0501 03:44:42.621931   69580 logs.go:276] 0 containers: []
	W0501 03:44:42.621942   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:42.621950   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:42.622009   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:42.666202   69580 cri.go:89] found id: ""
	I0501 03:44:42.666232   69580 logs.go:276] 0 containers: []
	W0501 03:44:42.666243   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:42.666257   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:42.666272   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:42.736032   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:42.736078   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:42.750773   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:42.750799   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:42.836942   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:42.836975   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:42.836997   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:42.930660   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:42.930695   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:45.479619   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:45.495112   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:45.495174   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:45.536693   69580 cri.go:89] found id: ""
	I0501 03:44:45.536722   69580 logs.go:276] 0 containers: []
	W0501 03:44:45.536730   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:45.536737   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:45.536785   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:45.577838   69580 cri.go:89] found id: ""
	I0501 03:44:45.577866   69580 logs.go:276] 0 containers: []
	W0501 03:44:45.577876   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:45.577894   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:45.577958   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:45.615842   69580 cri.go:89] found id: ""
	I0501 03:44:45.615868   69580 logs.go:276] 0 containers: []
	W0501 03:44:45.615879   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:45.615892   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:45.615953   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:45.654948   69580 cri.go:89] found id: ""
	I0501 03:44:45.654972   69580 logs.go:276] 0 containers: []
	W0501 03:44:45.654980   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:45.654986   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:45.655042   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:45.695104   69580 cri.go:89] found id: ""
	I0501 03:44:45.695129   69580 logs.go:276] 0 containers: []
	W0501 03:44:45.695138   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:45.695145   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:45.695212   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:45.737609   69580 cri.go:89] found id: ""
	I0501 03:44:45.737633   69580 logs.go:276] 0 containers: []
	W0501 03:44:45.737641   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:45.737647   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:45.737693   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:45.778655   69580 cri.go:89] found id: ""
	I0501 03:44:45.778685   69580 logs.go:276] 0 containers: []
	W0501 03:44:45.778696   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:45.778702   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:45.778781   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:45.819430   69580 cri.go:89] found id: ""
	I0501 03:44:45.819452   69580 logs.go:276] 0 containers: []
	W0501 03:44:45.819460   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:45.819469   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:45.819485   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:45.875879   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:45.875911   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:45.892035   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:45.892062   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:45.975803   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:45.975836   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:45.975853   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:46.058183   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:46.058222   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:45.345465   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:47.346947   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:48.604991   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:48.621226   69580 kubeadm.go:591] duration metric: took 4m4.888665162s to restartPrimaryControlPlane
	W0501 03:44:48.621351   69580 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0501 03:44:48.621407   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0501 03:44:49.654748   69580 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.033320548s)
	I0501 03:44:49.654838   69580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 03:44:49.671511   69580 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0501 03:44:49.684266   69580 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0501 03:44:49.697079   69580 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0501 03:44:49.697101   69580 kubeadm.go:156] found existing configuration files:
	
	I0501 03:44:49.697159   69580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0501 03:44:49.710609   69580 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0501 03:44:49.710692   69580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0501 03:44:49.723647   69580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0501 03:44:49.736855   69580 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0501 03:44:49.737023   69580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0501 03:44:49.748842   69580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0501 03:44:49.760856   69580 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0501 03:44:49.760923   69580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0501 03:44:49.772685   69580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0501 03:44:49.784035   69580 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0501 03:44:49.784114   69580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
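The cleanup above is mechanical: any expected kubeconfig that is missing or does not reference the control-plane endpoint is removed so that kubeadm init can regenerate it. A sketch of the same logic, with the endpoint taken from this log and the standard kubeadm paths assumed:

	endpoint="https://control-plane.minikube.internal:8443"
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  if ! sudo grep -q "$endpoint" "/etc/kubernetes/$f" 2>/dev/null; then
	    sudo rm -f "/etc/kubernetes/$f"   # stale or missing: let kubeadm init rewrite it
	  fi
	done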
	I0501 03:44:49.795699   69580 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0501 03:44:49.869387   69580 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0501 03:44:49.869481   69580 kubeadm.go:309] [preflight] Running pre-flight checks
	I0501 03:44:50.028858   69580 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0501 03:44:50.028999   69580 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0501 03:44:50.029182   69580 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0501 03:44:50.242773   69580 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0501 03:44:50.244816   69580 out.go:204]   - Generating certificates and keys ...
	I0501 03:44:50.244918   69580 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0501 03:44:50.245008   69580 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0501 03:44:50.245111   69580 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0501 03:44:50.245216   69580 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0501 03:44:50.245331   69580 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0501 03:44:50.245424   69580 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0501 03:44:50.245490   69580 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0501 03:44:50.245556   69580 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0501 03:44:50.245629   69580 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0501 03:44:50.245724   69580 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0501 03:44:50.245784   69580 kubeadm.go:309] [certs] Using the existing "sa" key
	I0501 03:44:50.245877   69580 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0501 03:44:50.501955   69580 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0501 03:44:50.683749   69580 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0501 03:44:50.905745   69580 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0501 03:44:51.005912   69580 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0501 03:44:51.025470   69580 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0501 03:44:51.029411   69580 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0501 03:44:51.029859   69580 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0501 03:44:51.181498   69580 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0501 03:44:51.183222   69580 out.go:204]   - Booting up control plane ...
	I0501 03:44:51.183334   69580 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0501 03:44:51.200394   69580 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0501 03:44:51.201612   69580 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0501 03:44:51.202445   69580 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0501 03:44:51.204681   69580 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
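While kubeadm waits here, the static Pod manifests it just wrote are the main artifact to inspect. A sketch of the checks that apply at this point, assuming the default kubeadm manifest directory:

	ls /etc/kubernetes/manifests/            # kube-apiserver.yaml, kube-controller-manager.yaml, kube-scheduler.yaml, etcd.yaml
	sudo crictl ps -a --name kube-apiserver  # appears once the kubelet picks up the static pods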
	I0501 03:44:49.847629   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:52.345383   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:54.346479   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:56.348560   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:58.846207   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:01.345790   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:03.847746   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:06.346172   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:08.346693   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:10.846797   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:11.778923   69237 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.773690939s)
	I0501 03:45:11.778992   69237 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 03:45:11.796337   69237 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0501 03:45:11.810167   69237 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0501 03:45:11.822425   69237 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0501 03:45:11.822457   69237 kubeadm.go:156] found existing configuration files:
	
	I0501 03:45:11.822514   69237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0501 03:45:11.834539   69237 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0501 03:45:11.834596   69237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0501 03:45:11.848336   69237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0501 03:45:11.860459   69237 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0501 03:45:11.860535   69237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0501 03:45:11.873903   69237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0501 03:45:11.887353   69237 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0501 03:45:11.887427   69237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0501 03:45:11.900805   69237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0501 03:45:11.912512   69237 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0501 03:45:11.912572   69237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0501 03:45:11.924870   69237 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0501 03:45:12.149168   69237 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0501 03:45:13.348651   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:15.847148   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:20.882309   69237 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0501 03:45:20.882382   69237 kubeadm.go:309] [preflight] Running pre-flight checks
	I0501 03:45:20.882472   69237 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0501 03:45:20.882602   69237 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0501 03:45:20.882741   69237 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0501 03:45:20.882836   69237 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0501 03:45:20.884733   69237 out.go:204]   - Generating certificates and keys ...
	I0501 03:45:20.884837   69237 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0501 03:45:20.884894   69237 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0501 03:45:20.884996   69237 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0501 03:45:20.885106   69237 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0501 03:45:20.885209   69237 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0501 03:45:20.885316   69237 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0501 03:45:20.885400   69237 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0501 03:45:20.885483   69237 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0501 03:45:20.885590   69237 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0501 03:45:20.885702   69237 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0501 03:45:20.885759   69237 kubeadm.go:309] [certs] Using the existing "sa" key
	I0501 03:45:20.885838   69237 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0501 03:45:20.885915   69237 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0501 03:45:20.885996   69237 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0501 03:45:20.886074   69237 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0501 03:45:20.886164   69237 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0501 03:45:20.886233   69237 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0501 03:45:20.886362   69237 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0501 03:45:20.886492   69237 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0501 03:45:20.888113   69237 out.go:204]   - Booting up control plane ...
	I0501 03:45:20.888194   69237 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0501 03:45:20.888264   69237 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0501 03:45:20.888329   69237 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0501 03:45:20.888458   69237 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0501 03:45:20.888570   69237 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0501 03:45:20.888627   69237 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0501 03:45:20.888777   69237 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0501 03:45:20.888863   69237 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0501 03:45:20.888964   69237 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 502.867448ms
	I0501 03:45:20.889080   69237 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0501 03:45:20.889177   69237 kubeadm.go:309] [api-check] The API server is healthy after 5.503139909s
	I0501 03:45:20.889335   69237 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0501 03:45:20.889506   69237 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0501 03:45:20.889579   69237 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0501 03:45:20.889817   69237 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-715118 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0501 03:45:20.889868   69237 kubeadm.go:309] [bootstrap-token] Using token: 2vhvw6.gdesonhc2twrukzt
	I0501 03:45:20.892253   69237 out.go:204]   - Configuring RBAC rules ...
	I0501 03:45:20.892395   69237 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0501 03:45:20.892475   69237 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0501 03:45:20.892652   69237 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0501 03:45:20.892812   69237 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0501 03:45:20.892931   69237 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0501 03:45:20.893040   69237 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0501 03:45:20.893201   69237 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0501 03:45:20.893264   69237 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0501 03:45:20.893309   69237 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0501 03:45:20.893319   69237 kubeadm.go:309] 
	I0501 03:45:20.893367   69237 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0501 03:45:20.893373   69237 kubeadm.go:309] 
	I0501 03:45:20.893450   69237 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0501 03:45:20.893458   69237 kubeadm.go:309] 
	I0501 03:45:20.893481   69237 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0501 03:45:20.893544   69237 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0501 03:45:20.893591   69237 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0501 03:45:20.893597   69237 kubeadm.go:309] 
	I0501 03:45:20.893643   69237 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0501 03:45:20.893650   69237 kubeadm.go:309] 
	I0501 03:45:20.893685   69237 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0501 03:45:20.893690   69237 kubeadm.go:309] 
	I0501 03:45:20.893741   69237 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0501 03:45:20.893805   69237 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0501 03:45:20.893858   69237 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0501 03:45:20.893863   69237 kubeadm.go:309] 
	I0501 03:45:20.893946   69237 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0501 03:45:20.894035   69237 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0501 03:45:20.894045   69237 kubeadm.go:309] 
	I0501 03:45:20.894139   69237 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token 2vhvw6.gdesonhc2twrukzt \
	I0501 03:45:20.894267   69237 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:bd94cc6acfb95fe36f70b12c6a1f2980601b8235e7c4dea65cebdf35eb514754 \
	I0501 03:45:20.894294   69237 kubeadm.go:309] 	--control-plane 
	I0501 03:45:20.894301   69237 kubeadm.go:309] 
	I0501 03:45:20.894368   69237 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0501 03:45:20.894375   69237 kubeadm.go:309] 
	I0501 03:45:20.894498   69237 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token 2vhvw6.gdesonhc2twrukzt \
	I0501 03:45:20.894605   69237 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:bd94cc6acfb95fe36f70b12c6a1f2980601b8235e7c4dea65cebdf35eb514754 
	I0501 03:45:20.894616   69237 cni.go:84] Creating CNI manager for ""
	I0501 03:45:20.894623   69237 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0501 03:45:20.896151   69237 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0501 03:45:18.346276   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:20.846958   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:20.897443   69237 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0501 03:45:20.911935   69237 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
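The exact contents of the 496-byte /etc/cni/net.d/1-k8s.conflist are not shown in the log; a bridge conflist of the kind minikube writes for the bridge CNI typically looks roughly like the following (illustrative values only):

	cat <<'EOF' | sudo tee /etc/cni/net.d/1-k8s.conflist
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "ranges": [[ { "subnet": "10.244.0.0/16" } ]] }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF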
	I0501 03:45:20.941109   69237 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0501 03:45:20.941193   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:20.941249   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-715118 minikube.k8s.io/updated_at=2024_05_01T03_45_20_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e minikube.k8s.io/name=default-k8s-diff-port-715118 minikube.k8s.io/primary=true
	I0501 03:45:20.971300   69237 ops.go:34] apiserver oom_adj: -16
	I0501 03:45:21.143744   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:21.643800   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:22.144096   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:22.643852   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:23.144726   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:23.644174   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:24.143735   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:24.643947   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:25.143871   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:25.644557   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:23.345774   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:25.346189   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:27.348026   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:26.144443   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:26.643761   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:27.144691   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:27.644445   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:28.144006   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:28.643904   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:29.144077   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:29.644690   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:30.144692   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:30.644604   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:31.207553   69580 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0501 03:45:31.208328   69580 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0501 03:45:31.208516   69580 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
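The failing check above can be reproduced and diagnosed directly on the node; port 10248 is the kubelet healthz endpoint that kubeadm polls, as the error itself shows. A sketch:

	curl -sSL http://localhost:10248/healthz || echo "kubelet not serving healthz"
	sudo systemctl status kubelet --no-pager -l   # why the kubelet is not running or healthy
	sudo journalctl -u kubelet -n 100 --no-pager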
	I0501 03:45:29.845785   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:32.348020   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:31.144517   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:31.644673   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:32.143793   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:32.644380   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:33.144729   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:33.644415   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:33.752056   69237 kubeadm.go:1107] duration metric: took 12.810918189s to wait for elevateKubeSystemPrivileges
	W0501 03:45:33.752096   69237 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0501 03:45:33.752105   69237 kubeadm.go:393] duration metric: took 5m12.753721662s to StartCluster
	I0501 03:45:33.752124   69237 settings.go:142] acquiring lock: {Name:mkcfe08bcadd45d99c00ed1eaf4e9c226a489fe2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:45:33.752219   69237 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18779-13391/kubeconfig
	I0501 03:45:33.753829   69237 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/kubeconfig: {Name:mk5d1131557f885800877622151c0915337cb23a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:45:33.754094   69237 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.158 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0501 03:45:33.755764   69237 out.go:177] * Verifying Kubernetes components...
	I0501 03:45:33.754178   69237 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0501 03:45:33.754310   69237 config.go:182] Loaded profile config "default-k8s-diff-port-715118": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 03:45:33.757144   69237 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:45:33.757151   69237 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-715118"
	I0501 03:45:33.757172   69237 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-715118"
	I0501 03:45:33.757189   69237 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-715118"
	I0501 03:45:33.757213   69237 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-715118"
	I0501 03:45:33.757221   69237 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-715118"
	W0501 03:45:33.757230   69237 addons.go:243] addon metrics-server should already be in state true
	I0501 03:45:33.757264   69237 host.go:66] Checking if "default-k8s-diff-port-715118" exists ...
	I0501 03:45:33.757180   69237 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-715118"
	W0501 03:45:33.757295   69237 addons.go:243] addon storage-provisioner should already be in state true
	I0501 03:45:33.757355   69237 host.go:66] Checking if "default-k8s-diff-port-715118" exists ...
	I0501 03:45:33.757596   69237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:45:33.757624   69237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:45:33.757630   69237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:45:33.757762   69237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:45:33.757808   69237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:45:33.757662   69237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:45:33.773846   69237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44313
	I0501 03:45:33.774442   69237 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:45:33.775002   69237 main.go:141] libmachine: Using API Version  1
	I0501 03:45:33.775024   69237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:45:33.775438   69237 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:45:33.776086   69237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:45:33.776117   69237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:45:33.777715   69237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37079
	I0501 03:45:33.777835   69237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38097
	I0501 03:45:33.778170   69237 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:45:33.778240   69237 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:45:33.778701   69237 main.go:141] libmachine: Using API Version  1
	I0501 03:45:33.778734   69237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:45:33.778778   69237 main.go:141] libmachine: Using API Version  1
	I0501 03:45:33.778795   69237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:45:33.779142   69237 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:45:33.779150   69237 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:45:33.779427   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetState
	I0501 03:45:33.779721   69237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:45:33.779769   69237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:45:33.783493   69237 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-715118"
	W0501 03:45:33.783519   69237 addons.go:243] addon default-storageclass should already be in state true
	I0501 03:45:33.783551   69237 host.go:66] Checking if "default-k8s-diff-port-715118" exists ...
	I0501 03:45:33.783922   69237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:45:33.783965   69237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:45:33.795373   69237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43491
	I0501 03:45:33.795988   69237 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:45:33.796557   69237 main.go:141] libmachine: Using API Version  1
	I0501 03:45:33.796579   69237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:45:33.796931   69237 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:45:33.797093   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetState
	I0501 03:45:33.797155   69237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46755
	I0501 03:45:33.797806   69237 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:45:33.798383   69237 main.go:141] libmachine: Using API Version  1
	I0501 03:45:33.798442   69237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:45:33.798848   69237 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:45:33.799052   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetState
	I0501 03:45:33.799105   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .DriverName
	I0501 03:45:33.801809   69237 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0501 03:45:33.800600   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .DriverName
	I0501 03:45:33.803752   69237 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0501 03:45:33.803779   69237 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0501 03:45:33.803800   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHHostname
	I0501 03:45:33.805235   69237 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 03:45:33.804172   69237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35709
	I0501 03:45:33.806635   69237 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0501 03:45:33.806651   69237 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0501 03:45:33.806670   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHHostname
	I0501 03:45:33.806889   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:45:33.806967   69237 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:45:33.807292   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:45:33.807426   69237 main.go:141] libmachine: Using API Version  1
	I0501 03:45:33.807428   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHPort
	I0501 03:45:33.807437   69237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:45:33.807449   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:45:33.807578   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:45:33.807680   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHUsername
	I0501 03:45:33.807799   69237 sshutil.go:53] new ssh client: &{IP:192.168.72.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/default-k8s-diff-port-715118/id_rsa Username:docker}
	I0501 03:45:33.808171   69237 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:45:33.808625   69237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:45:33.808660   69237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:45:33.810668   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:45:33.811266   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:45:33.811297   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:45:33.811595   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHPort
	I0501 03:45:33.811794   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:45:33.811983   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHUsername
	I0501 03:45:33.812124   69237 sshutil.go:53] new ssh client: &{IP:192.168.72.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/default-k8s-diff-port-715118/id_rsa Username:docker}
	I0501 03:45:33.825315   69237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35461
	I0501 03:45:33.825891   69237 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:45:33.826334   69237 main.go:141] libmachine: Using API Version  1
	I0501 03:45:33.826351   69237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:45:33.826679   69237 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:45:33.826912   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetState
	I0501 03:45:33.828659   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .DriverName
	I0501 03:45:33.828931   69237 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0501 03:45:33.828946   69237 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0501 03:45:33.828963   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHHostname
	I0501 03:45:33.832151   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:45:33.832632   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:45:33.832656   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:45:33.832863   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHPort
	I0501 03:45:33.833010   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:45:33.833146   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHUsername
	I0501 03:45:33.833302   69237 sshutil.go:53] new ssh client: &{IP:192.168.72.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/default-k8s-diff-port-715118/id_rsa Username:docker}
	I0501 03:45:34.014287   69237 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 03:45:34.047199   69237 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-715118" to be "Ready" ...
	I0501 03:45:34.069000   69237 node_ready.go:49] node "default-k8s-diff-port-715118" has status "Ready":"True"
	I0501 03:45:34.069023   69237 node_ready.go:38] duration metric: took 21.790599ms for node "default-k8s-diff-port-715118" to be "Ready" ...
	I0501 03:45:34.069033   69237 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 03:45:34.077182   69237 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-bg755" in "kube-system" namespace to be "Ready" ...
	I0501 03:45:34.151001   69237 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0501 03:45:34.166362   69237 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0501 03:45:34.166385   69237 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0501 03:45:34.214624   69237 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0501 03:45:34.329110   69237 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0501 03:45:34.329133   69237 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0501 03:45:34.436779   69237 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0501 03:45:34.436804   69237 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0501 03:45:34.611410   69237 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0501 03:45:34.698997   69237 main.go:141] libmachine: Making call to close driver server
	I0501 03:45:34.699026   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .Close
	I0501 03:45:34.699321   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | Closing plugin on server side
	I0501 03:45:34.699389   69237 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:45:34.699408   69237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:45:34.699423   69237 main.go:141] libmachine: Making call to close driver server
	I0501 03:45:34.699437   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .Close
	I0501 03:45:34.699684   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | Closing plugin on server side
	I0501 03:45:34.699726   69237 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:45:34.699734   69237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:45:34.708143   69237 main.go:141] libmachine: Making call to close driver server
	I0501 03:45:34.708171   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .Close
	I0501 03:45:34.708438   69237 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:45:34.708457   69237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:45:34.708463   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | Closing plugin on server side
	I0501 03:45:35.510225   69237 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.295555956s)
	I0501 03:45:35.510274   69237 main.go:141] libmachine: Making call to close driver server
	I0501 03:45:35.510286   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .Close
	I0501 03:45:35.510700   69237 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:45:35.510721   69237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:45:35.510732   69237 main.go:141] libmachine: Making call to close driver server
	I0501 03:45:35.510728   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | Closing plugin on server side
	I0501 03:45:35.510740   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .Close
	I0501 03:45:35.510961   69237 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:45:35.510979   69237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:45:35.510983   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | Closing plugin on server side
	I0501 03:45:35.845633   69237 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.234178466s)
	I0501 03:45:35.845691   69237 main.go:141] libmachine: Making call to close driver server
	I0501 03:45:35.845708   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .Close
	I0501 03:45:35.845997   69237 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:45:35.846017   69237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:45:35.846027   69237 main.go:141] libmachine: Making call to close driver server
	I0501 03:45:35.846026   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | Closing plugin on server side
	I0501 03:45:35.846036   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .Close
	I0501 03:45:35.847736   69237 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:45:35.847767   69237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:45:35.847781   69237 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-715118"
	I0501 03:45:35.847786   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | Closing plugin on server side
	I0501 03:45:35.849438   69237 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0501 03:45:36.209029   69580 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0501 03:45:36.209300   69580 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0501 03:45:34.848699   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:37.338985   68640 pod_ready.go:81] duration metric: took 4m0.000306796s for pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace to be "Ready" ...
	E0501 03:45:37.339010   68640 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace to be "Ready" (will not retry!)
	I0501 03:45:37.339029   68640 pod_ready.go:38] duration metric: took 4m9.062496127s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 03:45:37.339089   68640 kubeadm.go:591] duration metric: took 4m19.268153875s to restartPrimaryControlPlane
	W0501 03:45:37.339148   68640 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0501 03:45:37.339176   68640 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0501 03:45:35.851156   69237 addons.go:505] duration metric: took 2.096980743s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
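With the metrics-server addon applied, the pod_ready waits in this log block on the metrics-server pod leaving Pending. A hedged way to inspect that state from the host: the deployment name is inferred from the pod name metrics-server-569cc877fc-xwxx9, and the k8s-app=metrics-server label selector is an assumption about the addon manifests rather than something the log shows.

    # Deployment status for the addon (name inferred from the pod's replicaset prefix)
    kubectl --context default-k8s-diff-port-715118 -n kube-system get deploy metrics-server
    # Pod-level detail; the label selector is assumed, not taken from the log
    kubectl --context default-k8s-diff-port-715118 -n kube-system describe pods -l k8s-app=metrics-server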
	I0501 03:45:36.085176   69237 pod_ready.go:102] pod "coredns-7db6d8ff4d-bg755" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:36.585390   69237 pod_ready.go:92] pod "coredns-7db6d8ff4d-bg755" in "kube-system" namespace has status "Ready":"True"
	I0501 03:45:36.585415   69237 pod_ready.go:81] duration metric: took 2.508204204s for pod "coredns-7db6d8ff4d-bg755" in "kube-system" namespace to be "Ready" ...
	I0501 03:45:36.585428   69237 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-mp6f5" in "kube-system" namespace to be "Ready" ...
	I0501 03:45:36.594575   69237 pod_ready.go:92] pod "coredns-7db6d8ff4d-mp6f5" in "kube-system" namespace has status "Ready":"True"
	I0501 03:45:36.594600   69237 pod_ready.go:81] duration metric: took 9.163923ms for pod "coredns-7db6d8ff4d-mp6f5" in "kube-system" namespace to be "Ready" ...
	I0501 03:45:36.594613   69237 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	I0501 03:45:36.606784   69237 pod_ready.go:92] pod "etcd-default-k8s-diff-port-715118" in "kube-system" namespace has status "Ready":"True"
	I0501 03:45:36.606807   69237 pod_ready.go:81] duration metric: took 12.186129ms for pod "etcd-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	I0501 03:45:36.606819   69237 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	I0501 03:45:36.617373   69237 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-715118" in "kube-system" namespace has status "Ready":"True"
	I0501 03:45:36.617394   69237 pod_ready.go:81] duration metric: took 10.566278ms for pod "kube-apiserver-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	I0501 03:45:36.617404   69237 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	I0501 03:45:36.622441   69237 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-715118" in "kube-system" namespace has status "Ready":"True"
	I0501 03:45:36.622460   69237 pod_ready.go:81] duration metric: took 5.049948ms for pod "kube-controller-manager-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	I0501 03:45:36.622469   69237 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2knrp" in "kube-system" namespace to be "Ready" ...
	I0501 03:45:36.981490   69237 pod_ready.go:92] pod "kube-proxy-2knrp" in "kube-system" namespace has status "Ready":"True"
	I0501 03:45:36.981513   69237 pod_ready.go:81] duration metric: took 359.038927ms for pod "kube-proxy-2knrp" in "kube-system" namespace to be "Ready" ...
	I0501 03:45:36.981523   69237 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	I0501 03:45:37.381970   69237 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-715118" in "kube-system" namespace has status "Ready":"True"
	I0501 03:45:37.381999   69237 pod_ready.go:81] duration metric: took 400.468372ms for pod "kube-scheduler-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	I0501 03:45:37.382011   69237 pod_ready.go:38] duration metric: took 3.312967983s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 03:45:37.382028   69237 api_server.go:52] waiting for apiserver process to appear ...
	I0501 03:45:37.382091   69237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:45:37.401961   69237 api_server.go:72] duration metric: took 3.647829991s to wait for apiserver process to appear ...
	I0501 03:45:37.401992   69237 api_server.go:88] waiting for apiserver healthz status ...
	I0501 03:45:37.402016   69237 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8444/healthz ...
	I0501 03:45:37.407177   69237 api_server.go:279] https://192.168.72.158:8444/healthz returned 200:
	ok
	I0501 03:45:37.408020   69237 api_server.go:141] control plane version: v1.30.0
	I0501 03:45:37.408037   69237 api_server.go:131] duration metric: took 6.036621ms to wait for apiserver health ...
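The healthz probe above goes to the apiserver on port 8444 rather than the default 8443. The same check can be reproduced from the host; the kubectl form uses only the context this run just wrote, while the direct curl relies on the usual anonymous access to /healthz (an assumption about default RBAC, not something the log verifies):

    # Ask the apiserver for its health through the freshly written kubeconfig context
    kubectl --context default-k8s-diff-port-715118 get --raw /healthz
    # Or hit the endpoint directly, skipping certificate verification
    curl -k https://192.168.72.158:8444/healthz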
	I0501 03:45:37.408046   69237 system_pods.go:43] waiting for kube-system pods to appear ...
	I0501 03:45:37.586052   69237 system_pods.go:59] 9 kube-system pods found
	I0501 03:45:37.586081   69237 system_pods.go:61] "coredns-7db6d8ff4d-bg755" [884d489a-bc1e-442c-8e00-4616f983d3e9] Running
	I0501 03:45:37.586085   69237 system_pods.go:61] "coredns-7db6d8ff4d-mp6f5" [4c8550d0-0029-48f1-a892-1800f6639c75] Running
	I0501 03:45:37.586090   69237 system_pods.go:61] "etcd-default-k8s-diff-port-715118" [12be9bec-1d84-49ee-898c-499ff75a8026] Running
	I0501 03:45:37.586094   69237 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-715118" [ae9a476b-03cf-4d4d-9990-5e760db82e60] Running
	I0501 03:45:37.586098   69237 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-715118" [542bbe50-58b6-40fb-b81b-0cc2444a3401] Running
	I0501 03:45:37.586101   69237 system_pods.go:61] "kube-proxy-2knrp" [cf1406ff-8a6e-49bb-b180-1e72f4b54fbf] Running
	I0501 03:45:37.586104   69237 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-715118" [d24f02a2-67a9-4f28-9acc-445e0e74a68d] Running
	I0501 03:45:37.586109   69237 system_pods.go:61] "metrics-server-569cc877fc-xwxx9" [a66f5df4-355c-47f0-8b6e-da29e1c4394e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0501 03:45:37.586113   69237 system_pods.go:61] "storage-provisioner" [debb3a59-143a-46d3-87da-c2403e264861] Running
	I0501 03:45:37.586123   69237 system_pods.go:74] duration metric: took 178.07045ms to wait for pod list to return data ...
	I0501 03:45:37.586132   69237 default_sa.go:34] waiting for default service account to be created ...
	I0501 03:45:37.780696   69237 default_sa.go:45] found service account: "default"
	I0501 03:45:37.780720   69237 default_sa.go:55] duration metric: took 194.582743ms for default service account to be created ...
	I0501 03:45:37.780728   69237 system_pods.go:116] waiting for k8s-apps to be running ...
	I0501 03:45:37.985342   69237 system_pods.go:86] 9 kube-system pods found
	I0501 03:45:37.985368   69237 system_pods.go:89] "coredns-7db6d8ff4d-bg755" [884d489a-bc1e-442c-8e00-4616f983d3e9] Running
	I0501 03:45:37.985374   69237 system_pods.go:89] "coredns-7db6d8ff4d-mp6f5" [4c8550d0-0029-48f1-a892-1800f6639c75] Running
	I0501 03:45:37.985378   69237 system_pods.go:89] "etcd-default-k8s-diff-port-715118" [12be9bec-1d84-49ee-898c-499ff75a8026] Running
	I0501 03:45:37.985383   69237 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-715118" [ae9a476b-03cf-4d4d-9990-5e760db82e60] Running
	I0501 03:45:37.985387   69237 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-715118" [542bbe50-58b6-40fb-b81b-0cc2444a3401] Running
	I0501 03:45:37.985391   69237 system_pods.go:89] "kube-proxy-2knrp" [cf1406ff-8a6e-49bb-b180-1e72f4b54fbf] Running
	I0501 03:45:37.985395   69237 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-715118" [d24f02a2-67a9-4f28-9acc-445e0e74a68d] Running
	I0501 03:45:37.985401   69237 system_pods.go:89] "metrics-server-569cc877fc-xwxx9" [a66f5df4-355c-47f0-8b6e-da29e1c4394e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0501 03:45:37.985405   69237 system_pods.go:89] "storage-provisioner" [debb3a59-143a-46d3-87da-c2403e264861] Running
	I0501 03:45:37.985412   69237 system_pods.go:126] duration metric: took 204.679545ms to wait for k8s-apps to be running ...
	I0501 03:45:37.985418   69237 system_svc.go:44] waiting for kubelet service to be running ....
	I0501 03:45:37.985463   69237 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 03:45:38.002421   69237 system_svc.go:56] duration metric: took 16.992346ms WaitForService to wait for kubelet
	I0501 03:45:38.002458   69237 kubeadm.go:576] duration metric: took 4.248332952s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0501 03:45:38.002477   69237 node_conditions.go:102] verifying NodePressure condition ...
	I0501 03:45:38.181465   69237 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 03:45:38.181496   69237 node_conditions.go:123] node cpu capacity is 2
	I0501 03:45:38.181510   69237 node_conditions.go:105] duration metric: took 179.027834ms to run NodePressure ...
	I0501 03:45:38.181524   69237 start.go:240] waiting for startup goroutines ...
	I0501 03:45:38.181534   69237 start.go:245] waiting for cluster config update ...
	I0501 03:45:38.181547   69237 start.go:254] writing updated cluster config ...
	I0501 03:45:38.181810   69237 ssh_runner.go:195] Run: rm -f paused
	I0501 03:45:38.244075   69237 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0501 03:45:38.246261   69237 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-715118" cluster and "default" namespace by default
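Once the profile reports Done, a quick sanity pass from the host mirrors what the harness just waited for (plain kubectl; the context name is taken from the line above):

    kubectl --context default-k8s-diff-port-715118 get nodes -o wide
    kubectl --context default-k8s-diff-port-715118 -n kube-system get pods -o wide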
	I0501 03:45:46.209837   69580 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0501 03:45:46.210120   69580 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0501 03:46:06.211471   69580 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0501 03:46:06.211673   69580 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0501 03:46:09.967666   68640 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.628454657s)
	I0501 03:46:09.967737   68640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 03:46:09.985802   68640 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0501 03:46:09.996494   68640 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0501 03:46:10.006956   68640 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0501 03:46:10.006979   68640 kubeadm.go:156] found existing configuration files:
	
	I0501 03:46:10.007025   68640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0501 03:46:10.017112   68640 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0501 03:46:10.017174   68640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0501 03:46:10.027747   68640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0501 03:46:10.037853   68640 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0501 03:46:10.037910   68640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0501 03:46:10.048023   68640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0501 03:46:10.057354   68640 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0501 03:46:10.057408   68640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0501 03:46:10.067352   68640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0501 03:46:10.076696   68640 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0501 03:46:10.076741   68640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
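The four grep/rm pairs above are one pattern: drop any kubeconfig under /etc/kubernetes that does not already point at this cluster's endpoint. Condensed into a single loop, with the endpoint string and file list copied from the log (-qs only silences grep for missing files):

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # Keep the file only if it already targets control-plane.minikube.internal:8443
      if ! sudo grep -qs 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f"; then
        sudo rm -f "/etc/kubernetes/$f"
      fi
    done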
	I0501 03:46:10.086799   68640 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0501 03:46:10.150816   68640 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0501 03:46:10.150871   68640 kubeadm.go:309] [preflight] Running pre-flight checks
	I0501 03:46:10.325430   68640 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0501 03:46:10.325546   68640 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0501 03:46:10.325669   68640 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0501 03:46:10.581934   68640 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0501 03:46:10.585119   68640 out.go:204]   - Generating certificates and keys ...
	I0501 03:46:10.585221   68640 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0501 03:46:10.585319   68640 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0501 03:46:10.585416   68640 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0501 03:46:10.585522   68640 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0501 03:46:10.585620   68640 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0501 03:46:10.585695   68640 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0501 03:46:10.585781   68640 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0501 03:46:10.585861   68640 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0501 03:46:10.585959   68640 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0501 03:46:10.586064   68640 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0501 03:46:10.586116   68640 kubeadm.go:309] [certs] Using the existing "sa" key
	I0501 03:46:10.586208   68640 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0501 03:46:10.789482   68640 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0501 03:46:10.991219   68640 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0501 03:46:11.194897   68640 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0501 03:46:11.411926   68640 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0501 03:46:11.994791   68640 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0501 03:46:11.995468   68640 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0501 03:46:11.998463   68640 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0501 03:46:12.000394   68640 out.go:204]   - Booting up control plane ...
	I0501 03:46:12.000521   68640 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0501 03:46:12.000632   68640 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0501 03:46:12.000721   68640 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0501 03:46:12.022371   68640 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0501 03:46:12.023628   68640 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0501 03:46:12.023709   68640 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0501 03:46:12.178475   68640 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0501 03:46:12.178615   68640 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0501 03:46:12.680307   68640 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 502.179909ms
	I0501 03:46:12.680409   68640 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0501 03:46:18.182830   68640 kubeadm.go:309] [api-check] The API server is healthy after 5.502227274s
	I0501 03:46:18.197822   68640 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0501 03:46:18.217282   68640 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0501 03:46:18.247591   68640 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0501 03:46:18.247833   68640 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-892672 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0501 03:46:18.259687   68640 kubeadm.go:309] [bootstrap-token] Using token: 8rc6kt.ele1oeavg6hezahw
	I0501 03:46:18.261204   68640 out.go:204]   - Configuring RBAC rules ...
	I0501 03:46:18.261333   68640 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0501 03:46:18.272461   68640 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0501 03:46:18.284615   68640 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0501 03:46:18.288686   68640 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0501 03:46:18.292005   68640 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0501 03:46:18.295772   68640 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0501 03:46:18.591035   68640 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0501 03:46:19.028299   68640 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0501 03:46:19.598192   68640 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0501 03:46:19.598219   68640 kubeadm.go:309] 
	I0501 03:46:19.598323   68640 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0501 03:46:19.598337   68640 kubeadm.go:309] 
	I0501 03:46:19.598490   68640 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0501 03:46:19.598514   68640 kubeadm.go:309] 
	I0501 03:46:19.598542   68640 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0501 03:46:19.598604   68640 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0501 03:46:19.598648   68640 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0501 03:46:19.598673   68640 kubeadm.go:309] 
	I0501 03:46:19.598771   68640 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0501 03:46:19.598784   68640 kubeadm.go:309] 
	I0501 03:46:19.598850   68640 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0501 03:46:19.598860   68640 kubeadm.go:309] 
	I0501 03:46:19.598963   68640 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0501 03:46:19.599069   68640 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0501 03:46:19.599167   68640 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0501 03:46:19.599183   68640 kubeadm.go:309] 
	I0501 03:46:19.599283   68640 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0501 03:46:19.599389   68640 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0501 03:46:19.599400   68640 kubeadm.go:309] 
	I0501 03:46:19.599500   68640 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 8rc6kt.ele1oeavg6hezahw \
	I0501 03:46:19.599626   68640 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:bd94cc6acfb95fe36f70b12c6a1f2980601b8235e7c4dea65cebdf35eb514754 \
	I0501 03:46:19.599666   68640 kubeadm.go:309] 	--control-plane 
	I0501 03:46:19.599676   68640 kubeadm.go:309] 
	I0501 03:46:19.599779   68640 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0501 03:46:19.599807   68640 kubeadm.go:309] 
	I0501 03:46:19.599931   68640 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 8rc6kt.ele1oeavg6hezahw \
	I0501 03:46:19.600079   68640 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:bd94cc6acfb95fe36f70b12c6a1f2980601b8235e7c4dea65cebdf35eb514754 
	I0501 03:46:19.600763   68640 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
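The only warning kubeadm leaves behind is the disabled kubelet unit. minikube manages the service itself, so this is merely the manual equivalent of the suggestion in the warning, run on the node:

    # Enable the unit so the kubelet survives a reboot (add --now to also start it immediately)
    sudo systemctl enable kubelet.service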
	I0501 03:46:19.600786   68640 cni.go:84] Creating CNI manager for ""
	I0501 03:46:19.600792   68640 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0501 03:46:19.602473   68640 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0501 03:46:19.603816   68640 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0501 03:46:19.621706   68640 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
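Only the size of the bridge CNI config (496 bytes) is logged, not its contents, so they are not reproduced here. To read the file back off the node, the same minikube binary used elsewhere in this report can run a command over the profile's SSH access:

    out/minikube-linux-amd64 -p no-preload-892672 ssh -- sudo cat /etc/cni/net.d/1-k8s.conflist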
	I0501 03:46:19.649643   68640 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0501 03:46:19.649762   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:19.649787   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-892672 minikube.k8s.io/updated_at=2024_05_01T03_46_19_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e minikube.k8s.io/name=no-preload-892672 minikube.k8s.io/primary=true
	I0501 03:46:19.892482   68640 ops.go:34] apiserver oom_adj: -16
	I0501 03:46:19.892631   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:20.393436   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:20.893412   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:21.393634   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:21.893273   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:22.393031   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:22.893498   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:23.393599   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:23.893024   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:24.393544   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:24.893431   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:25.393290   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:25.892718   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:26.392928   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:26.893101   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:27.393045   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:27.892722   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:28.393102   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:28.892871   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:29.392650   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:29.893034   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:30.393561   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:30.893661   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:31.393235   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:31.892889   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:32.393263   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:32.893427   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:33.046965   68640 kubeadm.go:1107] duration metric: took 13.397277159s to wait for elevateKubeSystemPrivileges
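The half-second polling loop above (the repeated "kubectl get sa default" runs) appears to be waiting for the default ServiceAccount to exist before the privilege elevation is treated as complete. The same wait, written out as a sketch to run on the node, with the binary and kubeconfig paths copied from the log:

    # Retry until the default ServiceAccount is visible, roughly every 500ms as in the log
    until sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done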
	W0501 03:46:33.047010   68640 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0501 03:46:33.047020   68640 kubeadm.go:393] duration metric: took 5m15.038324633s to StartCluster
	I0501 03:46:33.047042   68640 settings.go:142] acquiring lock: {Name:mkcfe08bcadd45d99c00ed1eaf4e9c226a489fe2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:46:33.047126   68640 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18779-13391/kubeconfig
	I0501 03:46:33.048731   68640 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/kubeconfig: {Name:mk5d1131557f885800877622151c0915337cb23a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:46:33.048988   68640 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.144 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0501 03:46:33.050376   68640 out.go:177] * Verifying Kubernetes components...
	I0501 03:46:33.049030   68640 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0501 03:46:33.049253   68640 config.go:182] Loaded profile config "no-preload-892672": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 03:46:33.051595   68640 addons.go:69] Setting storage-provisioner=true in profile "no-preload-892672"
	I0501 03:46:33.051616   68640 addons.go:69] Setting metrics-server=true in profile "no-preload-892672"
	I0501 03:46:33.051639   68640 addons.go:234] Setting addon storage-provisioner=true in "no-preload-892672"
	I0501 03:46:33.051644   68640 addons.go:234] Setting addon metrics-server=true in "no-preload-892672"
	W0501 03:46:33.051649   68640 addons.go:243] addon storage-provisioner should already be in state true
	W0501 03:46:33.051653   68640 addons.go:243] addon metrics-server should already be in state true
	I0501 03:46:33.051675   68640 host.go:66] Checking if "no-preload-892672" exists ...
	I0501 03:46:33.051680   68640 host.go:66] Checking if "no-preload-892672" exists ...
	I0501 03:46:33.051599   68640 addons.go:69] Setting default-storageclass=true in profile "no-preload-892672"
	I0501 03:46:33.051760   68640 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-892672"
	I0501 03:46:33.051600   68640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:46:33.052016   68640 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:46:33.052047   68640 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:46:33.052064   68640 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:46:33.052095   68640 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:46:33.052110   68640 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:46:33.052135   68640 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:46:33.068515   68640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46407
	I0501 03:46:33.069115   68640 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:46:33.069702   68640 main.go:141] libmachine: Using API Version  1
	I0501 03:46:33.069728   68640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:46:33.070085   68640 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:46:33.070731   68640 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:46:33.070763   68640 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:46:33.072166   68640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46609
	I0501 03:46:33.072179   68640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46109
	I0501 03:46:33.072632   68640 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:46:33.072770   68640 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:46:33.073161   68640 main.go:141] libmachine: Using API Version  1
	I0501 03:46:33.073180   68640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:46:33.073318   68640 main.go:141] libmachine: Using API Version  1
	I0501 03:46:33.073333   68640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:46:33.073467   68640 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:46:33.073893   68640 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:46:33.074056   68640 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:46:33.074065   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetState
	I0501 03:46:33.074092   68640 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:46:33.077976   68640 addons.go:234] Setting addon default-storageclass=true in "no-preload-892672"
	W0501 03:46:33.077997   68640 addons.go:243] addon default-storageclass should already be in state true
	I0501 03:46:33.078110   68640 host.go:66] Checking if "no-preload-892672" exists ...
	I0501 03:46:33.078535   68640 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:46:33.078566   68640 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:46:33.092605   68640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39707
	I0501 03:46:33.092996   68640 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:46:33.093578   68640 main.go:141] libmachine: Using API Version  1
	I0501 03:46:33.093597   68640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:46:33.093602   68640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36671
	I0501 03:46:33.093778   68640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34727
	I0501 03:46:33.093893   68640 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:46:33.094117   68640 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:46:33.094169   68640 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:46:33.094250   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetState
	I0501 03:46:33.094577   68640 main.go:141] libmachine: Using API Version  1
	I0501 03:46:33.094602   68640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:46:33.094986   68640 main.go:141] libmachine: Using API Version  1
	I0501 03:46:33.095004   68640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:46:33.095062   68640 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:46:33.095389   68640 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:46:33.096401   68640 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:46:33.096423   68640 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:46:33.096665   68640 main.go:141] libmachine: (no-preload-892672) Calling .DriverName
	I0501 03:46:33.096678   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetState
	I0501 03:46:33.098465   68640 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 03:46:33.099842   68640 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0501 03:46:33.099861   68640 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0501 03:46:33.099879   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:46:33.098734   68640 main.go:141] libmachine: (no-preload-892672) Calling .DriverName
	I0501 03:46:33.101305   68640 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0501 03:46:33.102491   68640 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0501 03:46:33.102512   68640 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0501 03:46:33.102531   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:46:33.103006   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:46:33.103617   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:46:33.103641   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:46:33.103799   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHPort
	I0501 03:46:33.103977   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:46:33.104143   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHUsername
	I0501 03:46:33.104272   68640 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/no-preload-892672/id_rsa Username:docker}
	I0501 03:46:33.105452   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:46:33.105795   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:46:33.105821   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:46:33.106142   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHPort
	I0501 03:46:33.106290   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:46:33.106392   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHUsername
	I0501 03:46:33.106511   68640 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/no-preload-892672/id_rsa Username:docker}
	I0501 03:46:33.113012   68640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42615
	I0501 03:46:33.113365   68640 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:46:33.113813   68640 main.go:141] libmachine: Using API Version  1
	I0501 03:46:33.113834   68640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:46:33.114127   68640 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:46:33.114304   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetState
	I0501 03:46:33.115731   68640 main.go:141] libmachine: (no-preload-892672) Calling .DriverName
	I0501 03:46:33.115997   68640 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0501 03:46:33.116010   68640 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0501 03:46:33.116023   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:46:33.119272   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:46:33.119644   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:46:33.119661   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:46:33.119845   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHPort
	I0501 03:46:33.120223   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:46:33.120358   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHUsername
	I0501 03:46:33.120449   68640 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/no-preload-892672/id_rsa Username:docker}
	I0501 03:46:33.296711   68640 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 03:46:33.342215   68640 node_ready.go:35] waiting up to 6m0s for node "no-preload-892672" to be "Ready" ...
	I0501 03:46:33.355677   68640 node_ready.go:49] node "no-preload-892672" has status "Ready":"True"
	I0501 03:46:33.355707   68640 node_ready.go:38] duration metric: took 13.392381ms for node "no-preload-892672" to be "Ready" ...
	I0501 03:46:33.355718   68640 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 03:46:33.367706   68640 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-57k52" in "kube-system" namespace to be "Ready" ...
	I0501 03:46:33.413444   68640 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0501 03:46:33.418869   68640 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0501 03:46:33.439284   68640 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0501 03:46:33.439318   68640 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0501 03:46:33.512744   68640 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0501 03:46:33.512768   68640 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0501 03:46:33.594777   68640 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0501 03:46:33.594798   68640 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0501 03:46:33.658506   68640 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0501 03:46:34.013890   68640 main.go:141] libmachine: Making call to close driver server
	I0501 03:46:34.013919   68640 main.go:141] libmachine: (no-preload-892672) Calling .Close
	I0501 03:46:34.014023   68640 main.go:141] libmachine: Making call to close driver server
	I0501 03:46:34.014056   68640 main.go:141] libmachine: (no-preload-892672) Calling .Close
	I0501 03:46:34.014250   68640 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:46:34.014284   68640 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:46:34.014297   68640 main.go:141] libmachine: Making call to close driver server
	I0501 03:46:34.014306   68640 main.go:141] libmachine: (no-preload-892672) Calling .Close
	I0501 03:46:34.014353   68640 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:46:34.014370   68640 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:46:34.014383   68640 main.go:141] libmachine: Making call to close driver server
	I0501 03:46:34.014393   68640 main.go:141] libmachine: (no-preload-892672) Calling .Close
	I0501 03:46:34.014642   68640 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:46:34.014664   68640 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:46:34.016263   68640 main.go:141] libmachine: (no-preload-892672) DBG | Closing plugin on server side
	I0501 03:46:34.016263   68640 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:46:34.016288   68640 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:46:34.031961   68640 main.go:141] libmachine: Making call to close driver server
	I0501 03:46:34.031996   68640 main.go:141] libmachine: (no-preload-892672) Calling .Close
	I0501 03:46:34.032303   68640 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:46:34.032324   68640 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:46:34.260154   68640 main.go:141] libmachine: Making call to close driver server
	I0501 03:46:34.260180   68640 main.go:141] libmachine: (no-preload-892672) Calling .Close
	I0501 03:46:34.260600   68640 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:46:34.260629   68640 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:46:34.260641   68640 main.go:141] libmachine: Making call to close driver server
	I0501 03:46:34.260650   68640 main.go:141] libmachine: (no-preload-892672) Calling .Close
	I0501 03:46:34.260876   68640 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:46:34.260888   68640 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:46:34.260899   68640 addons.go:470] Verifying addon metrics-server=true in "no-preload-892672"
	I0501 03:46:34.262520   68640 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0501 03:46:34.264176   68640 addons.go:505] duration metric: took 1.215147486s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
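	A short sketch of how the just-enabled metrics-server addon could be checked from the host (the context name comes from the log above; these commands are not part of the captured output, and 'kubectl top' only returns data once the metrics-server pod, still Pending further down in this log, becomes Ready):

	  # Wait for the addon Deployment created by the apply step logged above
	  kubectl --context no-preload-892672 -n kube-system rollout status deployment/metrics-server

	  # Once it is Ready, the metrics API starts serving node/pod usage
	  kubectl --context no-preload-892672 top nodes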
	I0501 03:46:35.384910   68640 pod_ready.go:102] pod "coredns-7db6d8ff4d-57k52" in "kube-system" namespace has status "Ready":"False"
	I0501 03:46:36.377298   68640 pod_ready.go:92] pod "coredns-7db6d8ff4d-57k52" in "kube-system" namespace has status "Ready":"True"
	I0501 03:46:36.377321   68640 pod_ready.go:81] duration metric: took 3.009581117s for pod "coredns-7db6d8ff4d-57k52" in "kube-system" namespace to be "Ready" ...
	I0501 03:46:36.377331   68640 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-c6lnj" in "kube-system" namespace to be "Ready" ...
	I0501 03:46:36.383022   68640 pod_ready.go:92] pod "coredns-7db6d8ff4d-c6lnj" in "kube-system" namespace has status "Ready":"True"
	I0501 03:46:36.383042   68640 pod_ready.go:81] duration metric: took 5.704691ms for pod "coredns-7db6d8ff4d-c6lnj" in "kube-system" namespace to be "Ready" ...
	I0501 03:46:36.383051   68640 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	I0501 03:46:36.387456   68640 pod_ready.go:92] pod "etcd-no-preload-892672" in "kube-system" namespace has status "Ready":"True"
	I0501 03:46:36.387476   68640 pod_ready.go:81] duration metric: took 4.418883ms for pod "etcd-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	I0501 03:46:36.387485   68640 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	I0501 03:46:36.392348   68640 pod_ready.go:92] pod "kube-apiserver-no-preload-892672" in "kube-system" namespace has status "Ready":"True"
	I0501 03:46:36.392366   68640 pod_ready.go:81] duration metric: took 4.874928ms for pod "kube-apiserver-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	I0501 03:46:36.392375   68640 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	I0501 03:46:36.397155   68640 pod_ready.go:92] pod "kube-controller-manager-no-preload-892672" in "kube-system" namespace has status "Ready":"True"
	I0501 03:46:36.397175   68640 pod_ready.go:81] duration metric: took 4.794583ms for pod "kube-controller-manager-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	I0501 03:46:36.397185   68640 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-czsqz" in "kube-system" namespace to be "Ready" ...
	I0501 03:46:36.774003   68640 pod_ready.go:92] pod "kube-proxy-czsqz" in "kube-system" namespace has status "Ready":"True"
	I0501 03:46:36.774025   68640 pod_ready.go:81] duration metric: took 376.83321ms for pod "kube-proxy-czsqz" in "kube-system" namespace to be "Ready" ...
	I0501 03:46:36.774036   68640 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	I0501 03:46:37.171504   68640 pod_ready.go:92] pod "kube-scheduler-no-preload-892672" in "kube-system" namespace has status "Ready":"True"
	I0501 03:46:37.171526   68640 pod_ready.go:81] duration metric: took 397.484706ms for pod "kube-scheduler-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	I0501 03:46:37.171535   68640 pod_ready.go:38] duration metric: took 3.815806043s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 03:46:37.171549   68640 api_server.go:52] waiting for apiserver process to appear ...
	I0501 03:46:37.171609   68640 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:46:37.189446   68640 api_server.go:72] duration metric: took 4.140414812s to wait for apiserver process to appear ...
	I0501 03:46:37.189473   68640 api_server.go:88] waiting for apiserver healthz status ...
	I0501 03:46:37.189494   68640 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0501 03:46:37.195052   68640 api_server.go:279] https://192.168.39.144:8443/healthz returned 200:
	ok
	I0501 03:46:37.196163   68640 api_server.go:141] control plane version: v1.30.0
	I0501 03:46:37.196183   68640 api_server.go:131] duration metric: took 6.703804ms to wait for apiserver health ...
	I0501 03:46:37.196191   68640 system_pods.go:43] waiting for kube-system pods to appear ...
	I0501 03:46:37.375742   68640 system_pods.go:59] 9 kube-system pods found
	I0501 03:46:37.375775   68640 system_pods.go:61] "coredns-7db6d8ff4d-57k52" [f98cb358-71ba-49c5-8213-0f3160c6e38b] Running
	I0501 03:46:37.375784   68640 system_pods.go:61] "coredns-7db6d8ff4d-c6lnj" [f8b8c1f1-7696-43f2-98be-339f99963e7c] Running
	I0501 03:46:37.375789   68640 system_pods.go:61] "etcd-no-preload-892672" [5f92eb1b-6611-4663-95f0-8c071a3a37c9] Running
	I0501 03:46:37.375796   68640 system_pods.go:61] "kube-apiserver-no-preload-892672" [90bcaa82-61b0-49d5-b50c-76288b099683] Running
	I0501 03:46:37.375804   68640 system_pods.go:61] "kube-controller-manager-no-preload-892672" [f80af654-aa81-4cd2-b5ce-4f31f6e49e9f] Running
	I0501 03:46:37.375809   68640 system_pods.go:61] "kube-proxy-czsqz" [4254b019-b6c8-4ff9-a361-c96eaf20dc65] Running
	I0501 03:46:37.375813   68640 system_pods.go:61] "kube-scheduler-no-preload-892672" [6753a5df-86d1-47bf-9514-6b8352acf969] Running
	I0501 03:46:37.375824   68640 system_pods.go:61] "metrics-server-569cc877fc-5m5qf" [a1ec3e6c-fe90-4168-b0ec-54f82f17b46d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0501 03:46:37.375830   68640 system_pods.go:61] "storage-provisioner" [b55b7e8b-4de0-40f8-96ff-bf0b550699d1] Running
	I0501 03:46:37.375841   68640 system_pods.go:74] duration metric: took 179.642731ms to wait for pod list to return data ...
	I0501 03:46:37.375857   68640 default_sa.go:34] waiting for default service account to be created ...
	I0501 03:46:37.572501   68640 default_sa.go:45] found service account: "default"
	I0501 03:46:37.572530   68640 default_sa.go:55] duration metric: took 196.664812ms for default service account to be created ...
	I0501 03:46:37.572542   68640 system_pods.go:116] waiting for k8s-apps to be running ...
	I0501 03:46:37.778012   68640 system_pods.go:86] 9 kube-system pods found
	I0501 03:46:37.778053   68640 system_pods.go:89] "coredns-7db6d8ff4d-57k52" [f98cb358-71ba-49c5-8213-0f3160c6e38b] Running
	I0501 03:46:37.778062   68640 system_pods.go:89] "coredns-7db6d8ff4d-c6lnj" [f8b8c1f1-7696-43f2-98be-339f99963e7c] Running
	I0501 03:46:37.778068   68640 system_pods.go:89] "etcd-no-preload-892672" [5f92eb1b-6611-4663-95f0-8c071a3a37c9] Running
	I0501 03:46:37.778075   68640 system_pods.go:89] "kube-apiserver-no-preload-892672" [90bcaa82-61b0-49d5-b50c-76288b099683] Running
	I0501 03:46:37.778082   68640 system_pods.go:89] "kube-controller-manager-no-preload-892672" [f80af654-aa81-4cd2-b5ce-4f31f6e49e9f] Running
	I0501 03:46:37.778088   68640 system_pods.go:89] "kube-proxy-czsqz" [4254b019-b6c8-4ff9-a361-c96eaf20dc65] Running
	I0501 03:46:37.778094   68640 system_pods.go:89] "kube-scheduler-no-preload-892672" [6753a5df-86d1-47bf-9514-6b8352acf969] Running
	I0501 03:46:37.778104   68640 system_pods.go:89] "metrics-server-569cc877fc-5m5qf" [a1ec3e6c-fe90-4168-b0ec-54f82f17b46d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0501 03:46:37.778112   68640 system_pods.go:89] "storage-provisioner" [b55b7e8b-4de0-40f8-96ff-bf0b550699d1] Running
	I0501 03:46:37.778127   68640 system_pods.go:126] duration metric: took 205.578312ms to wait for k8s-apps to be running ...
	I0501 03:46:37.778148   68640 system_svc.go:44] waiting for kubelet service to be running ....
	I0501 03:46:37.778215   68640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 03:46:37.794660   68640 system_svc.go:56] duration metric: took 16.509214ms WaitForService to wait for kubelet
	I0501 03:46:37.794694   68640 kubeadm.go:576] duration metric: took 4.745668881s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0501 03:46:37.794721   68640 node_conditions.go:102] verifying NodePressure condition ...
	I0501 03:46:37.972621   68640 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 03:46:37.972647   68640 node_conditions.go:123] node cpu capacity is 2
	I0501 03:46:37.972660   68640 node_conditions.go:105] duration metric: took 177.933367ms to run NodePressure ...
	I0501 03:46:37.972676   68640 start.go:240] waiting for startup goroutines ...
	I0501 03:46:37.972684   68640 start.go:245] waiting for cluster config update ...
	I0501 03:46:37.972699   68640 start.go:254] writing updated cluster config ...
	I0501 03:46:37.972951   68640 ssh_runner.go:195] Run: rm -f paused
	I0501 03:46:38.023054   68640 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0501 03:46:38.025098   68640 out.go:177] * Done! kubectl is now configured to use "no-preload-892672" cluster and "default" namespace by default
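	A minimal sketch of how the checks logged just above could be repeated by hand (the IP/port 192.168.39.144:8443 and the context name no-preload-892672 are taken from the log; the commands themselves are not part of the captured output):

	  # Probe the same healthz endpoint api_server.go checks at 03:46:37
	  # (-k skips TLS verification from the host)
	  curl -k https://192.168.39.144:8443/healthz

	  # List the kube-system pods the readiness loop above waits on;
	  # minikube names the kubectl context after the profile
	  kubectl --context no-preload-892672 -n kube-system get pods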
	I0501 03:46:46.214470   69580 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0501 03:46:46.214695   69580 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0501 03:46:46.214721   69580 kubeadm.go:309] 
	I0501 03:46:46.214770   69580 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0501 03:46:46.214837   69580 kubeadm.go:309] 		timed out waiting for the condition
	I0501 03:46:46.214875   69580 kubeadm.go:309] 
	I0501 03:46:46.214936   69580 kubeadm.go:309] 	This error is likely caused by:
	I0501 03:46:46.214983   69580 kubeadm.go:309] 		- The kubelet is not running
	I0501 03:46:46.215076   69580 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0501 03:46:46.215084   69580 kubeadm.go:309] 
	I0501 03:46:46.215169   69580 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0501 03:46:46.215201   69580 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0501 03:46:46.215233   69580 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0501 03:46:46.215239   69580 kubeadm.go:309] 
	I0501 03:46:46.215380   69580 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0501 03:46:46.215489   69580 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0501 03:46:46.215505   69580 kubeadm.go:309] 
	I0501 03:46:46.215657   69580 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0501 03:46:46.215782   69580 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0501 03:46:46.215882   69580 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0501 03:46:46.215972   69580 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0501 03:46:46.215984   69580 kubeadm.go:309] 
	I0501 03:46:46.217243   69580 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0501 03:46:46.217352   69580 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0501 03:46:46.217426   69580 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0501 03:46:46.217550   69580 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
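	A consolidated sketch of the diagnosis the kubeadm message above recommends, run from inside the guest (reach it with 'minikube ssh -p <profile>'; the profile name is not shown in this excerpt, so <profile> is a placeholder):

	  # Check whether the kubelet is running and why it may have exited
	  sudo systemctl status kubelet
	  sudo journalctl -xeu kubelet

	  # Look for crashed control-plane containers under CRI-O, then read their logs
	  sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	  sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID   # CONTAINERID from the previous command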
	
	I0501 03:46:46.217611   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0501 03:46:47.375634   69580 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.157990231s)
	I0501 03:46:47.375723   69580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 03:46:47.392333   69580 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0501 03:46:47.404983   69580 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0501 03:46:47.405007   69580 kubeadm.go:156] found existing configuration files:
	
	I0501 03:46:47.405054   69580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0501 03:46:47.417437   69580 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0501 03:46:47.417501   69580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0501 03:46:47.429929   69580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0501 03:46:47.441141   69580 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0501 03:46:47.441215   69580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0501 03:46:47.453012   69580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0501 03:46:47.463702   69580 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0501 03:46:47.463759   69580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0501 03:46:47.474783   69580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0501 03:46:47.485793   69580 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0501 03:46:47.485853   69580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
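	The preceding Run lines are minikube's stale-kubeconfig cleanup: each kubeconfig left under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint, and is removed otherwise so the retried 'kubeadm init' can regenerate it. A rough shell equivalent of that loop, with the paths and endpoint copied from the log:

	  for conf in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	    # kubeadm.go:162 above deletes the file when this grep finds nothing
	    if ! sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/${conf}"; then
	      sudo rm -f "/etc/kubernetes/${conf}"
	    fi
	  done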
	I0501 03:46:47.497706   69580 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0501 03:46:47.588221   69580 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0501 03:46:47.588340   69580 kubeadm.go:309] [preflight] Running pre-flight checks
	I0501 03:46:47.759631   69580 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0501 03:46:47.759801   69580 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0501 03:46:47.759949   69580 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0501 03:46:47.978077   69580 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0501 03:46:47.980130   69580 out.go:204]   - Generating certificates and keys ...
	I0501 03:46:47.980240   69580 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0501 03:46:47.980323   69580 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0501 03:46:47.980455   69580 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0501 03:46:47.980579   69580 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0501 03:46:47.980679   69580 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0501 03:46:47.980771   69580 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0501 03:46:47.980864   69580 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0501 03:46:47.981256   69580 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0501 03:46:47.981616   69580 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0501 03:46:47.981858   69580 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0501 03:46:47.981907   69580 kubeadm.go:309] [certs] Using the existing "sa" key
	I0501 03:46:47.981991   69580 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0501 03:46:48.100377   69580 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0501 03:46:48.463892   69580 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0501 03:46:48.521991   69580 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0501 03:46:48.735222   69580 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0501 03:46:48.753098   69580 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0501 03:46:48.756950   69580 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0501 03:46:48.757379   69580 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0501 03:46:48.937039   69580 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0501 03:46:48.939065   69580 out.go:204]   - Booting up control plane ...
	I0501 03:46:48.939183   69580 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0501 03:46:48.961380   69580 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0501 03:46:48.962890   69580 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0501 03:46:48.963978   69580 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0501 03:46:48.971754   69580 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0501 03:47:28.974873   69580 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0501 03:47:28.975296   69580 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0501 03:47:28.975545   69580 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0501 03:47:33.976469   69580 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0501 03:47:33.976699   69580 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0501 03:47:43.977443   69580 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0501 03:47:43.977663   69580 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0501 03:48:03.979113   69580 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0501 03:48:03.979409   69580 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0501 03:48:43.982479   69580 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0501 03:48:43.982781   69580 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0501 03:48:43.983363   69580 kubeadm.go:309] 
	I0501 03:48:43.983427   69580 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0501 03:48:43.983484   69580 kubeadm.go:309] 		timed out waiting for the condition
	I0501 03:48:43.983490   69580 kubeadm.go:309] 
	I0501 03:48:43.983520   69580 kubeadm.go:309] 	This error is likely caused by:
	I0501 03:48:43.983547   69580 kubeadm.go:309] 		- The kubelet is not running
	I0501 03:48:43.983633   69580 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0501 03:48:43.983637   69580 kubeadm.go:309] 
	I0501 03:48:43.983721   69580 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0501 03:48:43.983748   69580 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0501 03:48:43.983774   69580 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0501 03:48:43.983778   69580 kubeadm.go:309] 
	I0501 03:48:43.983861   69580 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0501 03:48:43.983928   69580 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0501 03:48:43.983932   69580 kubeadm.go:309] 
	I0501 03:48:43.984023   69580 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0501 03:48:43.984094   69580 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0501 03:48:43.984155   69580 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0501 03:48:43.984212   69580 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0501 03:48:43.984216   69580 kubeadm.go:309] 
	I0501 03:48:43.985577   69580 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0501 03:48:43.985777   69580 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0501 03:48:43.985875   69580 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0501 03:48:43.985971   69580 kubeadm.go:393] duration metric: took 8m0.315126498s to StartCluster
	I0501 03:48:43.986025   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:48:43.986092   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:48:44.038296   69580 cri.go:89] found id: ""
	I0501 03:48:44.038328   69580 logs.go:276] 0 containers: []
	W0501 03:48:44.038339   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:48:44.038346   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:48:44.038426   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:48:44.081855   69580 cri.go:89] found id: ""
	I0501 03:48:44.081891   69580 logs.go:276] 0 containers: []
	W0501 03:48:44.081904   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:48:44.081913   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:48:44.081996   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:48:44.131400   69580 cri.go:89] found id: ""
	I0501 03:48:44.131435   69580 logs.go:276] 0 containers: []
	W0501 03:48:44.131445   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:48:44.131451   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:48:44.131519   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:48:44.178274   69580 cri.go:89] found id: ""
	I0501 03:48:44.178302   69580 logs.go:276] 0 containers: []
	W0501 03:48:44.178310   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:48:44.178316   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:48:44.178376   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:48:44.223087   69580 cri.go:89] found id: ""
	I0501 03:48:44.223115   69580 logs.go:276] 0 containers: []
	W0501 03:48:44.223125   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:48:44.223133   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:48:44.223196   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:48:44.266093   69580 cri.go:89] found id: ""
	I0501 03:48:44.266122   69580 logs.go:276] 0 containers: []
	W0501 03:48:44.266135   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:48:44.266143   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:48:44.266204   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:48:44.307766   69580 cri.go:89] found id: ""
	I0501 03:48:44.307795   69580 logs.go:276] 0 containers: []
	W0501 03:48:44.307806   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:48:44.307813   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:48:44.307876   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:48:44.348548   69580 cri.go:89] found id: ""
	I0501 03:48:44.348576   69580 logs.go:276] 0 containers: []
	W0501 03:48:44.348585   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:48:44.348594   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:48:44.348614   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:48:44.394160   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:48:44.394209   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:48:44.449845   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:48:44.449879   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:48:44.467663   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:48:44.467694   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:48:44.556150   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:48:44.556183   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:48:44.556199   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0501 03:48:44.661110   69580 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0501 03:48:44.661169   69580 out.go:239] * 
	W0501 03:48:44.661226   69580 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0501 03:48:44.661246   69580 out.go:239] * 
	W0501 03:48:44.662064   69580 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0501 03:48:44.665608   69580 out.go:177] 
	W0501 03:48:44.666799   69580 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0501 03:48:44.666851   69580 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0501 03:48:44.666870   69580 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0501 03:48:44.668487   69580 out.go:177] 
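	As a rough sketch of what the suggestion above would look like in practice (not part of the captured output; <profile> is a placeholder for the failing profile name, and the driver/runtime flags simply mirror this job's KVM/cri-o setup):
	
	    # retry the failing start with the cgroup-driver override suggested in the log above
	    out/minikube-linux-amd64 start -p <profile> --driver=kvm2 --container-runtime=crio \
	        --extra-config=kubelet.cgroup-driver=systemd
	
	    # then compare the two drivers on the node: the kubelet's comes from the config file
	    # the log shows being written, CRI-O's from its cgroup_manager setting (may be unset, i.e. default)
	    out/minikube-linux-amd64 -p <profile> ssh -- grep -i cgroupDriver /var/lib/kubelet/config.yaml
	    out/minikube-linux-amd64 -p <profile> ssh -- sudo grep -ri cgroup_manager /etc/crio/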
	
	
	==> CRI-O <==
	May 01 03:55:40 no-preload-892672 crio[731]: time="2024-05-01 03:55:40.167072697Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e6b65709-f090-42a0-a8df-efe260edce52 name=/runtime.v1.RuntimeService/Version
	May 01 03:55:40 no-preload-892672 crio[731]: time="2024-05-01 03:55:40.168133662Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6d5238dd-0d60-4960-867e-b49ce9d8a1be name=/runtime.v1.ImageService/ImageFsInfo
	May 01 03:55:40 no-preload-892672 crio[731]: time="2024-05-01 03:55:40.168457745Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714535740168436462,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99941,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6d5238dd-0d60-4960-867e-b49ce9d8a1be name=/runtime.v1.ImageService/ImageFsInfo
	May 01 03:55:40 no-preload-892672 crio[731]: time="2024-05-01 03:55:40.169286878Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=256b8803-f7dc-4dee-bf2d-faff8a80a126 name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:55:40 no-preload-892672 crio[731]: time="2024-05-01 03:55:40.169368868Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=256b8803-f7dc-4dee-bf2d-faff8a80a126 name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:55:40 no-preload-892672 crio[731]: time="2024-05-01 03:55:40.169539141Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:229139f4b20254ba487deecee0957c02e4f011770d365596c0c3b1a7cb75aafe,PodSandboxId:36d1422b84d96f29c7b5c5c115029f07e361297527c5cda590996788e6df2618,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714535195778139410,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-c6lnj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8b8c1f1-7696-43f2-98be-339f99963e7c,},Annotations:map[string]string{io.kubernetes.container.hash: dd92bd6c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b243772338376823399e57b784e713d89c9c15400f25af5fed738127fe432a08,PodSandboxId:541b7bcfe6dd1ba293905ff34808d1eaae351f7f54d5d8c239bf7fc63d25f7f4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714535195700634320,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-57k52,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: f98cb358-71ba-49c5-8213-0f3160c6e38b,},Annotations:map[string]string{io.kubernetes.container.hash: f7e62959,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04f79e62716c5afcb3b939952b3de3a05e6166f5847ade1fbd8dca444a3fa313,PodSandboxId:a46f8f22e3d4c24f67bf26ccaaca42a528a92292e7031601647d30ba5c57d02e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNIN
G,CreatedAt:1714535195294421620,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-czsqz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4254b019-b6c8-4ff9-a361-c96eaf20dc65,},Annotations:map[string]string{io.kubernetes.container.hash: 3d6570b6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a075f10431025603b7e9b5776296ff25449e3e5d51294564a01819472c4dca0,PodSandboxId:7a62bb2a7d3f6de789b97414c2171f092bb841d9041b3ec47c00196e6d8d1ecc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:171453519514
9151540,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b55b7e8b-4de0-40f8-96ff-bf0b550699d1,},Annotations:map[string]string{io.kubernetes.container.hash: 3f614e11,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbdcf5ff9f94c64a25e5d5db485d98b35e56f58847e7ee075ec3a11b9b03f77e,PodSandboxId:832bd8bc8daecc585f587d113c26ea91219068eef2b7f50c9f3dbf5975a1cd7e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714535173470897940,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-892672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb6628181fb0fd531dcdedce99926112,},Annotations:map[string]string{io.kubernetes.container.hash: 4e95860,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:188801d1d61ccf3dc55289bf9fd5e10246328ef4baecbfa211addd80c00d256a,PodSandboxId:f37b4459a01badcd37acdb1d54d3055b85e5279497875c6733ac116960476f52,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714535173438381585,Labels:map[string]string{io.kubernetes.container.name
: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-892672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 043564780f07ce23cfcadab65c7a3f99,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49c8b9ee369c3f7c9f427f2e761135d1c6b58c7847503aa7a66c55f5046fa31f,PodSandboxId:cc89d396df98699a11f805c34c3d86f49e0908ab5497295df6019472cd74c88d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714535173383895353,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-892672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46dd0fcfa81ec0b723fbe5f0491243d6,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94dcf48a1f2ca610f98afb5926a769111580b5a6b7ac380fe96fde3d9d32804e,PodSandboxId:242654a439354540c50f23961d2d1b4ed7eba4bcb23dc3009c96cb9d447706fb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714535173346150522,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-892672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31f60d94b4d435b7b8a84f622c3f01ba,},Annotations:map[string]string{io.kubernetes.container.hash: 68bdc315,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=256b8803-f7dc-4dee-bf2d-faff8a80a126 name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:55:40 no-preload-892672 crio[731]: time="2024-05-01 03:55:40.211167918Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=484c20fc-4c8d-4090-af0c-579b9573d914 name=/runtime.v1.RuntimeService/Version
	May 01 03:55:40 no-preload-892672 crio[731]: time="2024-05-01 03:55:40.211235513Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=484c20fc-4c8d-4090-af0c-579b9573d914 name=/runtime.v1.RuntimeService/Version
	May 01 03:55:40 no-preload-892672 crio[731]: time="2024-05-01 03:55:40.212694562Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=997da37b-415f-4129-9613-d2d510d29203 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 03:55:40 no-preload-892672 crio[731]: time="2024-05-01 03:55:40.213482141Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714535740213457334,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99941,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=997da37b-415f-4129-9613-d2d510d29203 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 03:55:40 no-preload-892672 crio[731]: time="2024-05-01 03:55:40.213953940Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=33d7e50b-fe58-4a82-9ea3-4eceb966421e name=/runtime.v1.RuntimeService/ListPodSandbox
	May 01 03:55:40 no-preload-892672 crio[731]: time="2024-05-01 03:55:40.214192008Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:a46f8f22e3d4c24f67bf26ccaaca42a528a92292e7031601647d30ba5c57d02e,Metadata:&PodSandboxMetadata{Name:kube-proxy-czsqz,Uid:4254b019-b6c8-4ff9-a361-c96eaf20dc65,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714535195062522259,Labels:map[string]string{controller-revision-hash: 79cf874c65,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-czsqz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4254b019-b6c8-4ff9-a361-c96eaf20dc65,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-01T03:46:33.248046769Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2d277a73d4ca2dee5736282de8ced8089917d1f29762a3ddd03841a53bed646e,Metadata:&PodSandboxMetadata{Name:metrics-server-569cc877fc-5m5qf,Uid:a1ec3e6c-fe90-4168-b0ec-54f
82f17b46d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714535195030330193,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-569cc877fc-5m5qf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1ec3e6c-fe90-4168-b0ec-54f82f17b46d,k8s-app: metrics-server,pod-template-hash: 569cc877fc,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-01T03:46:34.112365412Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:541b7bcfe6dd1ba293905ff34808d1eaae351f7f54d5d8c239bf7fc63d25f7f4,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-57k52,Uid:f98cb358-71ba-49c5-8213-0f3160c6e38b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714535194946949439,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-57k52,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f98cb358-71ba-49c5-8213-0f3160c6e38b,k8s-app: kube-dns,pod-template-hash: 7db6d8f
f4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-01T03:46:34.320618923Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:36d1422b84d96f29c7b5c5c115029f07e361297527c5cda590996788e6df2618,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-c6lnj,Uid:f8b8c1f1-7696-43f2-98be-339f99963e7c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714535194945591956,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-c6lnj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8b8c1f1-7696-43f2-98be-339f99963e7c,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-01T03:46:34.319547183Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7a62bb2a7d3f6de789b97414c2171f092bb841d9041b3ec47c00196e6d8d1ecc,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:b55b7e8b-4de0-40f8-96ff-bf0b550699d1,Namespace:kube-system,Attempt:0,},Stat
e:SANDBOX_READY,CreatedAt:1714535194908677991,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b55b7e8b-4de0-40f8-96ff-bf0b550699d1,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tm
p\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-05-01T03:46:33.999372984Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:cc89d396df98699a11f805c34c3d86f49e0908ab5497295df6019472cd74c88d,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-no-preload-892672,Uid:46dd0fcfa81ec0b723fbe5f0491243d6,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714535173109898626,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-no-preload-892672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46dd0fcfa81ec0b723fbe5f0491243d6,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 46dd0fcfa81ec0b723fbe5f0491243d6,kubernetes.io/config.seen: 2024-05-01T03:46:12.649882552Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f37b4459a01badcd37acdb1d54d3055b85e5279497875c6733ac116960476f52,Metadata:&PodSandboxMeta
data{Name:kube-scheduler-no-preload-892672,Uid:043564780f07ce23cfcadab65c7a3f99,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714535173109313090,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-no-preload-892672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 043564780f07ce23cfcadab65c7a3f99,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 043564780f07ce23cfcadab65c7a3f99,kubernetes.io/config.seen: 2024-05-01T03:46:12.649884556Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:242654a439354540c50f23961d2d1b4ed7eba4bcb23dc3009c96cb9d447706fb,Metadata:&PodSandboxMetadata{Name:kube-apiserver-no-preload-892672,Uid:31f60d94b4d435b7b8a84f622c3f01ba,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714535173107044022,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-no-pr
eload-892672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31f60d94b4d435b7b8a84f622c3f01ba,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.144:8443,kubernetes.io/config.hash: 31f60d94b4d435b7b8a84f622c3f01ba,kubernetes.io/config.seen: 2024-05-01T03:46:12.649880995Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:832bd8bc8daecc585f587d113c26ea91219068eef2b7f50c9f3dbf5975a1cd7e,Metadata:&PodSandboxMetadata{Name:etcd-no-preload-892672,Uid:cb6628181fb0fd531dcdedce99926112,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714535173106772228,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-no-preload-892672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb6628181fb0fd531dcdedce99926112,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.144:237
9,kubernetes.io/config.hash: cb6628181fb0fd531dcdedce99926112,kubernetes.io/config.seen: 2024-05-01T03:46:12.649874663Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=33d7e50b-fe58-4a82-9ea3-4eceb966421e name=/runtime.v1.RuntimeService/ListPodSandbox
	May 01 03:55:40 no-preload-892672 crio[731]: time="2024-05-01 03:55:40.214839202Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c80e157f-0a34-4262-994b-68bcde3575e6 name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:55:40 no-preload-892672 crio[731]: time="2024-05-01 03:55:40.214917707Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c80e157f-0a34-4262-994b-68bcde3575e6 name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:55:40 no-preload-892672 crio[731]: time="2024-05-01 03:55:40.215100865Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:229139f4b20254ba487deecee0957c02e4f011770d365596c0c3b1a7cb75aafe,PodSandboxId:36d1422b84d96f29c7b5c5c115029f07e361297527c5cda590996788e6df2618,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714535195778139410,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-c6lnj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8b8c1f1-7696-43f2-98be-339f99963e7c,},Annotations:map[string]string{io.kubernetes.container.hash: dd92bd6c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b243772338376823399e57b784e713d89c9c15400f25af5fed738127fe432a08,PodSandboxId:541b7bcfe6dd1ba293905ff34808d1eaae351f7f54d5d8c239bf7fc63d25f7f4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714535195700634320,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-57k52,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: f98cb358-71ba-49c5-8213-0f3160c6e38b,},Annotations:map[string]string{io.kubernetes.container.hash: f7e62959,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04f79e62716c5afcb3b939952b3de3a05e6166f5847ade1fbd8dca444a3fa313,PodSandboxId:a46f8f22e3d4c24f67bf26ccaaca42a528a92292e7031601647d30ba5c57d02e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNIN
G,CreatedAt:1714535195294421620,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-czsqz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4254b019-b6c8-4ff9-a361-c96eaf20dc65,},Annotations:map[string]string{io.kubernetes.container.hash: 3d6570b6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a075f10431025603b7e9b5776296ff25449e3e5d51294564a01819472c4dca0,PodSandboxId:7a62bb2a7d3f6de789b97414c2171f092bb841d9041b3ec47c00196e6d8d1ecc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:171453519514
9151540,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b55b7e8b-4de0-40f8-96ff-bf0b550699d1,},Annotations:map[string]string{io.kubernetes.container.hash: 3f614e11,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbdcf5ff9f94c64a25e5d5db485d98b35e56f58847e7ee075ec3a11b9b03f77e,PodSandboxId:832bd8bc8daecc585f587d113c26ea91219068eef2b7f50c9f3dbf5975a1cd7e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714535173470897940,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-892672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb6628181fb0fd531dcdedce99926112,},Annotations:map[string]string{io.kubernetes.container.hash: 4e95860,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:188801d1d61ccf3dc55289bf9fd5e10246328ef4baecbfa211addd80c00d256a,PodSandboxId:f37b4459a01badcd37acdb1d54d3055b85e5279497875c6733ac116960476f52,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714535173438381585,Labels:map[string]string{io.kubernetes.container.name
: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-892672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 043564780f07ce23cfcadab65c7a3f99,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49c8b9ee369c3f7c9f427f2e761135d1c6b58c7847503aa7a66c55f5046fa31f,PodSandboxId:cc89d396df98699a11f805c34c3d86f49e0908ab5497295df6019472cd74c88d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714535173383895353,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-892672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46dd0fcfa81ec0b723fbe5f0491243d6,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94dcf48a1f2ca610f98afb5926a769111580b5a6b7ac380fe96fde3d9d32804e,PodSandboxId:242654a439354540c50f23961d2d1b4ed7eba4bcb23dc3009c96cb9d447706fb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714535173346150522,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-892672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31f60d94b4d435b7b8a84f622c3f01ba,},Annotations:map[string]string{io.kubernetes.container.hash: 68bdc315,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c80e157f-0a34-4262-994b-68bcde3575e6 name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:55:40 no-preload-892672 crio[731]: time="2024-05-01 03:55:40.216539502Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4a5b07fa-551f-4e3b-8204-fd3c43772150 name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:55:40 no-preload-892672 crio[731]: time="2024-05-01 03:55:40.216631645Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4a5b07fa-551f-4e3b-8204-fd3c43772150 name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:55:40 no-preload-892672 crio[731]: time="2024-05-01 03:55:40.216936942Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:229139f4b20254ba487deecee0957c02e4f011770d365596c0c3b1a7cb75aafe,PodSandboxId:36d1422b84d96f29c7b5c5c115029f07e361297527c5cda590996788e6df2618,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714535195778139410,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-c6lnj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8b8c1f1-7696-43f2-98be-339f99963e7c,},Annotations:map[string]string{io.kubernetes.container.hash: dd92bd6c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b243772338376823399e57b784e713d89c9c15400f25af5fed738127fe432a08,PodSandboxId:541b7bcfe6dd1ba293905ff34808d1eaae351f7f54d5d8c239bf7fc63d25f7f4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714535195700634320,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-57k52,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: f98cb358-71ba-49c5-8213-0f3160c6e38b,},Annotations:map[string]string{io.kubernetes.container.hash: f7e62959,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04f79e62716c5afcb3b939952b3de3a05e6166f5847ade1fbd8dca444a3fa313,PodSandboxId:a46f8f22e3d4c24f67bf26ccaaca42a528a92292e7031601647d30ba5c57d02e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNIN
G,CreatedAt:1714535195294421620,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-czsqz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4254b019-b6c8-4ff9-a361-c96eaf20dc65,},Annotations:map[string]string{io.kubernetes.container.hash: 3d6570b6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a075f10431025603b7e9b5776296ff25449e3e5d51294564a01819472c4dca0,PodSandboxId:7a62bb2a7d3f6de789b97414c2171f092bb841d9041b3ec47c00196e6d8d1ecc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:171453519514
9151540,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b55b7e8b-4de0-40f8-96ff-bf0b550699d1,},Annotations:map[string]string{io.kubernetes.container.hash: 3f614e11,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbdcf5ff9f94c64a25e5d5db485d98b35e56f58847e7ee075ec3a11b9b03f77e,PodSandboxId:832bd8bc8daecc585f587d113c26ea91219068eef2b7f50c9f3dbf5975a1cd7e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714535173470897940,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-892672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb6628181fb0fd531dcdedce99926112,},Annotations:map[string]string{io.kubernetes.container.hash: 4e95860,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:188801d1d61ccf3dc55289bf9fd5e10246328ef4baecbfa211addd80c00d256a,PodSandboxId:f37b4459a01badcd37acdb1d54d3055b85e5279497875c6733ac116960476f52,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714535173438381585,Labels:map[string]string{io.kubernetes.container.name
: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-892672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 043564780f07ce23cfcadab65c7a3f99,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49c8b9ee369c3f7c9f427f2e761135d1c6b58c7847503aa7a66c55f5046fa31f,PodSandboxId:cc89d396df98699a11f805c34c3d86f49e0908ab5497295df6019472cd74c88d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714535173383895353,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-892672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46dd0fcfa81ec0b723fbe5f0491243d6,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94dcf48a1f2ca610f98afb5926a769111580b5a6b7ac380fe96fde3d9d32804e,PodSandboxId:242654a439354540c50f23961d2d1b4ed7eba4bcb23dc3009c96cb9d447706fb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714535173346150522,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-892672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31f60d94b4d435b7b8a84f622c3f01ba,},Annotations:map[string]string{io.kubernetes.container.hash: 68bdc315,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4a5b07fa-551f-4e3b-8204-fd3c43772150 name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:55:40 no-preload-892672 crio[731]: time="2024-05-01 03:55:40.256754796Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=776c4c7f-3a77-484c-9499-a8589990c48a name=/runtime.v1.RuntimeService/Version
	May 01 03:55:40 no-preload-892672 crio[731]: time="2024-05-01 03:55:40.256893687Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=776c4c7f-3a77-484c-9499-a8589990c48a name=/runtime.v1.RuntimeService/Version
	May 01 03:55:40 no-preload-892672 crio[731]: time="2024-05-01 03:55:40.258133335Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2574f9ca-b5b6-4e30-9088-94915c2a9ee3 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 03:55:40 no-preload-892672 crio[731]: time="2024-05-01 03:55:40.258461247Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714535740258442409,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99941,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2574f9ca-b5b6-4e30-9088-94915c2a9ee3 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 03:55:40 no-preload-892672 crio[731]: time="2024-05-01 03:55:40.259104533Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=346b0c5c-6622-440b-be2c-7fcf3db41dcd name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:55:40 no-preload-892672 crio[731]: time="2024-05-01 03:55:40.259184847Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=346b0c5c-6622-440b-be2c-7fcf3db41dcd name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:55:40 no-preload-892672 crio[731]: time="2024-05-01 03:55:40.259381013Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:229139f4b20254ba487deecee0957c02e4f011770d365596c0c3b1a7cb75aafe,PodSandboxId:36d1422b84d96f29c7b5c5c115029f07e361297527c5cda590996788e6df2618,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714535195778139410,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-c6lnj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8b8c1f1-7696-43f2-98be-339f99963e7c,},Annotations:map[string]string{io.kubernetes.container.hash: dd92bd6c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b243772338376823399e57b784e713d89c9c15400f25af5fed738127fe432a08,PodSandboxId:541b7bcfe6dd1ba293905ff34808d1eaae351f7f54d5d8c239bf7fc63d25f7f4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714535195700634320,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-57k52,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: f98cb358-71ba-49c5-8213-0f3160c6e38b,},Annotations:map[string]string{io.kubernetes.container.hash: f7e62959,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04f79e62716c5afcb3b939952b3de3a05e6166f5847ade1fbd8dca444a3fa313,PodSandboxId:a46f8f22e3d4c24f67bf26ccaaca42a528a92292e7031601647d30ba5c57d02e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNIN
G,CreatedAt:1714535195294421620,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-czsqz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4254b019-b6c8-4ff9-a361-c96eaf20dc65,},Annotations:map[string]string{io.kubernetes.container.hash: 3d6570b6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a075f10431025603b7e9b5776296ff25449e3e5d51294564a01819472c4dca0,PodSandboxId:7a62bb2a7d3f6de789b97414c2171f092bb841d9041b3ec47c00196e6d8d1ecc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:171453519514
9151540,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b55b7e8b-4de0-40f8-96ff-bf0b550699d1,},Annotations:map[string]string{io.kubernetes.container.hash: 3f614e11,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbdcf5ff9f94c64a25e5d5db485d98b35e56f58847e7ee075ec3a11b9b03f77e,PodSandboxId:832bd8bc8daecc585f587d113c26ea91219068eef2b7f50c9f3dbf5975a1cd7e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714535173470897940,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-892672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb6628181fb0fd531dcdedce99926112,},Annotations:map[string]string{io.kubernetes.container.hash: 4e95860,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:188801d1d61ccf3dc55289bf9fd5e10246328ef4baecbfa211addd80c00d256a,PodSandboxId:f37b4459a01badcd37acdb1d54d3055b85e5279497875c6733ac116960476f52,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714535173438381585,Labels:map[string]string{io.kubernetes.container.name
: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-892672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 043564780f07ce23cfcadab65c7a3f99,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49c8b9ee369c3f7c9f427f2e761135d1c6b58c7847503aa7a66c55f5046fa31f,PodSandboxId:cc89d396df98699a11f805c34c3d86f49e0908ab5497295df6019472cd74c88d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714535173383895353,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-892672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46dd0fcfa81ec0b723fbe5f0491243d6,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94dcf48a1f2ca610f98afb5926a769111580b5a6b7ac380fe96fde3d9d32804e,PodSandboxId:242654a439354540c50f23961d2d1b4ed7eba4bcb23dc3009c96cb9d447706fb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714535173346150522,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-892672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31f60d94b4d435b7b8a84f622c3f01ba,},Annotations:map[string]string{io.kubernetes.container.hash: 68bdc315,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=346b0c5c-6622-440b-be2c-7fcf3db41dcd name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	229139f4b2025       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   36d1422b84d96       coredns-7db6d8ff4d-c6lnj
	b243772338376       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   541b7bcfe6dd1       coredns-7db6d8ff4d-57k52
	04f79e62716c5       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b   9 minutes ago       Running             kube-proxy                0                   a46f8f22e3d4c       kube-proxy-czsqz
	9a075f1043102       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   7a62bb2a7d3f6       storage-provisioner
	fbdcf5ff9f94c       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   9 minutes ago       Running             etcd                      2                   832bd8bc8daec       etcd-no-preload-892672
	188801d1d61cc       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced   9 minutes ago       Running             kube-scheduler            2                   f37b4459a01ba       kube-scheduler-no-preload-892672
	49c8b9ee369c3       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b   9 minutes ago       Running             kube-controller-manager   2                   cc89d396df986       kube-controller-manager-no-preload-892672
	94dcf48a1f2ca       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0   9 minutes ago       Running             kube-apiserver            2                   242654a439354       kube-apiserver-no-preload-892672
	
	
	==> coredns [229139f4b20254ba487deecee0957c02e4f011770d365596c0c3b1a7cb75aafe] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [b243772338376823399e57b784e713d89c9c15400f25af5fed738127fe432a08] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               no-preload-892672
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-892672
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e
	                    minikube.k8s.io/name=no-preload-892672
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_01T03_46_19_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 May 2024 03:46:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-892672
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 May 2024 03:55:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 May 2024 03:51:46 +0000   Wed, 01 May 2024 03:46:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 May 2024 03:51:46 +0000   Wed, 01 May 2024 03:46:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 May 2024 03:51:46 +0000   Wed, 01 May 2024 03:46:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 May 2024 03:51:46 +0000   Wed, 01 May 2024 03:46:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.144
	  Hostname:    no-preload-892672
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c0d4545d9aa14df2be84b492fcdf0657
	  System UUID:                c0d4545d-9aa1-4df2-be84-b492fcdf0657
	  Boot ID:                    17a54706-8e44-454d-a770-5b63194216fc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-57k52                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m7s
	  kube-system                 coredns-7db6d8ff4d-c6lnj                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m8s
	  kube-system                 etcd-no-preload-892672                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m21s
	  kube-system                 kube-apiserver-no-preload-892672             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 kube-controller-manager-no-preload-892672    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 kube-proxy-czsqz                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m7s
	  kube-system                 kube-scheduler-no-preload-892672             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 metrics-server-569cc877fc-5m5qf              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m6s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m4s   kube-proxy       
	  Normal  Starting                 9m22s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m21s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m21s  kubelet          Node no-preload-892672 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m21s  kubelet          Node no-preload-892672 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m21s  kubelet          Node no-preload-892672 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m7s   node-controller  Node no-preload-892672 event: Registered Node no-preload-892672 in Controller
	
	
	==> dmesg <==
	[  +0.046797] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.185471] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.663624] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.733241] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.536645] systemd-fstab-generator[647]: Ignoring "noauto" option for root device
	[  +0.063622] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.070077] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[  +0.198940] systemd-fstab-generator[674]: Ignoring "noauto" option for root device
	[  +0.144815] systemd-fstab-generator[686]: Ignoring "noauto" option for root device
	[  +0.324977] systemd-fstab-generator[715]: Ignoring "noauto" option for root device
	[May 1 03:41] systemd-fstab-generator[1246]: Ignoring "noauto" option for root device
	[  +0.064165] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.066241] systemd-fstab-generator[1372]: Ignoring "noauto" option for root device
	[  +5.563183] kauditd_printk_skb: 94 callbacks suppressed
	[  +7.373870] kauditd_printk_skb: 50 callbacks suppressed
	[  +6.156674] kauditd_printk_skb: 24 callbacks suppressed
	[May 1 03:46] kauditd_printk_skb: 5 callbacks suppressed
	[  +2.503242] systemd-fstab-generator[4049]: Ignoring "noauto" option for root device
	[  +4.664129] kauditd_printk_skb: 57 callbacks suppressed
	[  +1.922428] systemd-fstab-generator[4369]: Ignoring "noauto" option for root device
	[ +14.484031] systemd-fstab-generator[4584]: Ignoring "noauto" option for root device
	[  +0.132829] kauditd_printk_skb: 14 callbacks suppressed
	[May 1 03:47] kauditd_printk_skb: 80 callbacks suppressed
	
	
	==> etcd [fbdcf5ff9f94c64a25e5d5db485d98b35e56f58847e7ee075ec3a11b9b03f77e] <==
	{"level":"info","ts":"2024-05-01T03:46:14.119653Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"42163c43c38ae515 switched to configuration voters=(4762059917732013333)"}
	{"level":"info","ts":"2024-05-01T03:46:14.119974Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"b6240fb2000e40e9","local-member-id":"42163c43c38ae515","added-peer-id":"42163c43c38ae515","added-peer-peer-urls":["https://192.168.39.144:2380"]}
	{"level":"info","ts":"2024-05-01T03:46:14.128578Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-05-01T03:46:14.128916Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"42163c43c38ae515","initial-advertise-peer-urls":["https://192.168.39.144:2380"],"listen-peer-urls":["https://192.168.39.144:2380"],"advertise-client-urls":["https://192.168.39.144:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.144:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-05-01T03:46:14.129025Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-05-01T03:46:14.129147Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.144:2380"}
	{"level":"info","ts":"2024-05-01T03:46:14.129189Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.144:2380"}
	{"level":"info","ts":"2024-05-01T03:46:14.172106Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"42163c43c38ae515 is starting a new election at term 1"}
	{"level":"info","ts":"2024-05-01T03:46:14.172283Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"42163c43c38ae515 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-05-01T03:46:14.172423Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"42163c43c38ae515 received MsgPreVoteResp from 42163c43c38ae515 at term 1"}
	{"level":"info","ts":"2024-05-01T03:46:14.172511Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"42163c43c38ae515 became candidate at term 2"}
	{"level":"info","ts":"2024-05-01T03:46:14.172567Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"42163c43c38ae515 received MsgVoteResp from 42163c43c38ae515 at term 2"}
	{"level":"info","ts":"2024-05-01T03:46:14.172598Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"42163c43c38ae515 became leader at term 2"}
	{"level":"info","ts":"2024-05-01T03:46:14.172679Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 42163c43c38ae515 elected leader 42163c43c38ae515 at term 2"}
	{"level":"info","ts":"2024-05-01T03:46:14.174773Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"42163c43c38ae515","local-member-attributes":"{Name:no-preload-892672 ClientURLs:[https://192.168.39.144:2379]}","request-path":"/0/members/42163c43c38ae515/attributes","cluster-id":"b6240fb2000e40e9","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-01T03:46:14.178072Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-01T03:46:14.178613Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-01T03:46:14.178933Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-01T03:46:14.181873Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-01T03:46:14.18192Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-01T03:46:14.187489Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.144:2379"}
	{"level":"info","ts":"2024-05-01T03:46:14.189997Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b6240fb2000e40e9","local-member-id":"42163c43c38ae515","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-01T03:46:14.190171Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-01T03:46:14.191918Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-01T03:46:14.190595Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 03:55:40 up 15 min,  0 users,  load average: 0.01, 0.14, 0.16
	Linux no-preload-892672 5.10.207 #1 SMP Tue Apr 30 22:38:43 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [94dcf48a1f2ca610f98afb5926a769111580b5a6b7ac380fe96fde3d9d32804e] <==
	I0501 03:49:35.041257       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0501 03:51:16.337882       1 handler_proxy.go:93] no RequestInfo found in the context
	E0501 03:51:16.338035       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0501 03:51:17.338951       1 handler_proxy.go:93] no RequestInfo found in the context
	E0501 03:51:17.339094       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0501 03:51:17.339118       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0501 03:51:17.338967       1 handler_proxy.go:93] no RequestInfo found in the context
	E0501 03:51:17.339170       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0501 03:51:17.340160       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0501 03:52:17.339960       1 handler_proxy.go:93] no RequestInfo found in the context
	E0501 03:52:17.340183       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0501 03:52:17.340204       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0501 03:52:17.341155       1 handler_proxy.go:93] no RequestInfo found in the context
	E0501 03:52:17.341302       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0501 03:52:17.341346       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0501 03:54:17.341008       1 handler_proxy.go:93] no RequestInfo found in the context
	W0501 03:54:17.341456       1 handler_proxy.go:93] no RequestInfo found in the context
	E0501 03:54:17.341528       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0501 03:54:17.341561       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0501 03:54:17.341903       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0501 03:54:17.343657       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [49c8b9ee369c3f7c9f427f2e761135d1c6b58c7847503aa7a66c55f5046fa31f] <==
	I0501 03:50:03.567551       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0501 03:50:33.117134       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0501 03:50:33.576165       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0501 03:51:03.122677       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0501 03:51:03.585460       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0501 03:51:33.128997       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0501 03:51:33.593057       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0501 03:52:03.134099       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0501 03:52:03.601719       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0501 03:52:32.982142       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="247.985µs"
	E0501 03:52:33.142094       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0501 03:52:33.612690       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0501 03:52:45.972946       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="49.433µs"
	E0501 03:53:03.148308       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0501 03:53:03.624072       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0501 03:53:33.153673       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0501 03:53:33.633422       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0501 03:54:03.159375       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0501 03:54:03.642753       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0501 03:54:33.164718       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0501 03:54:33.651256       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0501 03:55:03.170564       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0501 03:55:03.664149       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0501 03:55:33.177704       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0501 03:55:33.673430       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [04f79e62716c5afcb3b939952b3de3a05e6166f5847ade1fbd8dca444a3fa313] <==
	I0501 03:46:36.027292       1 server_linux.go:69] "Using iptables proxy"
	I0501 03:46:36.041871       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.144"]
	I0501 03:46:36.164001       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0501 03:46:36.167908       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0501 03:46:36.168215       1 server_linux.go:165] "Using iptables Proxier"
	I0501 03:46:36.185580       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0501 03:46:36.185952       1 server.go:872] "Version info" version="v1.30.0"
	I0501 03:46:36.186055       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 03:46:36.187954       1 config.go:192] "Starting service config controller"
	I0501 03:46:36.188150       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0501 03:46:36.188204       1 config.go:101] "Starting endpoint slice config controller"
	I0501 03:46:36.188221       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0501 03:46:36.188889       1 config.go:319] "Starting node config controller"
	I0501 03:46:36.196508       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0501 03:46:36.288634       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0501 03:46:36.288683       1 shared_informer.go:320] Caches are synced for service config
	I0501 03:46:36.298157       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [188801d1d61ccf3dc55289bf9fd5e10246328ef4baecbfa211addd80c00d256a] <==
	W0501 03:46:16.398323       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0501 03:46:16.398444       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0501 03:46:16.398694       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0501 03:46:16.398750       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0501 03:46:16.402024       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0501 03:46:16.402071       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0501 03:46:17.240287       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0501 03:46:17.240345       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0501 03:46:17.246898       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0501 03:46:17.246990       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0501 03:46:17.267260       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0501 03:46:17.267552       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0501 03:46:17.328359       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0501 03:46:17.330203       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0501 03:46:17.397871       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0501 03:46:17.398034       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0501 03:46:17.461615       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0501 03:46:17.462264       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0501 03:46:17.532705       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0501 03:46:17.532840       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0501 03:46:17.601668       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0501 03:46:17.601964       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0501 03:46:17.802927       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0501 03:46:17.803050       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0501 03:46:20.989941       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 01 03:53:18 no-preload-892672 kubelet[4376]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 01 03:53:18 no-preload-892672 kubelet[4376]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 01 03:53:18 no-preload-892672 kubelet[4376]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 01 03:53:18 no-preload-892672 kubelet[4376]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 01 03:53:24 no-preload-892672 kubelet[4376]: E0501 03:53:24.956738    4376 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5m5qf" podUID="a1ec3e6c-fe90-4168-b0ec-54f82f17b46d"
	May 01 03:53:39 no-preload-892672 kubelet[4376]: E0501 03:53:39.955109    4376 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5m5qf" podUID="a1ec3e6c-fe90-4168-b0ec-54f82f17b46d"
	May 01 03:53:52 no-preload-892672 kubelet[4376]: E0501 03:53:52.956903    4376 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5m5qf" podUID="a1ec3e6c-fe90-4168-b0ec-54f82f17b46d"
	May 01 03:54:07 no-preload-892672 kubelet[4376]: E0501 03:54:07.956747    4376 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5m5qf" podUID="a1ec3e6c-fe90-4168-b0ec-54f82f17b46d"
	May 01 03:54:18 no-preload-892672 kubelet[4376]: E0501 03:54:18.962076    4376 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5m5qf" podUID="a1ec3e6c-fe90-4168-b0ec-54f82f17b46d"
	May 01 03:54:18 no-preload-892672 kubelet[4376]: E0501 03:54:18.988232    4376 iptables.go:577] "Could not set up iptables canary" err=<
	May 01 03:54:18 no-preload-892672 kubelet[4376]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 01 03:54:18 no-preload-892672 kubelet[4376]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 01 03:54:18 no-preload-892672 kubelet[4376]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 01 03:54:18 no-preload-892672 kubelet[4376]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 01 03:54:29 no-preload-892672 kubelet[4376]: E0501 03:54:29.956152    4376 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5m5qf" podUID="a1ec3e6c-fe90-4168-b0ec-54f82f17b46d"
	May 01 03:54:42 no-preload-892672 kubelet[4376]: E0501 03:54:42.959611    4376 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5m5qf" podUID="a1ec3e6c-fe90-4168-b0ec-54f82f17b46d"
	May 01 03:54:55 no-preload-892672 kubelet[4376]: E0501 03:54:55.955616    4376 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5m5qf" podUID="a1ec3e6c-fe90-4168-b0ec-54f82f17b46d"
	May 01 03:55:06 no-preload-892672 kubelet[4376]: E0501 03:55:06.957023    4376 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5m5qf" podUID="a1ec3e6c-fe90-4168-b0ec-54f82f17b46d"
	May 01 03:55:18 no-preload-892672 kubelet[4376]: E0501 03:55:18.983465    4376 iptables.go:577] "Could not set up iptables canary" err=<
	May 01 03:55:18 no-preload-892672 kubelet[4376]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 01 03:55:18 no-preload-892672 kubelet[4376]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 01 03:55:18 no-preload-892672 kubelet[4376]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 01 03:55:18 no-preload-892672 kubelet[4376]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 01 03:55:21 no-preload-892672 kubelet[4376]: E0501 03:55:21.955880    4376 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5m5qf" podUID="a1ec3e6c-fe90-4168-b0ec-54f82f17b46d"
	May 01 03:55:32 no-preload-892672 kubelet[4376]: E0501 03:55:32.956723    4376 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5m5qf" podUID="a1ec3e6c-fe90-4168-b0ec-54f82f17b46d"
	
	
	==> storage-provisioner [9a075f10431025603b7e9b5776296ff25449e3e5d51294564a01819472c4dca0] <==
	I0501 03:46:35.400997       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0501 03:46:35.435610       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0501 03:46:35.436127       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0501 03:46:35.462332       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0501 03:46:35.462545       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-892672_a14caf25-04c4-401c-ab7b-a47f70852afc!
	I0501 03:46:35.468503       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cd682fd8-a3d5-4611-8c6e-a47a39515fc6", APIVersion:"v1", ResourceVersion:"396", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-892672_a14caf25-04c4-401c-ab7b-a47f70852afc became leader
	I0501 03:46:35.593114       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-892672_a14caf25-04c4-401c-ab7b-a47f70852afc!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-892672 -n no-preload-892672
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-892672 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-5m5qf
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-892672 describe pod metrics-server-569cc877fc-5m5qf
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-892672 describe pod metrics-server-569cc877fc-5m5qf: exit status 1 (65.795572ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-5m5qf" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-892672 describe pod metrics-server-569cc877fc-5m5qf: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.37s)
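Note on the post-mortem above: the helper boils down to two kubectl calls, first listing pods that are not in phase Running, then describing whatever remains. A minimal manual version of the same triage is sketched below; it assumes the no-preload-892672 kubeconfig context is still available, and the metrics-server pod name is taken from this run and will differ elsewhere.

	# list every pod in the cluster that is not in phase Running
	kubectl --context no-preload-892672 get pods -A --field-selector=status.phase!=Running

	# inspect the one non-running pod from this run (stuck in ImagePullBackOff)
	kubectl --context no-preload-892672 -n kube-system describe pod metrics-server-569cc877fc-5m5qf

	# confirm which image the pod spec points at; the kubelet log above shows
	# fake.domain/registry.k8s.io/echoserver:1.4, which the kubelet has been unable to pull
	kubectl --context no-preload-892672 -n kube-system get pod metrics-server-569cc877fc-5m5qf \
	  -o jsonpath='{.spec.containers[*].image}'

In this run the describe step returned NotFound even though the pod had just been listed, which suggests the pod was deleted or replaced between the two commands; the exit status 1 above reflects that race rather than an additional cluster failure.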

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.63s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
[... the warning above repeats 45 more times with identical output; the API server at 192.168.61.104:8443 keeps refusing the connection ...]
E0501 03:49:56.198547   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/functional-960026/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
[... the warning above repeats 88 more times with identical output; the API server at 192.168.61.104:8443 is still refusing the connection ...]
E0501 03:51:24.419128   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
[... the warning above repeats 43 more times with identical output; the API server at 192.168.61.104:8443 is still refusing the connection ...]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
[the previous WARNING line repeated 26 more times]
E0501 03:54:56.198996   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/functional-960026/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
[the previous WARNING line repeated 88 more times]
E0501 03:56:24.419991   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
[the previous WARNING line repeated 62 more times]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
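
Each of the warnings above is one failed pod-list call against https://192.168.61.104:8443 with the selector k8s-app=kubernetes-dashboard, retried until the 9m0s context deadline. A minimal client-go sketch of that kind of polling loop (an illustration, not the harness's actual helper; the function name, the 5-second interval, and the use of the default kubeconfig are assumptions):

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForDashboard polls for a kubernetes-dashboard pod the way the warnings
// suggest: list by label selector, log failures, retry until the context
// deadline expires. Names and the interval are assumptions for illustration.
func waitForDashboard(ctx context.Context, kubeconfig string) error {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return err
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	for {
		pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(ctx,
			metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
		if err != nil {
			// e.g. "connection refused" while the apiserver is down
			fmt.Printf("WARNING: pod list returned: %v\n", err)
		} else if len(pods.Items) > 0 {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err() // "context deadline exceeded" once the wait budget is spent
		case <-time.After(5 * time.Second):
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
	defer cancel()
	if err := waitForDashboard(ctx, clientcmd.RecommendedHomeFile); err != nil {
		fmt.Println("failed waiting for dashboard pod:", err)
	}
}

With the apiserver stopped, every List call returns "connection refused", so a loop like this only ends when the deadline fires, which matches the "context deadline exceeded" failure reported below.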
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-503971 -n old-k8s-version-503971
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-503971 -n old-k8s-version-503971: exit status 2 (257.680239ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-503971" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
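
The --format={{.APIServer}} and --format={{.Host}} flags used in these checks are Go text/template expressions rendered against minikube's status output, which is why the same profile can report the host as "Running" below while the apiserver field reads "Stopped" here. A standalone sketch of that template mechanism (the Status struct and its field values are illustrative assumptions, not minikube's real types):

package main

import (
	"os"
	"text/template"
)

// Status stands in for whatever struct minikube renders with --format; the
// field names mirror the template keys seen in the log ({{.Host}},
// {{.APIServer}}), but the type itself is an assumption for illustration.
type Status struct {
	Host      string
	APIServer string
}

func main() {
	// --format={{.APIServer}} parses as a Go text/template and prints the
	// selected field, e.g. "Stopped" even while the host state is "Running".
	tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
	_ = tmpl.Execute(os.Stdout, Status{Host: "Running", APIServer: "Stopped"})
}

Rendering {{.Host}} against the same value would print "Running", matching the post-mortem host check that follows.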
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-503971 -n old-k8s-version-503971
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-503971 -n old-k8s-version-503971: exit status 2 (252.123886ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-503971 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-503971 logs -n 25: (1.677692832s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p cert-options-582976                                 | cert-options-582976          | jenkins | v1.33.0 | 01 May 24 03:30 UTC | 01 May 24 03:30 UTC |
	| delete  | -p pause-542495                                        | pause-542495                 | jenkins | v1.33.0 | 01 May 24 03:30 UTC | 01 May 24 03:30 UTC |
	| start   | -p no-preload-892672                                   | no-preload-892672            | jenkins | v1.33.0 | 01 May 24 03:30 UTC | 01 May 24 03:32 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-277128                                  | embed-certs-277128           | jenkins | v1.33.0 | 01 May 24 03:30 UTC | 01 May 24 03:32 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-046243                           | kubernetes-upgrade-046243    | jenkins | v1.33.0 | 01 May 24 03:30 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-046243                           | kubernetes-upgrade-046243    | jenkins | v1.33.0 | 01 May 24 03:30 UTC | 01 May 24 03:32 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-046243                           | kubernetes-upgrade-046243    | jenkins | v1.33.0 | 01 May 24 03:32 UTC | 01 May 24 03:32 UTC |
	| delete  | -p                                                     | disable-driver-mounts-483221 | jenkins | v1.33.0 | 01 May 24 03:32 UTC | 01 May 24 03:32 UTC |
	|         | disable-driver-mounts-483221                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-715118 | jenkins | v1.33.0 | 01 May 24 03:32 UTC | 01 May 24 03:33 UTC |
	|         | default-k8s-diff-port-715118                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-892672             | no-preload-892672            | jenkins | v1.33.0 | 01 May 24 03:32 UTC | 01 May 24 03:32 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-892672                                   | no-preload-892672            | jenkins | v1.33.0 | 01 May 24 03:32 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-277128            | embed-certs-277128           | jenkins | v1.33.0 | 01 May 24 03:32 UTC | 01 May 24 03:32 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-277128                                  | embed-certs-277128           | jenkins | v1.33.0 | 01 May 24 03:32 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-715118  | default-k8s-diff-port-715118 | jenkins | v1.33.0 | 01 May 24 03:33 UTC | 01 May 24 03:33 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-715118 | jenkins | v1.33.0 | 01 May 24 03:33 UTC |                     |
	|         | default-k8s-diff-port-715118                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-892672                  | no-preload-892672            | jenkins | v1.33.0 | 01 May 24 03:34 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-277128                 | embed-certs-277128           | jenkins | v1.33.0 | 01 May 24 03:34 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-892672                                   | no-preload-892672            | jenkins | v1.33.0 | 01 May 24 03:34 UTC | 01 May 24 03:46 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-503971        | old-k8s-version-503971       | jenkins | v1.33.0 | 01 May 24 03:34 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-277128                                  | embed-certs-277128           | jenkins | v1.33.0 | 01 May 24 03:34 UTC | 01 May 24 03:44 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-715118       | default-k8s-diff-port-715118 | jenkins | v1.33.0 | 01 May 24 03:35 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-715118 | jenkins | v1.33.0 | 01 May 24 03:35 UTC | 01 May 24 03:45 UTC |
	|         | default-k8s-diff-port-715118                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-503971                              | old-k8s-version-503971       | jenkins | v1.33.0 | 01 May 24 03:36 UTC | 01 May 24 03:36 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-503971             | old-k8s-version-503971       | jenkins | v1.33.0 | 01 May 24 03:36 UTC | 01 May 24 03:36 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-503971                              | old-k8s-version-503971       | jenkins | v1.33.0 | 01 May 24 03:36 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/01 03:36:41
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0501 03:36:41.470152   69580 out.go:291] Setting OutFile to fd 1 ...
	I0501 03:36:41.470256   69580 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 03:36:41.470264   69580 out.go:304] Setting ErrFile to fd 2...
	I0501 03:36:41.470268   69580 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 03:36:41.470484   69580 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18779-13391/.minikube/bin
	I0501 03:36:41.470989   69580 out.go:298] Setting JSON to false
	I0501 03:36:41.471856   69580 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8345,"bootTime":1714526257,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0501 03:36:41.471911   69580 start.go:139] virtualization: kvm guest
	I0501 03:36:41.473901   69580 out.go:177] * [old-k8s-version-503971] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0501 03:36:41.474994   69580 out.go:177]   - MINIKUBE_LOCATION=18779
	I0501 03:36:41.475003   69580 notify.go:220] Checking for updates...
	I0501 03:36:41.477150   69580 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0501 03:36:41.478394   69580 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18779-13391/kubeconfig
	I0501 03:36:41.479462   69580 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18779-13391/.minikube
	I0501 03:36:41.480507   69580 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0501 03:36:41.481543   69580 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0501 03:36:41.482907   69580 config.go:182] Loaded profile config "old-k8s-version-503971": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0501 03:36:41.483279   69580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:36:41.483311   69580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:36:41.497758   69580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40255
	I0501 03:36:41.498090   69580 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:36:41.498591   69580 main.go:141] libmachine: Using API Version  1
	I0501 03:36:41.498616   69580 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:36:41.498891   69580 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:36:41.499055   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .DriverName
	I0501 03:36:41.500675   69580 out.go:177] * Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	I0501 03:36:41.501716   69580 driver.go:392] Setting default libvirt URI to qemu:///system
	I0501 03:36:41.501974   69580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:36:41.502024   69580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:36:41.515991   69580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40591
	I0501 03:36:41.516392   69580 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:36:41.516826   69580 main.go:141] libmachine: Using API Version  1
	I0501 03:36:41.516846   69580 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:36:41.517120   69580 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:36:41.517281   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .DriverName
	I0501 03:36:41.551130   69580 out.go:177] * Using the kvm2 driver based on existing profile
	I0501 03:36:41.552244   69580 start.go:297] selected driver: kvm2
	I0501 03:36:41.552253   69580 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-503971 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-503971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.104 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 03:36:41.552369   69580 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0501 03:36:41.553004   69580 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0501 03:36:41.553071   69580 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18779-13391/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0501 03:36:41.567362   69580 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0501 03:36:41.567736   69580 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0501 03:36:41.567815   69580 cni.go:84] Creating CNI manager for ""
	I0501 03:36:41.567832   69580 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0501 03:36:41.567881   69580 start.go:340] cluster config:
	{Name:old-k8s-version-503971 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-503971 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.104 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 03:36:41.568012   69580 iso.go:125] acquiring lock: {Name:mkcd2496aadb29931e179193d707c731bfda98b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0501 03:36:41.569791   69580 out.go:177] * Starting "old-k8s-version-503971" primary control-plane node in "old-k8s-version-503971" cluster
	I0501 03:36:38.886755   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:36:41.571352   69580 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0501 03:36:41.571389   69580 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0501 03:36:41.571408   69580 cache.go:56] Caching tarball of preloaded images
	I0501 03:36:41.571478   69580 preload.go:173] Found /home/jenkins/minikube-integration/18779-13391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0501 03:36:41.571490   69580 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0501 03:36:41.571588   69580 profile.go:143] Saving config to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/config.json ...
	I0501 03:36:41.571775   69580 start.go:360] acquireMachinesLock for old-k8s-version-503971: {Name:mkc4548e5afe1d4b0a833f6c522103562b0cefff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0501 03:36:44.966689   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:36:48.038769   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:36:54.118675   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:36:57.190716   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:37:03.270653   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:37:06.342693   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:37:12.422726   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:37:15.494702   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:37:21.574646   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:37:24.646711   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:37:30.726724   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:37:33.798628   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:37:39.878657   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:37:42.950647   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:37:49.030699   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:37:52.102665   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:37:58.182647   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:38:01.254620   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:38:07.334707   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:38:10.406670   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:38:16.486684   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:38:19.558714   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:38:25.638642   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:38:28.710687   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:38:34.790659   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:38:37.862651   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:38:43.942639   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:38:47.014729   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:38:53.094674   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:38:56.166684   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:39:02.246662   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:39:05.318633   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:39:11.398705   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:39:14.470640   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:39:20.550642   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:39:23.622701   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:39:32.707273   68864 start.go:364] duration metric: took 4m38.787656406s to acquireMachinesLock for "embed-certs-277128"
	I0501 03:39:32.707327   68864 start.go:96] Skipping create...Using existing machine configuration
	I0501 03:39:32.707336   68864 fix.go:54] fixHost starting: 
	I0501 03:39:32.707655   68864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:39:32.707697   68864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:39:32.722689   68864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35015
	I0501 03:39:32.723061   68864 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:39:32.723536   68864 main.go:141] libmachine: Using API Version  1
	I0501 03:39:32.723557   68864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:39:32.723848   68864 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:39:32.724041   68864 main.go:141] libmachine: (embed-certs-277128) Calling .DriverName
	I0501 03:39:32.724164   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetState
	I0501 03:39:32.725542   68864 fix.go:112] recreateIfNeeded on embed-certs-277128: state=Stopped err=<nil>
	I0501 03:39:32.725569   68864 main.go:141] libmachine: (embed-certs-277128) Calling .DriverName
	W0501 03:39:32.725714   68864 fix.go:138] unexpected machine state, will restart: <nil>
	I0501 03:39:32.727403   68864 out.go:177] * Restarting existing kvm2 VM for "embed-certs-277128" ...
	I0501 03:39:29.702654   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:39:32.704906   68640 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0501 03:39:32.704940   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetMachineName
	I0501 03:39:32.705254   68640 buildroot.go:166] provisioning hostname "no-preload-892672"
	I0501 03:39:32.705278   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetMachineName
	I0501 03:39:32.705499   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:39:32.707128   68640 machine.go:97] duration metric: took 4m44.649178925s to provisionDockerMachine
	I0501 03:39:32.707171   68640 fix.go:56] duration metric: took 4m44.67002247s for fixHost
	I0501 03:39:32.707178   68640 start.go:83] releasing machines lock for "no-preload-892672", held for 4m44.670048235s
	W0501 03:39:32.707201   68640 start.go:713] error starting host: provision: host is not running
	W0501 03:39:32.707293   68640 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0501 03:39:32.707305   68640 start.go:728] Will try again in 5 seconds ...
	I0501 03:39:32.728616   68864 main.go:141] libmachine: (embed-certs-277128) Calling .Start
	I0501 03:39:32.728768   68864 main.go:141] libmachine: (embed-certs-277128) Ensuring networks are active...
	I0501 03:39:32.729434   68864 main.go:141] libmachine: (embed-certs-277128) Ensuring network default is active
	I0501 03:39:32.729789   68864 main.go:141] libmachine: (embed-certs-277128) Ensuring network mk-embed-certs-277128 is active
	I0501 03:39:32.730218   68864 main.go:141] libmachine: (embed-certs-277128) Getting domain xml...
	I0501 03:39:32.730972   68864 main.go:141] libmachine: (embed-certs-277128) Creating domain...
	I0501 03:39:37.711605   68640 start.go:360] acquireMachinesLock for no-preload-892672: {Name:mkc4548e5afe1d4b0a833f6c522103562b0cefff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0501 03:39:33.914124   68864 main.go:141] libmachine: (embed-certs-277128) Waiting to get IP...
	I0501 03:39:33.915022   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:33.915411   68864 main.go:141] libmachine: (embed-certs-277128) DBG | unable to find current IP address of domain embed-certs-277128 in network mk-embed-certs-277128
	I0501 03:39:33.915473   68864 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:39:33.915391   70171 retry.go:31] will retry after 278.418743ms: waiting for machine to come up
	I0501 03:39:34.195933   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:34.196470   68864 main.go:141] libmachine: (embed-certs-277128) DBG | unable to find current IP address of domain embed-certs-277128 in network mk-embed-certs-277128
	I0501 03:39:34.196497   68864 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:39:34.196417   70171 retry.go:31] will retry after 375.593174ms: waiting for machine to come up
	I0501 03:39:34.574178   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:34.574666   68864 main.go:141] libmachine: (embed-certs-277128) DBG | unable to find current IP address of domain embed-certs-277128 in network mk-embed-certs-277128
	I0501 03:39:34.574689   68864 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:39:34.574617   70171 retry.go:31] will retry after 377.853045ms: waiting for machine to come up
	I0501 03:39:34.954022   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:34.954436   68864 main.go:141] libmachine: (embed-certs-277128) DBG | unable to find current IP address of domain embed-certs-277128 in network mk-embed-certs-277128
	I0501 03:39:34.954465   68864 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:39:34.954375   70171 retry.go:31] will retry after 374.024178ms: waiting for machine to come up
	I0501 03:39:35.330087   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:35.330514   68864 main.go:141] libmachine: (embed-certs-277128) DBG | unable to find current IP address of domain embed-certs-277128 in network mk-embed-certs-277128
	I0501 03:39:35.330545   68864 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:39:35.330478   70171 retry.go:31] will retry after 488.296666ms: waiting for machine to come up
	I0501 03:39:35.820177   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:35.820664   68864 main.go:141] libmachine: (embed-certs-277128) DBG | unable to find current IP address of domain embed-certs-277128 in network mk-embed-certs-277128
	I0501 03:39:35.820692   68864 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:39:35.820629   70171 retry.go:31] will retry after 665.825717ms: waiting for machine to come up
	I0501 03:39:36.488492   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:36.488910   68864 main.go:141] libmachine: (embed-certs-277128) DBG | unable to find current IP address of domain embed-certs-277128 in network mk-embed-certs-277128
	I0501 03:39:36.488941   68864 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:39:36.488860   70171 retry.go:31] will retry after 1.04269192s: waiting for machine to come up
	I0501 03:39:37.532622   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:37.533006   68864 main.go:141] libmachine: (embed-certs-277128) DBG | unable to find current IP address of domain embed-certs-277128 in network mk-embed-certs-277128
	I0501 03:39:37.533032   68864 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:39:37.532968   70171 retry.go:31] will retry after 1.348239565s: waiting for machine to come up
	I0501 03:39:38.882895   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:38.883364   68864 main.go:141] libmachine: (embed-certs-277128) DBG | unable to find current IP address of domain embed-certs-277128 in network mk-embed-certs-277128
	I0501 03:39:38.883396   68864 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:39:38.883301   70171 retry.go:31] will retry after 1.718495999s: waiting for machine to come up
	I0501 03:39:40.604329   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:40.604760   68864 main.go:141] libmachine: (embed-certs-277128) DBG | unable to find current IP address of domain embed-certs-277128 in network mk-embed-certs-277128
	I0501 03:39:40.604791   68864 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:39:40.604703   70171 retry.go:31] will retry after 2.237478005s: waiting for machine to come up
	I0501 03:39:42.843398   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:42.843920   68864 main.go:141] libmachine: (embed-certs-277128) DBG | unable to find current IP address of domain embed-certs-277128 in network mk-embed-certs-277128
	I0501 03:39:42.843949   68864 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:39:42.843869   70171 retry.go:31] will retry after 2.618059388s: waiting for machine to come up
	I0501 03:39:45.465576   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:45.465968   68864 main.go:141] libmachine: (embed-certs-277128) DBG | unable to find current IP address of domain embed-certs-277128 in network mk-embed-certs-277128
	I0501 03:39:45.465992   68864 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:39:45.465928   70171 retry.go:31] will retry after 2.895120972s: waiting for machine to come up
	I0501 03:39:48.362239   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:48.362651   68864 main.go:141] libmachine: (embed-certs-277128) DBG | unable to find current IP address of domain embed-certs-277128 in network mk-embed-certs-277128
	I0501 03:39:48.362683   68864 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:39:48.362617   70171 retry.go:31] will retry after 2.857441112s: waiting for machine to come up
	I0501 03:39:52.791989   69237 start.go:364] duration metric: took 4m2.036138912s to acquireMachinesLock for "default-k8s-diff-port-715118"
	I0501 03:39:52.792059   69237 start.go:96] Skipping create...Using existing machine configuration
	I0501 03:39:52.792071   69237 fix.go:54] fixHost starting: 
	I0501 03:39:52.792454   69237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:39:52.792492   69237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:39:52.809707   69237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46419
	I0501 03:39:52.810075   69237 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:39:52.810544   69237 main.go:141] libmachine: Using API Version  1
	I0501 03:39:52.810564   69237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:39:52.810881   69237 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:39:52.811067   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .DriverName
	I0501 03:39:52.811217   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetState
	I0501 03:39:52.812787   69237 fix.go:112] recreateIfNeeded on default-k8s-diff-port-715118: state=Stopped err=<nil>
	I0501 03:39:52.812820   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .DriverName
	W0501 03:39:52.812969   69237 fix.go:138] unexpected machine state, will restart: <nil>
	I0501 03:39:52.815136   69237 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-715118" ...
	I0501 03:39:51.223450   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.223938   68864 main.go:141] libmachine: (embed-certs-277128) Found IP for machine: 192.168.50.218
	I0501 03:39:51.223965   68864 main.go:141] libmachine: (embed-certs-277128) Reserving static IP address...
	I0501 03:39:51.223982   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has current primary IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.224375   68864 main.go:141] libmachine: (embed-certs-277128) Reserved static IP address: 192.168.50.218
	I0501 03:39:51.224440   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "embed-certs-277128", mac: "52:54:00:96:11:7d", ip: "192.168.50.218"} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:51.224454   68864 main.go:141] libmachine: (embed-certs-277128) Waiting for SSH to be available...
	I0501 03:39:51.224491   68864 main.go:141] libmachine: (embed-certs-277128) DBG | skip adding static IP to network mk-embed-certs-277128 - found existing host DHCP lease matching {name: "embed-certs-277128", mac: "52:54:00:96:11:7d", ip: "192.168.50.218"}
	I0501 03:39:51.224507   68864 main.go:141] libmachine: (embed-certs-277128) DBG | Getting to WaitForSSH function...
	I0501 03:39:51.226437   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.226733   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:51.226764   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.226863   68864 main.go:141] libmachine: (embed-certs-277128) DBG | Using SSH client type: external
	I0501 03:39:51.226886   68864 main.go:141] libmachine: (embed-certs-277128) DBG | Using SSH private key: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/embed-certs-277128/id_rsa (-rw-------)
	I0501 03:39:51.226917   68864 main.go:141] libmachine: (embed-certs-277128) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.218 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18779-13391/.minikube/machines/embed-certs-277128/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0501 03:39:51.226930   68864 main.go:141] libmachine: (embed-certs-277128) DBG | About to run SSH command:
	I0501 03:39:51.226941   68864 main.go:141] libmachine: (embed-certs-277128) DBG | exit 0
	I0501 03:39:51.354225   68864 main.go:141] libmachine: (embed-certs-277128) DBG | SSH cmd err, output: <nil>: 
	I0501 03:39:51.354641   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetConfigRaw
	I0501 03:39:51.355337   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetIP
	I0501 03:39:51.357934   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.358265   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:51.358302   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.358584   68864 profile.go:143] Saving config to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/embed-certs-277128/config.json ...
	I0501 03:39:51.358753   68864 machine.go:94] provisionDockerMachine start ...
	I0501 03:39:51.358771   68864 main.go:141] libmachine: (embed-certs-277128) Calling .DriverName
	I0501 03:39:51.358940   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHHostname
	I0501 03:39:51.361202   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.361564   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:51.361600   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.361711   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHPort
	I0501 03:39:51.361884   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:39:51.362054   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:39:51.362170   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHUsername
	I0501 03:39:51.362344   68864 main.go:141] libmachine: Using SSH client type: native
	I0501 03:39:51.362572   68864 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.218 22 <nil> <nil>}
	I0501 03:39:51.362586   68864 main.go:141] libmachine: About to run SSH command:
	hostname
	I0501 03:39:51.467448   68864 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0501 03:39:51.467480   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetMachineName
	I0501 03:39:51.467740   68864 buildroot.go:166] provisioning hostname "embed-certs-277128"
	I0501 03:39:51.467772   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetMachineName
	I0501 03:39:51.467953   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHHostname
	I0501 03:39:51.470653   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.471022   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:51.471044   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.471159   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHPort
	I0501 03:39:51.471341   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:39:51.471482   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:39:51.471590   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHUsername
	I0501 03:39:51.471729   68864 main.go:141] libmachine: Using SSH client type: native
	I0501 03:39:51.471913   68864 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.218 22 <nil> <nil>}
	I0501 03:39:51.471934   68864 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-277128 && echo "embed-certs-277128" | sudo tee /etc/hostname
	I0501 03:39:51.594372   68864 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-277128
	
	I0501 03:39:51.594422   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHHostname
	I0501 03:39:51.596978   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.597307   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:51.597334   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.597495   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHPort
	I0501 03:39:51.597710   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:39:51.597865   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:39:51.597971   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHUsername
	I0501 03:39:51.598097   68864 main.go:141] libmachine: Using SSH client type: native
	I0501 03:39:51.598250   68864 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.218 22 <nil> <nil>}
	I0501 03:39:51.598271   68864 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-277128' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-277128/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-277128' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0501 03:39:51.712791   68864 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0501 03:39:51.712825   68864 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18779-13391/.minikube CaCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18779-13391/.minikube}
	I0501 03:39:51.712850   68864 buildroot.go:174] setting up certificates
	I0501 03:39:51.712860   68864 provision.go:84] configureAuth start
	I0501 03:39:51.712869   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetMachineName
	I0501 03:39:51.713158   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetIP
	I0501 03:39:51.715577   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.715885   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:51.715918   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.716040   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHHostname
	I0501 03:39:51.718057   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.718342   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:51.718367   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.718550   68864 provision.go:143] copyHostCerts
	I0501 03:39:51.718612   68864 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem, removing ...
	I0501 03:39:51.718622   68864 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem
	I0501 03:39:51.718685   68864 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem (1078 bytes)
	I0501 03:39:51.718790   68864 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem, removing ...
	I0501 03:39:51.718798   68864 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem
	I0501 03:39:51.718823   68864 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem (1123 bytes)
	I0501 03:39:51.718881   68864 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem, removing ...
	I0501 03:39:51.718888   68864 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem
	I0501 03:39:51.718907   68864 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem (1675 bytes)
	I0501 03:39:51.718957   68864 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem org=jenkins.embed-certs-277128 san=[127.0.0.1 192.168.50.218 embed-certs-277128 localhost minikube]
	I0501 03:39:52.100402   68864 provision.go:177] copyRemoteCerts
	I0501 03:39:52.100459   68864 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0501 03:39:52.100494   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHHostname
	I0501 03:39:52.103133   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:52.103363   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:52.103391   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:52.103522   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHPort
	I0501 03:39:52.103694   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:39:52.103790   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHUsername
	I0501 03:39:52.103874   68864 sshutil.go:53] new ssh client: &{IP:192.168.50.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/embed-certs-277128/id_rsa Username:docker}
	I0501 03:39:52.186017   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0501 03:39:52.211959   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0501 03:39:52.237362   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0501 03:39:52.264036   68864 provision.go:87] duration metric: took 551.163591ms to configureAuth
	I0501 03:39:52.264060   68864 buildroot.go:189] setting minikube options for container-runtime
	I0501 03:39:52.264220   68864 config.go:182] Loaded profile config "embed-certs-277128": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 03:39:52.264290   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHHostname
	I0501 03:39:52.266809   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:52.267117   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:52.267140   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:52.267336   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHPort
	I0501 03:39:52.267529   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:39:52.267713   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:39:52.267863   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHUsername
	I0501 03:39:52.268096   68864 main.go:141] libmachine: Using SSH client type: native
	I0501 03:39:52.268273   68864 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.218 22 <nil> <nil>}
	I0501 03:39:52.268290   68864 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0501 03:39:52.543539   68864 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0501 03:39:52.543569   68864 machine.go:97] duration metric: took 1.184800934s to provisionDockerMachine
	I0501 03:39:52.543585   68864 start.go:293] postStartSetup for "embed-certs-277128" (driver="kvm2")
	I0501 03:39:52.543600   68864 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0501 03:39:52.543621   68864 main.go:141] libmachine: (embed-certs-277128) Calling .DriverName
	I0501 03:39:52.543974   68864 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0501 03:39:52.544007   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHHostname
	I0501 03:39:52.546566   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:52.546918   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:52.546955   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:52.547108   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHPort
	I0501 03:39:52.547310   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:39:52.547480   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHUsername
	I0501 03:39:52.547622   68864 sshutil.go:53] new ssh client: &{IP:192.168.50.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/embed-certs-277128/id_rsa Username:docker}
	I0501 03:39:52.636313   68864 ssh_runner.go:195] Run: cat /etc/os-release
	I0501 03:39:52.641408   68864 info.go:137] Remote host: Buildroot 2023.02.9
	I0501 03:39:52.641435   68864 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13391/.minikube/addons for local assets ...
	I0501 03:39:52.641514   68864 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13391/.minikube/files for local assets ...
	I0501 03:39:52.641598   68864 filesync.go:149] local asset: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem -> 207242.pem in /etc/ssl/certs
	I0501 03:39:52.641708   68864 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0501 03:39:52.653421   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem --> /etc/ssl/certs/207242.pem (1708 bytes)
	I0501 03:39:52.681796   68864 start.go:296] duration metric: took 138.197388ms for postStartSetup
	I0501 03:39:52.681840   68864 fix.go:56] duration metric: took 19.974504059s for fixHost
	I0501 03:39:52.681866   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHHostname
	I0501 03:39:52.684189   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:52.684447   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:52.684475   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:52.684691   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHPort
	I0501 03:39:52.684901   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:39:52.685077   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:39:52.685226   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHUsername
	I0501 03:39:52.685393   68864 main.go:141] libmachine: Using SSH client type: native
	I0501 03:39:52.685556   68864 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.218 22 <nil> <nil>}
	I0501 03:39:52.685568   68864 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0501 03:39:52.791802   68864 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714534792.758254619
	
	I0501 03:39:52.791830   68864 fix.go:216] guest clock: 1714534792.758254619
	I0501 03:39:52.791841   68864 fix.go:229] Guest: 2024-05-01 03:39:52.758254619 +0000 UTC Remote: 2024-05-01 03:39:52.681844878 +0000 UTC m=+298.906990848 (delta=76.409741ms)
	I0501 03:39:52.791886   68864 fix.go:200] guest clock delta is within tolerance: 76.409741ms
	I0501 03:39:52.791892   68864 start.go:83] releasing machines lock for "embed-certs-277128", held for 20.08458366s
	I0501 03:39:52.791918   68864 main.go:141] libmachine: (embed-certs-277128) Calling .DriverName
	I0501 03:39:52.792188   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetIP
	I0501 03:39:52.794820   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:52.795217   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:52.795256   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:52.795427   68864 main.go:141] libmachine: (embed-certs-277128) Calling .DriverName
	I0501 03:39:52.795971   68864 main.go:141] libmachine: (embed-certs-277128) Calling .DriverName
	I0501 03:39:52.796142   68864 main.go:141] libmachine: (embed-certs-277128) Calling .DriverName
	I0501 03:39:52.796235   68864 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0501 03:39:52.796285   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHHostname
	I0501 03:39:52.796324   68864 ssh_runner.go:195] Run: cat /version.json
	I0501 03:39:52.796346   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHHostname
	I0501 03:39:52.799128   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:52.799153   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:52.799536   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:52.799570   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:52.799617   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:52.799647   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:52.799779   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHPort
	I0501 03:39:52.799878   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHPort
	I0501 03:39:52.799961   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:39:52.800048   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:39:52.800117   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHUsername
	I0501 03:39:52.800189   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHUsername
	I0501 03:39:52.800243   68864 sshutil.go:53] new ssh client: &{IP:192.168.50.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/embed-certs-277128/id_rsa Username:docker}
	I0501 03:39:52.800299   68864 sshutil.go:53] new ssh client: &{IP:192.168.50.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/embed-certs-277128/id_rsa Username:docker}
	I0501 03:39:52.901147   68864 ssh_runner.go:195] Run: systemctl --version
	I0501 03:39:52.908399   68864 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0501 03:39:53.065012   68864 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0501 03:39:53.073635   68864 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0501 03:39:53.073724   68864 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0501 03:39:53.096146   68864 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0501 03:39:53.096179   68864 start.go:494] detecting cgroup driver to use...
	I0501 03:39:53.096253   68864 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0501 03:39:53.118525   68864 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 03:39:53.136238   68864 docker.go:217] disabling cri-docker service (if available) ...
	I0501 03:39:53.136301   68864 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0501 03:39:53.152535   68864 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0501 03:39:53.171415   68864 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0501 03:39:53.297831   68864 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0501 03:39:53.479469   68864 docker.go:233] disabling docker service ...
	I0501 03:39:53.479552   68864 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0501 03:39:53.497271   68864 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0501 03:39:53.512645   68864 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0501 03:39:53.658448   68864 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0501 03:39:53.787528   68864 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0501 03:39:53.804078   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 03:39:53.836146   68864 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0501 03:39:53.836206   68864 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:39:53.853846   68864 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0501 03:39:53.853915   68864 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:39:53.866319   68864 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:39:53.878410   68864 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:39:53.890304   68864 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0501 03:39:53.903821   68864 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:39:53.916750   68864 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:39:53.938933   68864 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:39:53.952103   68864 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0501 03:39:53.964833   68864 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0501 03:39:53.964893   68864 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0501 03:39:53.983039   68864 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0501 03:39:53.995830   68864 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:39:54.156748   68864 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0501 03:39:54.306973   68864 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0501 03:39:54.307051   68864 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0501 03:39:54.313515   68864 start.go:562] Will wait 60s for crictl version
	I0501 03:39:54.313569   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:39:54.317943   68864 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0501 03:39:54.356360   68864 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0501 03:39:54.356437   68864 ssh_runner.go:195] Run: crio --version
	I0501 03:39:54.391717   68864 ssh_runner.go:195] Run: crio --version
	I0501 03:39:54.428403   68864 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0501 03:39:52.816428   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .Start
	I0501 03:39:52.816592   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Ensuring networks are active...
	I0501 03:39:52.817317   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Ensuring network default is active
	I0501 03:39:52.817668   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Ensuring network mk-default-k8s-diff-port-715118 is active
	I0501 03:39:52.818040   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Getting domain xml...
	I0501 03:39:52.818777   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Creating domain...
	I0501 03:39:54.069624   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting to get IP...
	I0501 03:39:54.070436   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:39:54.070855   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | unable to find current IP address of domain default-k8s-diff-port-715118 in network mk-default-k8s-diff-port-715118
	I0501 03:39:54.070891   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | I0501 03:39:54.070820   70304 retry.go:31] will retry after 260.072623ms: waiting for machine to come up
	I0501 03:39:54.332646   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:39:54.333077   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | unable to find current IP address of domain default-k8s-diff-port-715118 in network mk-default-k8s-diff-port-715118
	I0501 03:39:54.333115   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | I0501 03:39:54.333047   70304 retry.go:31] will retry after 270.897102ms: waiting for machine to come up
	I0501 03:39:54.605705   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:39:54.606102   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | unable to find current IP address of domain default-k8s-diff-port-715118 in network mk-default-k8s-diff-port-715118
	I0501 03:39:54.606155   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | I0501 03:39:54.606070   70304 retry.go:31] will retry after 417.613249ms: waiting for machine to come up
	I0501 03:39:55.025827   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:39:55.026340   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | unable to find current IP address of domain default-k8s-diff-port-715118 in network mk-default-k8s-diff-port-715118
	I0501 03:39:55.026371   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | I0501 03:39:55.026291   70304 retry.go:31] will retry after 428.515161ms: waiting for machine to come up
	I0501 03:39:55.456828   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:39:55.457443   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | unable to find current IP address of domain default-k8s-diff-port-715118 in network mk-default-k8s-diff-port-715118
	I0501 03:39:55.457480   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | I0501 03:39:55.457405   70304 retry.go:31] will retry after 701.294363ms: waiting for machine to come up
	I0501 03:39:54.429689   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetIP
	I0501 03:39:54.432488   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:54.432817   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:54.432858   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:54.433039   68864 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0501 03:39:54.437866   68864 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0501 03:39:54.451509   68864 kubeadm.go:877] updating cluster {Name:embed-certs-277128 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:embed-certs-277128 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.218 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0501 03:39:54.451615   68864 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0501 03:39:54.451665   68864 ssh_runner.go:195] Run: sudo crictl images --output json
	I0501 03:39:54.494304   68864 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0501 03:39:54.494379   68864 ssh_runner.go:195] Run: which lz4
	I0501 03:39:54.499090   68864 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0501 03:39:54.503970   68864 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0501 03:39:54.503992   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0501 03:39:56.216407   68864 crio.go:462] duration metric: took 1.717351739s to copy over tarball
	I0501 03:39:56.216488   68864 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0501 03:39:58.703133   68864 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.48661051s)
	I0501 03:39:58.703161   68864 crio.go:469] duration metric: took 2.486721448s to extract the tarball
	I0501 03:39:58.703171   68864 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0501 03:39:58.751431   68864 ssh_runner.go:195] Run: sudo crictl images --output json
	I0501 03:39:58.800353   68864 crio.go:514] all images are preloaded for cri-o runtime.
	I0501 03:39:58.800379   68864 cache_images.go:84] Images are preloaded, skipping loading
	I0501 03:39:58.800389   68864 kubeadm.go:928] updating node { 192.168.50.218 8443 v1.30.0 crio true true} ...
	I0501 03:39:58.800516   68864 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-277128 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.218
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:embed-certs-277128 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0501 03:39:58.800598   68864 ssh_runner.go:195] Run: crio config
	I0501 03:39:56.159966   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:39:56.160373   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | unable to find current IP address of domain default-k8s-diff-port-715118 in network mk-default-k8s-diff-port-715118
	I0501 03:39:56.160404   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | I0501 03:39:56.160334   70304 retry.go:31] will retry after 774.079459ms: waiting for machine to come up
	I0501 03:39:56.936654   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:39:56.937201   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | unable to find current IP address of domain default-k8s-diff-port-715118 in network mk-default-k8s-diff-port-715118
	I0501 03:39:56.937232   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | I0501 03:39:56.937161   70304 retry.go:31] will retry after 877.420181ms: waiting for machine to come up
	I0501 03:39:57.816002   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:39:57.816467   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | unable to find current IP address of domain default-k8s-diff-port-715118 in network mk-default-k8s-diff-port-715118
	I0501 03:39:57.816501   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | I0501 03:39:57.816425   70304 retry.go:31] will retry after 1.477997343s: waiting for machine to come up
	I0501 03:39:59.296533   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:39:59.296970   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | unable to find current IP address of domain default-k8s-diff-port-715118 in network mk-default-k8s-diff-port-715118
	I0501 03:39:59.296995   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | I0501 03:39:59.296922   70304 retry.go:31] will retry after 1.199617253s: waiting for machine to come up
	I0501 03:40:00.498388   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:00.498817   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | unable to find current IP address of domain default-k8s-diff-port-715118 in network mk-default-k8s-diff-port-715118
	I0501 03:40:00.498845   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | I0501 03:40:00.498770   70304 retry.go:31] will retry after 2.227608697s: waiting for machine to come up
	I0501 03:39:58.855600   68864 cni.go:84] Creating CNI manager for ""
	I0501 03:39:58.855630   68864 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0501 03:39:58.855650   68864 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0501 03:39:58.855686   68864 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.218 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-277128 NodeName:embed-certs-277128 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.218"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.218 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0501 03:39:58.855826   68864 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.218
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-277128"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.218
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.218"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0501 03:39:58.855890   68864 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0501 03:39:58.868074   68864 binaries.go:44] Found k8s binaries, skipping transfer
	I0501 03:39:58.868145   68864 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0501 03:39:58.879324   68864 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0501 03:39:58.897572   68864 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0501 03:39:58.918416   68864 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0501 03:39:58.940317   68864 ssh_runner.go:195] Run: grep 192.168.50.218	control-plane.minikube.internal$ /etc/hosts
	I0501 03:39:58.944398   68864 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.218	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0501 03:39:58.959372   68864 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:39:59.094172   68864 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 03:39:59.113612   68864 certs.go:68] Setting up /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/embed-certs-277128 for IP: 192.168.50.218
	I0501 03:39:59.113653   68864 certs.go:194] generating shared ca certs ...
	I0501 03:39:59.113669   68864 certs.go:226] acquiring lock for ca certs: {Name:mk85a75d14c00780d4bf09cd9bdaf0f96f1b8fce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:39:59.113863   68864 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key
	I0501 03:39:59.113919   68864 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key
	I0501 03:39:59.113931   68864 certs.go:256] generating profile certs ...
	I0501 03:39:59.114044   68864 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/embed-certs-277128/client.key
	I0501 03:39:59.114117   68864 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/embed-certs-277128/apiserver.key.65584253
	I0501 03:39:59.114166   68864 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/embed-certs-277128/proxy-client.key
	I0501 03:39:59.114325   68864 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem (1338 bytes)
	W0501 03:39:59.114369   68864 certs.go:480] ignoring /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724_empty.pem, impossibly tiny 0 bytes
	I0501 03:39:59.114383   68864 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem (1679 bytes)
	I0501 03:39:59.114430   68864 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem (1078 bytes)
	I0501 03:39:59.114466   68864 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem (1123 bytes)
	I0501 03:39:59.114497   68864 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem (1675 bytes)
	I0501 03:39:59.114550   68864 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem (1708 bytes)
	I0501 03:39:59.115448   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0501 03:39:59.155890   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0501 03:39:59.209160   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0501 03:39:59.251552   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0501 03:39:59.288100   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/embed-certs-277128/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0501 03:39:59.325437   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/embed-certs-277128/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0501 03:39:59.352593   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/embed-certs-277128/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0501 03:39:59.378992   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/embed-certs-277128/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0501 03:39:59.405517   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem --> /usr/share/ca-certificates/20724.pem (1338 bytes)
	I0501 03:39:59.431253   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem --> /usr/share/ca-certificates/207242.pem (1708 bytes)
	I0501 03:39:59.457155   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0501 03:39:59.483696   68864 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0501 03:39:59.502758   68864 ssh_runner.go:195] Run: openssl version
	I0501 03:39:59.509307   68864 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20724.pem && ln -fs /usr/share/ca-certificates/20724.pem /etc/ssl/certs/20724.pem"
	I0501 03:39:59.521438   68864 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20724.pem
	I0501 03:39:59.526658   68864 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  1 02:20 /usr/share/ca-certificates/20724.pem
	I0501 03:39:59.526706   68864 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20724.pem
	I0501 03:39:59.533201   68864 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20724.pem /etc/ssl/certs/51391683.0"
	I0501 03:39:59.546837   68864 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/207242.pem && ln -fs /usr/share/ca-certificates/207242.pem /etc/ssl/certs/207242.pem"
	I0501 03:39:59.560612   68864 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/207242.pem
	I0501 03:39:59.565545   68864 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  1 02:20 /usr/share/ca-certificates/207242.pem
	I0501 03:39:59.565589   68864 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/207242.pem
	I0501 03:39:59.571737   68864 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/207242.pem /etc/ssl/certs/3ec20f2e.0"
	I0501 03:39:59.584602   68864 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0501 03:39:59.599088   68864 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:39:59.604230   68864 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  1 02:08 /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:39:59.604296   68864 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:39:59.610536   68864 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0501 03:39:59.624810   68864 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0501 03:39:59.629692   68864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0501 03:39:59.636209   68864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0501 03:39:59.642907   68864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0501 03:39:59.649491   68864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0501 03:39:59.655702   68864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0501 03:39:59.661884   68864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
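	The openssl x509 -checkend 86400 runs above ask whether each control-plane certificate expires within the next 24 hours (86400 seconds); openssl exits non-zero if it does. A minimal Go sketch of that probe, shelling out to openssl the same way, follows; it is illustrative only, and the expiresWithin helper is not part of minikube.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// expiresWithin reports whether the certificate at path expires within the
	// given number of seconds, using the same `openssl x509 -checkend` probe the
	// log shows. A non-zero exit from openssl means the cert expires in that window.
	func expiresWithin(path string, seconds int) (bool, error) {
		cmd := exec.Command("openssl", "x509", "-noout", "-in", path,
			"-checkend", fmt.Sprint(seconds))
		if err := cmd.Run(); err != nil {
			if _, ok := err.(*exec.ExitError); ok {
				return true, nil // cert will expire within the window
			}
			return false, err // openssl missing, unreadable file, etc.
		}
		return false, nil
	}

	func main() {
		// One of the certificate paths checked in the log above.
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400)
		if err != nil {
			fmt.Println("check failed:", err)
			return
		}
		fmt.Println("expires within 24h:", soon)
	}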
	I0501 03:39:59.668075   68864 kubeadm.go:391] StartCluster: {Name:embed-certs-277128 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:embed-certs-277128 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.218 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 03:39:59.668209   68864 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0501 03:39:59.668255   68864 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0501 03:39:59.712172   68864 cri.go:89] found id: ""
	I0501 03:39:59.712262   68864 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0501 03:39:59.723825   68864 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0501 03:39:59.723848   68864 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0501 03:39:59.723854   68864 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0501 03:39:59.723890   68864 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0501 03:39:59.735188   68864 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0501 03:39:59.736670   68864 kubeconfig.go:125] found "embed-certs-277128" server: "https://192.168.50.218:8443"
	I0501 03:39:59.739665   68864 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0501 03:39:59.750292   68864 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.218
	I0501 03:39:59.750329   68864 kubeadm.go:1154] stopping kube-system containers ...
	I0501 03:39:59.750339   68864 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0501 03:39:59.750388   68864 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0501 03:39:59.791334   68864 cri.go:89] found id: ""
	I0501 03:39:59.791436   68864 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0501 03:39:59.809162   68864 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0501 03:39:59.820979   68864 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0501 03:39:59.821013   68864 kubeadm.go:156] found existing configuration files:
	
	I0501 03:39:59.821072   68864 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0501 03:39:59.832368   68864 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0501 03:39:59.832443   68864 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0501 03:39:59.843920   68864 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0501 03:39:59.855489   68864 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0501 03:39:59.855562   68864 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0501 03:39:59.867337   68864 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0501 03:39:59.878582   68864 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0501 03:39:59.878659   68864 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0501 03:39:59.890049   68864 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0501 03:39:59.901054   68864 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0501 03:39:59.901110   68864 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0501 03:39:59.912900   68864 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0501 03:39:59.925358   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:00.065105   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:00.861756   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:01.089790   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:01.158944   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:01.249842   68864 api_server.go:52] waiting for apiserver process to appear ...
	I0501 03:40:01.250063   68864 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:01.750273   68864 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:02.250155   68864 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:02.291774   68864 api_server.go:72] duration metric: took 1.041932793s to wait for apiserver process to appear ...
	I0501 03:40:02.291807   68864 api_server.go:88] waiting for apiserver healthz status ...
	I0501 03:40:02.291831   68864 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I0501 03:40:02.292377   68864 api_server.go:269] stopped: https://192.168.50.218:8443/healthz: Get "https://192.168.50.218:8443/healthz": dial tcp 192.168.50.218:8443: connect: connection refused
	I0501 03:40:02.792584   68864 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I0501 03:40:02.727799   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:02.728314   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | unable to find current IP address of domain default-k8s-diff-port-715118 in network mk-default-k8s-diff-port-715118
	I0501 03:40:02.728347   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | I0501 03:40:02.728270   70304 retry.go:31] will retry after 1.844071576s: waiting for machine to come up
	I0501 03:40:04.574870   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:04.575326   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | unable to find current IP address of domain default-k8s-diff-port-715118 in network mk-default-k8s-diff-port-715118
	I0501 03:40:04.575349   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | I0501 03:40:04.575278   70304 retry.go:31] will retry after 2.989286916s: waiting for machine to come up
	I0501 03:40:04.843311   68864 api_server.go:279] https://192.168.50.218:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0501 03:40:04.843360   68864 api_server.go:103] status: https://192.168.50.218:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0501 03:40:04.843377   68864 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I0501 03:40:04.899616   68864 api_server.go:279] https://192.168.50.218:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0501 03:40:04.899655   68864 api_server.go:103] status: https://192.168.50.218:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0501 03:40:05.292097   68864 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I0501 03:40:05.300803   68864 api_server.go:279] https://192.168.50.218:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:40:05.300843   68864 api_server.go:103] status: https://192.168.50.218:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:40:05.792151   68864 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I0501 03:40:05.797124   68864 api_server.go:279] https://192.168.50.218:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:40:05.797158   68864 api_server.go:103] status: https://192.168.50.218:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:40:06.292821   68864 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I0501 03:40:06.297912   68864 api_server.go:279] https://192.168.50.218:8443/healthz returned 200:
	ok
	I0501 03:40:06.305165   68864 api_server.go:141] control plane version: v1.30.0
	I0501 03:40:06.305199   68864 api_server.go:131] duration metric: took 4.013383351s to wait for apiserver health ...
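	The sequence above is a poll loop against https://192.168.50.218:8443/healthz: anonymous requests first get 403 while the RBAC bootstrap roles are still being created, then 500 while the remaining post-start hooks finish, and finally 200. A minimal Go sketch of such a readiness poll is shown below; it is an illustration under stated assumptions (insecure TLS, fixed retry interval), not the actual api_server.go wait logic.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls the apiserver /healthz endpoint until it returns 200 OK
	// or the deadline passes. 403 and 500 responses (as seen in the log above) are
	// treated as "not ready yet" and retried.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			// An anonymous readiness probe: skip certificate verification for brevity.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   2 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.50.218:8443/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}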
	I0501 03:40:06.305211   68864 cni.go:84] Creating CNI manager for ""
	I0501 03:40:06.305220   68864 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0501 03:40:06.306925   68864 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0501 03:40:06.308450   68864 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0501 03:40:06.325186   68864 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0501 03:40:06.380997   68864 system_pods.go:43] waiting for kube-system pods to appear ...
	I0501 03:40:06.394134   68864 system_pods.go:59] 8 kube-system pods found
	I0501 03:40:06.394178   68864 system_pods.go:61] "coredns-7db6d8ff4d-sjplt" [6701ee8e-0630-4332-b01c-26741ed3a7b7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0501 03:40:06.394191   68864 system_pods.go:61] "etcd-embed-certs-277128" [744d7481-dd80-4435-90da-685fc16a76a4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0501 03:40:06.394206   68864 system_pods.go:61] "kube-apiserver-embed-certs-277128" [2f1d0f09-b270-4cdc-af3c-e17e3a244bbf] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0501 03:40:06.394215   68864 system_pods.go:61] "kube-controller-manager-embed-certs-277128" [729aa138-dc0d-4616-97d4-1592453971b8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0501 03:40:06.394222   68864 system_pods.go:61] "kube-proxy-phx7x" [56c0381e-c140-4f69-bbe4-09d393db8b23] Running
	I0501 03:40:06.394232   68864 system_pods.go:61] "kube-scheduler-embed-certs-277128" [cc56e9bc-7f21-4f57-a65a-73a10ffd7145] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0501 03:40:06.394253   68864 system_pods.go:61] "metrics-server-569cc877fc-p8j59" [f8ad6c24-dd5d-4515-9052-c9aca7412b55] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0501 03:40:06.394258   68864 system_pods.go:61] "storage-provisioner" [785be666-58d5-4b9d-92fd-bcacdbdebeb2] Running
	I0501 03:40:06.394273   68864 system_pods.go:74] duration metric: took 13.25246ms to wait for pod list to return data ...
	I0501 03:40:06.394293   68864 node_conditions.go:102] verifying NodePressure condition ...
	I0501 03:40:06.399912   68864 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 03:40:06.399950   68864 node_conditions.go:123] node cpu capacity is 2
	I0501 03:40:06.399974   68864 node_conditions.go:105] duration metric: took 5.664461ms to run NodePressure ...
	I0501 03:40:06.399996   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:06.675573   68864 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0501 03:40:06.680567   68864 kubeadm.go:733] kubelet initialised
	I0501 03:40:06.680591   68864 kubeadm.go:734] duration metric: took 4.987942ms waiting for restarted kubelet to initialise ...
	I0501 03:40:06.680598   68864 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 03:40:06.687295   68864 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-sjplt" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:06.692224   68864 pod_ready.go:97] node "embed-certs-277128" hosting pod "coredns-7db6d8ff4d-sjplt" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:06.692248   68864 pod_ready.go:81] duration metric: took 4.930388ms for pod "coredns-7db6d8ff4d-sjplt" in "kube-system" namespace to be "Ready" ...
	E0501 03:40:06.692258   68864 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-277128" hosting pod "coredns-7db6d8ff4d-sjplt" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:06.692266   68864 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:06.699559   68864 pod_ready.go:97] node "embed-certs-277128" hosting pod "etcd-embed-certs-277128" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:06.699591   68864 pod_ready.go:81] duration metric: took 7.309622ms for pod "etcd-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	E0501 03:40:06.699602   68864 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-277128" hosting pod "etcd-embed-certs-277128" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:06.699613   68864 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:06.705459   68864 pod_ready.go:97] node "embed-certs-277128" hosting pod "kube-apiserver-embed-certs-277128" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:06.705485   68864 pod_ready.go:81] duration metric: took 5.86335ms for pod "kube-apiserver-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	E0501 03:40:06.705497   68864 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-277128" hosting pod "kube-apiserver-embed-certs-277128" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:06.705504   68864 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:06.786157   68864 pod_ready.go:97] node "embed-certs-277128" hosting pod "kube-controller-manager-embed-certs-277128" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:06.786186   68864 pod_ready.go:81] duration metric: took 80.673223ms for pod "kube-controller-manager-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	E0501 03:40:06.786198   68864 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-277128" hosting pod "kube-controller-manager-embed-certs-277128" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:06.786205   68864 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-phx7x" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:07.184262   68864 pod_ready.go:97] node "embed-certs-277128" hosting pod "kube-proxy-phx7x" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:07.184297   68864 pod_ready.go:81] duration metric: took 398.081204ms for pod "kube-proxy-phx7x" in "kube-system" namespace to be "Ready" ...
	E0501 03:40:07.184309   68864 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-277128" hosting pod "kube-proxy-phx7x" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:07.184319   68864 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:07.584569   68864 pod_ready.go:97] node "embed-certs-277128" hosting pod "kube-scheduler-embed-certs-277128" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:07.584607   68864 pod_ready.go:81] duration metric: took 400.279023ms for pod "kube-scheduler-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	E0501 03:40:07.584620   68864 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-277128" hosting pod "kube-scheduler-embed-certs-277128" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:07.584630   68864 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:07.984376   68864 pod_ready.go:97] node "embed-certs-277128" hosting pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:07.984408   68864 pod_ready.go:81] duration metric: took 399.766342ms for pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace to be "Ready" ...
	E0501 03:40:07.984419   68864 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-277128" hosting pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:07.984428   68864 pod_ready.go:38] duration metric: took 1.303821777s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 03:40:07.984448   68864 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0501 03:40:08.000370   68864 ops.go:34] apiserver oom_adj: -16
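	The probe above reads the kube-apiserver's legacy OOM adjustment from /proc/<pid>/oom_adj; the value -16 makes the kernel far less likely to kill the apiserver under memory pressure. A minimal Go equivalent of that cat /proc/$(pgrep kube-apiserver)/oom_adj probe is sketched below; readOOMAdj is an illustrative helper and not minikube code.

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// readOOMAdj finds the newest PID matching name via pgrep and reads its legacy
	// /proc/<pid>/oom_adj value, mirroring the probe in the log above.
	func readOOMAdj(name string) (string, error) {
		out, err := exec.Command("pgrep", "-n", name).Output()
		if err != nil {
			return "", fmt.Errorf("pgrep %s: %w", name, err)
		}
		pid := strings.TrimSpace(string(out))
		val, err := os.ReadFile("/proc/" + pid + "/oom_adj")
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(val)), nil
	}

	func main() {
		adj, err := readOOMAdj("kube-apiserver")
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("kube-apiserver oom_adj:", adj)
	}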
	I0501 03:40:08.000391   68864 kubeadm.go:591] duration metric: took 8.276531687s to restartPrimaryControlPlane
	I0501 03:40:08.000401   68864 kubeadm.go:393] duration metric: took 8.332343707s to StartCluster
	I0501 03:40:08.000416   68864 settings.go:142] acquiring lock: {Name:mkcfe08bcadd45d99c00ed1eaf4e9c226a489fe2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:40:08.000482   68864 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18779-13391/kubeconfig
	I0501 03:40:08.002013   68864 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/kubeconfig: {Name:mk5d1131557f885800877622151c0915337cb23a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:40:08.002343   68864 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.218 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0501 03:40:08.004301   68864 out.go:177] * Verifying Kubernetes components...
	I0501 03:40:08.002423   68864 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0501 03:40:08.002582   68864 config.go:182] Loaded profile config "embed-certs-277128": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 03:40:08.005608   68864 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-277128"
	I0501 03:40:08.005624   68864 addons.go:69] Setting metrics-server=true in profile "embed-certs-277128"
	I0501 03:40:08.005658   68864 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-277128"
	W0501 03:40:08.005670   68864 addons.go:243] addon storage-provisioner should already be in state true
	I0501 03:40:08.005609   68864 addons.go:69] Setting default-storageclass=true in profile "embed-certs-277128"
	I0501 03:40:08.005785   68864 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-277128"
	I0501 03:40:08.005659   68864 addons.go:234] Setting addon metrics-server=true in "embed-certs-277128"
	W0501 03:40:08.005819   68864 addons.go:243] addon metrics-server should already be in state true
	I0501 03:40:08.005851   68864 host.go:66] Checking if "embed-certs-277128" exists ...
	I0501 03:40:08.005613   68864 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:40:08.005695   68864 host.go:66] Checking if "embed-certs-277128" exists ...
	I0501 03:40:08.006230   68864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:40:08.006258   68864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:40:08.006291   68864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:40:08.006310   68864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:40:08.006326   68864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:40:08.006378   68864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:40:08.021231   68864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35583
	I0501 03:40:08.021276   68864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44971
	I0501 03:40:08.021621   68864 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:40:08.021673   68864 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:40:08.022126   68864 main.go:141] libmachine: Using API Version  1
	I0501 03:40:08.022146   68864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:40:08.022353   68864 main.go:141] libmachine: Using API Version  1
	I0501 03:40:08.022390   68864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:40:08.022537   68864 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:40:08.022730   68864 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:40:08.022904   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetState
	I0501 03:40:08.023118   68864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:40:08.023165   68864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:40:08.024792   68864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33047
	I0501 03:40:08.025226   68864 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:40:08.025734   68864 main.go:141] libmachine: Using API Version  1
	I0501 03:40:08.025761   68864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:40:08.026090   68864 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:40:08.026569   68864 addons.go:234] Setting addon default-storageclass=true in "embed-certs-277128"
	W0501 03:40:08.026593   68864 addons.go:243] addon default-storageclass should already be in state true
	I0501 03:40:08.026620   68864 host.go:66] Checking if "embed-certs-277128" exists ...
	I0501 03:40:08.026696   68864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:40:08.026730   68864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:40:08.026977   68864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:40:08.027033   68864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:40:08.039119   68864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42975
	I0501 03:40:08.039585   68864 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:40:08.040083   68864 main.go:141] libmachine: Using API Version  1
	I0501 03:40:08.040106   68864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:40:08.040419   68864 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:40:08.040599   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetState
	I0501 03:40:08.042228   68864 main.go:141] libmachine: (embed-certs-277128) Calling .DriverName
	I0501 03:40:08.044289   68864 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 03:40:08.045766   68864 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0501 03:40:08.045787   68864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0501 03:40:08.045804   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHHostname
	I0501 03:40:08.043677   68864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37141
	I0501 03:40:08.045633   68864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41653
	I0501 03:40:08.046247   68864 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:40:08.046326   68864 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:40:08.046989   68864 main.go:141] libmachine: Using API Version  1
	I0501 03:40:08.047012   68864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:40:08.047196   68864 main.go:141] libmachine: Using API Version  1
	I0501 03:40:08.047216   68864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:40:08.047279   68864 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:40:08.047403   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetState
	I0501 03:40:08.047515   68864 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:40:08.048047   68864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:40:08.048081   68864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:40:08.049225   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:40:08.049623   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:40:08.049649   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:40:08.049773   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHPort
	I0501 03:40:08.049915   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:40:08.050096   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHUsername
	I0501 03:40:08.050165   68864 main.go:141] libmachine: (embed-certs-277128) Calling .DriverName
	I0501 03:40:08.050297   68864 sshutil.go:53] new ssh client: &{IP:192.168.50.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/embed-certs-277128/id_rsa Username:docker}
	I0501 03:40:08.052006   68864 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0501 03:40:08.053365   68864 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0501 03:40:08.053380   68864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0501 03:40:08.053394   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHHostname
	I0501 03:40:08.056360   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:40:08.056752   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:40:08.056782   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:40:08.056892   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHPort
	I0501 03:40:08.057074   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:40:08.057215   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHUsername
	I0501 03:40:08.057334   68864 sshutil.go:53] new ssh client: &{IP:192.168.50.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/embed-certs-277128/id_rsa Username:docker}
	I0501 03:40:08.064476   68864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43029
	I0501 03:40:08.064882   68864 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:40:08.065323   68864 main.go:141] libmachine: Using API Version  1
	I0501 03:40:08.065352   68864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:40:08.065696   68864 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:40:08.065895   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetState
	I0501 03:40:08.067420   68864 main.go:141] libmachine: (embed-certs-277128) Calling .DriverName
	I0501 03:40:08.067740   68864 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0501 03:40:08.067762   68864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0501 03:40:08.067774   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHHostname
	I0501 03:40:08.070587   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:40:08.071043   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:40:08.071073   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:40:08.071225   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHPort
	I0501 03:40:08.071401   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:40:08.071554   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHUsername
	I0501 03:40:08.071688   68864 sshutil.go:53] new ssh client: &{IP:192.168.50.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/embed-certs-277128/id_rsa Username:docker}
	I0501 03:40:08.204158   68864 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 03:40:08.229990   68864 node_ready.go:35] waiting up to 6m0s for node "embed-certs-277128" to be "Ready" ...
	I0501 03:40:08.289511   68864 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0501 03:40:08.289535   68864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0501 03:40:08.301855   68864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0501 03:40:08.311966   68864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0501 03:40:08.330943   68864 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0501 03:40:08.330973   68864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0501 03:40:08.384842   68864 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0501 03:40:08.384867   68864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0501 03:40:08.445206   68864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0501 03:40:09.434390   68864 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.122391479s)
	I0501 03:40:09.434458   68864 main.go:141] libmachine: Making call to close driver server
	I0501 03:40:09.434471   68864 main.go:141] libmachine: (embed-certs-277128) Calling .Close
	I0501 03:40:09.434518   68864 main.go:141] libmachine: Making call to close driver server
	I0501 03:40:09.434541   68864 main.go:141] libmachine: (embed-certs-277128) Calling .Close
	I0501 03:40:09.434567   68864 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.132680542s)
	I0501 03:40:09.434595   68864 main.go:141] libmachine: Making call to close driver server
	I0501 03:40:09.434604   68864 main.go:141] libmachine: (embed-certs-277128) Calling .Close
	I0501 03:40:09.434833   68864 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:40:09.434859   68864 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:40:09.434870   68864 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:40:09.434872   68864 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:40:09.434881   68864 main.go:141] libmachine: Making call to close driver server
	I0501 03:40:09.434882   68864 main.go:141] libmachine: Making call to close driver server
	I0501 03:40:09.434889   68864 main.go:141] libmachine: (embed-certs-277128) Calling .Close
	I0501 03:40:09.434890   68864 main.go:141] libmachine: (embed-certs-277128) Calling .Close
	I0501 03:40:09.434936   68864 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:40:09.434949   68864 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:40:09.434967   68864 main.go:141] libmachine: Making call to close driver server
	I0501 03:40:09.434994   68864 main.go:141] libmachine: (embed-certs-277128) Calling .Close
	I0501 03:40:09.434832   68864 main.go:141] libmachine: (embed-certs-277128) DBG | Closing plugin on server side
	I0501 03:40:09.435072   68864 main.go:141] libmachine: (embed-certs-277128) DBG | Closing plugin on server side
	I0501 03:40:09.437116   68864 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:40:09.437138   68864 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:40:09.437146   68864 main.go:141] libmachine: (embed-certs-277128) DBG | Closing plugin on server side
	I0501 03:40:09.437179   68864 main.go:141] libmachine: (embed-certs-277128) DBG | Closing plugin on server side
	I0501 03:40:09.437194   68864 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:40:09.437215   68864 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:40:09.437297   68864 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:40:09.437342   68864 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:40:09.437359   68864 addons.go:470] Verifying addon metrics-server=true in "embed-certs-277128"
	I0501 03:40:09.445787   68864 main.go:141] libmachine: Making call to close driver server
	I0501 03:40:09.445817   68864 main.go:141] libmachine: (embed-certs-277128) Calling .Close
	I0501 03:40:09.446053   68864 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:40:09.446090   68864 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:40:09.446112   68864 main.go:141] libmachine: (embed-certs-277128) DBG | Closing plugin on server side
	I0501 03:40:09.448129   68864 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I0501 03:40:07.567551   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:07.567914   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | unable to find current IP address of domain default-k8s-diff-port-715118 in network mk-default-k8s-diff-port-715118
	I0501 03:40:07.567948   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | I0501 03:40:07.567860   70304 retry.go:31] will retry after 4.440791777s: waiting for machine to come up
	I0501 03:40:13.516002   69580 start.go:364] duration metric: took 3m31.9441828s to acquireMachinesLock for "old-k8s-version-503971"
	I0501 03:40:13.516087   69580 start.go:96] Skipping create...Using existing machine configuration
	I0501 03:40:13.516100   69580 fix.go:54] fixHost starting: 
	I0501 03:40:13.516559   69580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:40:13.516601   69580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:40:13.537158   69580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38535
	I0501 03:40:13.537631   69580 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:40:13.538169   69580 main.go:141] libmachine: Using API Version  1
	I0501 03:40:13.538197   69580 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:40:13.538570   69580 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:40:13.538769   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .DriverName
	I0501 03:40:13.538958   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetState
	I0501 03:40:13.540454   69580 fix.go:112] recreateIfNeeded on old-k8s-version-503971: state=Stopped err=<nil>
	I0501 03:40:13.540486   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .DriverName
	W0501 03:40:13.540787   69580 fix.go:138] unexpected machine state, will restart: <nil>
	I0501 03:40:13.542670   69580 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-503971" ...
	I0501 03:40:09.449483   68864 addons.go:505] duration metric: took 1.447068548s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass]
	I0501 03:40:10.233650   68864 node_ready.go:53] node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:12.234270   68864 node_ready.go:53] node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:12.011886   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.012305   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Found IP for machine: 192.168.72.158
	I0501 03:40:12.012335   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has current primary IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.012347   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Reserving static IP address...
	I0501 03:40:12.012759   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-715118", mac: "52:54:00:87:12:31", ip: "192.168.72.158"} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:12.012796   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | skip adding static IP to network mk-default-k8s-diff-port-715118 - found existing host DHCP lease matching {name: "default-k8s-diff-port-715118", mac: "52:54:00:87:12:31", ip: "192.168.72.158"}
	I0501 03:40:12.012809   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Reserved static IP address: 192.168.72.158
	I0501 03:40:12.012828   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for SSH to be available...
	I0501 03:40:12.012835   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | Getting to WaitForSSH function...
	I0501 03:40:12.014719   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.015044   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:12.015080   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.015193   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | Using SSH client type: external
	I0501 03:40:12.015220   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | Using SSH private key: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/default-k8s-diff-port-715118/id_rsa (-rw-------)
	I0501 03:40:12.015269   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.158 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18779-13391/.minikube/machines/default-k8s-diff-port-715118/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0501 03:40:12.015280   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | About to run SSH command:
	I0501 03:40:12.015289   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | exit 0
	I0501 03:40:12.138881   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | SSH cmd err, output: <nil>: 
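
	For context, the "Waiting for SSH" step above amounts to repeatedly running "exit 0" on the guest through an external ssh client (with host-key checking disabled) until the command succeeds. The following Go sketch shows that wait loop in minimal form; the key path and retry budget are assumptions for illustration, not minikube's actual implementation.

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForSSH keeps running "exit 0" on the guest until ssh succeeds,
    // mirroring the external-client wait seen in the log above.
    func waitForSSH(host, keyPath string, attempts int) error {
        for i := 0; i < attempts; i++ {
            cmd := exec.Command("ssh",
                "-o", "StrictHostKeyChecking=no",
                "-o", "UserKnownHostsFile=/dev/null",
                "-o", "ConnectTimeout=10",
                "-i", keyPath,
                "docker@"+host, "exit 0")
            if err := cmd.Run(); err == nil {
                return nil // SSH is reachable
            }
            time.Sleep(5 * time.Second)
        }
        return fmt.Errorf("ssh to %s did not come up after %d attempts", host, attempts)
    }

    func main() {
        // Host taken from the log; key path and attempt count are made up.
        if err := waitForSSH("192.168.72.158", "/path/to/id_rsa", 10); err != nil {
            fmt.Println(err)
        }
    }
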
	I0501 03:40:12.139286   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetConfigRaw
	I0501 03:40:12.140056   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetIP
	I0501 03:40:12.142869   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.143322   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:12.143353   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.143662   69237 profile.go:143] Saving config to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/default-k8s-diff-port-715118/config.json ...
	I0501 03:40:12.143858   69237 machine.go:94] provisionDockerMachine start ...
	I0501 03:40:12.143876   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .DriverName
	I0501 03:40:12.144117   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHHostname
	I0501 03:40:12.146145   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.146535   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:12.146563   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.146712   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHPort
	I0501 03:40:12.146889   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:40:12.147021   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:40:12.147130   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHUsername
	I0501 03:40:12.147310   69237 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:12.147558   69237 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.158 22 <nil> <nil>}
	I0501 03:40:12.147574   69237 main.go:141] libmachine: About to run SSH command:
	hostname
	I0501 03:40:12.251357   69237 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0501 03:40:12.251387   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetMachineName
	I0501 03:40:12.251629   69237 buildroot.go:166] provisioning hostname "default-k8s-diff-port-715118"
	I0501 03:40:12.251658   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetMachineName
	I0501 03:40:12.251862   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHHostname
	I0501 03:40:12.254582   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.254892   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:12.254924   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.255073   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHPort
	I0501 03:40:12.255276   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:40:12.255435   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:40:12.255575   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHUsername
	I0501 03:40:12.255744   69237 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:12.255905   69237 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.158 22 <nil> <nil>}
	I0501 03:40:12.255917   69237 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-715118 && echo "default-k8s-diff-port-715118" | sudo tee /etc/hostname
	I0501 03:40:12.377588   69237 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-715118
	
	I0501 03:40:12.377628   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHHostname
	I0501 03:40:12.380627   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.380927   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:12.380958   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.381155   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHPort
	I0501 03:40:12.381372   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:40:12.381550   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:40:12.381723   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHUsername
	I0501 03:40:12.381907   69237 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:12.382148   69237 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.158 22 <nil> <nil>}
	I0501 03:40:12.382170   69237 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-715118' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-715118/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-715118' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0501 03:40:12.494424   69237 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0501 03:40:12.494454   69237 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18779-13391/.minikube CaCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18779-13391/.minikube}
	I0501 03:40:12.494484   69237 buildroot.go:174] setting up certificates
	I0501 03:40:12.494493   69237 provision.go:84] configureAuth start
	I0501 03:40:12.494504   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetMachineName
	I0501 03:40:12.494786   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetIP
	I0501 03:40:12.497309   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.497584   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:12.497616   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.497746   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHHostname
	I0501 03:40:12.500010   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.500302   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:12.500322   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.500449   69237 provision.go:143] copyHostCerts
	I0501 03:40:12.500505   69237 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem, removing ...
	I0501 03:40:12.500524   69237 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem
	I0501 03:40:12.500598   69237 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem (1078 bytes)
	I0501 03:40:12.500759   69237 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem, removing ...
	I0501 03:40:12.500772   69237 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem
	I0501 03:40:12.500815   69237 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem (1123 bytes)
	I0501 03:40:12.500891   69237 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem, removing ...
	I0501 03:40:12.500900   69237 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem
	I0501 03:40:12.500925   69237 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem (1675 bytes)
	I0501 03:40:12.500991   69237 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-715118 san=[127.0.0.1 192.168.72.158 default-k8s-diff-port-715118 localhost minikube]
	I0501 03:40:12.779037   69237 provision.go:177] copyRemoteCerts
	I0501 03:40:12.779104   69237 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0501 03:40:12.779139   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHHostname
	I0501 03:40:12.781800   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.782159   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:12.782195   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.782356   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHPort
	I0501 03:40:12.782655   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:40:12.782812   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHUsername
	I0501 03:40:12.782946   69237 sshutil.go:53] new ssh client: &{IP:192.168.72.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/default-k8s-diff-port-715118/id_rsa Username:docker}
	I0501 03:40:12.867622   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0501 03:40:12.897105   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0501 03:40:12.926675   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0501 03:40:12.955373   69237 provision.go:87] duration metric: took 460.865556ms to configureAuth
	I0501 03:40:12.955405   69237 buildroot.go:189] setting minikube options for container-runtime
	I0501 03:40:12.955606   69237 config.go:182] Loaded profile config "default-k8s-diff-port-715118": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 03:40:12.955700   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHHostname
	I0501 03:40:12.958286   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.958632   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:12.958670   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.958800   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHPort
	I0501 03:40:12.959007   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:40:12.959225   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:40:12.959374   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHUsername
	I0501 03:40:12.959554   69237 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:12.959729   69237 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.158 22 <nil> <nil>}
	I0501 03:40:12.959748   69237 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0501 03:40:13.253328   69237 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0501 03:40:13.253356   69237 machine.go:97] duration metric: took 1.109484866s to provisionDockerMachine
	I0501 03:40:13.253371   69237 start.go:293] postStartSetup for "default-k8s-diff-port-715118" (driver="kvm2")
	I0501 03:40:13.253385   69237 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0501 03:40:13.253405   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .DriverName
	I0501 03:40:13.253753   69237 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0501 03:40:13.253790   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHHostname
	I0501 03:40:13.256734   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:13.257187   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:13.257214   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:13.257345   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHPort
	I0501 03:40:13.257547   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:40:13.257708   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHUsername
	I0501 03:40:13.257856   69237 sshutil.go:53] new ssh client: &{IP:192.168.72.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/default-k8s-diff-port-715118/id_rsa Username:docker}
	I0501 03:40:13.353373   69237 ssh_runner.go:195] Run: cat /etc/os-release
	I0501 03:40:13.359653   69237 info.go:137] Remote host: Buildroot 2023.02.9
	I0501 03:40:13.359679   69237 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13391/.minikube/addons for local assets ...
	I0501 03:40:13.359747   69237 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13391/.minikube/files for local assets ...
	I0501 03:40:13.359854   69237 filesync.go:149] local asset: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem -> 207242.pem in /etc/ssl/certs
	I0501 03:40:13.359964   69237 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0501 03:40:13.370608   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem --> /etc/ssl/certs/207242.pem (1708 bytes)
	I0501 03:40:13.402903   69237 start.go:296] duration metric: took 149.518346ms for postStartSetup
	I0501 03:40:13.402946   69237 fix.go:56] duration metric: took 20.610871873s for fixHost
	I0501 03:40:13.402967   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHHostname
	I0501 03:40:13.406324   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:13.406762   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:13.406792   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:13.407028   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHPort
	I0501 03:40:13.407274   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:40:13.407505   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:40:13.407645   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHUsername
	I0501 03:40:13.407831   69237 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:13.408034   69237 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.158 22 <nil> <nil>}
	I0501 03:40:13.408045   69237 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0501 03:40:13.515775   69237 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714534813.490981768
	
	I0501 03:40:13.515814   69237 fix.go:216] guest clock: 1714534813.490981768
	I0501 03:40:13.515852   69237 fix.go:229] Guest: 2024-05-01 03:40:13.490981768 +0000 UTC Remote: 2024-05-01 03:40:13.402950224 +0000 UTC m=+262.796298359 (delta=88.031544ms)
	I0501 03:40:13.515884   69237 fix.go:200] guest clock delta is within tolerance: 88.031544ms
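
	The three fix.go lines above parse the guest's "date +%s.%N" output and compare it with the host-side timestamp, only adjusting the guest clock when the drift exceeds a tolerance. A rough Go sketch of that comparison follows; the one-second tolerance is an assumed value for illustration.

    package main

    import (
        "fmt"
        "time"
    )

    // clockDeltaOK reports the absolute guest/host clock difference and
    // whether it falls inside the allowed tolerance, as in the
    // "guest clock delta is within tolerance" message above.
    func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        return delta, delta <= tolerance
    }

    func main() {
        guest := time.Unix(1714534813, 490981768)           // parsed from "date +%s.%N" on the guest
        host := time.Unix(1714534813, 402950224)            // host-side reference time
        delta, ok := clockDeltaOK(guest, host, time.Second) // tolerance assumed
        fmt.Printf("delta=%v, within tolerance=%v\n", delta, ok)
    }
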
	I0501 03:40:13.515891   69237 start.go:83] releasing machines lock for "default-k8s-diff-port-715118", held for 20.723857967s
	I0501 03:40:13.515976   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .DriverName
	I0501 03:40:13.516272   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetIP
	I0501 03:40:13.519627   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:13.520098   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:13.520128   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:13.520304   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .DriverName
	I0501 03:40:13.520922   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .DriverName
	I0501 03:40:13.521122   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .DriverName
	I0501 03:40:13.521212   69237 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0501 03:40:13.521292   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHHostname
	I0501 03:40:13.521355   69237 ssh_runner.go:195] Run: cat /version.json
	I0501 03:40:13.521387   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHHostname
	I0501 03:40:13.524292   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:13.524328   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:13.524612   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:13.524672   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:13.524819   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:13.524948   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:13.524989   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHPort
	I0501 03:40:13.525033   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHPort
	I0501 03:40:13.525171   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:40:13.525196   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:40:13.525306   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHUsername
	I0501 03:40:13.525401   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHUsername
	I0501 03:40:13.525490   69237 sshutil.go:53] new ssh client: &{IP:192.168.72.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/default-k8s-diff-port-715118/id_rsa Username:docker}
	I0501 03:40:13.525553   69237 sshutil.go:53] new ssh client: &{IP:192.168.72.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/default-k8s-diff-port-715118/id_rsa Username:docker}
	I0501 03:40:13.628623   69237 ssh_runner.go:195] Run: systemctl --version
	I0501 03:40:13.636013   69237 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0501 03:40:13.787414   69237 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0501 03:40:13.795777   69237 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0501 03:40:13.795867   69237 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0501 03:40:13.822287   69237 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0501 03:40:13.822326   69237 start.go:494] detecting cgroup driver to use...
	I0501 03:40:13.822507   69237 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0501 03:40:13.841310   69237 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 03:40:13.857574   69237 docker.go:217] disabling cri-docker service (if available) ...
	I0501 03:40:13.857645   69237 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0501 03:40:13.872903   69237 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0501 03:40:13.889032   69237 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0501 03:40:14.020563   69237 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0501 03:40:14.222615   69237 docker.go:233] disabling docker service ...
	I0501 03:40:14.222691   69237 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0501 03:40:14.245841   69237 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0501 03:40:14.261001   69237 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0501 03:40:14.385943   69237 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0501 03:40:14.516899   69237 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0501 03:40:14.545138   69237 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 03:40:14.570308   69237 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0501 03:40:14.570373   69237 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:14.586460   69237 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0501 03:40:14.586535   69237 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:14.598947   69237 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:14.617581   69237 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:14.630097   69237 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0501 03:40:14.642379   69237 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:14.653723   69237 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:14.674508   69237 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:14.685890   69237 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0501 03:40:14.696560   69237 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0501 03:40:14.696614   69237 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0501 03:40:14.713050   69237 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
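
	The sysctl probe, modprobe, and ip_forward write above form a common pattern when preparing a node for bridged pod traffic: if net.bridge.bridge-nf-call-iptables does not exist yet, loading the br_netfilter module creates it, and IPv4 forwarding is switched on afterwards. A small Go sketch of that fallback, running the same commands locally rather than over SSH (root privileges assumed):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // ensureBridgeNetfilter mirrors the fallback in the log: if the sysctl
    // key is absent, load br_netfilter, then enable IPv4 forwarding.
    func ensureBridgeNetfilter() error {
        if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
            // The key is not present yet; loading the module creates it.
            if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
                return fmt.Errorf("modprobe br_netfilter: %w", err)
            }
        }
        return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
    }

    func main() {
        if err := ensureBridgeNetfilter(); err != nil {
            fmt.Println(err)
        }
    }
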
	I0501 03:40:14.723466   69237 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:40:14.884910   69237 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0501 03:40:15.030618   69237 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0501 03:40:15.030689   69237 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0501 03:40:15.036403   69237 start.go:562] Will wait 60s for crictl version
	I0501 03:40:15.036470   69237 ssh_runner.go:195] Run: which crictl
	I0501 03:40:15.040924   69237 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0501 03:40:15.082944   69237 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0501 03:40:15.083037   69237 ssh_runner.go:195] Run: crio --version
	I0501 03:40:15.123492   69237 ssh_runner.go:195] Run: crio --version
	I0501 03:40:15.160739   69237 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0501 03:40:15.162026   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetIP
	I0501 03:40:15.164966   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:15.165378   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:15.165417   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:15.165621   69237 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0501 03:40:15.171717   69237 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0501 03:40:15.190203   69237 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-715118 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.0 ClusterName:default-k8s-diff-port-715118 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.158 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0501 03:40:15.190359   69237 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0501 03:40:15.190439   69237 ssh_runner.go:195] Run: sudo crictl images --output json
	I0501 03:40:15.240549   69237 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0501 03:40:15.240606   69237 ssh_runner.go:195] Run: which lz4
	I0501 03:40:15.246523   69237 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0501 03:40:15.253094   69237 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0501 03:40:15.253139   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0501 03:40:13.544100   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .Start
	I0501 03:40:13.544328   69580 main.go:141] libmachine: (old-k8s-version-503971) Ensuring networks are active...
	I0501 03:40:13.545238   69580 main.go:141] libmachine: (old-k8s-version-503971) Ensuring network default is active
	I0501 03:40:13.545621   69580 main.go:141] libmachine: (old-k8s-version-503971) Ensuring network mk-old-k8s-version-503971 is active
	I0501 03:40:13.546072   69580 main.go:141] libmachine: (old-k8s-version-503971) Getting domain xml...
	I0501 03:40:13.546928   69580 main.go:141] libmachine: (old-k8s-version-503971) Creating domain...
	I0501 03:40:14.858558   69580 main.go:141] libmachine: (old-k8s-version-503971) Waiting to get IP...
	I0501 03:40:14.859690   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:14.860108   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:40:14.860215   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:40:14.860103   70499 retry.go:31] will retry after 294.057322ms: waiting for machine to come up
	I0501 03:40:15.155490   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:15.155922   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:40:15.155954   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:40:15.155870   70499 retry.go:31] will retry after 281.238966ms: waiting for machine to come up
	I0501 03:40:15.439196   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:15.439735   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:40:15.439783   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:40:15.439697   70499 retry.go:31] will retry after 429.353689ms: waiting for machine to come up
	I0501 03:40:15.871266   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:15.871947   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:40:15.871970   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:40:15.871895   70499 retry.go:31] will retry after 478.685219ms: waiting for machine to come up
	I0501 03:40:16.352661   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:16.353125   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:40:16.353161   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:40:16.353087   70499 retry.go:31] will retry after 642.905156ms: waiting for machine to come up
	I0501 03:40:14.235378   68864 node_ready.go:53] node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:15.735465   68864 node_ready.go:49] node "embed-certs-277128" has status "Ready":"True"
	I0501 03:40:15.735494   68864 node_ready.go:38] duration metric: took 7.50546727s for node "embed-certs-277128" to be "Ready" ...
	I0501 03:40:15.735503   68864 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 03:40:15.743215   68864 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-sjplt" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:17.752821   68864 pod_ready.go:102] pod "coredns-7db6d8ff4d-sjplt" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:17.121023   69237 crio.go:462] duration metric: took 1.874524806s to copy over tarball
	I0501 03:40:17.121097   69237 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0501 03:40:19.792970   69237 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.671840765s)
	I0501 03:40:19.793004   69237 crio.go:469] duration metric: took 2.67194801s to extract the tarball
	I0501 03:40:19.793014   69237 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0501 03:40:19.834845   69237 ssh_runner.go:195] Run: sudo crictl images --output json
	I0501 03:40:19.896841   69237 crio.go:514] all images are preloaded for cri-o runtime.
	I0501 03:40:19.896881   69237 cache_images.go:84] Images are preloaded, skipping loading
	I0501 03:40:19.896892   69237 kubeadm.go:928] updating node { 192.168.72.158 8444 v1.30.0 crio true true} ...
	I0501 03:40:19.897027   69237 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-715118 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.158
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-715118 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
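
	The kubelet drop-in printed above is rendered from the node's settings (Kubernetes version, hostname override, node IP) before being written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines further down. A toy rendering of that kind of unit with text/template; the template text and field names are illustrative, not minikube's source.

    package main

    import (
        "os"
        "text/template"
    )

    // unitTmpl is a simplified stand-in for the kubelet drop-in shown above.
    const unitTmpl = "[Unit]\nWants=crio.service\n\n[Service]\nExecStart=\nExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}\n\n[Install]\n"

    func main() {
        t := template.Must(template.New("kubelet").Parse(unitTmpl))
        // Values taken from the log lines above.
        t.Execute(os.Stdout, struct {
            KubernetesVersion, NodeName, NodeIP string
        }{"v1.30.0", "default-k8s-diff-port-715118", "192.168.72.158"})
    }
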
	I0501 03:40:19.897113   69237 ssh_runner.go:195] Run: crio config
	I0501 03:40:19.953925   69237 cni.go:84] Creating CNI manager for ""
	I0501 03:40:19.953956   69237 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0501 03:40:19.953971   69237 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0501 03:40:19.953991   69237 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.158 APIServerPort:8444 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-715118 NodeName:default-k8s-diff-port-715118 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.158"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.158 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0501 03:40:19.954133   69237 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.158
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-715118"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.158
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.158"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0501 03:40:19.954198   69237 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0501 03:40:19.967632   69237 binaries.go:44] Found k8s binaries, skipping transfer
	I0501 03:40:19.967708   69237 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0501 03:40:19.984161   69237 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0501 03:40:20.006540   69237 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0501 03:40:20.029218   69237 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0501 03:40:20.051612   69237 ssh_runner.go:195] Run: grep 192.168.72.158	control-plane.minikube.internal$ /etc/hosts
	I0501 03:40:20.056502   69237 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.158	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0501 03:40:20.071665   69237 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:40:20.194289   69237 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 03:40:20.215402   69237 certs.go:68] Setting up /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/default-k8s-diff-port-715118 for IP: 192.168.72.158
	I0501 03:40:20.215440   69237 certs.go:194] generating shared ca certs ...
	I0501 03:40:20.215471   69237 certs.go:226] acquiring lock for ca certs: {Name:mk85a75d14c00780d4bf09cd9bdaf0f96f1b8fce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:40:20.215698   69237 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key
	I0501 03:40:20.215769   69237 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key
	I0501 03:40:20.215785   69237 certs.go:256] generating profile certs ...
	I0501 03:40:20.215922   69237 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/default-k8s-diff-port-715118/client.key
	I0501 03:40:20.216023   69237 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/default-k8s-diff-port-715118/apiserver.key.91bc3872
	I0501 03:40:20.216094   69237 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/default-k8s-diff-port-715118/proxy-client.key
	I0501 03:40:20.216275   69237 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem (1338 bytes)
	W0501 03:40:20.216321   69237 certs.go:480] ignoring /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724_empty.pem, impossibly tiny 0 bytes
	I0501 03:40:20.216337   69237 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem (1679 bytes)
	I0501 03:40:20.216375   69237 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem (1078 bytes)
	I0501 03:40:20.216439   69237 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem (1123 bytes)
	I0501 03:40:20.216472   69237 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem (1675 bytes)
	I0501 03:40:20.216560   69237 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem (1708 bytes)
	I0501 03:40:20.217306   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0501 03:40:20.256162   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0501 03:40:20.293643   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0501 03:40:20.329175   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0501 03:40:20.367715   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/default-k8s-diff-port-715118/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0501 03:40:20.400024   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/default-k8s-diff-port-715118/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0501 03:40:20.428636   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/default-k8s-diff-port-715118/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0501 03:40:20.458689   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/default-k8s-diff-port-715118/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0501 03:40:20.487619   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem --> /usr/share/ca-certificates/207242.pem (1708 bytes)
	I0501 03:40:20.518140   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0501 03:40:20.547794   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem --> /usr/share/ca-certificates/20724.pem (1338 bytes)
	I0501 03:40:20.580453   69237 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0501 03:40:20.605211   69237 ssh_runner.go:195] Run: openssl version
	I0501 03:40:20.612269   69237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/207242.pem && ln -fs /usr/share/ca-certificates/207242.pem /etc/ssl/certs/207242.pem"
	I0501 03:40:20.626575   69237 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/207242.pem
	I0501 03:40:20.632370   69237 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  1 02:20 /usr/share/ca-certificates/207242.pem
	I0501 03:40:20.632439   69237 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/207242.pem
	I0501 03:40:20.639563   69237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/207242.pem /etc/ssl/certs/3ec20f2e.0"
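The two commands above follow the standard OpenSSL CA-directory convention: the certificate's subject hash (printed by `openssl x509 -hash -noout`) becomes the name of a `<hash>.0` symlink under /etc/ssl/certs, so TLS clients on the guest can find the custom CA. A minimal Go sketch of those same two steps, run locally rather than through minikube's ssh_runner; the certificate path is the one from the log and openssl is assumed to be on PATH.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA mirrors the log above: hash the cert with openssl, then link it
// under /etc/ssl/certs as <hash>.0 so TLS clients can locate it.
func installCA(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // replace a stale link, like ln -fs
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/207242.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}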
	I0501 03:40:16.997533   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:16.998034   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:40:16.998076   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:40:16.997984   70499 retry.go:31] will retry after 596.56948ms: waiting for machine to come up
	I0501 03:40:17.596671   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:17.597182   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:40:17.597207   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:40:17.597132   70499 retry.go:31] will retry after 770.742109ms: waiting for machine to come up
	I0501 03:40:18.369337   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:18.369833   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:40:18.369864   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:40:18.369780   70499 retry.go:31] will retry after 1.382502808s: waiting for machine to come up
	I0501 03:40:19.753936   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:19.754419   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:40:19.754458   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:40:19.754363   70499 retry.go:31] will retry after 1.344792989s: waiting for machine to come up
	I0501 03:40:21.101047   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:21.101474   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:40:21.101514   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:40:21.101442   70499 retry.go:31] will retry after 1.636964906s: waiting for machine to come up
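The `retry.go:31` lines above show the wait-for-machine loop asking libvirt for the domain's IP and retrying with growing, jittered delays until a DHCP lease appears. A rough sketch of that retry pattern only; this is not minikube's retry package, and the attempt count, base delay and jitter below are illustrative assumptions.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff keeps calling fn until it succeeds or attempts run out,
// sleeping a jittered, growing delay between tries — the pattern behind the
// "will retry after …: waiting for machine to come up" lines above.
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		delay := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	_ = retryWithBackoff(5, 500*time.Millisecond, func() error {
		return errors.New("waiting for machine to come up")
	})
}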
	I0501 03:40:20.252239   68864 pod_ready.go:102] pod "coredns-7db6d8ff4d-sjplt" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:22.751407   68864 pod_ready.go:92] pod "coredns-7db6d8ff4d-sjplt" in "kube-system" namespace has status "Ready":"True"
	I0501 03:40:22.751431   68864 pod_ready.go:81] duration metric: took 7.008190087s for pod "coredns-7db6d8ff4d-sjplt" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:22.751442   68864 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:22.757104   68864 pod_ready.go:92] pod "etcd-embed-certs-277128" in "kube-system" namespace has status "Ready":"True"
	I0501 03:40:22.757124   68864 pod_ready.go:81] duration metric: took 5.677117ms for pod "etcd-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:22.757141   68864 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:22.763083   68864 pod_ready.go:92] pod "kube-apiserver-embed-certs-277128" in "kube-system" namespace has status "Ready":"True"
	I0501 03:40:22.763107   68864 pod_ready.go:81] duration metric: took 5.958961ms for pod "kube-apiserver-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:22.763119   68864 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:22.768163   68864 pod_ready.go:92] pod "kube-controller-manager-embed-certs-277128" in "kube-system" namespace has status "Ready":"True"
	I0501 03:40:22.768182   68864 pod_ready.go:81] duration metric: took 5.055934ms for pod "kube-controller-manager-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:22.768193   68864 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-phx7x" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:22.772478   68864 pod_ready.go:92] pod "kube-proxy-phx7x" in "kube-system" namespace has status "Ready":"True"
	I0501 03:40:22.772497   68864 pod_ready.go:81] duration metric: took 4.297358ms for pod "kube-proxy-phx7x" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:22.772505   68864 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:23.149692   68864 pod_ready.go:92] pod "kube-scheduler-embed-certs-277128" in "kube-system" namespace has status "Ready":"True"
	I0501 03:40:23.149726   68864 pod_ready.go:81] duration metric: took 377.213314ms for pod "kube-scheduler-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:23.149741   68864 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace to be "Ready" ...
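Each `pod_ready.go` wait above polls a kube-system pod until its Ready condition reports True (or the 6m timeout expires). A sketch of the same check written against client-go; it is an illustration of the condition being polled, not minikube's own helper, and the kubeconfig location, pod name and 2-second poll interval are assumptions.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls a pod until its Ready condition is True or the timeout hits.
func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s never became Ready", ns, name)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(waitPodReady(cs, "kube-system", "etcd-embed-certs-277128", 6*time.Minute))
}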
	I0501 03:40:20.653202   69237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0501 03:40:20.878582   69237 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:40:20.884671   69237 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  1 02:08 /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:40:20.884755   69237 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:40:20.891633   69237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0501 03:40:20.906032   69237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20724.pem && ln -fs /usr/share/ca-certificates/20724.pem /etc/ssl/certs/20724.pem"
	I0501 03:40:20.924491   69237 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20724.pem
	I0501 03:40:20.931346   69237 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  1 02:20 /usr/share/ca-certificates/20724.pem
	I0501 03:40:20.931421   69237 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20724.pem
	I0501 03:40:20.937830   69237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20724.pem /etc/ssl/certs/51391683.0"
	I0501 03:40:20.951239   69237 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0501 03:40:20.956883   69237 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0501 03:40:20.964048   69237 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0501 03:40:20.971156   69237 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0501 03:40:20.978243   69237 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0501 03:40:20.985183   69237 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0501 03:40:20.991709   69237 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
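Each `openssl x509 -noout -checkend 86400` call above asks whether the certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit means the cert is expired or about to expire and needs regeneration. An equivalent check in Go using crypto/x509 — the file path is one of the certificates checked above and is illustrative only.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path will
// expire within d — the same question openssl's -checkend flag answers.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(soon, err)
}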
	I0501 03:40:20.998390   69237 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-715118 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-715118 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.158 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 03:40:20.998509   69237 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0501 03:40:20.998558   69237 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0501 03:40:21.051469   69237 cri.go:89] found id: ""
	I0501 03:40:21.051575   69237 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0501 03:40:21.063280   69237 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0501 03:40:21.063301   69237 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0501 03:40:21.063307   69237 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0501 03:40:21.063381   69237 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0501 03:40:21.077380   69237 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0501 03:40:21.078445   69237 kubeconfig.go:125] found "default-k8s-diff-port-715118" server: "https://192.168.72.158:8444"
	I0501 03:40:21.080872   69237 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0501 03:40:21.095004   69237 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.158
	I0501 03:40:21.095045   69237 kubeadm.go:1154] stopping kube-system containers ...
	I0501 03:40:21.095059   69237 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0501 03:40:21.095123   69237 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0501 03:40:21.151629   69237 cri.go:89] found id: ""
	I0501 03:40:21.151711   69237 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0501 03:40:21.177077   69237 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0501 03:40:21.192057   69237 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0501 03:40:21.192087   69237 kubeadm.go:156] found existing configuration files:
	
	I0501 03:40:21.192146   69237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0501 03:40:21.206784   69237 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0501 03:40:21.206870   69237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0501 03:40:21.221942   69237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0501 03:40:21.236442   69237 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0501 03:40:21.236516   69237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0501 03:40:21.251285   69237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0501 03:40:21.265997   69237 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0501 03:40:21.266049   69237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0501 03:40:21.281137   69237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0501 03:40:21.297713   69237 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0501 03:40:21.297783   69237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0501 03:40:21.314264   69237 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0501 03:40:21.328605   69237 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:21.478475   69237 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:22.161692   69237 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:22.432136   69237 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:22.514744   69237 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
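The five commands above replay individual `kubeadm init` phases (certs, kubeconfig, kubelet-start, control-plane static pods, local etcd) against the already-rendered /var/tmp/minikube/kubeadm.yaml instead of running a full init, which is how the restart path rebuilds the control plane. A small Go sketch of that sequence driven through os/exec; the binary path, config path and phase names are taken from the log, while the wrapper and its error handling are illustrative.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	phases := []string{
		"certs all",
		"kubeconfig all",
		"kubelet-start",
		"control-plane all",
		"etcd local",
	}
	for _, phase := range phases {
		// As in the log: run each phase with kubeadm from the pinned binaries dir.
		cmd := fmt.Sprintf(
			`sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`,
			phase)
		if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
			fmt.Printf("phase %q failed: %v\n%s", phase, err, out)
			return
		}
	}
}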
	I0501 03:40:22.597689   69237 api_server.go:52] waiting for apiserver process to appear ...
	I0501 03:40:22.597770   69237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:23.098146   69237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:23.597831   69237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:23.629375   69237 api_server.go:72] duration metric: took 1.031684055s to wait for apiserver process to appear ...
	I0501 03:40:23.629462   69237 api_server.go:88] waiting for apiserver healthz status ...
	I0501 03:40:23.629500   69237 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8444/healthz ...
	I0501 03:40:23.630045   69237 api_server.go:269] stopped: https://192.168.72.158:8444/healthz: Get "https://192.168.72.158:8444/healthz": dial tcp 192.168.72.158:8444: connect: connection refused
	I0501 03:40:24.129831   69237 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8444/healthz ...
	I0501 03:40:22.740241   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:22.740692   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:40:22.740722   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:40:22.740656   70499 retry.go:31] will retry after 1.899831455s: waiting for machine to come up
	I0501 03:40:24.642609   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:24.643075   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:40:24.643104   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:40:24.643019   70499 retry.go:31] will retry after 3.503333894s: waiting for machine to come up
	I0501 03:40:25.157335   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:27.160083   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:27.091079   69237 api_server.go:279] https://192.168.72.158:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0501 03:40:27.091134   69237 api_server.go:103] status: https://192.168.72.158:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0501 03:40:27.091152   69237 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8444/healthz ...
	I0501 03:40:27.163481   69237 api_server.go:279] https://192.168.72.158:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:40:27.163509   69237 api_server.go:103] status: https://192.168.72.158:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:40:27.163522   69237 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8444/healthz ...
	I0501 03:40:27.175097   69237 api_server.go:279] https://192.168.72.158:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:40:27.175129   69237 api_server.go:103] status: https://192.168.72.158:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:40:27.629613   69237 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8444/healthz ...
	I0501 03:40:27.637166   69237 api_server.go:279] https://192.168.72.158:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:40:27.637202   69237 api_server.go:103] status: https://192.168.72.158:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:40:28.130467   69237 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8444/healthz ...
	I0501 03:40:28.148799   69237 api_server.go:279] https://192.168.72.158:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:40:28.148823   69237 api_server.go:103] status: https://192.168.72.158:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:40:28.630500   69237 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8444/healthz ...
	I0501 03:40:28.642856   69237 api_server.go:279] https://192.168.72.158:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:40:28.642890   69237 api_server.go:103] status: https://192.168.72.158:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:40:29.130453   69237 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8444/healthz ...
	I0501 03:40:29.137783   69237 api_server.go:279] https://192.168.72.158:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:40:29.137819   69237 api_server.go:103] status: https://192.168.72.158:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:40:29.630448   69237 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8444/healthz ...
	I0501 03:40:29.634736   69237 api_server.go:279] https://192.168.72.158:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:40:29.634764   69237 api_server.go:103] status: https://192.168.72.158:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:40:30.130371   69237 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8444/healthz ...
	I0501 03:40:30.134727   69237 api_server.go:279] https://192.168.72.158:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:40:30.134755   69237 api_server.go:103] status: https://192.168.72.158:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:40:30.630555   69237 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8444/healthz ...
	I0501 03:40:30.637025   69237 api_server.go:279] https://192.168.72.158:8444/healthz returned 200:
	ok
	I0501 03:40:30.644179   69237 api_server.go:141] control plane version: v1.30.0
	I0501 03:40:30.644209   69237 api_server.go:131] duration metric: took 7.014727807s to wait for apiserver health ...
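The wait that just finished polls https://192.168.72.158:8444/healthz roughly every 500ms: a 403 means the endpoint is up but refusing the anonymous probe, a 500 lists the poststarthooks that have not completed yet, and the loop only stops once the endpoint returns a plain 200 "ok". A rough Go sketch of that polling loop; the real client trusts the cluster CA and presents a client certificate, so the InsecureSkipVerify transport below is an assumption made purely to keep the illustration short.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls the apiserver /healthz endpoint until it returns 200
// or the timeout expires, mirroring the retry loop in the log above.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption for brevity only; minikube verifies the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	fmt.Println(waitHealthz("https://192.168.72.158:8444/healthz", time.Minute))
}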
	I0501 03:40:30.644217   69237 cni.go:84] Creating CNI manager for ""
	I0501 03:40:30.644223   69237 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0501 03:40:30.646018   69237 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0501 03:40:30.647222   69237 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0501 03:40:28.148102   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:28.148506   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:40:28.148547   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:40:28.148463   70499 retry.go:31] will retry after 4.150508159s: waiting for machine to come up
	I0501 03:40:33.783990   68640 start.go:364] duration metric: took 56.072338201s to acquireMachinesLock for "no-preload-892672"
	I0501 03:40:33.784047   68640 start.go:96] Skipping create...Using existing machine configuration
	I0501 03:40:33.784056   68640 fix.go:54] fixHost starting: 
	I0501 03:40:33.784468   68640 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:40:33.784504   68640 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:40:33.801460   68640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37745
	I0501 03:40:33.802023   68640 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:40:33.802634   68640 main.go:141] libmachine: Using API Version  1
	I0501 03:40:33.802669   68640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:40:33.803062   68640 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:40:33.803262   68640 main.go:141] libmachine: (no-preload-892672) Calling .DriverName
	I0501 03:40:33.803379   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetState
	I0501 03:40:33.805241   68640 fix.go:112] recreateIfNeeded on no-preload-892672: state=Stopped err=<nil>
	I0501 03:40:33.805266   68640 main.go:141] libmachine: (no-preload-892672) Calling .DriverName
	W0501 03:40:33.805452   68640 fix.go:138] unexpected machine state, will restart: <nil>
	I0501 03:40:33.807020   68640 out.go:177] * Restarting existing kvm2 VM for "no-preload-892672" ...
	I0501 03:40:29.656911   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:32.158119   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:32.303427   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.303804   69580 main.go:141] libmachine: (old-k8s-version-503971) Found IP for machine: 192.168.61.104
	I0501 03:40:32.303837   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has current primary IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.303851   69580 main.go:141] libmachine: (old-k8s-version-503971) Reserving static IP address...
	I0501 03:40:32.304254   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "old-k8s-version-503971", mac: "52:54:00:7d:68:83", ip: "192.168.61.104"} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:32.304286   69580 main.go:141] libmachine: (old-k8s-version-503971) Reserved static IP address: 192.168.61.104
	I0501 03:40:32.304305   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | skip adding static IP to network mk-old-k8s-version-503971 - found existing host DHCP lease matching {name: "old-k8s-version-503971", mac: "52:54:00:7d:68:83", ip: "192.168.61.104"}
	I0501 03:40:32.304323   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | Getting to WaitForSSH function...
	I0501 03:40:32.304337   69580 main.go:141] libmachine: (old-k8s-version-503971) Waiting for SSH to be available...
	I0501 03:40:32.306619   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.306972   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:32.307011   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.307114   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | Using SSH client type: external
	I0501 03:40:32.307138   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | Using SSH private key: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/old-k8s-version-503971/id_rsa (-rw-------)
	I0501 03:40:32.307174   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.104 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18779-13391/.minikube/machines/old-k8s-version-503971/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0501 03:40:32.307188   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | About to run SSH command:
	I0501 03:40:32.307224   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | exit 0
	I0501 03:40:32.438508   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | SSH cmd err, output: <nil>: 
	I0501 03:40:32.438882   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetConfigRaw
	I0501 03:40:32.439452   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetIP
	I0501 03:40:32.441984   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.442342   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:32.442369   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.442668   69580 profile.go:143] Saving config to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/config.json ...
	I0501 03:40:32.442875   69580 machine.go:94] provisionDockerMachine start ...
	I0501 03:40:32.442897   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .DriverName
	I0501 03:40:32.443077   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHHostname
	I0501 03:40:32.445129   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.445442   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:32.445480   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.445628   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHPort
	I0501 03:40:32.445806   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:32.445974   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:32.446122   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHUsername
	I0501 03:40:32.446314   69580 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:32.446548   69580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.104 22 <nil> <nil>}
	I0501 03:40:32.446564   69580 main.go:141] libmachine: About to run SSH command:
	hostname
	I0501 03:40:32.559346   69580 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0501 03:40:32.559379   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetMachineName
	I0501 03:40:32.559630   69580 buildroot.go:166] provisioning hostname "old-k8s-version-503971"
	I0501 03:40:32.559654   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetMachineName
	I0501 03:40:32.559832   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHHostname
	I0501 03:40:32.562176   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.562547   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:32.562582   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.562716   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHPort
	I0501 03:40:32.562892   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:32.563019   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:32.563161   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHUsername
	I0501 03:40:32.563332   69580 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:32.563545   69580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.104 22 <nil> <nil>}
	I0501 03:40:32.563564   69580 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-503971 && echo "old-k8s-version-503971" | sudo tee /etc/hostname
	I0501 03:40:32.699918   69580 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-503971
	
	I0501 03:40:32.699961   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHHostname
	I0501 03:40:32.702721   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.703134   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:32.703158   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.703361   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHPort
	I0501 03:40:32.703547   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:32.703744   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:32.703881   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHUsername
	I0501 03:40:32.704037   69580 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:32.704199   69580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.104 22 <nil> <nil>}
	I0501 03:40:32.704215   69580 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-503971' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-503971/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-503971' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0501 03:40:32.830277   69580 main.go:141] libmachine: SSH cmd err, output: <nil>: 
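Note: the hostname provisioning above boils down to two small shell steps run over SSH. A minimal sketch of the same commands (quoting simplified from the logged form):

    # set the transient and persistent hostname
    sudo hostname old-k8s-version-503971 && echo "old-k8s-version-503971" | sudo tee /etc/hostname
    # make sure /etc/hosts resolves it, rewriting an existing 127.0.1.1 entry if present
    if ! grep -q 'old-k8s-version-503971' /etc/hosts; then
      if grep -q '^127.0.1.1' /etc/hosts; then
        sudo sed -i 's/^127.0.1.1.*/127.0.1.1 old-k8s-version-503971/' /etc/hosts
      else
        echo '127.0.1.1 old-k8s-version-503971' | sudo tee -a /etc/hosts
      fi
    fi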
	I0501 03:40:32.830307   69580 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18779-13391/.minikube CaCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18779-13391/.minikube}
	I0501 03:40:32.830323   69580 buildroot.go:174] setting up certificates
	I0501 03:40:32.830331   69580 provision.go:84] configureAuth start
	I0501 03:40:32.830340   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetMachineName
	I0501 03:40:32.830629   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetIP
	I0501 03:40:32.833575   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.833887   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:32.833932   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.834070   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHHostname
	I0501 03:40:32.836309   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.836664   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:32.836691   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.836824   69580 provision.go:143] copyHostCerts
	I0501 03:40:32.836885   69580 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem, removing ...
	I0501 03:40:32.836895   69580 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem
	I0501 03:40:32.836945   69580 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem (1078 bytes)
	I0501 03:40:32.837046   69580 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem, removing ...
	I0501 03:40:32.837054   69580 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem
	I0501 03:40:32.837072   69580 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem (1123 bytes)
	I0501 03:40:32.837129   69580 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem, removing ...
	I0501 03:40:32.837136   69580 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem
	I0501 03:40:32.837152   69580 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem (1675 bytes)
	I0501 03:40:32.837202   69580 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-503971 san=[127.0.0.1 192.168.61.104 localhost minikube old-k8s-version-503971]
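Note: provision.go generates the server certificate in-process; purely as an illustration, a certificate with the same SANs could be produced with openssl. File names here are hypothetical, the CA pair is the ca.pem/ca-key.pem listed above:

    # illustrative only; minikube does this in Go, not via openssl
    openssl genrsa -out server-key.pem 2048
    openssl req -new -key server-key.pem -subj "/O=jenkins.old-k8s-version-503971" -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.61.104,DNS:localhost,DNS:minikube,DNS:old-k8s-version-503971') \
      -out server.pem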
	I0501 03:40:33.047948   69580 provision.go:177] copyRemoteCerts
	I0501 03:40:33.048004   69580 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0501 03:40:33.048030   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHHostname
	I0501 03:40:33.050591   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.050975   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:33.051012   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.051142   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHPort
	I0501 03:40:33.051310   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:33.051465   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHUsername
	I0501 03:40:33.051574   69580 sshutil.go:53] new ssh client: &{IP:192.168.61.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/old-k8s-version-503971/id_rsa Username:docker}
	I0501 03:40:33.143991   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0501 03:40:33.175494   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0501 03:40:33.204770   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0501 03:40:33.232728   69580 provision.go:87] duration metric: took 402.386279ms to configureAuth
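Note: copyRemoteCerts is minikube's built-in scp-over-SSH. Done by hand with the key shown above it would look roughly like this (the /tmp staging step is an assumption, since the docker user cannot write /etc/docker directly):

    KEY=/home/jenkins/minikube-integration/18779-13391/.minikube/machines/old-k8s-version-503971/id_rsa
    ssh -i "$KEY" docker@192.168.61.104 'sudo mkdir -p /etc/docker'
    scp -i "$KEY" ca.pem server.pem server-key.pem docker@192.168.61.104:/tmp/
    ssh -i "$KEY" docker@192.168.61.104 'sudo mv /tmp/ca.pem /tmp/server.pem /tmp/server-key.pem /etc/docker/'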
	I0501 03:40:33.232756   69580 buildroot.go:189] setting minikube options for container-runtime
	I0501 03:40:33.232962   69580 config.go:182] Loaded profile config "old-k8s-version-503971": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0501 03:40:33.233051   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHHostname
	I0501 03:40:33.235656   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.236006   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:33.236038   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.236162   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHPort
	I0501 03:40:33.236339   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:33.236484   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:33.236633   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHUsername
	I0501 03:40:33.236817   69580 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:33.236980   69580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.104 22 <nil> <nil>}
	I0501 03:40:33.236997   69580 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0501 03:40:33.526370   69580 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0501 03:40:33.526419   69580 machine.go:97] duration metric: took 1.083510254s to provisionDockerMachine
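Note: the %!s(MISSING) in the logged command is almost certainly a Go fmt artifact (the command string carries a literal %s that the log formatter treats as a verb with no argument). Judging by the echoed output, the command that actually ran is approximately:

    sudo mkdir -p /etc/sysconfig
    printf "\nCRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n" | sudo tee /etc/sysconfig/crio.minikube
    sudo systemctl restart crio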
	I0501 03:40:33.526432   69580 start.go:293] postStartSetup for "old-k8s-version-503971" (driver="kvm2")
	I0501 03:40:33.526443   69580 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0501 03:40:33.526470   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .DriverName
	I0501 03:40:33.526788   69580 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0501 03:40:33.526831   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHHostname
	I0501 03:40:33.529815   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.530209   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:33.530268   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.530364   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHPort
	I0501 03:40:33.530559   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:33.530741   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHUsername
	I0501 03:40:33.530909   69580 sshutil.go:53] new ssh client: &{IP:192.168.61.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/old-k8s-version-503971/id_rsa Username:docker}
	I0501 03:40:33.620224   69580 ssh_runner.go:195] Run: cat /etc/os-release
	I0501 03:40:33.625417   69580 info.go:137] Remote host: Buildroot 2023.02.9
	I0501 03:40:33.625447   69580 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13391/.minikube/addons for local assets ...
	I0501 03:40:33.625511   69580 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13391/.minikube/files for local assets ...
	I0501 03:40:33.625594   69580 filesync.go:149] local asset: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem -> 207242.pem in /etc/ssl/certs
	I0501 03:40:33.625691   69580 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0501 03:40:33.637311   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem --> /etc/ssl/certs/207242.pem (1708 bytes)
	I0501 03:40:33.666707   69580 start.go:296] duration metric: took 140.263297ms for postStartSetup
	I0501 03:40:33.666740   69580 fix.go:56] duration metric: took 20.150640355s for fixHost
	I0501 03:40:33.666758   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHHostname
	I0501 03:40:33.669394   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.669822   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:33.669852   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.669963   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHPort
	I0501 03:40:33.670213   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:33.670388   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:33.670589   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHUsername
	I0501 03:40:33.670794   69580 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:33.670972   69580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.104 22 <nil> <nil>}
	I0501 03:40:33.670984   69580 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0501 03:40:33.783810   69580 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714534833.728910946
	
	I0501 03:40:33.783839   69580 fix.go:216] guest clock: 1714534833.728910946
	I0501 03:40:33.783850   69580 fix.go:229] Guest: 2024-05-01 03:40:33.728910946 +0000 UTC Remote: 2024-05-01 03:40:33.666743363 +0000 UTC m=+232.246108464 (delta=62.167583ms)
	I0501 03:40:33.783893   69580 fix.go:200] guest clock delta is within tolerance: 62.167583ms
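Note: the guest-clock check runs date +%s.%N on the guest (the %!s(MISSING).%!N(MISSING) above is the same logging artifact) and compares it against the host clock, accepting small drift. A rough manual equivalent, reusing the SSH key path from the earlier sketch:

    KEY=/home/jenkins/minikube-integration/18779-13391/.minikube/machines/old-k8s-version-503971/id_rsa
    guest=$(ssh -i "$KEY" docker@192.168.61.104 'date +%s.%N')
    host=$(date +%s.%N)
    # drift in seconds; minikube accepted a ~62ms delta here
    awk -v h="$host" -v g="$guest" 'BEGIN { printf "delta: %.3fs\n", h - g }'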
	I0501 03:40:33.783903   69580 start.go:83] releasing machines lock for "old-k8s-version-503971", held for 20.267840723s
	I0501 03:40:33.783933   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .DriverName
	I0501 03:40:33.784203   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetIP
	I0501 03:40:33.786846   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.787202   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:33.787230   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.787385   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .DriverName
	I0501 03:40:33.787837   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .DriverName
	I0501 03:40:33.787997   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .DriverName
	I0501 03:40:33.788085   69580 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0501 03:40:33.788126   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHHostname
	I0501 03:40:33.788252   69580 ssh_runner.go:195] Run: cat /version.json
	I0501 03:40:33.788279   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHHostname
	I0501 03:40:33.790748   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.791086   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.791118   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:33.791142   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.791435   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHPort
	I0501 03:40:33.791491   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:33.791532   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.791618   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:33.791740   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHPort
	I0501 03:40:33.791815   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHUsername
	I0501 03:40:33.791937   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:33.792014   69580 sshutil.go:53] new ssh client: &{IP:192.168.61.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/old-k8s-version-503971/id_rsa Username:docker}
	I0501 03:40:33.792069   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHUsername
	I0501 03:40:33.792206   69580 sshutil.go:53] new ssh client: &{IP:192.168.61.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/old-k8s-version-503971/id_rsa Username:docker}
	I0501 03:40:33.876242   69580 ssh_runner.go:195] Run: systemctl --version
	I0501 03:40:33.901692   69580 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0501 03:40:34.056758   69580 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0501 03:40:34.065070   69580 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0501 03:40:34.065156   69580 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0501 03:40:34.085337   69580 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
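Note: the find command above (with %!p(MISSING) standing in for a literal %p in -printf) renames any bridge/podman CNI configs out of the way so that only minikube's own CNI config remains active. A cleaner-quoted equivalent:

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
      -printf '%p, ' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;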
	I0501 03:40:34.085364   69580 start.go:494] detecting cgroup driver to use...
	I0501 03:40:34.085432   69580 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0501 03:40:34.102723   69580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 03:40:34.118792   69580 docker.go:217] disabling cri-docker service (if available) ...
	I0501 03:40:34.118847   69580 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0501 03:40:34.133978   69580 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0501 03:40:34.153890   69580 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0501 03:40:34.283815   69580 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0501 03:40:34.475851   69580 docker.go:233] disabling docker service ...
	I0501 03:40:34.475926   69580 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0501 03:40:34.500769   69580 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0501 03:40:34.517315   69580 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0501 03:40:34.674322   69580 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0501 03:40:34.833281   69580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0501 03:40:34.852610   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 03:40:34.879434   69580 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0501 03:40:34.879517   69580 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:34.892197   69580 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0501 03:40:34.892269   69580 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:34.904437   69580 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:34.919950   69580 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:34.933772   69580 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
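Note: taken together, the container-runtime setup above points crictl at the CRI-O socket and pins the pause image and cgroup driver in 02-crio.conf:

    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
    sudo rm -rf /etc/cni/net.mk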
	I0501 03:40:34.947563   69580 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0501 03:40:34.965724   69580 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0501 03:40:34.965795   69580 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0501 03:40:34.984251   69580 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
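Note: the sysctl probe fails only because br_netfilter is not loaded yet; the next two commands fix that and enable IPv4 forwarding, i.e.:

    sudo modprobe br_netfilter
    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
    # should now succeed
    sysctl net.bridge.bridge-nf-call-iptables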
	I0501 03:40:34.997050   69580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:40:35.155852   69580 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0501 03:40:35.362090   69580 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0501 03:40:35.362164   69580 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0501 03:40:35.368621   69580 start.go:562] Will wait 60s for crictl version
	I0501 03:40:35.368701   69580 ssh_runner.go:195] Run: which crictl
	I0501 03:40:35.373792   69580 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0501 03:40:35.436905   69580 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0501 03:40:35.437018   69580 ssh_runner.go:195] Run: crio --version
	I0501 03:40:35.485130   69580 ssh_runner.go:195] Run: crio --version
	I0501 03:40:35.528700   69580 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0501 03:40:30.661395   69237 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0501 03:40:30.682810   69237 system_pods.go:43] waiting for kube-system pods to appear ...
	I0501 03:40:30.694277   69237 system_pods.go:59] 8 kube-system pods found
	I0501 03:40:30.694326   69237 system_pods.go:61] "coredns-7db6d8ff4d-9r7dt" [75d43a25-d309-427e-befc-7f1851b90d8c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0501 03:40:30.694343   69237 system_pods.go:61] "etcd-default-k8s-diff-port-715118" [21f6a4cd-f662-4865-9208-83959f0a6782] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0501 03:40:30.694354   69237 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-715118" [4dc3e45e-a5d8-480f-a8e8-763ecab0976b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0501 03:40:30.694369   69237 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-715118" [340580a3-040e-48fc-b89c-36a4f6fccfc2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0501 03:40:30.694376   69237 system_pods.go:61] "kube-proxy-vg7ts" [e55f3363-178c-427a-819d-0dc94c3116f3] Running
	I0501 03:40:30.694388   69237 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-715118" [b850fc4a-da6b-4714-98bb-e36e185880dd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0501 03:40:30.694417   69237 system_pods.go:61] "metrics-server-569cc877fc-2btjj" [9b8ff94d-9e59-46d4-ac6d-7accca8b3552] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0501 03:40:30.694427   69237 system_pods.go:61] "storage-provisioner" [d44a3cf1-c8a5-4a20-8dd6-b854680b33b9] Running
	I0501 03:40:30.694435   69237 system_pods.go:74] duration metric: took 11.599113ms to wait for pod list to return data ...
	I0501 03:40:30.694449   69237 node_conditions.go:102] verifying NodePressure condition ...
	I0501 03:40:30.697795   69237 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 03:40:30.697825   69237 node_conditions.go:123] node cpu capacity is 2
	I0501 03:40:30.697838   69237 node_conditions.go:105] duration metric: took 3.383507ms to run NodePressure ...
	I0501 03:40:30.697858   69237 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:30.978827   69237 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0501 03:40:30.984628   69237 kubeadm.go:733] kubelet initialised
	I0501 03:40:30.984650   69237 kubeadm.go:734] duration metric: took 5.799905ms waiting for restarted kubelet to initialise ...
	I0501 03:40:30.984656   69237 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 03:40:30.992354   69237 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-9r7dt" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:30.999663   69237 pod_ready.go:97] node "default-k8s-diff-port-715118" hosting pod "coredns-7db6d8ff4d-9r7dt" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-715118" has status "Ready":"False"
	I0501 03:40:30.999690   69237 pod_ready.go:81] duration metric: took 7.312969ms for pod "coredns-7db6d8ff4d-9r7dt" in "kube-system" namespace to be "Ready" ...
	E0501 03:40:30.999700   69237 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-715118" hosting pod "coredns-7db6d8ff4d-9r7dt" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-715118" has status "Ready":"False"
	I0501 03:40:30.999706   69237 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:31.006163   69237 pod_ready.go:97] node "default-k8s-diff-port-715118" hosting pod "etcd-default-k8s-diff-port-715118" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-715118" has status "Ready":"False"
	I0501 03:40:31.006187   69237 pod_ready.go:81] duration metric: took 6.471262ms for pod "etcd-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	E0501 03:40:31.006199   69237 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-715118" hosting pod "etcd-default-k8s-diff-port-715118" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-715118" has status "Ready":"False"
	I0501 03:40:31.006208   69237 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:31.011772   69237 pod_ready.go:97] node "default-k8s-diff-port-715118" hosting pod "kube-apiserver-default-k8s-diff-port-715118" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-715118" has status "Ready":"False"
	I0501 03:40:31.011793   69237 pod_ready.go:81] duration metric: took 5.576722ms for pod "kube-apiserver-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	E0501 03:40:31.011803   69237 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-715118" hosting pod "kube-apiserver-default-k8s-diff-port-715118" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-715118" has status "Ready":"False"
	I0501 03:40:31.011810   69237 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:31.086163   69237 pod_ready.go:97] node "default-k8s-diff-port-715118" hosting pod "kube-controller-manager-default-k8s-diff-port-715118" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-715118" has status "Ready":"False"
	I0501 03:40:31.086194   69237 pod_ready.go:81] duration metric: took 74.377197ms for pod "kube-controller-manager-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	E0501 03:40:31.086207   69237 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-715118" hosting pod "kube-controller-manager-default-k8s-diff-port-715118" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-715118" has status "Ready":"False"
	I0501 03:40:31.086214   69237 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-vg7ts" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:31.487056   69237 pod_ready.go:92] pod "kube-proxy-vg7ts" in "kube-system" namespace has status "Ready":"True"
	I0501 03:40:31.487078   69237 pod_ready.go:81] duration metric: took 400.857543ms for pod "kube-proxy-vg7ts" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:31.487088   69237 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:33.502448   69237 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-715118" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:35.530015   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetIP
	I0501 03:40:35.533706   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:35.534178   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:35.534254   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:35.534515   69580 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0501 03:40:35.541542   69580 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
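Note: the /etc/hosts rewrite above is the usual pattern for refreshing the host.minikube.internal entry: strip any old line, append the current gateway IP, then copy the temp file back into place:

    { grep -v $'\thost.minikube.internal$' /etc/hosts; printf '192.168.61.1\thost.minikube.internal\n'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts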
	I0501 03:40:35.563291   69580 kubeadm.go:877] updating cluster {Name:old-k8s-version-503971 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-503971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.104 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0501 03:40:35.563434   69580 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0501 03:40:35.563512   69580 ssh_runner.go:195] Run: sudo crictl images --output json
	I0501 03:40:35.646548   69580 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0501 03:40:35.646635   69580 ssh_runner.go:195] Run: which lz4
	I0501 03:40:35.652824   69580 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0501 03:40:35.660056   69580 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0501 03:40:35.660099   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0501 03:40:33.808828   68640 main.go:141] libmachine: (no-preload-892672) Calling .Start
	I0501 03:40:33.809083   68640 main.go:141] libmachine: (no-preload-892672) Ensuring networks are active...
	I0501 03:40:33.809829   68640 main.go:141] libmachine: (no-preload-892672) Ensuring network default is active
	I0501 03:40:33.810166   68640 main.go:141] libmachine: (no-preload-892672) Ensuring network mk-no-preload-892672 is active
	I0501 03:40:33.810632   68640 main.go:141] libmachine: (no-preload-892672) Getting domain xml...
	I0501 03:40:33.811386   68640 main.go:141] libmachine: (no-preload-892672) Creating domain...
	I0501 03:40:35.133886   68640 main.go:141] libmachine: (no-preload-892672) Waiting to get IP...
	I0501 03:40:35.134756   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:35.135216   68640 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:40:35.135280   68640 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:40:35.135178   70664 retry.go:31] will retry after 275.796908ms: waiting for machine to come up
	I0501 03:40:35.412670   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:35.413206   68640 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:40:35.413232   68640 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:40:35.413162   70664 retry.go:31] will retry after 326.173381ms: waiting for machine to come up
	I0501 03:40:35.740734   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:35.741314   68640 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:40:35.741342   68640 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:40:35.741260   70664 retry.go:31] will retry after 476.50915ms: waiting for machine to come up
	I0501 03:40:36.219908   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:36.220440   68640 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:40:36.220473   68640 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:40:36.220399   70664 retry.go:31] will retry after 377.277784ms: waiting for machine to come up
	I0501 03:40:36.598936   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:36.599391   68640 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:40:36.599417   68640 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:40:36.599348   70664 retry.go:31] will retry after 587.166276ms: waiting for machine to come up
	I0501 03:40:37.188757   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:37.189406   68640 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:40:37.189441   68640 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:40:37.189311   70664 retry.go:31] will retry after 801.958256ms: waiting for machine to come up
	I0501 03:40:34.658104   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:36.660517   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:35.998453   69237 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-715118" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:38.495088   69237 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-715118" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:39.004175   69237 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-715118" in "kube-system" namespace has status "Ready":"True"
	I0501 03:40:39.004198   69237 pod_ready.go:81] duration metric: took 7.517103824s for pod "kube-scheduler-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:39.004209   69237 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace to be "Ready" ...
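Note: the pod_ready loop above is minikube's own polling. A manual spot-check against the same cluster would look something like this (the kubectl context matches the profile name, as elsewhere in this report):

    kubectl --context default-k8s-diff-port-715118 -n kube-system wait pod \
      -l k8s-app=kube-proxy --for=condition=Ready --timeout=4m
    # list anything still not Running (here: metrics-server-569cc877fc-2btjj)
    kubectl --context default-k8s-diff-port-715118 -n kube-system get pods \
      --field-selector=status.phase!=Running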
	I0501 03:40:37.870306   69580 crio.go:462] duration metric: took 2.217531377s to copy over tarball
	I0501 03:40:37.870393   69580 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0501 03:40:37.992669   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:37.993052   68640 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:40:37.993080   68640 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:40:37.993016   70664 retry.go:31] will retry after 1.085029482s: waiting for machine to come up
	I0501 03:40:39.079315   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:39.079739   68640 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:40:39.079779   68640 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:40:39.079682   70664 retry.go:31] will retry after 1.140448202s: waiting for machine to come up
	I0501 03:40:40.221645   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:40.222165   68640 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:40:40.222192   68640 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:40:40.222103   70664 retry.go:31] will retry after 1.434247869s: waiting for machine to come up
	I0501 03:40:41.658447   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:41.659034   68640 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:40:41.659072   68640 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:40:41.659003   70664 retry.go:31] will retry after 1.759453732s: waiting for machine to come up
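Note: the "will retry after ..." lines are libmachine's jittered backoff while the VM waits for a DHCP lease. An illustrative shell equivalent of the condition being polled, with the network and MAC taken from the log above:

    while ! virsh --connect qemu:///system net-dhcp-leases mk-no-preload-892672 | grep -q '52:54:00:c7:6d:9a'; do
      sleep 1
    done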
	I0501 03:40:39.157834   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:41.164729   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:43.658248   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:41.014770   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:43.513038   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:45.516821   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:41.534681   69580 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.664236925s)
	I0501 03:40:41.599216   69580 crio.go:469] duration metric: took 3.72886857s to extract the tarball
	I0501 03:40:41.599238   69580 ssh_runner.go:146] rm: /preloaded.tar.lz4
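Note: the preload path is: copy the lz4 tarball of container images into the guest, unpack it over /var so CRI-O's image storage is pre-populated, then delete the tarball. The unpack/cleanup step by hand is simply:

    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm -f /preloaded.tar.lz4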
	I0501 03:40:41.649221   69580 ssh_runner.go:195] Run: sudo crictl images --output json
	I0501 03:40:41.697169   69580 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0501 03:40:41.697198   69580 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0501 03:40:41.697302   69580 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0501 03:40:41.697346   69580 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0501 03:40:41.697367   69580 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0501 03:40:41.697352   69580 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0501 03:40:41.697375   69580 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0501 03:40:41.697329   69580 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0501 03:40:41.697275   69580 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 03:40:41.697329   69580 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0501 03:40:41.698950   69580 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0501 03:40:41.699010   69580 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0501 03:40:41.699114   69580 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0501 03:40:41.699251   69580 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 03:40:41.699292   69580 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0501 03:40:41.699020   69580 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0501 03:40:41.699550   69580 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0501 03:40:41.699715   69580 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0501 03:40:41.830042   69580 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0501 03:40:41.881770   69580 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0501 03:40:41.881834   69580 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0501 03:40:41.881896   69580 ssh_runner.go:195] Run: which crictl
	I0501 03:40:41.887083   69580 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0501 03:40:41.894597   69580 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0501 03:40:41.935993   69580 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0501 03:40:41.937339   69580 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0501 03:40:41.961728   69580 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0501 03:40:41.961778   69580 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0501 03:40:41.961827   69580 ssh_runner.go:195] Run: which crictl
	I0501 03:40:42.004327   69580 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0501 03:40:42.004395   69580 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0501 03:40:42.004435   69580 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0501 03:40:42.004493   69580 ssh_runner.go:195] Run: which crictl
	I0501 03:40:42.053743   69580 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0501 03:40:42.055914   69580 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0501 03:40:42.056267   69580 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0501 03:40:42.056610   69580 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0501 03:40:42.060229   69580 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0501 03:40:42.070489   69580 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0501 03:40:42.127829   69580 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0501 03:40:42.127880   69580 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0501 03:40:42.127927   69580 ssh_runner.go:195] Run: which crictl
	I0501 03:40:42.201731   69580 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0501 03:40:42.201783   69580 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0501 03:40:42.201814   69580 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0501 03:40:42.201842   69580 ssh_runner.go:195] Run: which crictl
	I0501 03:40:42.211112   69580 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0501 03:40:42.211163   69580 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0501 03:40:42.211227   69580 ssh_runner.go:195] Run: which crictl
	I0501 03:40:42.217794   69580 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0501 03:40:42.217835   69580 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0501 03:40:42.217873   69580 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0501 03:40:42.217917   69580 ssh_runner.go:195] Run: which crictl
	I0501 03:40:42.217873   69580 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0501 03:40:42.220250   69580 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0501 03:40:42.274880   69580 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0501 03:40:42.294354   69580 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0501 03:40:42.294436   69580 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0501 03:40:42.305191   69580 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0501 03:40:42.342502   69580 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0501 03:40:42.560474   69580 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 03:40:42.712970   69580 cache_images.go:92] duration metric: took 1.015752585s to LoadCachedImages
	W0501 03:40:42.713057   69580 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
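The block above is minikube reconciling its cached v1.20.0 control-plane images: each image is inspected on the node with podman and, when its ID does not match the expected digest, removed through crictl so the cached tarball could be loaded instead. A minimal shell sketch of that per-image check, using a placeholder expected ID rather than a value from this run:

    image="registry.k8s.io/kube-scheduler:v1.20.0"
    expected_id="<expected-image-id>"   # placeholder, not taken from this log
    actual_id=$(sudo podman image inspect --format '{{.Id}}' "$image" 2>/dev/null || true)
    if [ "$actual_id" != "$expected_id" ]; then
      # image missing or stale: remove it via the CRI so the cached copy can be transferred
      sudo "$(which crictl)" rmi "$image" || true
    fi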
	I0501 03:40:42.713074   69580 kubeadm.go:928] updating node { 192.168.61.104 8443 v1.20.0 crio true true} ...
	I0501 03:40:42.713227   69580 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-503971 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.104
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-503971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0501 03:40:42.713323   69580 ssh_runner.go:195] Run: crio config
	I0501 03:40:42.771354   69580 cni.go:84] Creating CNI manager for ""
	I0501 03:40:42.771384   69580 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0501 03:40:42.771403   69580 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0501 03:40:42.771428   69580 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.104 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-503971 NodeName:old-k8s-version-503971 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.104"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.104 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0501 03:40:42.771644   69580 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.104
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-503971"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.104
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.104"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0501 03:40:42.771722   69580 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0501 03:40:42.784978   69580 binaries.go:44] Found k8s binaries, skipping transfer
	I0501 03:40:42.785057   69580 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0501 03:40:42.800945   69580 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0501 03:40:42.824293   69580 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0501 03:40:42.845949   69580 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
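With the rendered config copied to /var/tmp/minikube/kubeadm.yaml.new, one way to sanity-check it before the restart phases further down would be a kubeadm dry run against the same binary path; this is an illustrative check under the assumption that the v1.20 kubeadm binary supports --dry-run, not a step this run performs:

    sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
      kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run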
	I0501 03:40:42.867390   69580 ssh_runner.go:195] Run: grep 192.168.61.104	control-plane.minikube.internal$ /etc/hosts
	I0501 03:40:42.872038   69580 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.104	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0501 03:40:42.890213   69580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:40:43.041533   69580 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 03:40:43.070048   69580 certs.go:68] Setting up /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971 for IP: 192.168.61.104
	I0501 03:40:43.070075   69580 certs.go:194] generating shared ca certs ...
	I0501 03:40:43.070097   69580 certs.go:226] acquiring lock for ca certs: {Name:mk85a75d14c00780d4bf09cd9bdaf0f96f1b8fce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:40:43.070315   69580 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key
	I0501 03:40:43.070388   69580 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key
	I0501 03:40:43.070419   69580 certs.go:256] generating profile certs ...
	I0501 03:40:43.070558   69580 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/client.key
	I0501 03:40:43.070631   69580 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/apiserver.key.760b883a
	I0501 03:40:43.070670   69580 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/proxy-client.key
	I0501 03:40:43.070804   69580 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem (1338 bytes)
	W0501 03:40:43.070852   69580 certs.go:480] ignoring /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724_empty.pem, impossibly tiny 0 bytes
	I0501 03:40:43.070865   69580 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem (1679 bytes)
	I0501 03:40:43.070914   69580 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem (1078 bytes)
	I0501 03:40:43.070955   69580 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem (1123 bytes)
	I0501 03:40:43.070985   69580 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem (1675 bytes)
	I0501 03:40:43.071044   69580 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem (1708 bytes)
	I0501 03:40:43.071869   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0501 03:40:43.110078   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0501 03:40:43.164382   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0501 03:40:43.197775   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0501 03:40:43.230575   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0501 03:40:43.260059   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0501 03:40:43.288704   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0501 03:40:43.315417   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0501 03:40:43.363440   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0501 03:40:43.396043   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem --> /usr/share/ca-certificates/20724.pem (1338 bytes)
	I0501 03:40:43.425997   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem --> /usr/share/ca-certificates/207242.pem (1708 bytes)
	I0501 03:40:43.456927   69580 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0501 03:40:43.478177   69580 ssh_runner.go:195] Run: openssl version
	I0501 03:40:43.484513   69580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0501 03:40:43.497230   69580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:40:43.504025   69580 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  1 02:08 /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:40:43.504112   69580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:40:43.513309   69580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0501 03:40:43.528592   69580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20724.pem && ln -fs /usr/share/ca-certificates/20724.pem /etc/ssl/certs/20724.pem"
	I0501 03:40:43.544560   69580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20724.pem
	I0501 03:40:43.550975   69580 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  1 02:20 /usr/share/ca-certificates/20724.pem
	I0501 03:40:43.551047   69580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20724.pem
	I0501 03:40:43.559214   69580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20724.pem /etc/ssl/certs/51391683.0"
	I0501 03:40:43.575362   69580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/207242.pem && ln -fs /usr/share/ca-certificates/207242.pem /etc/ssl/certs/207242.pem"
	I0501 03:40:43.587848   69580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/207242.pem
	I0501 03:40:43.593131   69580 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  1 02:20 /usr/share/ca-certificates/207242.pem
	I0501 03:40:43.593183   69580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/207242.pem
	I0501 03:40:43.600365   69580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/207242.pem /etc/ssl/certs/3ec20f2e.0"
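The three blocks above install each certificate into /usr/share/ca-certificates and then link it under /etc/ssl/certs by its OpenSSL subject hash (b5213941, 51391683 and 3ec20f2e in this run). The equivalent manual steps for one certificate, written out as a sketch:

    cert=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$cert")    # prints the subject hash, e.g. b5213941
    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"   # the .0 suffix disambiguates hash collisions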
	I0501 03:40:43.613912   69580 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0501 03:40:43.619576   69580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0501 03:40:43.628551   69580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0501 03:40:43.637418   69580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0501 03:40:43.645060   69580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0501 03:40:43.654105   69580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0501 03:40:43.663501   69580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
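Each -checkend 86400 call above succeeds only if the certificate is still valid 24 hours from now. A compact way to run the same probe over the certs checked in this run (paths mirror the ones above):

    for c in apiserver-etcd-client apiserver-kubelet-client etcd/server \
             etcd/healthcheck-client etcd/peer front-proxy-client; do
      sudo openssl x509 -noout -in "/var/lib/minikube/certs/${c}.crt" -checkend 86400 \
        && echo "${c}: valid for >24h" || echo "${c}: expiring within 24h or unreadable"
    done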
	I0501 03:40:43.670855   69580 kubeadm.go:391] StartCluster: {Name:old-k8s-version-503971 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-503971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.104 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 03:40:43.670937   69580 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0501 03:40:43.670982   69580 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0501 03:40:43.720350   69580 cri.go:89] found id: ""
	I0501 03:40:43.720419   69580 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0501 03:40:43.732518   69580 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0501 03:40:43.732544   69580 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0501 03:40:43.732552   69580 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0501 03:40:43.732612   69580 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0501 03:40:43.743804   69580 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0501 03:40:43.745071   69580 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-503971" does not appear in /home/jenkins/minikube-integration/18779-13391/kubeconfig
	I0501 03:40:43.745785   69580 kubeconfig.go:62] /home/jenkins/minikube-integration/18779-13391/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-503971" cluster setting kubeconfig missing "old-k8s-version-503971" context setting]
	I0501 03:40:43.747054   69580 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/kubeconfig: {Name:mk5d1131557f885800877622151c0915337cb23a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:40:43.748989   69580 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0501 03:40:43.760349   69580 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.104
	I0501 03:40:43.760389   69580 kubeadm.go:1154] stopping kube-system containers ...
	I0501 03:40:43.760403   69580 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0501 03:40:43.760473   69580 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0501 03:40:43.804745   69580 cri.go:89] found id: ""
	I0501 03:40:43.804841   69580 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0501 03:40:43.825960   69580 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0501 03:40:43.838038   69580 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0501 03:40:43.838062   69580 kubeadm.go:156] found existing configuration files:
	
	I0501 03:40:43.838115   69580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0501 03:40:43.849075   69580 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0501 03:40:43.849164   69580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0501 03:40:43.860634   69580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0501 03:40:43.871244   69580 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0501 03:40:43.871313   69580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0501 03:40:43.882184   69580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0501 03:40:43.893193   69580 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0501 03:40:43.893254   69580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0501 03:40:43.904257   69580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0501 03:40:43.915414   69580 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0501 03:40:43.915492   69580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0501 03:40:43.927372   69580 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0501 03:40:43.939117   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:44.098502   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:45.150125   69580 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.051581029s)
	I0501 03:40:45.150161   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:45.443307   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:45.563369   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:45.678620   69580 api_server.go:52] waiting for apiserver process to appear ...
	I0501 03:40:45.678731   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:46.179675   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:43.419480   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:43.419952   68640 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:40:43.419980   68640 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:40:43.419907   70664 retry.go:31] will retry after 2.329320519s: waiting for machine to come up
	I0501 03:40:45.751405   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:45.751871   68640 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:40:45.751902   68640 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:40:45.751822   70664 retry.go:31] will retry after 3.262804058s: waiting for machine to come up
	I0501 03:40:45.659845   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:48.157145   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:48.013520   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:50.514729   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:46.679449   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:47.179179   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:47.678890   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:48.179190   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:48.679276   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:49.179698   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:49.679121   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:50.179723   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:50.679603   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:51.179094   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:49.016460   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:49.016856   68640 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:40:49.016878   68640 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:40:49.016826   70664 retry.go:31] will retry after 3.440852681s: waiting for machine to come up
	I0501 03:40:52.461349   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:52.461771   68640 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:40:52.461800   68640 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:40:52.461722   70664 retry.go:31] will retry after 4.871322728s: waiting for machine to come up
	I0501 03:40:50.157703   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:52.655677   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:53.011851   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:55.510458   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:51.679850   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:52.179568   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:52.679100   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:53.179470   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:53.679115   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:54.178815   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:54.679769   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:55.179576   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:55.678864   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:56.179617   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
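The repeated pgrep calls above (one roughly every 500ms) are minikube waiting for the kube-apiserver process to appear after the init phases; a simplified shell stand-in for that wait loop, not the actual Go implementation:

    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      sleep 0.5   # poll interval matches the cadence seen in the log
    done
    echo "kube-apiserver process is up"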
	I0501 03:40:57.335855   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.336228   68640 main.go:141] libmachine: (no-preload-892672) Found IP for machine: 192.168.39.144
	I0501 03:40:57.336263   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has current primary IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.336281   68640 main.go:141] libmachine: (no-preload-892672) Reserving static IP address...
	I0501 03:40:57.336629   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "no-preload-892672", mac: "52:54:00:c7:6d:9a", ip: "192.168.39.144"} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:40:57.336649   68640 main.go:141] libmachine: (no-preload-892672) DBG | skip adding static IP to network mk-no-preload-892672 - found existing host DHCP lease matching {name: "no-preload-892672", mac: "52:54:00:c7:6d:9a", ip: "192.168.39.144"}
	I0501 03:40:57.336661   68640 main.go:141] libmachine: (no-preload-892672) Reserved static IP address: 192.168.39.144
	I0501 03:40:57.336671   68640 main.go:141] libmachine: (no-preload-892672) Waiting for SSH to be available...
	I0501 03:40:57.336680   68640 main.go:141] libmachine: (no-preload-892672) DBG | Getting to WaitForSSH function...
	I0501 03:40:57.338862   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.339135   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:40:57.339163   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.339268   68640 main.go:141] libmachine: (no-preload-892672) DBG | Using SSH client type: external
	I0501 03:40:57.339296   68640 main.go:141] libmachine: (no-preload-892672) DBG | Using SSH private key: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/no-preload-892672/id_rsa (-rw-------)
	I0501 03:40:57.339328   68640 main.go:141] libmachine: (no-preload-892672) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.144 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18779-13391/.minikube/machines/no-preload-892672/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0501 03:40:57.339341   68640 main.go:141] libmachine: (no-preload-892672) DBG | About to run SSH command:
	I0501 03:40:57.339370   68640 main.go:141] libmachine: (no-preload-892672) DBG | exit 0
	I0501 03:40:57.466775   68640 main.go:141] libmachine: (no-preload-892672) DBG | SSH cmd err, output: <nil>: 
	I0501 03:40:57.467183   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetConfigRaw
	I0501 03:40:57.467890   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetIP
	I0501 03:40:57.470097   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.470527   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:40:57.470555   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.470767   68640 profile.go:143] Saving config to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/no-preload-892672/config.json ...
	I0501 03:40:57.470929   68640 machine.go:94] provisionDockerMachine start ...
	I0501 03:40:57.470950   68640 main.go:141] libmachine: (no-preload-892672) Calling .DriverName
	I0501 03:40:57.471177   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:40:57.473301   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.473599   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:40:57.473626   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.473724   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHPort
	I0501 03:40:57.473863   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:40:57.474032   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:40:57.474181   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHUsername
	I0501 03:40:57.474337   68640 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:57.474545   68640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I0501 03:40:57.474558   68640 main.go:141] libmachine: About to run SSH command:
	hostname
	I0501 03:40:57.591733   68640 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0501 03:40:57.591766   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetMachineName
	I0501 03:40:57.592016   68640 buildroot.go:166] provisioning hostname "no-preload-892672"
	I0501 03:40:57.592048   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetMachineName
	I0501 03:40:57.592308   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:40:57.595192   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.595593   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:40:57.595618   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.595697   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHPort
	I0501 03:40:57.595891   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:40:57.596041   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:40:57.596192   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHUsername
	I0501 03:40:57.596376   68640 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:57.596544   68640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I0501 03:40:57.596559   68640 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-892672 && echo "no-preload-892672" | sudo tee /etc/hostname
	I0501 03:40:57.727738   68640 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-892672
	
	I0501 03:40:57.727770   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:40:57.730673   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.731033   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:40:57.731066   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.731202   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHPort
	I0501 03:40:57.731383   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:40:57.731577   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:40:57.731744   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHUsername
	I0501 03:40:57.731936   68640 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:57.732155   68640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I0501 03:40:57.732173   68640 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-892672' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-892672/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-892672' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0501 03:40:57.857465   68640 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0501 03:40:57.857492   68640 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18779-13391/.minikube CaCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18779-13391/.minikube}
	I0501 03:40:57.857515   68640 buildroot.go:174] setting up certificates
	I0501 03:40:57.857524   68640 provision.go:84] configureAuth start
	I0501 03:40:57.857532   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetMachineName
	I0501 03:40:57.857791   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetIP
	I0501 03:40:57.860530   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.860881   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:40:57.860911   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.861035   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:40:57.863122   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.863445   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:40:57.863472   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.863565   68640 provision.go:143] copyHostCerts
	I0501 03:40:57.863614   68640 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem, removing ...
	I0501 03:40:57.863624   68640 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem
	I0501 03:40:57.863689   68640 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem (1078 bytes)
	I0501 03:40:57.863802   68640 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem, removing ...
	I0501 03:40:57.863814   68640 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem
	I0501 03:40:57.863843   68640 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem (1123 bytes)
	I0501 03:40:57.863928   68640 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem, removing ...
	I0501 03:40:57.863938   68640 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem
	I0501 03:40:57.863962   68640 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem (1675 bytes)
	I0501 03:40:57.864040   68640 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem org=jenkins.no-preload-892672 san=[127.0.0.1 192.168.39.144 localhost minikube no-preload-892672]
	I0501 03:40:54.658003   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:56.658041   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:58.125270   68640 provision.go:177] copyRemoteCerts
	I0501 03:40:58.125321   68640 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0501 03:40:58.125342   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:40:58.127890   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:58.128299   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:40:58.128330   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:58.128469   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHPort
	I0501 03:40:58.128645   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:40:58.128809   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHUsername
	I0501 03:40:58.128941   68640 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/no-preload-892672/id_rsa Username:docker}
	I0501 03:40:58.222112   68640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0501 03:40:58.249760   68640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0501 03:40:58.277574   68640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0501 03:40:58.304971   68640 provision.go:87] duration metric: took 447.420479ms to configureAuth
	I0501 03:40:58.305017   68640 buildroot.go:189] setting minikube options for container-runtime
	I0501 03:40:58.305270   68640 config.go:182] Loaded profile config "no-preload-892672": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 03:40:58.305434   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:40:58.308098   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:58.308487   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:40:58.308528   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:58.308658   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHPort
	I0501 03:40:58.308857   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:40:58.309025   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:40:58.309173   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHUsername
	I0501 03:40:58.309354   68640 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:58.309510   68640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I0501 03:40:58.309526   68640 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0501 03:40:58.609833   68640 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0501 03:40:58.609859   68640 machine.go:97] duration metric: took 1.138916322s to provisionDockerMachine
	I0501 03:40:58.609873   68640 start.go:293] postStartSetup for "no-preload-892672" (driver="kvm2")
	I0501 03:40:58.609885   68640 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0501 03:40:58.609905   68640 main.go:141] libmachine: (no-preload-892672) Calling .DriverName
	I0501 03:40:58.610271   68640 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0501 03:40:58.610307   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:40:58.612954   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:58.613308   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:40:58.613322   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:58.613485   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHPort
	I0501 03:40:58.613683   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:40:58.613871   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHUsername
	I0501 03:40:58.614005   68640 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/no-preload-892672/id_rsa Username:docker}
	I0501 03:40:58.702752   68640 ssh_runner.go:195] Run: cat /etc/os-release
	I0501 03:40:58.707441   68640 info.go:137] Remote host: Buildroot 2023.02.9
	I0501 03:40:58.707468   68640 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13391/.minikube/addons for local assets ...
	I0501 03:40:58.707577   68640 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13391/.minikube/files for local assets ...
	I0501 03:40:58.707646   68640 filesync.go:149] local asset: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem -> 207242.pem in /etc/ssl/certs
	I0501 03:40:58.707728   68640 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0501 03:40:58.718247   68640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem --> /etc/ssl/certs/207242.pem (1708 bytes)
	I0501 03:40:58.745184   68640 start.go:296] duration metric: took 135.29943ms for postStartSetup
	I0501 03:40:58.745218   68640 fix.go:56] duration metric: took 24.96116093s for fixHost
	I0501 03:40:58.745236   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:40:58.747809   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:58.748228   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:40:58.748261   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:58.748380   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHPort
	I0501 03:40:58.748591   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:40:58.748747   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:40:58.748870   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHUsername
	I0501 03:40:58.749049   68640 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:58.749262   68640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I0501 03:40:58.749275   68640 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0501 03:40:58.867651   68640 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714534858.808639015
	
	I0501 03:40:58.867676   68640 fix.go:216] guest clock: 1714534858.808639015
	I0501 03:40:58.867686   68640 fix.go:229] Guest: 2024-05-01 03:40:58.808639015 +0000 UTC Remote: 2024-05-01 03:40:58.745221709 +0000 UTC m=+370.854832040 (delta=63.417306ms)
	I0501 03:40:58.867735   68640 fix.go:200] guest clock delta is within tolerance: 63.417306ms
	I0501 03:40:58.867746   68640 start.go:83] releasing machines lock for "no-preload-892672", held for 25.083724737s
	I0501 03:40:58.867770   68640 main.go:141] libmachine: (no-preload-892672) Calling .DriverName
	I0501 03:40:58.868053   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetIP
	I0501 03:40:58.871193   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:58.871618   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:40:58.871664   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:58.871815   68640 main.go:141] libmachine: (no-preload-892672) Calling .DriverName
	I0501 03:40:58.872441   68640 main.go:141] libmachine: (no-preload-892672) Calling .DriverName
	I0501 03:40:58.872665   68640 main.go:141] libmachine: (no-preload-892672) Calling .DriverName
	I0501 03:40:58.872750   68640 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0501 03:40:58.872787   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:40:58.872918   68640 ssh_runner.go:195] Run: cat /version.json
	I0501 03:40:58.872946   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:40:58.875797   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:58.875976   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:58.876230   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:40:58.876341   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:58.876377   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHPort
	I0501 03:40:58.876502   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:40:58.876539   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:58.876587   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:40:58.876756   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHUsername
	I0501 03:40:58.876894   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHPort
	I0501 03:40:58.876969   68640 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/no-preload-892672/id_rsa Username:docker}
	I0501 03:40:58.877057   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:40:58.877246   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHUsername
	I0501 03:40:58.877424   68640 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/no-preload-892672/id_rsa Username:docker}
	I0501 03:40:58.983384   68640 ssh_runner.go:195] Run: systemctl --version
	I0501 03:40:58.991625   68640 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0501 03:40:59.143916   68640 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0501 03:40:59.151065   68640 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0501 03:40:59.151124   68640 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0501 03:40:59.168741   68640 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0501 03:40:59.168763   68640 start.go:494] detecting cgroup driver to use...
	I0501 03:40:59.168825   68640 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0501 03:40:59.188524   68640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 03:40:59.205602   68640 docker.go:217] disabling cri-docker service (if available) ...
	I0501 03:40:59.205668   68640 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0501 03:40:59.221173   68640 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0501 03:40:59.236546   68640 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0501 03:40:59.364199   68640 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0501 03:40:59.533188   68640 docker.go:233] disabling docker service ...
	I0501 03:40:59.533266   68640 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0501 03:40:59.549488   68640 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0501 03:40:59.562910   68640 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0501 03:40:59.705451   68640 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0501 03:40:59.843226   68640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0501 03:40:59.858878   68640 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 03:40:59.882729   68640 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0501 03:40:59.882808   68640 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:59.895678   68640 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0501 03:40:59.895763   68640 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:59.908439   68640 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:59.921319   68640 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:59.934643   68640 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0501 03:40:59.947416   68640 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:59.959887   68640 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:59.981849   68640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:59.994646   68640 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0501 03:41:00.006059   68640 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0501 03:41:00.006133   68640 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0501 03:41:00.024850   68640 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0501 03:41:00.036834   68640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:41:00.161283   68640 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0501 03:41:00.312304   68640 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0501 03:41:00.312375   68640 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0501 03:41:00.317980   68640 start.go:562] Will wait 60s for crictl version
	I0501 03:41:00.318043   68640 ssh_runner.go:195] Run: which crictl
	I0501 03:41:00.322780   68640 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0501 03:41:00.362830   68640 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0501 03:41:00.362920   68640 ssh_runner.go:195] Run: crio --version
	I0501 03:41:00.399715   68640 ssh_runner.go:195] Run: crio --version
	I0501 03:41:00.432510   68640 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0501 03:40:57.511719   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:00.013693   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:56.679034   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:57.179062   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:57.679579   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:58.179221   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:58.679728   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:59.178851   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:59.679647   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:00.179397   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:00.678839   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:01.179679   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:00.433777   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetIP
	I0501 03:41:00.436557   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:41:00.436892   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:41:00.436920   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:41:00.437124   68640 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0501 03:41:00.441861   68640 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0501 03:41:00.455315   68640 kubeadm.go:877] updating cluster {Name:no-preload-892672 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:no-preload-892672 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.144 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0501 03:41:00.455417   68640 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0501 03:41:00.455462   68640 ssh_runner.go:195] Run: sudo crictl images --output json
	I0501 03:41:00.496394   68640 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0501 03:41:00.496422   68640 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.0 registry.k8s.io/kube-controller-manager:v1.30.0 registry.k8s.io/kube-scheduler:v1.30.0 registry.k8s.io/kube-proxy:v1.30.0 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0501 03:41:00.496508   68640 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 03:41:00.496532   68640 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0501 03:41:00.496551   68640 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0
	I0501 03:41:00.496581   68640 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0501 03:41:00.496679   68640 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0
	I0501 03:41:00.496701   68640 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0501 03:41:00.496736   68640 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0501 03:41:00.496529   68640 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0
	I0501 03:41:00.498207   68640 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0
	I0501 03:41:00.498227   68640 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0501 03:41:00.498246   68640 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0
	I0501 03:41:00.498250   68640 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0
	I0501 03:41:00.498270   68640 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0501 03:41:00.498254   68640 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0501 03:41:00.498298   68640 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0501 03:41:00.498477   68640 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 03:41:00.617430   68640 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.0
	I0501 03:41:00.621346   68640 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.0
	I0501 03:41:00.622759   68640 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0501 03:41:00.628313   68640 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.0
	I0501 03:41:00.629087   68640 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0501 03:41:00.633625   68640 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0501 03:41:00.652130   68640 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.0
	I0501 03:41:00.722500   68640 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.0" does not exist at hash "c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0" in container runtime
	I0501 03:41:00.722554   68640 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.0
	I0501 03:41:00.722623   68640 ssh_runner.go:195] Run: which crictl
	I0501 03:41:00.796476   68640 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.0" does not exist at hash "259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced" in container runtime
	I0501 03:41:00.796530   68640 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.0
	I0501 03:41:00.796580   68640 ssh_runner.go:195] Run: which crictl
	I0501 03:41:00.944235   68640 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.0" does not exist at hash "c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b" in container runtime
	I0501 03:41:00.944262   68640 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0501 03:41:00.944289   68640 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0501 03:41:00.944297   68640 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0501 03:41:00.944305   68640 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0501 03:41:00.944325   68640 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0501 03:41:00.944344   68640 ssh_runner.go:195] Run: which crictl
	I0501 03:41:00.944357   68640 ssh_runner.go:195] Run: which crictl
	I0501 03:41:00.944398   68640 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.0" needs transfer: "registry.k8s.io/kube-proxy:v1.30.0" does not exist at hash "a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b" in container runtime
	I0501 03:41:00.944348   68640 ssh_runner.go:195] Run: which crictl
	I0501 03:41:00.944434   68640 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.0
	I0501 03:41:00.944422   68640 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0
	I0501 03:41:00.944452   68640 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0
	I0501 03:41:00.944464   68640 ssh_runner.go:195] Run: which crictl
	I0501 03:41:00.998765   68640 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.0
	I0501 03:41:00.998791   68640 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0
	I0501 03:41:00.998846   68640 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0501 03:41:00.998891   68640 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0501 03:41:01.017469   68640 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.0
	I0501 03:41:01.017494   68640 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0
	I0501 03:41:01.017584   68640 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0501 03:41:01.018040   68640 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0501 03:41:01.105445   68640 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0501 03:41:01.105517   68640 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0
	I0501 03:41:01.105560   68640 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0
	I0501 03:41:01.105583   68640 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.0 (exists)
	I0501 03:41:01.105595   68640 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0501 03:41:01.105635   68640 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0501 03:41:01.105645   68640 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0501 03:41:01.105734   68640 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.0 (exists)
	I0501 03:41:01.105814   68640 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0
	I0501 03:41:01.105888   68640 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.0
	I0501 03:41:01.120943   68640 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0501 03:41:01.121044   68640 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0501 03:41:01.127975   68640 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0501 03:41:01.359381   68640 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 03:40:59.156924   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:01.659307   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:03.661498   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:02.511652   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:05.011220   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:01.679527   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:02.179435   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:02.679626   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:03.179351   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:03.679618   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:04.179426   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:04.678853   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:05.179143   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:05.679065   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:06.179513   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:04.315680   68640 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.0: (3.210016587s)
	I0501 03:41:04.315725   68640 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.0 (exists)
	I0501 03:41:04.315756   68640 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.0: (3.209843913s)
	I0501 03:41:04.315784   68640 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1: (3.194721173s)
	I0501 03:41:04.315799   68640 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0: (3.210139611s)
	I0501 03:41:04.315812   68640 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.0 (exists)
	I0501 03:41:04.315813   68640 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0501 03:41:04.315813   68640 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0 from cache
	I0501 03:41:04.315844   68640 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.956432506s)
	I0501 03:41:04.315859   68640 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0501 03:41:04.315902   68640 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0501 03:41:04.315905   68640 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0501 03:41:04.315927   68640 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 03:41:04.315962   68640 ssh_runner.go:195] Run: which crictl
	I0501 03:41:05.691351   68640 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0: (1.375419764s)
	I0501 03:41:05.691394   68640 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0 from cache
	I0501 03:41:05.691418   68640 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0501 03:41:05.691467   68640 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0501 03:41:05.691477   68640 ssh_runner.go:235] Completed: which crictl: (1.375499162s)
	I0501 03:41:05.691529   68640 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 03:41:06.159381   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:08.659756   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:07.012126   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:09.511459   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:06.679246   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:07.179629   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:07.679601   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:08.179634   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:08.679603   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:09.179675   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:09.678837   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:10.178860   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:10.679638   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:11.179802   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:09.757005   68640 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (4.065509843s)
	I0501 03:41:09.757044   68640 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0501 03:41:09.757079   68640 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0501 03:41:09.757093   68640 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (4.065539206s)
	I0501 03:41:09.757137   68640 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0501 03:41:09.757158   68640 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0501 03:41:09.757222   68640 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0501 03:41:12.125691   68640 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0: (2.368504788s)
	I0501 03:41:12.125729   68640 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0 from cache
	I0501 03:41:12.125726   68640 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.368475622s)
	I0501 03:41:12.125755   68640 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0501 03:41:12.125754   68640 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.0
	I0501 03:41:12.125817   68640 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0
	I0501 03:41:11.157019   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:13.157632   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:11.513027   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:14.013463   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:11.679355   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:12.178847   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:12.679660   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:13.179641   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:13.678808   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:14.178955   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:14.679651   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:15.179623   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:15.678862   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:16.179775   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:14.315765   68640 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0: (2.18991878s)
	I0501 03:41:14.315791   68640 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0 from cache
	I0501 03:41:14.315835   68640 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0501 03:41:14.315911   68640 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0501 03:41:16.401221   68640 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.085281928s)
	I0501 03:41:16.401261   68640 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0501 03:41:16.401291   68640 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0501 03:41:16.401335   68640 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0501 03:41:17.152926   68640 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0501 03:41:17.152969   68640 cache_images.go:123] Successfully loaded all cached images
	I0501 03:41:17.152976   68640 cache_images.go:92] duration metric: took 16.656540612s to LoadCachedImages
	I0501 03:41:17.152989   68640 kubeadm.go:928] updating node { 192.168.39.144 8443 v1.30.0 crio true true} ...
	I0501 03:41:17.153119   68640 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-892672 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.144
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:no-preload-892672 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0501 03:41:17.153241   68640 ssh_runner.go:195] Run: crio config
	I0501 03:41:17.207153   68640 cni.go:84] Creating CNI manager for ""
	I0501 03:41:17.207181   68640 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0501 03:41:17.207196   68640 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0501 03:41:17.207225   68640 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.144 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-892672 NodeName:no-preload-892672 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.144"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.144 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0501 03:41:17.207407   68640 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.144
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-892672"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.144
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.144"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0501 03:41:17.207488   68640 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0501 03:41:17.221033   68640 binaries.go:44] Found k8s binaries, skipping transfer
	I0501 03:41:17.221099   68640 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0501 03:41:17.232766   68640 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0501 03:41:17.252543   68640 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0501 03:41:17.272030   68640 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0501 03:41:17.291541   68640 ssh_runner.go:195] Run: grep 192.168.39.144	control-plane.minikube.internal$ /etc/hosts
	I0501 03:41:17.295801   68640 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.144	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0501 03:41:17.309880   68640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:41:17.432917   68640 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 03:41:17.452381   68640 certs.go:68] Setting up /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/no-preload-892672 for IP: 192.168.39.144
	I0501 03:41:17.452406   68640 certs.go:194] generating shared ca certs ...
	I0501 03:41:17.452425   68640 certs.go:226] acquiring lock for ca certs: {Name:mk85a75d14c00780d4bf09cd9bdaf0f96f1b8fce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:41:17.452606   68640 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key
	I0501 03:41:17.452655   68640 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key
	I0501 03:41:17.452669   68640 certs.go:256] generating profile certs ...
	I0501 03:41:17.452746   68640 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/no-preload-892672/client.key
	I0501 03:41:17.452809   68640 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/no-preload-892672/apiserver.key.3644a8af
	I0501 03:41:17.452848   68640 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/no-preload-892672/proxy-client.key
	I0501 03:41:17.452963   68640 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem (1338 bytes)
	W0501 03:41:17.453007   68640 certs.go:480] ignoring /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724_empty.pem, impossibly tiny 0 bytes
	I0501 03:41:17.453021   68640 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem (1679 bytes)
	I0501 03:41:17.453050   68640 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem (1078 bytes)
	I0501 03:41:17.453083   68640 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem (1123 bytes)
	I0501 03:41:17.453116   68640 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem (1675 bytes)
	I0501 03:41:17.453166   68640 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem (1708 bytes)
	I0501 03:41:17.453767   68640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0501 03:41:17.490616   68640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0501 03:41:17.545217   68640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0501 03:41:17.576908   68640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0501 03:41:17.607371   68640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/no-preload-892672/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0501 03:41:17.657675   68640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/no-preload-892672/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0501 03:41:17.684681   68640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/no-preload-892672/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0501 03:41:17.716319   68640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/no-preload-892672/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0501 03:41:17.745731   68640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0501 03:41:17.770939   68640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem --> /usr/share/ca-certificates/20724.pem (1338 bytes)
	I0501 03:41:17.796366   68640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem --> /usr/share/ca-certificates/207242.pem (1708 bytes)
	I0501 03:41:17.823301   68640 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0501 03:41:17.841496   68640 ssh_runner.go:195] Run: openssl version
	I0501 03:41:17.848026   68640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0501 03:41:17.860734   68640 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:41:17.865978   68640 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  1 02:08 /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:41:17.866037   68640 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:41:17.872644   68640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0501 03:41:17.886241   68640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20724.pem && ln -fs /usr/share/ca-certificates/20724.pem /etc/ssl/certs/20724.pem"
	I0501 03:41:17.899619   68640 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20724.pem
	I0501 03:41:17.904664   68640 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  1 02:20 /usr/share/ca-certificates/20724.pem
	I0501 03:41:17.904701   68640 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20724.pem
	I0501 03:41:17.910799   68640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20724.pem /etc/ssl/certs/51391683.0"
	I0501 03:41:17.923007   68640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/207242.pem && ln -fs /usr/share/ca-certificates/207242.pem /etc/ssl/certs/207242.pem"
	I0501 03:41:15.657403   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:18.156777   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:16.511834   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:18.512735   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:20.513144   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:16.679614   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:17.179604   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:17.679100   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:18.179166   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:18.679202   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:19.179631   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:19.679583   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:20.179584   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:20.679493   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:21.178945   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:17.935647   68640 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/207242.pem
	I0501 03:41:17.942147   68640 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  1 02:20 /usr/share/ca-certificates/207242.pem
	I0501 03:41:17.942187   68640 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/207242.pem
	I0501 03:41:17.948468   68640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/207242.pem /etc/ssl/certs/3ec20f2e.0"
	I0501 03:41:17.962737   68640 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0501 03:41:17.968953   68640 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0501 03:41:17.975849   68640 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0501 03:41:17.982324   68640 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0501 03:41:17.988930   68640 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0501 03:41:17.995221   68640 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0501 03:41:18.001868   68640 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0501 03:41:18.008701   68640 kubeadm.go:391] StartCluster: {Name:no-preload-892672 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:no-preload-892672 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.144 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 03:41:18.008831   68640 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0501 03:41:18.008893   68640 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0501 03:41:18.056939   68640 cri.go:89] found id: ""
	I0501 03:41:18.057005   68640 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0501 03:41:18.070898   68640 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0501 03:41:18.070921   68640 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0501 03:41:18.070926   68640 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0501 03:41:18.070968   68640 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0501 03:41:18.083907   68640 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0501 03:41:18.085116   68640 kubeconfig.go:125] found "no-preload-892672" server: "https://192.168.39.144:8443"
	I0501 03:41:18.088582   68640 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0501 03:41:18.101426   68640 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.144
	I0501 03:41:18.101471   68640 kubeadm.go:1154] stopping kube-system containers ...
	I0501 03:41:18.101493   68640 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0501 03:41:18.101543   68640 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0501 03:41:18.153129   68640 cri.go:89] found id: ""
	I0501 03:41:18.153193   68640 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0501 03:41:18.173100   68640 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0501 03:41:18.188443   68640 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0501 03:41:18.188463   68640 kubeadm.go:156] found existing configuration files:
	
	I0501 03:41:18.188509   68640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0501 03:41:18.202153   68640 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0501 03:41:18.202204   68640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0501 03:41:18.215390   68640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0501 03:41:18.227339   68640 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0501 03:41:18.227404   68640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0501 03:41:18.239160   68640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0501 03:41:18.251992   68640 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0501 03:41:18.252053   68640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0501 03:41:18.265088   68640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0501 03:41:18.277922   68640 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0501 03:41:18.277983   68640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0501 03:41:18.291307   68640 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0501 03:41:18.304879   68640 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:41:18.417921   68640 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:41:19.350848   68640 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:41:19.586348   68640 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:41:19.761056   68640 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
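
[Editor's note] The restart path above greps each kubeconfig under /etc/kubernetes for the expected control-plane endpoint, removes any file that does not reference it, and then re-runs the kubeadm init phases. A minimal sketch of that check-and-remove step, assuming direct local file access instead of minikube's ssh_runner (illustrative only, not minikube's actual implementation):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureEndpoint removes the kubeconfig at path unless it already references
// the expected control-plane endpoint, mirroring the grep/rm pairs in the log.
func ensureEndpoint(path, endpoint string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		if os.IsNotExist(err) {
			return nil // nothing to clean up, kubeadm will regenerate it
		}
		return err
	}
	if strings.Contains(string(data), endpoint) {
		return nil // config already points at the expected endpoint
	}
	fmt.Printf("%s does not reference %s, removing\n", path, endpoint)
	return os.Remove(path)
}

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if err := ensureEndpoint(f, endpoint); err != nil {
			fmt.Println(err)
		}
	}
}
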
	I0501 03:41:19.867315   68640 api_server.go:52] waiting for apiserver process to appear ...
	I0501 03:41:19.867435   68640 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:20.368520   68640 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:20.868444   68640 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:20.913411   68640 api_server.go:72] duration metric: took 1.046095165s to wait for apiserver process to appear ...
	I0501 03:41:20.913444   68640 api_server.go:88] waiting for apiserver healthz status ...
	I0501 03:41:20.913469   68640 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0501 03:41:20.914000   68640 api_server.go:269] stopped: https://192.168.39.144:8443/healthz: Get "https://192.168.39.144:8443/healthz": dial tcp 192.168.39.144:8443: connect: connection refused
	I0501 03:41:21.414544   68640 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0501 03:41:20.658333   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:23.157298   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:23.011395   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:25.012164   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:21.678785   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:22.179435   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:22.679615   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:23.179610   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:23.679473   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:24.179613   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:24.679672   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:25.179400   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:25.679793   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:26.179809   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:24.166756   68640 api_server.go:279] https://192.168.39.144:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0501 03:41:24.166786   68640 api_server.go:103] status: https://192.168.39.144:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0501 03:41:24.166807   68640 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0501 03:41:24.205679   68640 api_server.go:279] https://192.168.39.144:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0501 03:41:24.205713   68640 api_server.go:103] status: https://192.168.39.144:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0501 03:41:24.414055   68640 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0501 03:41:24.420468   68640 api_server.go:279] https://192.168.39.144:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:41:24.420502   68640 api_server.go:103] status: https://192.168.39.144:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:41:24.914021   68640 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0501 03:41:24.919717   68640 api_server.go:279] https://192.168.39.144:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:41:24.919754   68640 api_server.go:103] status: https://192.168.39.144:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:41:25.414015   68640 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0501 03:41:25.422149   68640 api_server.go:279] https://192.168.39.144:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:41:25.422180   68640 api_server.go:103] status: https://192.168.39.144:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:41:25.913751   68640 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0501 03:41:25.917839   68640 api_server.go:279] https://192.168.39.144:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:41:25.917865   68640 api_server.go:103] status: https://192.168.39.144:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:41:26.414458   68640 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0501 03:41:26.419346   68640 api_server.go:279] https://192.168.39.144:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:41:26.419367   68640 api_server.go:103] status: https://192.168.39.144:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:41:26.913912   68640 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0501 03:41:26.918504   68640 api_server.go:279] https://192.168.39.144:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:41:26.918537   68640 api_server.go:103] status: https://192.168.39.144:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:41:27.413693   68640 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0501 03:41:27.421752   68640 api_server.go:279] https://192.168.39.144:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:41:27.421776   68640 api_server.go:103] status: https://192.168.39.144:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:41:27.913582   68640 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0501 03:41:27.918116   68640 api_server.go:279] https://192.168.39.144:8443/healthz returned 200:
	ok
	I0501 03:41:27.927764   68640 api_server.go:141] control plane version: v1.30.0
	I0501 03:41:27.927790   68640 api_server.go:131] duration metric: took 7.014339409s to wait for apiserver health ...
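
[Editor's note] The api_server.go entries above poll the apiserver's /healthz endpoint on a roughly 500ms cadence, treating connection refusals, 403s, and 500s as "not ready yet" and a 200 "ok" as healthy. A minimal sketch of that kind of polling loop, assuming TLS verification is skipped for brevity (the real code trusts the cluster CA instead; this is not minikube's implementation):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the given /healthz URL until it returns 200 "ok"
// or the timeout elapses. Errors and non-200 responses are retried.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption for the sketch only: skip cert verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver reports healthy
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.144:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
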
	I0501 03:41:27.927799   68640 cni.go:84] Creating CNI manager for ""
	I0501 03:41:27.927805   68640 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0501 03:41:27.929889   68640 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0501 03:41:27.931210   68640 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0501 03:41:25.158177   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:27.656879   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:27.511692   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:30.010468   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:26.679430   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:27.179043   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:27.678801   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:28.179629   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:28.679111   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:29.179599   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:29.679624   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:30.179585   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:30.679442   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:31.179530   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:27.945852   68640 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0501 03:41:27.968311   68640 system_pods.go:43] waiting for kube-system pods to appear ...
	I0501 03:41:27.981571   68640 system_pods.go:59] 8 kube-system pods found
	I0501 03:41:27.981609   68640 system_pods.go:61] "coredns-7db6d8ff4d-v8bqq" [bf389521-9f19-4f2b-83a5-6d469c7ce0fe] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0501 03:41:27.981615   68640 system_pods.go:61] "etcd-no-preload-892672" [108fce6d-03f3-4bb9-a410-a58c58e8f186] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0501 03:41:27.981621   68640 system_pods.go:61] "kube-apiserver-no-preload-892672" [a18b7242-1865-4a67-aab6-c6cc19552326] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0501 03:41:27.981629   68640 system_pods.go:61] "kube-controller-manager-no-preload-892672" [318d39e1-5265-42e5-a3d5-4408b7b73542] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0501 03:41:27.981636   68640 system_pods.go:61] "kube-proxy-dwvdl" [f7a97598-aaa1-4df5-8d6a-8f6286568ad6] Running
	I0501 03:41:27.981642   68640 system_pods.go:61] "kube-scheduler-no-preload-892672" [cbf1c183-16df-42c8-b1c8-b9adf3c25a7a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0501 03:41:27.981647   68640 system_pods.go:61] "metrics-server-569cc877fc-k8jnl" [1dd0fb29-4d90-41c8-9de2-d163eeb0247b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0501 03:41:27.981651   68640 system_pods.go:61] "storage-provisioner" [fc703ab1-f14b-4766-8ee2-a43477d3df21] Running
	I0501 03:41:27.981657   68640 system_pods.go:74] duration metric: took 13.322893ms to wait for pod list to return data ...
	I0501 03:41:27.981667   68640 node_conditions.go:102] verifying NodePressure condition ...
	I0501 03:41:27.985896   68640 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 03:41:27.985931   68640 node_conditions.go:123] node cpu capacity is 2
	I0501 03:41:27.985944   68640 node_conditions.go:105] duration metric: took 4.271726ms to run NodePressure ...
	I0501 03:41:27.985966   68640 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:41:28.269675   68640 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0501 03:41:28.276487   68640 kubeadm.go:733] kubelet initialised
	I0501 03:41:28.276512   68640 kubeadm.go:734] duration metric: took 6.808875ms waiting for restarted kubelet to initialise ...
	I0501 03:41:28.276522   68640 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 03:41:28.287109   68640 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-v8bqq" in "kube-system" namespace to be "Ready" ...
	I0501 03:41:28.297143   68640 pod_ready.go:97] node "no-preload-892672" hosting pod "coredns-7db6d8ff4d-v8bqq" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-892672" has status "Ready":"False"
	I0501 03:41:28.297185   68640 pod_ready.go:81] duration metric: took 10.040841ms for pod "coredns-7db6d8ff4d-v8bqq" in "kube-system" namespace to be "Ready" ...
	E0501 03:41:28.297198   68640 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-892672" hosting pod "coredns-7db6d8ff4d-v8bqq" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-892672" has status "Ready":"False"
	I0501 03:41:28.297206   68640 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	I0501 03:41:28.307648   68640 pod_ready.go:97] node "no-preload-892672" hosting pod "etcd-no-preload-892672" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-892672" has status "Ready":"False"
	I0501 03:41:28.307682   68640 pod_ready.go:81] duration metric: took 10.464199ms for pod "etcd-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	E0501 03:41:28.307695   68640 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-892672" hosting pod "etcd-no-preload-892672" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-892672" has status "Ready":"False"
	I0501 03:41:28.307707   68640 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	I0501 03:41:30.319652   68640 pod_ready.go:102] pod "kube-apiserver-no-preload-892672" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:32.821375   68640 pod_ready.go:102] pod "kube-apiserver-no-preload-892672" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:29.657167   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:32.157549   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:32.012009   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:34.511543   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:31.679423   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:32.179628   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:32.679456   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:33.179336   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:33.679221   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:34.178900   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:34.679236   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:35.179595   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:35.679520   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:36.179639   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:35.317202   68640 pod_ready.go:102] pod "kube-apiserver-no-preload-892672" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:37.318125   68640 pod_ready.go:92] pod "kube-apiserver-no-preload-892672" in "kube-system" namespace has status "Ready":"True"
	I0501 03:41:37.318157   68640 pod_ready.go:81] duration metric: took 9.010440772s for pod "kube-apiserver-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	I0501 03:41:37.318170   68640 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	I0501 03:41:37.327390   68640 pod_ready.go:92] pod "kube-controller-manager-no-preload-892672" in "kube-system" namespace has status "Ready":"True"
	I0501 03:41:37.327412   68640 pod_ready.go:81] duration metric: took 9.233689ms for pod "kube-controller-manager-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	I0501 03:41:37.327425   68640 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-dwvdl" in "kube-system" namespace to be "Ready" ...
	I0501 03:41:37.333971   68640 pod_ready.go:92] pod "kube-proxy-dwvdl" in "kube-system" namespace has status "Ready":"True"
	I0501 03:41:37.333994   68640 pod_ready.go:81] duration metric: took 6.561014ms for pod "kube-proxy-dwvdl" in "kube-system" namespace to be "Ready" ...
	I0501 03:41:37.334006   68640 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	I0501 03:41:37.338637   68640 pod_ready.go:92] pod "kube-scheduler-no-preload-892672" in "kube-system" namespace has status "Ready":"True"
	I0501 03:41:37.338657   68640 pod_ready.go:81] duration metric: took 4.644395ms for pod "kube-scheduler-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	I0501 03:41:37.338665   68640 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace to be "Ready" ...
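
[Editor's note] The pod_ready.go lines above and below poll each system-critical pod and log "Ready":"False" until the pod's Ready condition turns True (or the 4m0s budget runs out). A minimal sketch of such a check using client-go; the kubeconfig path is an assumption for illustration, while the namespace and pod name are taken from this log:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumed kubeconfig path; minikube resolves this from its own profile.
	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Poll roughly every 2.5s, like the "Ready":"False" entries in the log.
	for {
		pod, err := clientset.CoreV1().Pods("kube-system").Get(
			context.TODO(), "metrics-server-569cc877fc-k8jnl", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2500 * time.Millisecond)
	}
}
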
	I0501 03:41:34.657958   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:36.658191   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:36.512234   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:39.012636   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:36.678883   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:37.179198   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:37.679101   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:38.179088   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:38.679354   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:39.179163   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:39.678809   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:40.179768   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:40.679046   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:41.179618   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:39.346054   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:41.346434   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:39.157142   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:41.656902   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:41.510939   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:43.511571   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:45.511959   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:41.679751   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:42.178848   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:42.679525   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:43.179706   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:43.679665   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:44.179053   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:44.679615   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:45.178830   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:45.679547   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:41:45.679620   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:41:45.718568   69580 cri.go:89] found id: ""
	I0501 03:41:45.718597   69580 logs.go:276] 0 containers: []
	W0501 03:41:45.718611   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:41:45.718619   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:41:45.718678   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:41:45.755572   69580 cri.go:89] found id: ""
	I0501 03:41:45.755596   69580 logs.go:276] 0 containers: []
	W0501 03:41:45.755604   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:41:45.755609   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:41:45.755654   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:41:45.793411   69580 cri.go:89] found id: ""
	I0501 03:41:45.793440   69580 logs.go:276] 0 containers: []
	W0501 03:41:45.793450   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:41:45.793458   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:41:45.793526   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:41:45.834547   69580 cri.go:89] found id: ""
	I0501 03:41:45.834572   69580 logs.go:276] 0 containers: []
	W0501 03:41:45.834579   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:41:45.834585   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:41:45.834668   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:41:45.873293   69580 cri.go:89] found id: ""
	I0501 03:41:45.873321   69580 logs.go:276] 0 containers: []
	W0501 03:41:45.873332   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:41:45.873348   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:41:45.873411   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:41:45.911703   69580 cri.go:89] found id: ""
	I0501 03:41:45.911734   69580 logs.go:276] 0 containers: []
	W0501 03:41:45.911745   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:41:45.911766   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:41:45.911826   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:41:45.949577   69580 cri.go:89] found id: ""
	I0501 03:41:45.949602   69580 logs.go:276] 0 containers: []
	W0501 03:41:45.949610   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:41:45.949616   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:41:45.949666   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:41:45.986174   69580 cri.go:89] found id: ""
	I0501 03:41:45.986199   69580 logs.go:276] 0 containers: []
	W0501 03:41:45.986207   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:41:45.986216   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:41:45.986228   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:41:46.041028   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:41:46.041064   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:41:46.057097   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:41:46.057126   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:41:46.195021   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:41:46.195042   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:41:46.195055   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:41:46.261153   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:41:46.261197   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
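
[Editor's note] When no kube-apiserver process shows up in the pgrep loop above, the tool falls back to enumerating CRI containers per component with crictl and gathering journal logs. A rough sketch of that enumeration, run locally instead of over minikube's SSH runner; the component names and crictl flags are taken from the log, everything else is illustrative:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers returns the IDs of CRI containers whose name matches the
// given filter, in any state, mirroring "crictl ps -a --quiet --name=..." above.
func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(strings.TrimSpace(string(out))), nil
}

func main() {
	// Same component names the log checks for, one by one.
	for _, name := range []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	} {
		ids, err := listContainers(name)
		if err != nil {
			fmt.Printf("%s: crictl failed: %v\n", name, err)
			continue
		}
		fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
	}
}
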
	I0501 03:41:43.845096   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:45.845950   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:47.849620   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:44.157041   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:46.158028   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:48.658062   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:48.011975   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:50.512345   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:48.809274   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:48.824295   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:41:48.824369   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:41:48.869945   69580 cri.go:89] found id: ""
	I0501 03:41:48.869975   69580 logs.go:276] 0 containers: []
	W0501 03:41:48.869985   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:41:48.869993   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:41:48.870053   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:41:48.918088   69580 cri.go:89] found id: ""
	I0501 03:41:48.918113   69580 logs.go:276] 0 containers: []
	W0501 03:41:48.918122   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:41:48.918131   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:41:48.918190   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:41:48.958102   69580 cri.go:89] found id: ""
	I0501 03:41:48.958132   69580 logs.go:276] 0 containers: []
	W0501 03:41:48.958143   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:41:48.958149   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:41:48.958207   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:41:48.997163   69580 cri.go:89] found id: ""
	I0501 03:41:48.997194   69580 logs.go:276] 0 containers: []
	W0501 03:41:48.997211   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:41:48.997218   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:41:48.997284   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:41:49.040132   69580 cri.go:89] found id: ""
	I0501 03:41:49.040156   69580 logs.go:276] 0 containers: []
	W0501 03:41:49.040164   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:41:49.040170   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:41:49.040228   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:41:49.079680   69580 cri.go:89] found id: ""
	I0501 03:41:49.079712   69580 logs.go:276] 0 containers: []
	W0501 03:41:49.079724   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:41:49.079732   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:41:49.079790   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:41:49.120577   69580 cri.go:89] found id: ""
	I0501 03:41:49.120610   69580 logs.go:276] 0 containers: []
	W0501 03:41:49.120623   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:41:49.120630   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:41:49.120700   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:41:49.167098   69580 cri.go:89] found id: ""
	I0501 03:41:49.167123   69580 logs.go:276] 0 containers: []
	W0501 03:41:49.167133   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:41:49.167141   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:41:49.167152   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:41:49.242834   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:41:49.242868   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:41:49.264011   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:41:49.264033   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:41:49.367711   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:41:49.367739   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:41:49.367764   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:41:49.441925   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:41:49.441964   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
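	The block above is minikube's CRI discovery loop: with the v1.20.0 control plane not yet up, each pass runs crictl against every expected component and finds no containers. A minimal manual equivalent is sketched below, assuming a shell on the node (for example via minikube ssh) with crictl on the PATH; the flags are the ones shown in the log.

	  # Check each expected control-plane/addon container; empty output means
	  # crictl found nothing, matching the 'found id: ""' lines above.
	  for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	              kube-controller-manager kindnet kubernetes-dashboard; do
	    echo "== ${name} =="
	    sudo crictl ps -a --quiet --name="${name}"
	  done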
	I0501 03:41:50.346009   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:52.346333   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:51.156287   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:53.657588   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:53.010720   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:55.012329   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:51.986536   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:52.001651   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:41:52.001734   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:41:52.039550   69580 cri.go:89] found id: ""
	I0501 03:41:52.039571   69580 logs.go:276] 0 containers: []
	W0501 03:41:52.039579   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:41:52.039584   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:41:52.039636   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:41:52.082870   69580 cri.go:89] found id: ""
	I0501 03:41:52.082892   69580 logs.go:276] 0 containers: []
	W0501 03:41:52.082900   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:41:52.082905   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:41:52.082949   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:41:52.126970   69580 cri.go:89] found id: ""
	I0501 03:41:52.126996   69580 logs.go:276] 0 containers: []
	W0501 03:41:52.127009   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:41:52.127014   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:41:52.127076   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:41:52.169735   69580 cri.go:89] found id: ""
	I0501 03:41:52.169761   69580 logs.go:276] 0 containers: []
	W0501 03:41:52.169769   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:41:52.169774   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:41:52.169826   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:41:52.207356   69580 cri.go:89] found id: ""
	I0501 03:41:52.207392   69580 logs.go:276] 0 containers: []
	W0501 03:41:52.207404   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:41:52.207412   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:41:52.207472   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:41:52.250074   69580 cri.go:89] found id: ""
	I0501 03:41:52.250102   69580 logs.go:276] 0 containers: []
	W0501 03:41:52.250113   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:41:52.250121   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:41:52.250180   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:41:52.290525   69580 cri.go:89] found id: ""
	I0501 03:41:52.290550   69580 logs.go:276] 0 containers: []
	W0501 03:41:52.290558   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:41:52.290564   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:41:52.290610   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:41:52.336058   69580 cri.go:89] found id: ""
	I0501 03:41:52.336084   69580 logs.go:276] 0 containers: []
	W0501 03:41:52.336092   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:41:52.336103   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:41:52.336118   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:41:52.392738   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:41:52.392773   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:41:52.408475   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:41:52.408503   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:41:52.493567   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:41:52.493594   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:41:52.493608   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:41:52.566550   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:41:52.566583   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:41:55.117129   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:55.134840   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:41:55.134918   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:41:55.193990   69580 cri.go:89] found id: ""
	I0501 03:41:55.194019   69580 logs.go:276] 0 containers: []
	W0501 03:41:55.194029   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:41:55.194038   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:41:55.194100   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:41:55.261710   69580 cri.go:89] found id: ""
	I0501 03:41:55.261743   69580 logs.go:276] 0 containers: []
	W0501 03:41:55.261754   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:41:55.261761   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:41:55.261823   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:41:55.302432   69580 cri.go:89] found id: ""
	I0501 03:41:55.302468   69580 logs.go:276] 0 containers: []
	W0501 03:41:55.302480   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:41:55.302488   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:41:55.302550   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:41:55.346029   69580 cri.go:89] found id: ""
	I0501 03:41:55.346058   69580 logs.go:276] 0 containers: []
	W0501 03:41:55.346067   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:41:55.346073   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:41:55.346117   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:41:55.393206   69580 cri.go:89] found id: ""
	I0501 03:41:55.393229   69580 logs.go:276] 0 containers: []
	W0501 03:41:55.393236   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:41:55.393242   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:41:55.393295   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:41:55.437908   69580 cri.go:89] found id: ""
	I0501 03:41:55.437940   69580 logs.go:276] 0 containers: []
	W0501 03:41:55.437952   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:41:55.437960   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:41:55.438020   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:41:55.480439   69580 cri.go:89] found id: ""
	I0501 03:41:55.480468   69580 logs.go:276] 0 containers: []
	W0501 03:41:55.480480   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:41:55.480488   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:41:55.480589   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:41:55.524782   69580 cri.go:89] found id: ""
	I0501 03:41:55.524811   69580 logs.go:276] 0 containers: []
	W0501 03:41:55.524819   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:41:55.524828   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:41:55.524840   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:41:55.604337   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:41:55.604373   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:41:55.649427   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:41:55.649455   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:41:55.707928   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:41:55.707976   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:41:55.723289   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:41:55.723316   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:41:55.805146   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
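	After the container check, each cycle falls back to gathering host logs; the only step that fails is "describe nodes", because kubectl cannot reach the apiserver on localhost:8443 while no kube-apiserver container exists. A rough manual equivalent, reusing the exact commands and paths from the log and assuming a root shell on the node:

	  # Host-side log gathering mirrored from the cycle above.
	  sudo journalctl -u kubelet -n 400
	  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	  sudo journalctl -u crio -n 400
	  # Fails with "connection to the server localhost:8443 was refused" until
	  # an apiserver is actually listening on that port.
	  sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	    --kubeconfig=/var/lib/minikube/kubeconfig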
	I0501 03:41:54.347203   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:56.847806   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:55.658387   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:58.156886   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:57.511280   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:59.511460   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:58.306145   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:58.322207   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:41:58.322280   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:41:58.370291   69580 cri.go:89] found id: ""
	I0501 03:41:58.370319   69580 logs.go:276] 0 containers: []
	W0501 03:41:58.370331   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:41:58.370338   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:41:58.370417   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:41:58.421230   69580 cri.go:89] found id: ""
	I0501 03:41:58.421256   69580 logs.go:276] 0 containers: []
	W0501 03:41:58.421264   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:41:58.421270   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:41:58.421317   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:41:58.463694   69580 cri.go:89] found id: ""
	I0501 03:41:58.463724   69580 logs.go:276] 0 containers: []
	W0501 03:41:58.463735   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:41:58.463743   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:41:58.463797   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:41:58.507756   69580 cri.go:89] found id: ""
	I0501 03:41:58.507785   69580 logs.go:276] 0 containers: []
	W0501 03:41:58.507791   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:41:58.507797   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:41:58.507870   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:41:58.554852   69580 cri.go:89] found id: ""
	I0501 03:41:58.554884   69580 logs.go:276] 0 containers: []
	W0501 03:41:58.554895   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:41:58.554903   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:41:58.554969   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:41:58.602467   69580 cri.go:89] found id: ""
	I0501 03:41:58.602495   69580 logs.go:276] 0 containers: []
	W0501 03:41:58.602505   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:41:58.602511   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:41:58.602561   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:41:58.652718   69580 cri.go:89] found id: ""
	I0501 03:41:58.652749   69580 logs.go:276] 0 containers: []
	W0501 03:41:58.652759   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:41:58.652766   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:41:58.652837   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:41:58.694351   69580 cri.go:89] found id: ""
	I0501 03:41:58.694377   69580 logs.go:276] 0 containers: []
	W0501 03:41:58.694385   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:41:58.694393   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:41:58.694434   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:41:58.779878   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:41:58.779911   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:41:58.826733   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:41:58.826768   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:41:58.883808   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:41:58.883842   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:41:58.900463   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:41:58.900495   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:41:58.991346   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:41:59.345807   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:01.846099   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:00.157131   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:02.157204   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:01.511711   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:03.512536   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:01.492396   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:01.508620   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:01.508756   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:01.555669   69580 cri.go:89] found id: ""
	I0501 03:42:01.555696   69580 logs.go:276] 0 containers: []
	W0501 03:42:01.555712   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:01.555720   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:01.555782   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:01.597591   69580 cri.go:89] found id: ""
	I0501 03:42:01.597615   69580 logs.go:276] 0 containers: []
	W0501 03:42:01.597626   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:01.597635   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:01.597693   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:01.636259   69580 cri.go:89] found id: ""
	I0501 03:42:01.636286   69580 logs.go:276] 0 containers: []
	W0501 03:42:01.636297   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:01.636305   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:01.636361   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:01.684531   69580 cri.go:89] found id: ""
	I0501 03:42:01.684562   69580 logs.go:276] 0 containers: []
	W0501 03:42:01.684572   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:01.684579   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:01.684647   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:01.725591   69580 cri.go:89] found id: ""
	I0501 03:42:01.725621   69580 logs.go:276] 0 containers: []
	W0501 03:42:01.725628   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:01.725652   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:01.725718   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:01.767868   69580 cri.go:89] found id: ""
	I0501 03:42:01.767901   69580 logs.go:276] 0 containers: []
	W0501 03:42:01.767910   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:01.767917   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:01.767977   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:01.817590   69580 cri.go:89] found id: ""
	I0501 03:42:01.817618   69580 logs.go:276] 0 containers: []
	W0501 03:42:01.817629   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:01.817637   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:01.817697   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:01.863549   69580 cri.go:89] found id: ""
	I0501 03:42:01.863576   69580 logs.go:276] 0 containers: []
	W0501 03:42:01.863586   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:01.863595   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:01.863607   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:01.879134   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:01.879162   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:01.967015   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:01.967043   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:01.967059   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:02.051576   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:02.051614   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:02.095614   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:02.095644   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:04.652974   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:04.671018   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:04.671103   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:04.712392   69580 cri.go:89] found id: ""
	I0501 03:42:04.712425   69580 logs.go:276] 0 containers: []
	W0501 03:42:04.712435   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:04.712442   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:04.712503   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:04.756854   69580 cri.go:89] found id: ""
	I0501 03:42:04.756881   69580 logs.go:276] 0 containers: []
	W0501 03:42:04.756893   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:04.756900   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:04.756962   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:04.797665   69580 cri.go:89] found id: ""
	I0501 03:42:04.797694   69580 logs.go:276] 0 containers: []
	W0501 03:42:04.797703   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:04.797709   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:04.797756   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:04.838441   69580 cri.go:89] found id: ""
	I0501 03:42:04.838472   69580 logs.go:276] 0 containers: []
	W0501 03:42:04.838483   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:04.838491   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:04.838556   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:04.879905   69580 cri.go:89] found id: ""
	I0501 03:42:04.879935   69580 logs.go:276] 0 containers: []
	W0501 03:42:04.879945   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:04.879952   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:04.880012   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:04.924759   69580 cri.go:89] found id: ""
	I0501 03:42:04.924792   69580 logs.go:276] 0 containers: []
	W0501 03:42:04.924804   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:04.924813   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:04.924879   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:04.965638   69580 cri.go:89] found id: ""
	I0501 03:42:04.965663   69580 logs.go:276] 0 containers: []
	W0501 03:42:04.965670   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:04.965676   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:04.965721   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:05.013127   69580 cri.go:89] found id: ""
	I0501 03:42:05.013153   69580 logs.go:276] 0 containers: []
	W0501 03:42:05.013163   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:05.013173   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:05.013185   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:05.108388   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:05.108409   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:05.108422   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:05.198239   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:05.198281   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:05.241042   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:05.241076   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:05.299017   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:05.299069   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:04.345910   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:06.346830   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:04.657438   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:06.657707   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:06.011511   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:08.016548   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:10.510503   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:07.815458   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:07.832047   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:07.832125   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:07.882950   69580 cri.go:89] found id: ""
	I0501 03:42:07.882985   69580 logs.go:276] 0 containers: []
	W0501 03:42:07.882996   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:07.883002   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:07.883051   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:07.928086   69580 cri.go:89] found id: ""
	I0501 03:42:07.928111   69580 logs.go:276] 0 containers: []
	W0501 03:42:07.928119   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:07.928124   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:07.928177   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:07.976216   69580 cri.go:89] found id: ""
	I0501 03:42:07.976250   69580 logs.go:276] 0 containers: []
	W0501 03:42:07.976268   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:07.976274   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:07.976331   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:08.019903   69580 cri.go:89] found id: ""
	I0501 03:42:08.019932   69580 logs.go:276] 0 containers: []
	W0501 03:42:08.019943   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:08.019951   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:08.020009   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:08.075980   69580 cri.go:89] found id: ""
	I0501 03:42:08.076004   69580 logs.go:276] 0 containers: []
	W0501 03:42:08.076012   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:08.076018   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:08.076065   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:08.114849   69580 cri.go:89] found id: ""
	I0501 03:42:08.114881   69580 logs.go:276] 0 containers: []
	W0501 03:42:08.114891   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:08.114897   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:08.114955   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:08.159427   69580 cri.go:89] found id: ""
	I0501 03:42:08.159457   69580 logs.go:276] 0 containers: []
	W0501 03:42:08.159468   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:08.159476   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:08.159543   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:08.200117   69580 cri.go:89] found id: ""
	I0501 03:42:08.200151   69580 logs.go:276] 0 containers: []
	W0501 03:42:08.200163   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:08.200182   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:08.200197   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:08.281926   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:08.281972   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:08.331393   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:08.331429   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:08.386758   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:08.386793   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:08.402551   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:08.402581   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:08.489678   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:10.990653   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:11.007879   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:11.007958   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:11.049842   69580 cri.go:89] found id: ""
	I0501 03:42:11.049867   69580 logs.go:276] 0 containers: []
	W0501 03:42:11.049879   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:11.049885   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:11.049933   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:11.091946   69580 cri.go:89] found id: ""
	I0501 03:42:11.091980   69580 logs.go:276] 0 containers: []
	W0501 03:42:11.091992   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:11.092000   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:11.092079   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:11.140100   69580 cri.go:89] found id: ""
	I0501 03:42:11.140129   69580 logs.go:276] 0 containers: []
	W0501 03:42:11.140138   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:11.140144   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:11.140207   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:11.182796   69580 cri.go:89] found id: ""
	I0501 03:42:11.182821   69580 logs.go:276] 0 containers: []
	W0501 03:42:11.182832   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:11.182838   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:11.182896   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:11.222985   69580 cri.go:89] found id: ""
	I0501 03:42:11.223016   69580 logs.go:276] 0 containers: []
	W0501 03:42:11.223027   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:11.223033   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:11.223114   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:11.265793   69580 cri.go:89] found id: ""
	I0501 03:42:11.265818   69580 logs.go:276] 0 containers: []
	W0501 03:42:11.265830   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:11.265838   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:11.265913   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:11.309886   69580 cri.go:89] found id: ""
	I0501 03:42:11.309912   69580 logs.go:276] 0 containers: []
	W0501 03:42:11.309924   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:11.309931   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:11.309989   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:11.357757   69580 cri.go:89] found id: ""
	I0501 03:42:11.357791   69580 logs.go:276] 0 containers: []
	W0501 03:42:11.357803   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:11.357823   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:11.357839   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:11.412668   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:11.412704   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:11.428380   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:11.428422   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0501 03:42:08.347511   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:10.846691   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:09.156632   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:11.158047   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:13.657603   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:12.512713   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:15.011382   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	W0501 03:42:11.521898   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:11.521924   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:11.521940   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:11.607081   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:11.607116   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:14.153054   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:14.173046   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:14.173150   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:14.219583   69580 cri.go:89] found id: ""
	I0501 03:42:14.219605   69580 logs.go:276] 0 containers: []
	W0501 03:42:14.219613   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:14.219619   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:14.219664   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:14.260316   69580 cri.go:89] found id: ""
	I0501 03:42:14.260349   69580 logs.go:276] 0 containers: []
	W0501 03:42:14.260357   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:14.260366   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:14.260420   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:14.305049   69580 cri.go:89] found id: ""
	I0501 03:42:14.305085   69580 logs.go:276] 0 containers: []
	W0501 03:42:14.305109   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:14.305117   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:14.305198   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:14.359589   69580 cri.go:89] found id: ""
	I0501 03:42:14.359614   69580 logs.go:276] 0 containers: []
	W0501 03:42:14.359622   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:14.359628   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:14.359672   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:14.403867   69580 cri.go:89] found id: ""
	I0501 03:42:14.403895   69580 logs.go:276] 0 containers: []
	W0501 03:42:14.403904   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:14.403910   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:14.403987   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:14.446626   69580 cri.go:89] found id: ""
	I0501 03:42:14.446655   69580 logs.go:276] 0 containers: []
	W0501 03:42:14.446675   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:14.446683   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:14.446754   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:14.490983   69580 cri.go:89] found id: ""
	I0501 03:42:14.491016   69580 logs.go:276] 0 containers: []
	W0501 03:42:14.491028   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:14.491036   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:14.491117   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:14.534180   69580 cri.go:89] found id: ""
	I0501 03:42:14.534205   69580 logs.go:276] 0 containers: []
	W0501 03:42:14.534213   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:14.534221   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:14.534236   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:14.621433   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:14.621491   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:14.680265   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:14.680310   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:14.738943   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:14.738983   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:14.754145   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:14.754176   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:14.839974   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:13.347081   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:15.847072   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:17.847749   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:16.157433   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:18.158120   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:17.017276   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:19.514339   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:17.340948   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:17.360007   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:17.360068   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:17.403201   69580 cri.go:89] found id: ""
	I0501 03:42:17.403231   69580 logs.go:276] 0 containers: []
	W0501 03:42:17.403239   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:17.403245   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:17.403301   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:17.442940   69580 cri.go:89] found id: ""
	I0501 03:42:17.442966   69580 logs.go:276] 0 containers: []
	W0501 03:42:17.442975   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:17.442981   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:17.443038   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:17.487219   69580 cri.go:89] found id: ""
	I0501 03:42:17.487248   69580 logs.go:276] 0 containers: []
	W0501 03:42:17.487259   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:17.487267   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:17.487324   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:17.528551   69580 cri.go:89] found id: ""
	I0501 03:42:17.528583   69580 logs.go:276] 0 containers: []
	W0501 03:42:17.528593   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:17.528601   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:17.528668   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:17.577005   69580 cri.go:89] found id: ""
	I0501 03:42:17.577041   69580 logs.go:276] 0 containers: []
	W0501 03:42:17.577052   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:17.577061   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:17.577132   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:17.618924   69580 cri.go:89] found id: ""
	I0501 03:42:17.618949   69580 logs.go:276] 0 containers: []
	W0501 03:42:17.618957   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:17.618963   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:17.619022   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:17.660487   69580 cri.go:89] found id: ""
	I0501 03:42:17.660514   69580 logs.go:276] 0 containers: []
	W0501 03:42:17.660525   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:17.660532   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:17.660592   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:17.701342   69580 cri.go:89] found id: ""
	I0501 03:42:17.701370   69580 logs.go:276] 0 containers: []
	W0501 03:42:17.701378   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:17.701387   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:17.701400   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:17.757034   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:17.757069   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:17.772955   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:17.772984   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:17.888062   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:17.888088   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:17.888101   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:17.969274   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:17.969312   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:20.521053   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:20.536065   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:20.536141   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:20.577937   69580 cri.go:89] found id: ""
	I0501 03:42:20.577967   69580 logs.go:276] 0 containers: []
	W0501 03:42:20.577977   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:20.577986   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:20.578055   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:20.626690   69580 cri.go:89] found id: ""
	I0501 03:42:20.626714   69580 logs.go:276] 0 containers: []
	W0501 03:42:20.626722   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:20.626728   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:20.626809   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:20.670849   69580 cri.go:89] found id: ""
	I0501 03:42:20.670872   69580 logs.go:276] 0 containers: []
	W0501 03:42:20.670881   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:20.670886   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:20.670946   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:20.711481   69580 cri.go:89] found id: ""
	I0501 03:42:20.711511   69580 logs.go:276] 0 containers: []
	W0501 03:42:20.711522   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:20.711531   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:20.711596   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:20.753413   69580 cri.go:89] found id: ""
	I0501 03:42:20.753443   69580 logs.go:276] 0 containers: []
	W0501 03:42:20.753452   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:20.753459   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:20.753536   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:20.791424   69580 cri.go:89] found id: ""
	I0501 03:42:20.791452   69580 logs.go:276] 0 containers: []
	W0501 03:42:20.791461   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:20.791466   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:20.791526   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:20.833718   69580 cri.go:89] found id: ""
	I0501 03:42:20.833740   69580 logs.go:276] 0 containers: []
	W0501 03:42:20.833748   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:20.833752   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:20.833799   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:20.879788   69580 cri.go:89] found id: ""
	I0501 03:42:20.879818   69580 logs.go:276] 0 containers: []
	W0501 03:42:20.879828   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:20.879839   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:20.879855   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:20.895266   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:20.895304   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:20.976429   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:20.976452   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:20.976465   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:21.063573   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:21.063611   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:21.113510   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:21.113543   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:20.346735   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:22.347096   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:20.658642   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:22.659841   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:22.011045   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:24.012756   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:23.672203   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:23.687849   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:23.687946   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:23.731428   69580 cri.go:89] found id: ""
	I0501 03:42:23.731455   69580 logs.go:276] 0 containers: []
	W0501 03:42:23.731467   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:23.731473   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:23.731534   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:23.772219   69580 cri.go:89] found id: ""
	I0501 03:42:23.772248   69580 logs.go:276] 0 containers: []
	W0501 03:42:23.772259   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:23.772266   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:23.772369   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:23.837203   69580 cri.go:89] found id: ""
	I0501 03:42:23.837235   69580 logs.go:276] 0 containers: []
	W0501 03:42:23.837247   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:23.837255   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:23.837317   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:23.884681   69580 cri.go:89] found id: ""
	I0501 03:42:23.884709   69580 logs.go:276] 0 containers: []
	W0501 03:42:23.884716   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:23.884722   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:23.884783   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:23.927544   69580 cri.go:89] found id: ""
	I0501 03:42:23.927576   69580 logs.go:276] 0 containers: []
	W0501 03:42:23.927584   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:23.927590   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:23.927652   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:23.970428   69580 cri.go:89] found id: ""
	I0501 03:42:23.970457   69580 logs.go:276] 0 containers: []
	W0501 03:42:23.970467   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:23.970476   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:23.970541   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:24.010545   69580 cri.go:89] found id: ""
	I0501 03:42:24.010573   69580 logs.go:276] 0 containers: []
	W0501 03:42:24.010583   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:24.010593   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:24.010653   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:24.053547   69580 cri.go:89] found id: ""
	I0501 03:42:24.053574   69580 logs.go:276] 0 containers: []
	W0501 03:42:24.053582   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:24.053591   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:24.053602   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:24.108416   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:24.108452   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:24.124052   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:24.124083   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:24.209024   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:24.209048   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:24.209063   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:24.291644   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:24.291693   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:24.846439   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:26.846750   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:25.157009   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:27.657022   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:26.510679   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:28.511049   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:30.511542   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:26.840623   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:26.856231   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:26.856320   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:26.897988   69580 cri.go:89] found id: ""
	I0501 03:42:26.898022   69580 logs.go:276] 0 containers: []
	W0501 03:42:26.898033   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:26.898041   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:26.898109   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:26.937608   69580 cri.go:89] found id: ""
	I0501 03:42:26.937638   69580 logs.go:276] 0 containers: []
	W0501 03:42:26.937660   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:26.937668   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:26.937731   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:26.979799   69580 cri.go:89] found id: ""
	I0501 03:42:26.979836   69580 logs.go:276] 0 containers: []
	W0501 03:42:26.979847   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:26.979854   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:26.979922   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:27.018863   69580 cri.go:89] found id: ""
	I0501 03:42:27.018896   69580 logs.go:276] 0 containers: []
	W0501 03:42:27.018903   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:27.018909   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:27.018959   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:27.057864   69580 cri.go:89] found id: ""
	I0501 03:42:27.057893   69580 logs.go:276] 0 containers: []
	W0501 03:42:27.057904   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:27.057912   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:27.057982   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:27.102909   69580 cri.go:89] found id: ""
	I0501 03:42:27.102939   69580 logs.go:276] 0 containers: []
	W0501 03:42:27.102950   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:27.102958   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:27.103019   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:27.148292   69580 cri.go:89] found id: ""
	I0501 03:42:27.148326   69580 logs.go:276] 0 containers: []
	W0501 03:42:27.148336   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:27.148344   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:27.148407   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:27.197557   69580 cri.go:89] found id: ""
	I0501 03:42:27.197581   69580 logs.go:276] 0 containers: []
	W0501 03:42:27.197588   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:27.197596   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:27.197609   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:27.281768   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:27.281793   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:27.281806   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:27.361496   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:27.361528   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:27.407640   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:27.407675   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:27.472533   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:27.472576   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:29.987773   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:30.003511   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:30.003619   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:30.049330   69580 cri.go:89] found id: ""
	I0501 03:42:30.049363   69580 logs.go:276] 0 containers: []
	W0501 03:42:30.049377   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:30.049384   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:30.049439   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:30.088521   69580 cri.go:89] found id: ""
	I0501 03:42:30.088549   69580 logs.go:276] 0 containers: []
	W0501 03:42:30.088560   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:30.088568   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:30.088624   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:30.132731   69580 cri.go:89] found id: ""
	I0501 03:42:30.132765   69580 logs.go:276] 0 containers: []
	W0501 03:42:30.132777   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:30.132784   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:30.132847   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:30.178601   69580 cri.go:89] found id: ""
	I0501 03:42:30.178639   69580 logs.go:276] 0 containers: []
	W0501 03:42:30.178648   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:30.178656   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:30.178714   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:30.230523   69580 cri.go:89] found id: ""
	I0501 03:42:30.230549   69580 logs.go:276] 0 containers: []
	W0501 03:42:30.230561   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:30.230569   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:30.230632   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:30.289234   69580 cri.go:89] found id: ""
	I0501 03:42:30.289262   69580 logs.go:276] 0 containers: []
	W0501 03:42:30.289270   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:30.289277   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:30.289342   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:30.332596   69580 cri.go:89] found id: ""
	I0501 03:42:30.332627   69580 logs.go:276] 0 containers: []
	W0501 03:42:30.332637   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:30.332644   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:30.332710   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:30.383871   69580 cri.go:89] found id: ""
	I0501 03:42:30.383901   69580 logs.go:276] 0 containers: []
	W0501 03:42:30.383908   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:30.383917   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:30.383929   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:30.464382   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:30.464404   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:30.464417   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:30.550604   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:30.550637   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:30.594927   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:30.594959   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:30.648392   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:30.648426   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:28.847271   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:31.345865   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:29.657316   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:31.657435   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:32.511887   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:35.011677   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:33.167591   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:33.183804   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:33.183874   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:33.223501   69580 cri.go:89] found id: ""
	I0501 03:42:33.223525   69580 logs.go:276] 0 containers: []
	W0501 03:42:33.223532   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:33.223539   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:33.223600   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:33.268674   69580 cri.go:89] found id: ""
	I0501 03:42:33.268705   69580 logs.go:276] 0 containers: []
	W0501 03:42:33.268741   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:33.268749   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:33.268807   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:33.310613   69580 cri.go:89] found id: ""
	I0501 03:42:33.310655   69580 logs.go:276] 0 containers: []
	W0501 03:42:33.310666   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:33.310674   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:33.310737   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:33.353156   69580 cri.go:89] found id: ""
	I0501 03:42:33.353177   69580 logs.go:276] 0 containers: []
	W0501 03:42:33.353184   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:33.353189   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:33.353237   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:33.389702   69580 cri.go:89] found id: ""
	I0501 03:42:33.389730   69580 logs.go:276] 0 containers: []
	W0501 03:42:33.389743   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:33.389751   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:33.389817   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:33.431244   69580 cri.go:89] found id: ""
	I0501 03:42:33.431275   69580 logs.go:276] 0 containers: []
	W0501 03:42:33.431290   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:33.431298   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:33.431384   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:33.472382   69580 cri.go:89] found id: ""
	I0501 03:42:33.472412   69580 logs.go:276] 0 containers: []
	W0501 03:42:33.472423   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:33.472431   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:33.472519   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:33.517042   69580 cri.go:89] found id: ""
	I0501 03:42:33.517064   69580 logs.go:276] 0 containers: []
	W0501 03:42:33.517071   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:33.517079   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:33.517091   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:33.573343   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:33.573372   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:33.588932   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:33.588963   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:33.674060   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:33.674090   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:33.674106   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:33.756635   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:33.756684   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:36.300909   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:36.320407   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:36.320474   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:36.367236   69580 cri.go:89] found id: ""
	I0501 03:42:36.367261   69580 logs.go:276] 0 containers: []
	W0501 03:42:36.367269   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:36.367274   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:36.367335   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:36.406440   69580 cri.go:89] found id: ""
	I0501 03:42:36.406471   69580 logs.go:276] 0 containers: []
	W0501 03:42:36.406482   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:36.406489   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:36.406552   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:36.443931   69580 cri.go:89] found id: ""
	I0501 03:42:36.443957   69580 logs.go:276] 0 containers: []
	W0501 03:42:36.443964   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:36.443969   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:36.444024   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:33.844832   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:35.845476   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:37.846291   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:34.156976   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:36.657001   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:38.657056   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:37.510534   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:39.511335   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:36.486169   69580 cri.go:89] found id: ""
	I0501 03:42:36.486200   69580 logs.go:276] 0 containers: []
	W0501 03:42:36.486213   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:36.486220   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:36.486276   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:36.532211   69580 cri.go:89] found id: ""
	I0501 03:42:36.532237   69580 logs.go:276] 0 containers: []
	W0501 03:42:36.532246   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:36.532251   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:36.532311   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:36.571889   69580 cri.go:89] found id: ""
	I0501 03:42:36.571921   69580 logs.go:276] 0 containers: []
	W0501 03:42:36.571933   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:36.571940   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:36.572000   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:36.612126   69580 cri.go:89] found id: ""
	I0501 03:42:36.612159   69580 logs.go:276] 0 containers: []
	W0501 03:42:36.612170   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:36.612177   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:36.612238   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:36.654067   69580 cri.go:89] found id: ""
	I0501 03:42:36.654096   69580 logs.go:276] 0 containers: []
	W0501 03:42:36.654106   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:36.654117   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:36.654129   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:36.740205   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:36.740226   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:36.740237   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:36.821403   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:36.821437   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:36.874829   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:36.874867   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:36.928312   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:36.928342   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:39.444598   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:39.460086   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:39.460151   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:39.500833   69580 cri.go:89] found id: ""
	I0501 03:42:39.500859   69580 logs.go:276] 0 containers: []
	W0501 03:42:39.500870   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:39.500879   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:39.500936   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:39.544212   69580 cri.go:89] found id: ""
	I0501 03:42:39.544238   69580 logs.go:276] 0 containers: []
	W0501 03:42:39.544248   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:39.544260   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:39.544326   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:39.582167   69580 cri.go:89] found id: ""
	I0501 03:42:39.582200   69580 logs.go:276] 0 containers: []
	W0501 03:42:39.582218   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:39.582231   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:39.582296   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:39.624811   69580 cri.go:89] found id: ""
	I0501 03:42:39.624837   69580 logs.go:276] 0 containers: []
	W0501 03:42:39.624848   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:39.624855   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:39.624913   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:39.666001   69580 cri.go:89] found id: ""
	I0501 03:42:39.666030   69580 logs.go:276] 0 containers: []
	W0501 03:42:39.666041   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:39.666048   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:39.666111   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:39.708790   69580 cri.go:89] found id: ""
	I0501 03:42:39.708820   69580 logs.go:276] 0 containers: []
	W0501 03:42:39.708831   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:39.708839   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:39.708896   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:39.750585   69580 cri.go:89] found id: ""
	I0501 03:42:39.750609   69580 logs.go:276] 0 containers: []
	W0501 03:42:39.750617   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:39.750622   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:39.750670   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:39.798576   69580 cri.go:89] found id: ""
	I0501 03:42:39.798612   69580 logs.go:276] 0 containers: []
	W0501 03:42:39.798624   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:39.798636   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:39.798651   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:39.891759   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:39.891782   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:39.891797   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:39.974419   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:39.974462   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:40.020700   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:40.020728   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:40.073946   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:40.073980   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:40.345975   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:42.350579   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:40.657403   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:42.658271   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:41.511780   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:43.512428   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:42.590933   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:42.606044   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:42.606120   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:42.653074   69580 cri.go:89] found id: ""
	I0501 03:42:42.653104   69580 logs.go:276] 0 containers: []
	W0501 03:42:42.653115   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:42.653123   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:42.653195   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:42.693770   69580 cri.go:89] found id: ""
	I0501 03:42:42.693809   69580 logs.go:276] 0 containers: []
	W0501 03:42:42.693821   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:42.693829   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:42.693885   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:42.739087   69580 cri.go:89] found id: ""
	I0501 03:42:42.739115   69580 logs.go:276] 0 containers: []
	W0501 03:42:42.739125   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:42.739133   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:42.739196   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:42.779831   69580 cri.go:89] found id: ""
	I0501 03:42:42.779863   69580 logs.go:276] 0 containers: []
	W0501 03:42:42.779876   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:42.779885   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:42.779950   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:42.826759   69580 cri.go:89] found id: ""
	I0501 03:42:42.826791   69580 logs.go:276] 0 containers: []
	W0501 03:42:42.826799   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:42.826804   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:42.826854   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:42.872602   69580 cri.go:89] found id: ""
	I0501 03:42:42.872629   69580 logs.go:276] 0 containers: []
	W0501 03:42:42.872640   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:42.872648   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:42.872707   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:42.913833   69580 cri.go:89] found id: ""
	I0501 03:42:42.913862   69580 logs.go:276] 0 containers: []
	W0501 03:42:42.913872   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:42.913879   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:42.913936   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:42.953629   69580 cri.go:89] found id: ""
	I0501 03:42:42.953657   69580 logs.go:276] 0 containers: []
	W0501 03:42:42.953667   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:42.953679   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:42.953695   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:42.968420   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:42.968447   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:43.046840   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:43.046874   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:43.046898   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:43.135453   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:43.135492   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:43.184103   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:43.184141   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:45.738246   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:45.753193   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:45.753258   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:45.791191   69580 cri.go:89] found id: ""
	I0501 03:42:45.791216   69580 logs.go:276] 0 containers: []
	W0501 03:42:45.791224   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:45.791236   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:45.791285   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:45.831935   69580 cri.go:89] found id: ""
	I0501 03:42:45.831967   69580 logs.go:276] 0 containers: []
	W0501 03:42:45.831978   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:45.831986   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:45.832041   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:45.869492   69580 cri.go:89] found id: ""
	I0501 03:42:45.869517   69580 logs.go:276] 0 containers: []
	W0501 03:42:45.869529   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:45.869536   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:45.869593   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:45.910642   69580 cri.go:89] found id: ""
	I0501 03:42:45.910672   69580 logs.go:276] 0 containers: []
	W0501 03:42:45.910682   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:45.910691   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:45.910754   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:45.951489   69580 cri.go:89] found id: ""
	I0501 03:42:45.951518   69580 logs.go:276] 0 containers: []
	W0501 03:42:45.951528   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:45.951535   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:45.951582   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:45.991388   69580 cri.go:89] found id: ""
	I0501 03:42:45.991410   69580 logs.go:276] 0 containers: []
	W0501 03:42:45.991418   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:45.991423   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:45.991467   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:46.036524   69580 cri.go:89] found id: ""
	I0501 03:42:46.036546   69580 logs.go:276] 0 containers: []
	W0501 03:42:46.036553   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:46.036560   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:46.036622   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:46.087472   69580 cri.go:89] found id: ""
	I0501 03:42:46.087495   69580 logs.go:276] 0 containers: []
	W0501 03:42:46.087504   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:46.087513   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:46.087526   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:46.101283   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:46.101314   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:46.176459   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:46.176491   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:46.176506   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:46.261921   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:46.261956   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:46.309879   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:46.309910   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:44.846042   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:47.349023   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:44.658318   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:47.155780   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:46.011347   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:48.511156   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:50.512175   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:48.867064   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:48.884082   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:48.884192   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:48.929681   69580 cri.go:89] found id: ""
	I0501 03:42:48.929708   69580 logs.go:276] 0 containers: []
	W0501 03:42:48.929716   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:48.929722   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:48.929789   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:48.977850   69580 cri.go:89] found id: ""
	I0501 03:42:48.977882   69580 logs.go:276] 0 containers: []
	W0501 03:42:48.977894   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:48.977901   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:48.977962   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:49.022590   69580 cri.go:89] found id: ""
	I0501 03:42:49.022619   69580 logs.go:276] 0 containers: []
	W0501 03:42:49.022629   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:49.022637   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:49.022706   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:49.064092   69580 cri.go:89] found id: ""
	I0501 03:42:49.064122   69580 logs.go:276] 0 containers: []
	W0501 03:42:49.064143   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:49.064152   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:49.064220   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:49.103962   69580 cri.go:89] found id: ""
	I0501 03:42:49.103990   69580 logs.go:276] 0 containers: []
	W0501 03:42:49.104002   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:49.104009   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:49.104070   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:49.144566   69580 cri.go:89] found id: ""
	I0501 03:42:49.144596   69580 logs.go:276] 0 containers: []
	W0501 03:42:49.144604   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:49.144610   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:49.144669   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:49.183110   69580 cri.go:89] found id: ""
	I0501 03:42:49.183141   69580 logs.go:276] 0 containers: []
	W0501 03:42:49.183161   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:49.183166   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:49.183239   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:49.225865   69580 cri.go:89] found id: ""
	I0501 03:42:49.225890   69580 logs.go:276] 0 containers: []
	W0501 03:42:49.225902   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:49.225912   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:49.225926   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:49.312967   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:49.313005   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:49.361171   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:49.361206   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:49.418731   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:49.418780   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:49.436976   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:49.437007   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:49.517994   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:49.848517   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:52.346908   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:49.160713   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:51.656444   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:53.659040   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:53.011092   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:55.011811   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:52.018675   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:52.033946   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:52.034022   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:52.081433   69580 cri.go:89] found id: ""
	I0501 03:42:52.081465   69580 logs.go:276] 0 containers: []
	W0501 03:42:52.081477   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:52.081485   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:52.081544   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:52.123914   69580 cri.go:89] found id: ""
	I0501 03:42:52.123947   69580 logs.go:276] 0 containers: []
	W0501 03:42:52.123958   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:52.123966   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:52.124023   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:52.164000   69580 cri.go:89] found id: ""
	I0501 03:42:52.164020   69580 logs.go:276] 0 containers: []
	W0501 03:42:52.164027   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:52.164033   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:52.164086   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:52.205984   69580 cri.go:89] found id: ""
	I0501 03:42:52.206011   69580 logs.go:276] 0 containers: []
	W0501 03:42:52.206023   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:52.206031   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:52.206096   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:52.252743   69580 cri.go:89] found id: ""
	I0501 03:42:52.252766   69580 logs.go:276] 0 containers: []
	W0501 03:42:52.252774   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:52.252779   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:52.252839   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:52.296814   69580 cri.go:89] found id: ""
	I0501 03:42:52.296838   69580 logs.go:276] 0 containers: []
	W0501 03:42:52.296856   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:52.296864   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:52.296928   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:52.335996   69580 cri.go:89] found id: ""
	I0501 03:42:52.336023   69580 logs.go:276] 0 containers: []
	W0501 03:42:52.336034   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:52.336042   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:52.336105   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:52.377470   69580 cri.go:89] found id: ""
	I0501 03:42:52.377498   69580 logs.go:276] 0 containers: []
	W0501 03:42:52.377513   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:52.377524   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:52.377540   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:52.432644   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:52.432680   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:52.447518   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:52.447552   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:52.530967   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:52.530992   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:52.531005   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:52.612280   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:52.612327   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:55.170134   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:55.185252   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:55.185328   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:55.227741   69580 cri.go:89] found id: ""
	I0501 03:42:55.227764   69580 logs.go:276] 0 containers: []
	W0501 03:42:55.227771   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:55.227777   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:55.227820   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:55.270796   69580 cri.go:89] found id: ""
	I0501 03:42:55.270823   69580 logs.go:276] 0 containers: []
	W0501 03:42:55.270834   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:55.270840   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:55.270898   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:55.312146   69580 cri.go:89] found id: ""
	I0501 03:42:55.312171   69580 logs.go:276] 0 containers: []
	W0501 03:42:55.312180   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:55.312190   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:55.312236   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:55.354410   69580 cri.go:89] found id: ""
	I0501 03:42:55.354436   69580 logs.go:276] 0 containers: []
	W0501 03:42:55.354445   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:55.354450   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:55.354509   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:55.393550   69580 cri.go:89] found id: ""
	I0501 03:42:55.393580   69580 logs.go:276] 0 containers: []
	W0501 03:42:55.393589   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:55.393594   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:55.393651   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:55.431468   69580 cri.go:89] found id: ""
	I0501 03:42:55.431497   69580 logs.go:276] 0 containers: []
	W0501 03:42:55.431507   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:55.431514   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:55.431566   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:55.470491   69580 cri.go:89] found id: ""
	I0501 03:42:55.470513   69580 logs.go:276] 0 containers: []
	W0501 03:42:55.470520   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:55.470526   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:55.470571   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:55.509849   69580 cri.go:89] found id: ""
	I0501 03:42:55.509875   69580 logs.go:276] 0 containers: []
	W0501 03:42:55.509885   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:55.509894   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:55.509909   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:55.566680   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:55.566762   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:55.584392   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:55.584423   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:55.663090   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:55.663116   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:55.663131   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:55.741459   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:55.741494   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:54.846549   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:56.848989   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:56.156918   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:58.157016   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:57.012980   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:59.513719   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:58.294435   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:58.310204   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:58.310267   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:58.350292   69580 cri.go:89] found id: ""
	I0501 03:42:58.350322   69580 logs.go:276] 0 containers: []
	W0501 03:42:58.350334   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:58.350343   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:58.350431   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:58.395998   69580 cri.go:89] found id: ""
	I0501 03:42:58.396029   69580 logs.go:276] 0 containers: []
	W0501 03:42:58.396041   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:58.396049   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:58.396131   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:58.434371   69580 cri.go:89] found id: ""
	I0501 03:42:58.434414   69580 logs.go:276] 0 containers: []
	W0501 03:42:58.434427   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:58.434434   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:58.434493   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:58.473457   69580 cri.go:89] found id: ""
	I0501 03:42:58.473489   69580 logs.go:276] 0 containers: []
	W0501 03:42:58.473499   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:58.473507   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:58.473572   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:58.515172   69580 cri.go:89] found id: ""
	I0501 03:42:58.515201   69580 logs.go:276] 0 containers: []
	W0501 03:42:58.515212   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:58.515221   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:58.515291   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:58.560305   69580 cri.go:89] found id: ""
	I0501 03:42:58.560333   69580 logs.go:276] 0 containers: []
	W0501 03:42:58.560341   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:58.560348   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:58.560407   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:58.617980   69580 cri.go:89] found id: ""
	I0501 03:42:58.618005   69580 logs.go:276] 0 containers: []
	W0501 03:42:58.618013   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:58.618019   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:58.618080   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:58.659800   69580 cri.go:89] found id: ""
	I0501 03:42:58.659827   69580 logs.go:276] 0 containers: []
	W0501 03:42:58.659838   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:58.659848   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:58.659862   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:58.718134   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:58.718169   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:58.733972   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:58.734001   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:58.813055   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:58.813082   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:58.813099   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:58.897293   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:58.897331   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:01.442980   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:01.459602   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:01.459687   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:58.849599   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:01.346264   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:00.157322   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:02.657002   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:02.012753   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:04.510896   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:01.502817   69580 cri.go:89] found id: ""
	I0501 03:43:01.502848   69580 logs.go:276] 0 containers: []
	W0501 03:43:01.502857   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:01.502863   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:01.502924   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:01.547251   69580 cri.go:89] found id: ""
	I0501 03:43:01.547289   69580 logs.go:276] 0 containers: []
	W0501 03:43:01.547301   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:01.547308   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:01.547376   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:01.590179   69580 cri.go:89] found id: ""
	I0501 03:43:01.590211   69580 logs.go:276] 0 containers: []
	W0501 03:43:01.590221   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:01.590228   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:01.590296   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:01.628772   69580 cri.go:89] found id: ""
	I0501 03:43:01.628814   69580 logs.go:276] 0 containers: []
	W0501 03:43:01.628826   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:01.628834   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:01.628893   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:01.677414   69580 cri.go:89] found id: ""
	I0501 03:43:01.677440   69580 logs.go:276] 0 containers: []
	W0501 03:43:01.677448   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:01.677453   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:01.677500   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:01.723107   69580 cri.go:89] found id: ""
	I0501 03:43:01.723139   69580 logs.go:276] 0 containers: []
	W0501 03:43:01.723152   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:01.723160   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:01.723225   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:01.771846   69580 cri.go:89] found id: ""
	I0501 03:43:01.771873   69580 logs.go:276] 0 containers: []
	W0501 03:43:01.771883   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:01.771890   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:01.771952   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:01.818145   69580 cri.go:89] found id: ""
	I0501 03:43:01.818179   69580 logs.go:276] 0 containers: []
	W0501 03:43:01.818191   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:01.818202   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:01.818218   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:01.881502   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:01.881546   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:01.897580   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:01.897614   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:01.981959   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:01.981980   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:01.981996   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:02.066228   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:02.066269   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:04.609855   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:04.626885   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:04.626962   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:04.668248   69580 cri.go:89] found id: ""
	I0501 03:43:04.668277   69580 logs.go:276] 0 containers: []
	W0501 03:43:04.668290   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:04.668298   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:04.668364   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:04.711032   69580 cri.go:89] found id: ""
	I0501 03:43:04.711057   69580 logs.go:276] 0 containers: []
	W0501 03:43:04.711068   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:04.711076   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:04.711136   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:04.754197   69580 cri.go:89] found id: ""
	I0501 03:43:04.754232   69580 logs.go:276] 0 containers: []
	W0501 03:43:04.754241   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:04.754248   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:04.754317   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:04.801062   69580 cri.go:89] found id: ""
	I0501 03:43:04.801089   69580 logs.go:276] 0 containers: []
	W0501 03:43:04.801097   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:04.801103   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:04.801163   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:04.849425   69580 cri.go:89] found id: ""
	I0501 03:43:04.849454   69580 logs.go:276] 0 containers: []
	W0501 03:43:04.849465   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:04.849473   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:04.849536   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:04.892555   69580 cri.go:89] found id: ""
	I0501 03:43:04.892589   69580 logs.go:276] 0 containers: []
	W0501 03:43:04.892597   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:04.892603   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:04.892661   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:04.934101   69580 cri.go:89] found id: ""
	I0501 03:43:04.934129   69580 logs.go:276] 0 containers: []
	W0501 03:43:04.934137   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:04.934142   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:04.934191   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:04.985720   69580 cri.go:89] found id: ""
	I0501 03:43:04.985747   69580 logs.go:276] 0 containers: []
	W0501 03:43:04.985760   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:04.985773   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:04.985789   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:05.060634   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:05.060692   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:05.082007   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:05.082036   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:05.164613   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:05.164636   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:05.164652   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:05.244064   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:05.244103   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:03.845495   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:06.346757   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:05.157929   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:07.657094   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:06.511168   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:08.511512   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:10.511984   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:07.793867   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:07.811161   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:07.811236   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:07.850738   69580 cri.go:89] found id: ""
	I0501 03:43:07.850765   69580 logs.go:276] 0 containers: []
	W0501 03:43:07.850775   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:07.850782   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:07.850841   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:07.892434   69580 cri.go:89] found id: ""
	I0501 03:43:07.892466   69580 logs.go:276] 0 containers: []
	W0501 03:43:07.892476   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:07.892483   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:07.892543   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:07.934093   69580 cri.go:89] found id: ""
	I0501 03:43:07.934122   69580 logs.go:276] 0 containers: []
	W0501 03:43:07.934133   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:07.934141   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:07.934200   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:07.976165   69580 cri.go:89] found id: ""
	I0501 03:43:07.976196   69580 logs.go:276] 0 containers: []
	W0501 03:43:07.976205   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:07.976216   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:07.976278   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:08.016925   69580 cri.go:89] found id: ""
	I0501 03:43:08.016956   69580 logs.go:276] 0 containers: []
	W0501 03:43:08.016968   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:08.016975   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:08.017038   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:08.063385   69580 cri.go:89] found id: ""
	I0501 03:43:08.063438   69580 logs.go:276] 0 containers: []
	W0501 03:43:08.063454   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:08.063465   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:08.063551   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:08.103586   69580 cri.go:89] found id: ""
	I0501 03:43:08.103610   69580 logs.go:276] 0 containers: []
	W0501 03:43:08.103618   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:08.103628   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:08.103672   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:08.142564   69580 cri.go:89] found id: ""
	I0501 03:43:08.142594   69580 logs.go:276] 0 containers: []
	W0501 03:43:08.142605   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:08.142617   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:08.142635   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:08.231532   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:08.231556   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:08.231571   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:08.311009   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:08.311053   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:08.357841   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:08.357877   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:08.409577   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:08.409610   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:10.924898   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:10.941525   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:10.941591   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:11.009214   69580 cri.go:89] found id: ""
	I0501 03:43:11.009238   69580 logs.go:276] 0 containers: []
	W0501 03:43:11.009247   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:11.009255   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:11.009316   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:11.072233   69580 cri.go:89] found id: ""
	I0501 03:43:11.072259   69580 logs.go:276] 0 containers: []
	W0501 03:43:11.072267   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:11.072273   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:11.072327   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:11.111662   69580 cri.go:89] found id: ""
	I0501 03:43:11.111691   69580 logs.go:276] 0 containers: []
	W0501 03:43:11.111701   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:11.111708   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:11.111765   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:11.151540   69580 cri.go:89] found id: ""
	I0501 03:43:11.151570   69580 logs.go:276] 0 containers: []
	W0501 03:43:11.151580   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:11.151594   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:11.151656   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:11.194030   69580 cri.go:89] found id: ""
	I0501 03:43:11.194064   69580 logs.go:276] 0 containers: []
	W0501 03:43:11.194076   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:11.194083   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:11.194146   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:11.233010   69580 cri.go:89] found id: ""
	I0501 03:43:11.233045   69580 logs.go:276] 0 containers: []
	W0501 03:43:11.233056   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:11.233063   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:11.233117   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:11.270979   69580 cri.go:89] found id: ""
	I0501 03:43:11.271009   69580 logs.go:276] 0 containers: []
	W0501 03:43:11.271019   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:11.271026   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:11.271088   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:11.312338   69580 cri.go:89] found id: ""
	I0501 03:43:11.312369   69580 logs.go:276] 0 containers: []
	W0501 03:43:11.312381   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:11.312393   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:11.312408   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:11.364273   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:11.364307   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:11.418603   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:11.418634   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:11.433409   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:11.433438   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0501 03:43:08.349537   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:10.845566   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:12.846699   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:10.157910   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:12.657859   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:12.512669   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:15.013314   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	W0501 03:43:11.511243   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:11.511265   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:11.511280   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:14.089834   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:14.104337   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:14.104419   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:14.148799   69580 cri.go:89] found id: ""
	I0501 03:43:14.148826   69580 logs.go:276] 0 containers: []
	W0501 03:43:14.148833   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:14.148839   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:14.148904   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:14.191330   69580 cri.go:89] found id: ""
	I0501 03:43:14.191366   69580 logs.go:276] 0 containers: []
	W0501 03:43:14.191378   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:14.191386   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:14.191448   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:14.245978   69580 cri.go:89] found id: ""
	I0501 03:43:14.246010   69580 logs.go:276] 0 containers: []
	W0501 03:43:14.246018   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:14.246024   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:14.246093   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:14.287188   69580 cri.go:89] found id: ""
	I0501 03:43:14.287215   69580 logs.go:276] 0 containers: []
	W0501 03:43:14.287223   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:14.287228   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:14.287276   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:14.328060   69580 cri.go:89] found id: ""
	I0501 03:43:14.328093   69580 logs.go:276] 0 containers: []
	W0501 03:43:14.328104   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:14.328113   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:14.328179   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:14.370734   69580 cri.go:89] found id: ""
	I0501 03:43:14.370765   69580 logs.go:276] 0 containers: []
	W0501 03:43:14.370776   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:14.370783   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:14.370837   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:14.414690   69580 cri.go:89] found id: ""
	I0501 03:43:14.414713   69580 logs.go:276] 0 containers: []
	W0501 03:43:14.414721   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:14.414726   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:14.414790   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:14.459030   69580 cri.go:89] found id: ""
	I0501 03:43:14.459060   69580 logs.go:276] 0 containers: []
	W0501 03:43:14.459072   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:14.459083   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:14.459098   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:14.519728   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:14.519761   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:14.535841   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:14.535871   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:14.615203   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:14.615231   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:14.615249   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:14.707677   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:14.707725   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:15.345927   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:17.846732   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:14.657956   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:17.156935   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:17.512424   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:20.012471   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:17.254918   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:17.270643   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:17.270698   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:17.310692   69580 cri.go:89] found id: ""
	I0501 03:43:17.310724   69580 logs.go:276] 0 containers: []
	W0501 03:43:17.310732   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:17.310739   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:17.310806   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:17.349932   69580 cri.go:89] found id: ""
	I0501 03:43:17.349959   69580 logs.go:276] 0 containers: []
	W0501 03:43:17.349969   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:17.349976   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:17.350040   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:17.393073   69580 cri.go:89] found id: ""
	I0501 03:43:17.393099   69580 logs.go:276] 0 containers: []
	W0501 03:43:17.393109   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:17.393116   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:17.393176   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:17.429736   69580 cri.go:89] found id: ""
	I0501 03:43:17.429763   69580 logs.go:276] 0 containers: []
	W0501 03:43:17.429773   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:17.429787   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:17.429858   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:17.473052   69580 cri.go:89] found id: ""
	I0501 03:43:17.473085   69580 logs.go:276] 0 containers: []
	W0501 03:43:17.473097   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:17.473105   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:17.473168   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:17.514035   69580 cri.go:89] found id: ""
	I0501 03:43:17.514062   69580 logs.go:276] 0 containers: []
	W0501 03:43:17.514071   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:17.514078   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:17.514126   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:17.553197   69580 cri.go:89] found id: ""
	I0501 03:43:17.553225   69580 logs.go:276] 0 containers: []
	W0501 03:43:17.553234   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:17.553240   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:17.553300   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:17.592170   69580 cri.go:89] found id: ""
	I0501 03:43:17.592192   69580 logs.go:276] 0 containers: []
	W0501 03:43:17.592199   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:17.592208   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:17.592220   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:17.647549   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:17.647584   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:17.663084   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:17.663114   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:17.748357   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:17.748385   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:17.748401   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:17.832453   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:17.832491   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:20.375927   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:20.391840   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:20.391918   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:20.434158   69580 cri.go:89] found id: ""
	I0501 03:43:20.434185   69580 logs.go:276] 0 containers: []
	W0501 03:43:20.434193   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:20.434198   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:20.434254   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:20.477209   69580 cri.go:89] found id: ""
	I0501 03:43:20.477237   69580 logs.go:276] 0 containers: []
	W0501 03:43:20.477253   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:20.477259   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:20.477309   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:20.517227   69580 cri.go:89] found id: ""
	I0501 03:43:20.517260   69580 logs.go:276] 0 containers: []
	W0501 03:43:20.517270   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:20.517282   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:20.517340   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:20.555771   69580 cri.go:89] found id: ""
	I0501 03:43:20.555802   69580 logs.go:276] 0 containers: []
	W0501 03:43:20.555812   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:20.555820   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:20.555866   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:20.598177   69580 cri.go:89] found id: ""
	I0501 03:43:20.598200   69580 logs.go:276] 0 containers: []
	W0501 03:43:20.598213   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:20.598218   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:20.598326   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:20.637336   69580 cri.go:89] found id: ""
	I0501 03:43:20.637364   69580 logs.go:276] 0 containers: []
	W0501 03:43:20.637373   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:20.637378   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:20.637435   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:20.687736   69580 cri.go:89] found id: ""
	I0501 03:43:20.687761   69580 logs.go:276] 0 containers: []
	W0501 03:43:20.687768   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:20.687782   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:20.687840   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:20.726102   69580 cri.go:89] found id: ""
	I0501 03:43:20.726135   69580 logs.go:276] 0 containers: []
	W0501 03:43:20.726143   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:20.726154   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:20.726169   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:20.780874   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:20.780905   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:20.795798   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:20.795836   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:20.882337   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:20.882367   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:20.882381   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:20.962138   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:20.962188   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:20.345887   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:22.346061   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:19.157165   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:21.657358   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:22.015676   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:24.511682   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:23.512174   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:23.528344   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:23.528417   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:23.567182   69580 cri.go:89] found id: ""
	I0501 03:43:23.567212   69580 logs.go:276] 0 containers: []
	W0501 03:43:23.567222   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:23.567230   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:23.567291   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:23.607522   69580 cri.go:89] found id: ""
	I0501 03:43:23.607556   69580 logs.go:276] 0 containers: []
	W0501 03:43:23.607567   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:23.607574   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:23.607637   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:23.650932   69580 cri.go:89] found id: ""
	I0501 03:43:23.650959   69580 logs.go:276] 0 containers: []
	W0501 03:43:23.650970   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:23.650976   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:23.651035   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:23.695392   69580 cri.go:89] found id: ""
	I0501 03:43:23.695419   69580 logs.go:276] 0 containers: []
	W0501 03:43:23.695428   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:23.695436   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:23.695514   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:23.736577   69580 cri.go:89] found id: ""
	I0501 03:43:23.736607   69580 logs.go:276] 0 containers: []
	W0501 03:43:23.736619   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:23.736627   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:23.736685   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:23.776047   69580 cri.go:89] found id: ""
	I0501 03:43:23.776070   69580 logs.go:276] 0 containers: []
	W0501 03:43:23.776077   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:23.776082   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:23.776134   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:23.813896   69580 cri.go:89] found id: ""
	I0501 03:43:23.813934   69580 logs.go:276] 0 containers: []
	W0501 03:43:23.813943   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:23.813949   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:23.813997   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:23.858898   69580 cri.go:89] found id: ""
	I0501 03:43:23.858925   69580 logs.go:276] 0 containers: []
	W0501 03:43:23.858936   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:23.858947   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:23.858964   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:23.901796   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:23.901850   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:23.957009   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:23.957040   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:23.972811   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:23.972839   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:24.055535   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:24.055557   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:24.055576   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:24.845310   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:26.847397   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:24.157453   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:26.661073   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:27.012181   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:29.511387   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:26.640114   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:26.657217   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:26.657285   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:26.701191   69580 cri.go:89] found id: ""
	I0501 03:43:26.701218   69580 logs.go:276] 0 containers: []
	W0501 03:43:26.701227   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:26.701232   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:26.701287   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:26.740710   69580 cri.go:89] found id: ""
	I0501 03:43:26.740737   69580 logs.go:276] 0 containers: []
	W0501 03:43:26.740745   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:26.740750   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:26.740808   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:26.778682   69580 cri.go:89] found id: ""
	I0501 03:43:26.778710   69580 logs.go:276] 0 containers: []
	W0501 03:43:26.778724   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:26.778730   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:26.778789   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:26.822143   69580 cri.go:89] found id: ""
	I0501 03:43:26.822190   69580 logs.go:276] 0 containers: []
	W0501 03:43:26.822201   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:26.822209   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:26.822270   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:26.865938   69580 cri.go:89] found id: ""
	I0501 03:43:26.865976   69580 logs.go:276] 0 containers: []
	W0501 03:43:26.865988   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:26.865996   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:26.866058   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:26.914939   69580 cri.go:89] found id: ""
	I0501 03:43:26.914969   69580 logs.go:276] 0 containers: []
	W0501 03:43:26.914979   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:26.914986   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:26.915043   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:26.961822   69580 cri.go:89] found id: ""
	I0501 03:43:26.961850   69580 logs.go:276] 0 containers: []
	W0501 03:43:26.961860   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:26.961867   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:26.961920   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:27.005985   69580 cri.go:89] found id: ""
	I0501 03:43:27.006012   69580 logs.go:276] 0 containers: []
	W0501 03:43:27.006021   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:27.006032   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:27.006046   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:27.058265   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:27.058303   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:27.076270   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:27.076308   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:27.152627   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:27.152706   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:27.152728   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:27.229638   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:27.229678   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:29.775960   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:29.792849   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:29.792925   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:29.832508   69580 cri.go:89] found id: ""
	I0501 03:43:29.832537   69580 logs.go:276] 0 containers: []
	W0501 03:43:29.832551   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:29.832559   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:29.832617   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:29.873160   69580 cri.go:89] found id: ""
	I0501 03:43:29.873188   69580 logs.go:276] 0 containers: []
	W0501 03:43:29.873199   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:29.873207   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:29.873271   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:29.919431   69580 cri.go:89] found id: ""
	I0501 03:43:29.919459   69580 logs.go:276] 0 containers: []
	W0501 03:43:29.919468   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:29.919474   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:29.919533   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:29.967944   69580 cri.go:89] found id: ""
	I0501 03:43:29.967976   69580 logs.go:276] 0 containers: []
	W0501 03:43:29.967987   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:29.967995   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:29.968060   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:30.011626   69580 cri.go:89] found id: ""
	I0501 03:43:30.011657   69580 logs.go:276] 0 containers: []
	W0501 03:43:30.011669   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:30.011678   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:30.011743   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:30.051998   69580 cri.go:89] found id: ""
	I0501 03:43:30.052020   69580 logs.go:276] 0 containers: []
	W0501 03:43:30.052028   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:30.052034   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:30.052095   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:30.094140   69580 cri.go:89] found id: ""
	I0501 03:43:30.094164   69580 logs.go:276] 0 containers: []
	W0501 03:43:30.094172   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:30.094179   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:30.094253   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:30.132363   69580 cri.go:89] found id: ""
	I0501 03:43:30.132391   69580 logs.go:276] 0 containers: []
	W0501 03:43:30.132399   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:30.132411   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:30.132422   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:30.221368   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:30.221410   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:30.271279   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:30.271317   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:30.325549   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:30.325586   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:30.345337   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:30.345376   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:30.427552   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:29.347108   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:31.846435   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:29.156483   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:31.156871   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:33.157355   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:32.015498   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:34.511190   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:32.928667   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:32.945489   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:32.945557   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:32.989604   69580 cri.go:89] found id: ""
	I0501 03:43:32.989628   69580 logs.go:276] 0 containers: []
	W0501 03:43:32.989636   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:32.989642   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:32.989701   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:33.030862   69580 cri.go:89] found id: ""
	I0501 03:43:33.030892   69580 logs.go:276] 0 containers: []
	W0501 03:43:33.030903   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:33.030912   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:33.030977   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:33.079795   69580 cri.go:89] found id: ""
	I0501 03:43:33.079827   69580 logs.go:276] 0 containers: []
	W0501 03:43:33.079835   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:33.079841   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:33.079898   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:33.120612   69580 cri.go:89] found id: ""
	I0501 03:43:33.120636   69580 logs.go:276] 0 containers: []
	W0501 03:43:33.120644   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:33.120649   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:33.120694   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:33.161824   69580 cri.go:89] found id: ""
	I0501 03:43:33.161851   69580 logs.go:276] 0 containers: []
	W0501 03:43:33.161861   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:33.161868   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:33.161924   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:33.200068   69580 cri.go:89] found id: ""
	I0501 03:43:33.200098   69580 logs.go:276] 0 containers: []
	W0501 03:43:33.200107   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:33.200113   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:33.200175   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:33.239314   69580 cri.go:89] found id: ""
	I0501 03:43:33.239341   69580 logs.go:276] 0 containers: []
	W0501 03:43:33.239351   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:33.239359   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:33.239427   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:33.281381   69580 cri.go:89] found id: ""
	I0501 03:43:33.281408   69580 logs.go:276] 0 containers: []
	W0501 03:43:33.281419   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:33.281431   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:33.281447   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:33.297992   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:33.298047   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:33.383273   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:33.383292   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:33.383303   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:33.465256   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:33.465289   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:33.509593   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:33.509621   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:36.065074   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:36.081361   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:36.081429   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:36.130394   69580 cri.go:89] found id: ""
	I0501 03:43:36.130436   69580 logs.go:276] 0 containers: []
	W0501 03:43:36.130448   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:36.130456   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:36.130524   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:36.171013   69580 cri.go:89] found id: ""
	I0501 03:43:36.171038   69580 logs.go:276] 0 containers: []
	W0501 03:43:36.171046   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:36.171052   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:36.171099   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:36.215372   69580 cri.go:89] found id: ""
	I0501 03:43:36.215411   69580 logs.go:276] 0 containers: []
	W0501 03:43:36.215424   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:36.215431   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:36.215493   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:36.257177   69580 cri.go:89] found id: ""
	I0501 03:43:36.257204   69580 logs.go:276] 0 containers: []
	W0501 03:43:36.257216   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:36.257223   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:36.257293   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:36.299035   69580 cri.go:89] found id: ""
	I0501 03:43:36.299066   69580 logs.go:276] 0 containers: []
	W0501 03:43:36.299085   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:36.299094   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:36.299166   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:36.339060   69580 cri.go:89] found id: ""
	I0501 03:43:36.339087   69580 logs.go:276] 0 containers: []
	W0501 03:43:36.339097   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:36.339105   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:36.339163   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:36.379982   69580 cri.go:89] found id: ""
	I0501 03:43:36.380016   69580 logs.go:276] 0 containers: []
	W0501 03:43:36.380028   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:36.380037   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:36.380100   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:36.419702   69580 cri.go:89] found id: ""
	I0501 03:43:36.419734   69580 logs.go:276] 0 containers: []
	W0501 03:43:36.419746   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:36.419758   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:36.419780   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:33.846499   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:35.846579   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:37.852802   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:35.159724   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:37.657040   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:36.516601   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:39.012001   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:36.472553   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:36.472774   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:36.488402   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:36.488439   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:36.566390   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:36.566433   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:36.566446   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:36.643493   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:36.643527   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:39.199060   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:39.216612   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:39.216695   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:39.262557   69580 cri.go:89] found id: ""
	I0501 03:43:39.262581   69580 logs.go:276] 0 containers: []
	W0501 03:43:39.262589   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:39.262595   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:39.262642   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:39.331051   69580 cri.go:89] found id: ""
	I0501 03:43:39.331076   69580 logs.go:276] 0 containers: []
	W0501 03:43:39.331093   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:39.331098   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:39.331162   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:39.382033   69580 cri.go:89] found id: ""
	I0501 03:43:39.382058   69580 logs.go:276] 0 containers: []
	W0501 03:43:39.382066   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:39.382071   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:39.382122   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:39.424019   69580 cri.go:89] found id: ""
	I0501 03:43:39.424049   69580 logs.go:276] 0 containers: []
	W0501 03:43:39.424058   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:39.424064   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:39.424120   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:39.465787   69580 cri.go:89] found id: ""
	I0501 03:43:39.465833   69580 logs.go:276] 0 containers: []
	W0501 03:43:39.465846   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:39.465855   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:39.465916   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:39.507746   69580 cri.go:89] found id: ""
	I0501 03:43:39.507781   69580 logs.go:276] 0 containers: []
	W0501 03:43:39.507791   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:39.507798   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:39.507861   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:39.550737   69580 cri.go:89] found id: ""
	I0501 03:43:39.550768   69580 logs.go:276] 0 containers: []
	W0501 03:43:39.550775   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:39.550781   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:39.550831   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:39.592279   69580 cri.go:89] found id: ""
	I0501 03:43:39.592329   69580 logs.go:276] 0 containers: []
	W0501 03:43:39.592343   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:39.592356   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:39.592373   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:39.648858   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:39.648896   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:39.665316   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:39.665343   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:39.743611   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:39.743632   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:39.743646   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:39.829285   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:39.829322   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:40.347121   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:42.845466   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:39.657888   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:41.657976   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:41.512061   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:44.017693   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:42.374457   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:42.389944   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:42.390002   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:42.431270   69580 cri.go:89] found id: ""
	I0501 03:43:42.431294   69580 logs.go:276] 0 containers: []
	W0501 03:43:42.431302   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:42.431308   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:42.431366   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:42.470515   69580 cri.go:89] found id: ""
	I0501 03:43:42.470546   69580 logs.go:276] 0 containers: []
	W0501 03:43:42.470558   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:42.470566   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:42.470619   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:42.518472   69580 cri.go:89] found id: ""
	I0501 03:43:42.518494   69580 logs.go:276] 0 containers: []
	W0501 03:43:42.518501   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:42.518506   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:42.518555   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:42.562192   69580 cri.go:89] found id: ""
	I0501 03:43:42.562220   69580 logs.go:276] 0 containers: []
	W0501 03:43:42.562231   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:42.562239   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:42.562300   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:42.599372   69580 cri.go:89] found id: ""
	I0501 03:43:42.599403   69580 logs.go:276] 0 containers: []
	W0501 03:43:42.599414   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:42.599422   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:42.599483   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:42.636738   69580 cri.go:89] found id: ""
	I0501 03:43:42.636766   69580 logs.go:276] 0 containers: []
	W0501 03:43:42.636777   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:42.636786   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:42.636845   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:42.682087   69580 cri.go:89] found id: ""
	I0501 03:43:42.682115   69580 logs.go:276] 0 containers: []
	W0501 03:43:42.682125   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:42.682133   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:42.682198   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:42.724280   69580 cri.go:89] found id: ""
	I0501 03:43:42.724316   69580 logs.go:276] 0 containers: []
	W0501 03:43:42.724328   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:42.724340   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:42.724354   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:42.771667   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:42.771702   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:42.827390   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:42.827428   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:42.843452   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:42.843480   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:42.925544   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:42.925563   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:42.925577   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:45.515104   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:45.529545   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:45.529619   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:45.573451   69580 cri.go:89] found id: ""
	I0501 03:43:45.573475   69580 logs.go:276] 0 containers: []
	W0501 03:43:45.573483   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:45.573489   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:45.573536   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:45.613873   69580 cri.go:89] found id: ""
	I0501 03:43:45.613897   69580 logs.go:276] 0 containers: []
	W0501 03:43:45.613905   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:45.613910   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:45.613954   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:45.660195   69580 cri.go:89] found id: ""
	I0501 03:43:45.660215   69580 logs.go:276] 0 containers: []
	W0501 03:43:45.660221   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:45.660226   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:45.660284   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:45.703539   69580 cri.go:89] found id: ""
	I0501 03:43:45.703566   69580 logs.go:276] 0 containers: []
	W0501 03:43:45.703574   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:45.703580   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:45.703637   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:45.754635   69580 cri.go:89] found id: ""
	I0501 03:43:45.754659   69580 logs.go:276] 0 containers: []
	W0501 03:43:45.754668   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:45.754675   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:45.754738   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:45.800836   69580 cri.go:89] found id: ""
	I0501 03:43:45.800866   69580 logs.go:276] 0 containers: []
	W0501 03:43:45.800884   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:45.800892   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:45.800955   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:45.859057   69580 cri.go:89] found id: ""
	I0501 03:43:45.859084   69580 logs.go:276] 0 containers: []
	W0501 03:43:45.859092   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:45.859098   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:45.859145   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:45.913173   69580 cri.go:89] found id: ""
	I0501 03:43:45.913204   69580 logs.go:276] 0 containers: []
	W0501 03:43:45.913216   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:45.913227   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:45.913243   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:45.930050   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:45.930087   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:46.006047   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:46.006081   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:46.006097   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:46.086630   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:46.086666   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:46.134635   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:46.134660   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:45.347071   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:47.845983   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:44.157143   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:46.157880   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:48.656747   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:46.510981   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:48.512854   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:48.690330   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:48.705024   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:48.705093   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:48.750244   69580 cri.go:89] found id: ""
	I0501 03:43:48.750278   69580 logs.go:276] 0 containers: []
	W0501 03:43:48.750299   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:48.750307   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:48.750377   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:48.791231   69580 cri.go:89] found id: ""
	I0501 03:43:48.791264   69580 logs.go:276] 0 containers: []
	W0501 03:43:48.791276   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:48.791283   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:48.791348   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:48.834692   69580 cri.go:89] found id: ""
	I0501 03:43:48.834720   69580 logs.go:276] 0 containers: []
	W0501 03:43:48.834731   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:48.834739   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:48.834809   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:48.877383   69580 cri.go:89] found id: ""
	I0501 03:43:48.877415   69580 logs.go:276] 0 containers: []
	W0501 03:43:48.877424   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:48.877430   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:48.877479   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:48.919728   69580 cri.go:89] found id: ""
	I0501 03:43:48.919756   69580 logs.go:276] 0 containers: []
	W0501 03:43:48.919767   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:48.919775   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:48.919836   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:48.962090   69580 cri.go:89] found id: ""
	I0501 03:43:48.962122   69580 logs.go:276] 0 containers: []
	W0501 03:43:48.962137   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:48.962144   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:48.962205   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:48.998456   69580 cri.go:89] found id: ""
	I0501 03:43:48.998487   69580 logs.go:276] 0 containers: []
	W0501 03:43:48.998498   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:48.998506   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:48.998566   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:49.042591   69580 cri.go:89] found id: ""
	I0501 03:43:49.042623   69580 logs.go:276] 0 containers: []
	W0501 03:43:49.042633   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:49.042645   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:49.042661   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:49.088533   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:49.088571   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:49.145252   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:49.145288   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:49.163093   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:49.163120   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:49.240805   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:49.240831   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:49.240844   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:49.848864   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:52.347128   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:50.656790   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:52.658130   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:51.011713   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:53.510598   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:55.512900   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:51.825530   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:51.839596   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:51.839669   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:51.879493   69580 cri.go:89] found id: ""
	I0501 03:43:51.879516   69580 logs.go:276] 0 containers: []
	W0501 03:43:51.879524   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:51.879530   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:51.879585   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:51.921577   69580 cri.go:89] found id: ""
	I0501 03:43:51.921608   69580 logs.go:276] 0 containers: []
	W0501 03:43:51.921620   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:51.921627   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:51.921693   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:51.961000   69580 cri.go:89] found id: ""
	I0501 03:43:51.961028   69580 logs.go:276] 0 containers: []
	W0501 03:43:51.961037   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:51.961043   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:51.961103   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:52.006087   69580 cri.go:89] found id: ""
	I0501 03:43:52.006118   69580 logs.go:276] 0 containers: []
	W0501 03:43:52.006129   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:52.006137   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:52.006201   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:52.047196   69580 cri.go:89] found id: ""
	I0501 03:43:52.047228   69580 logs.go:276] 0 containers: []
	W0501 03:43:52.047239   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:52.047250   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:52.047319   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:52.086380   69580 cri.go:89] found id: ""
	I0501 03:43:52.086423   69580 logs.go:276] 0 containers: []
	W0501 03:43:52.086434   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:52.086442   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:52.086499   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:52.128824   69580 cri.go:89] found id: ""
	I0501 03:43:52.128851   69580 logs.go:276] 0 containers: []
	W0501 03:43:52.128861   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:52.128868   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:52.128933   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:52.168743   69580 cri.go:89] found id: ""
	I0501 03:43:52.168769   69580 logs.go:276] 0 containers: []
	W0501 03:43:52.168776   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:52.168788   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:52.168802   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:52.184391   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:52.184419   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:52.268330   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:52.268368   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:52.268386   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:52.350556   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:52.350586   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:52.395930   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:52.395967   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:54.952879   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:54.968440   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:54.968517   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:55.008027   69580 cri.go:89] found id: ""
	I0501 03:43:55.008056   69580 logs.go:276] 0 containers: []
	W0501 03:43:55.008067   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:55.008074   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:55.008137   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:55.048848   69580 cri.go:89] found id: ""
	I0501 03:43:55.048869   69580 logs.go:276] 0 containers: []
	W0501 03:43:55.048877   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:55.048882   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:55.048931   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:55.085886   69580 cri.go:89] found id: ""
	I0501 03:43:55.085910   69580 logs.go:276] 0 containers: []
	W0501 03:43:55.085919   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:55.085924   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:55.085971   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:55.119542   69580 cri.go:89] found id: ""
	I0501 03:43:55.119567   69580 logs.go:276] 0 containers: []
	W0501 03:43:55.119574   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:55.119580   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:55.119636   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:55.158327   69580 cri.go:89] found id: ""
	I0501 03:43:55.158357   69580 logs.go:276] 0 containers: []
	W0501 03:43:55.158367   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:55.158374   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:55.158449   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:55.200061   69580 cri.go:89] found id: ""
	I0501 03:43:55.200085   69580 logs.go:276] 0 containers: []
	W0501 03:43:55.200093   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:55.200100   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:55.200146   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:55.239446   69580 cri.go:89] found id: ""
	I0501 03:43:55.239476   69580 logs.go:276] 0 containers: []
	W0501 03:43:55.239487   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:55.239493   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:55.239557   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:55.275593   69580 cri.go:89] found id: ""
	I0501 03:43:55.275623   69580 logs.go:276] 0 containers: []
	W0501 03:43:55.275635   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:55.275646   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:55.275662   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:55.356701   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:55.356724   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:55.356740   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:55.437445   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:55.437483   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:55.489024   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:55.489051   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:55.548083   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:55.548114   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:54.845529   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:57.348771   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:55.158591   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:57.657361   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:58.010099   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:00.010511   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:58.067063   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:58.080485   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:58.080539   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:58.121459   69580 cri.go:89] found id: ""
	I0501 03:43:58.121488   69580 logs.go:276] 0 containers: []
	W0501 03:43:58.121498   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:58.121505   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:58.121562   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:58.161445   69580 cri.go:89] found id: ""
	I0501 03:43:58.161479   69580 logs.go:276] 0 containers: []
	W0501 03:43:58.161489   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:58.161499   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:58.161560   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:58.203216   69580 cri.go:89] found id: ""
	I0501 03:43:58.203238   69580 logs.go:276] 0 containers: []
	W0501 03:43:58.203246   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:58.203251   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:58.203297   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:58.239496   69580 cri.go:89] found id: ""
	I0501 03:43:58.239526   69580 logs.go:276] 0 containers: []
	W0501 03:43:58.239538   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:58.239546   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:58.239605   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:58.280331   69580 cri.go:89] found id: ""
	I0501 03:43:58.280359   69580 logs.go:276] 0 containers: []
	W0501 03:43:58.280370   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:58.280378   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:58.280438   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:58.318604   69580 cri.go:89] found id: ""
	I0501 03:43:58.318634   69580 logs.go:276] 0 containers: []
	W0501 03:43:58.318646   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:58.318653   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:58.318712   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:58.359360   69580 cri.go:89] found id: ""
	I0501 03:43:58.359383   69580 logs.go:276] 0 containers: []
	W0501 03:43:58.359392   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:58.359398   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:58.359446   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:58.401172   69580 cri.go:89] found id: ""
	I0501 03:43:58.401202   69580 logs.go:276] 0 containers: []
	W0501 03:43:58.401211   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:58.401220   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:58.401232   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:58.416877   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:58.416907   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:58.489812   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:58.489835   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:58.489849   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:58.574971   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:58.575004   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:58.619526   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:58.619557   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:01.173759   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:01.187838   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:01.187922   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:01.227322   69580 cri.go:89] found id: ""
	I0501 03:44:01.227355   69580 logs.go:276] 0 containers: []
	W0501 03:44:01.227366   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:01.227372   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:01.227432   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:01.268418   69580 cri.go:89] found id: ""
	I0501 03:44:01.268453   69580 logs.go:276] 0 containers: []
	W0501 03:44:01.268465   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:01.268472   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:01.268530   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:01.314641   69580 cri.go:89] found id: ""
	I0501 03:44:01.314667   69580 logs.go:276] 0 containers: []
	W0501 03:44:01.314675   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:01.314681   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:01.314739   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:01.361237   69580 cri.go:89] found id: ""
	I0501 03:44:01.361272   69580 logs.go:276] 0 containers: []
	W0501 03:44:01.361288   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:01.361294   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:01.361348   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:01.400650   69580 cri.go:89] found id: ""
	I0501 03:44:01.400676   69580 logs.go:276] 0 containers: []
	W0501 03:44:01.400684   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:01.400690   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:01.400739   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:01.447998   69580 cri.go:89] found id: ""
	I0501 03:44:01.448023   69580 logs.go:276] 0 containers: []
	W0501 03:44:01.448032   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:01.448040   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:01.448101   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:59.845726   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:02.345826   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:00.155851   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:02.155998   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:02.010828   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:04.014801   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:01.492172   69580 cri.go:89] found id: ""
	I0501 03:44:01.492199   69580 logs.go:276] 0 containers: []
	W0501 03:44:01.492207   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:01.492213   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:01.492265   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:01.538589   69580 cri.go:89] found id: ""
	I0501 03:44:01.538617   69580 logs.go:276] 0 containers: []
	W0501 03:44:01.538628   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:01.538638   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:01.538653   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:01.592914   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:01.592952   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:01.611706   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:01.611754   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:01.693469   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:01.693488   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:01.693501   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:01.774433   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:01.774470   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:04.321593   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:04.335428   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:04.335497   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:04.378479   69580 cri.go:89] found id: ""
	I0501 03:44:04.378505   69580 logs.go:276] 0 containers: []
	W0501 03:44:04.378516   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:04.378525   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:04.378585   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:04.420025   69580 cri.go:89] found id: ""
	I0501 03:44:04.420050   69580 logs.go:276] 0 containers: []
	W0501 03:44:04.420059   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:04.420065   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:04.420113   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:04.464009   69580 cri.go:89] found id: ""
	I0501 03:44:04.464039   69580 logs.go:276] 0 containers: []
	W0501 03:44:04.464047   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:04.464052   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:04.464113   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:04.502039   69580 cri.go:89] found id: ""
	I0501 03:44:04.502069   69580 logs.go:276] 0 containers: []
	W0501 03:44:04.502081   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:04.502088   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:04.502150   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:04.544566   69580 cri.go:89] found id: ""
	I0501 03:44:04.544593   69580 logs.go:276] 0 containers: []
	W0501 03:44:04.544605   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:04.544614   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:04.544672   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:04.584067   69580 cri.go:89] found id: ""
	I0501 03:44:04.584095   69580 logs.go:276] 0 containers: []
	W0501 03:44:04.584104   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:04.584112   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:04.584174   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:04.625165   69580 cri.go:89] found id: ""
	I0501 03:44:04.625197   69580 logs.go:276] 0 containers: []
	W0501 03:44:04.625210   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:04.625219   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:04.625292   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:04.667796   69580 cri.go:89] found id: ""
	I0501 03:44:04.667830   69580 logs.go:276] 0 containers: []
	W0501 03:44:04.667839   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:04.667850   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:04.667868   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:04.722269   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:04.722303   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:04.738232   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:04.738265   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:04.821551   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:04.821578   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:04.821595   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:04.902575   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:04.902618   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:04.346197   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:06.845552   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:04.157333   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:06.157366   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:08.656837   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:06.513484   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:09.012004   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:07.449793   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:07.466348   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:07.466450   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:07.510325   69580 cri.go:89] found id: ""
	I0501 03:44:07.510352   69580 logs.go:276] 0 containers: []
	W0501 03:44:07.510363   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:07.510371   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:07.510450   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:07.550722   69580 cri.go:89] found id: ""
	I0501 03:44:07.550748   69580 logs.go:276] 0 containers: []
	W0501 03:44:07.550756   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:07.550762   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:07.550810   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:07.589592   69580 cri.go:89] found id: ""
	I0501 03:44:07.589617   69580 logs.go:276] 0 containers: []
	W0501 03:44:07.589625   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:07.589630   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:07.589678   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:07.631628   69580 cri.go:89] found id: ""
	I0501 03:44:07.631655   69580 logs.go:276] 0 containers: []
	W0501 03:44:07.631662   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:07.631668   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:07.631726   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:07.674709   69580 cri.go:89] found id: ""
	I0501 03:44:07.674743   69580 logs.go:276] 0 containers: []
	W0501 03:44:07.674753   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:07.674760   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:07.674811   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:07.714700   69580 cri.go:89] found id: ""
	I0501 03:44:07.714767   69580 logs.go:276] 0 containers: []
	W0501 03:44:07.714788   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:07.714797   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:07.714856   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:07.753440   69580 cri.go:89] found id: ""
	I0501 03:44:07.753467   69580 logs.go:276] 0 containers: []
	W0501 03:44:07.753478   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:07.753485   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:07.753549   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:07.791579   69580 cri.go:89] found id: ""
	I0501 03:44:07.791606   69580 logs.go:276] 0 containers: []
	W0501 03:44:07.791617   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:07.791628   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:07.791644   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:07.845568   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:07.845606   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:07.861861   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:07.861885   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:07.941719   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:07.941743   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:07.941757   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:08.022684   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:08.022720   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:10.575417   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:10.593408   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:10.593468   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:10.641322   69580 cri.go:89] found id: ""
	I0501 03:44:10.641357   69580 logs.go:276] 0 containers: []
	W0501 03:44:10.641370   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:10.641378   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:10.641442   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:10.686330   69580 cri.go:89] found id: ""
	I0501 03:44:10.686358   69580 logs.go:276] 0 containers: []
	W0501 03:44:10.686368   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:10.686377   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:10.686458   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:10.734414   69580 cri.go:89] found id: ""
	I0501 03:44:10.734444   69580 logs.go:276] 0 containers: []
	W0501 03:44:10.734456   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:10.734463   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:10.734527   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:10.776063   69580 cri.go:89] found id: ""
	I0501 03:44:10.776095   69580 logs.go:276] 0 containers: []
	W0501 03:44:10.776106   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:10.776113   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:10.776176   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:10.819035   69580 cri.go:89] found id: ""
	I0501 03:44:10.819065   69580 logs.go:276] 0 containers: []
	W0501 03:44:10.819076   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:10.819084   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:10.819150   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:10.868912   69580 cri.go:89] found id: ""
	I0501 03:44:10.868938   69580 logs.go:276] 0 containers: []
	W0501 03:44:10.868946   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:10.868952   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:10.869000   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:10.910517   69580 cri.go:89] found id: ""
	I0501 03:44:10.910549   69580 logs.go:276] 0 containers: []
	W0501 03:44:10.910572   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:10.910581   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:10.910678   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:10.949267   69580 cri.go:89] found id: ""
	I0501 03:44:10.949297   69580 logs.go:276] 0 containers: []
	W0501 03:44:10.949306   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:10.949314   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:10.949327   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:11.004731   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:11.004779   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:11.022146   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:11.022174   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:11.108992   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:11.109020   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:11.109035   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:11.192571   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:11.192605   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:08.846431   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:11.346295   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:10.657938   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:13.156112   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:11.012040   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:13.512166   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:15.512232   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:13.739336   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:13.758622   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:13.758721   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:13.805395   69580 cri.go:89] found id: ""
	I0501 03:44:13.805423   69580 logs.go:276] 0 containers: []
	W0501 03:44:13.805434   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:13.805442   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:13.805523   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:13.847372   69580 cri.go:89] found id: ""
	I0501 03:44:13.847400   69580 logs.go:276] 0 containers: []
	W0501 03:44:13.847409   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:13.847417   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:13.847474   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:13.891842   69580 cri.go:89] found id: ""
	I0501 03:44:13.891867   69580 logs.go:276] 0 containers: []
	W0501 03:44:13.891874   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:13.891880   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:13.891935   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:13.933382   69580 cri.go:89] found id: ""
	I0501 03:44:13.933411   69580 logs.go:276] 0 containers: []
	W0501 03:44:13.933422   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:13.933430   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:13.933490   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:13.973955   69580 cri.go:89] found id: ""
	I0501 03:44:13.973980   69580 logs.go:276] 0 containers: []
	W0501 03:44:13.973991   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:13.974000   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:13.974053   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:14.015202   69580 cri.go:89] found id: ""
	I0501 03:44:14.015226   69580 logs.go:276] 0 containers: []
	W0501 03:44:14.015234   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:14.015240   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:14.015287   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:14.057441   69580 cri.go:89] found id: ""
	I0501 03:44:14.057471   69580 logs.go:276] 0 containers: []
	W0501 03:44:14.057483   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:14.057491   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:14.057551   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:14.099932   69580 cri.go:89] found id: ""
	I0501 03:44:14.099961   69580 logs.go:276] 0 containers: []
	W0501 03:44:14.099972   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:14.099983   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:14.099996   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:14.160386   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:14.160418   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:14.176880   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:14.176908   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:14.272137   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:14.272155   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:14.272168   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:14.366523   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:14.366571   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:13.349770   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:15.351345   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:17.845182   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:15.156569   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:17.157994   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:17.512836   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:20.012034   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:16.914394   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:16.930976   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:16.931038   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:16.977265   69580 cri.go:89] found id: ""
	I0501 03:44:16.977294   69580 logs.go:276] 0 containers: []
	W0501 03:44:16.977303   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:16.977309   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:16.977363   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:17.015656   69580 cri.go:89] found id: ""
	I0501 03:44:17.015686   69580 logs.go:276] 0 containers: []
	W0501 03:44:17.015694   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:17.015700   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:17.015768   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:17.056079   69580 cri.go:89] found id: ""
	I0501 03:44:17.056111   69580 logs.go:276] 0 containers: []
	W0501 03:44:17.056121   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:17.056129   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:17.056188   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:17.099504   69580 cri.go:89] found id: ""
	I0501 03:44:17.099528   69580 logs.go:276] 0 containers: []
	W0501 03:44:17.099536   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:17.099542   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:17.099606   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:17.141371   69580 cri.go:89] found id: ""
	I0501 03:44:17.141401   69580 logs.go:276] 0 containers: []
	W0501 03:44:17.141410   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:17.141417   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:17.141484   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:17.184143   69580 cri.go:89] found id: ""
	I0501 03:44:17.184167   69580 logs.go:276] 0 containers: []
	W0501 03:44:17.184179   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:17.184193   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:17.184246   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:17.224012   69580 cri.go:89] found id: ""
	I0501 03:44:17.224049   69580 logs.go:276] 0 containers: []
	W0501 03:44:17.224061   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:17.224069   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:17.224136   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:17.268185   69580 cri.go:89] found id: ""
	I0501 03:44:17.268216   69580 logs.go:276] 0 containers: []
	W0501 03:44:17.268224   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:17.268233   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:17.268248   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:17.351342   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:17.351392   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:17.398658   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:17.398689   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:17.452476   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:17.452517   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:17.468734   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:17.468771   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:17.558971   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:20.059342   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:20.075707   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:20.075791   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:20.114436   69580 cri.go:89] found id: ""
	I0501 03:44:20.114472   69580 logs.go:276] 0 containers: []
	W0501 03:44:20.114486   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:20.114495   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:20.114562   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:20.155607   69580 cri.go:89] found id: ""
	I0501 03:44:20.155638   69580 logs.go:276] 0 containers: []
	W0501 03:44:20.155649   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:20.155657   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:20.155715   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:20.198188   69580 cri.go:89] found id: ""
	I0501 03:44:20.198218   69580 logs.go:276] 0 containers: []
	W0501 03:44:20.198227   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:20.198234   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:20.198291   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:20.237183   69580 cri.go:89] found id: ""
	I0501 03:44:20.237213   69580 logs.go:276] 0 containers: []
	W0501 03:44:20.237223   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:20.237232   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:20.237286   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:20.279289   69580 cri.go:89] found id: ""
	I0501 03:44:20.279320   69580 logs.go:276] 0 containers: []
	W0501 03:44:20.279332   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:20.279341   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:20.279409   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:20.334066   69580 cri.go:89] found id: ""
	I0501 03:44:20.334091   69580 logs.go:276] 0 containers: []
	W0501 03:44:20.334112   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:20.334121   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:20.334181   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:20.385740   69580 cri.go:89] found id: ""
	I0501 03:44:20.385775   69580 logs.go:276] 0 containers: []
	W0501 03:44:20.385785   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:20.385796   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:20.385860   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:20.425151   69580 cri.go:89] found id: ""
	I0501 03:44:20.425176   69580 logs.go:276] 0 containers: []
	W0501 03:44:20.425183   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:20.425193   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:20.425214   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:20.472563   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:20.472605   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:20.526589   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:20.526626   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:20.541978   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:20.542013   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:20.619513   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:20.619540   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:20.619555   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:19.846208   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:22.345166   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:19.658986   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:22.156821   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:23.159267   68864 pod_ready.go:81] duration metric: took 4m0.009511824s for pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace to be "Ready" ...
	E0501 03:44:23.159296   68864 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0501 03:44:23.159308   68864 pod_ready.go:38] duration metric: took 4m7.423794373s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 03:44:23.159327   68864 api_server.go:52] waiting for apiserver process to appear ...
	I0501 03:44:23.159362   68864 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:23.159422   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:23.225563   68864 cri.go:89] found id: "a96815c49ac45b34c359d7171b30be5c25cee80fae08d947fc004d479693df00"
	I0501 03:44:23.225590   68864 cri.go:89] found id: ""
	I0501 03:44:23.225607   68864 logs.go:276] 1 containers: [a96815c49ac45b34c359d7171b30be5c25cee80fae08d947fc004d479693df00]
	I0501 03:44:23.225663   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:23.231542   68864 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:23.231598   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:23.290847   68864 cri.go:89] found id: "d109948ffbbddd2e38484d83d2260690afce3c51e16abff0fe8713e844bfdcb3"
	I0501 03:44:23.290871   68864 cri.go:89] found id: ""
	I0501 03:44:23.290878   68864 logs.go:276] 1 containers: [d109948ffbbddd2e38484d83d2260690afce3c51e16abff0fe8713e844bfdcb3]
	I0501 03:44:23.290926   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:23.295697   68864 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:23.295755   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:23.348625   68864 cri.go:89] found id: "e3c74de489af3d78a4b0cf620ed16e1fba7c9ce1c7021a15c5b4bd241b7a934a"
	I0501 03:44:23.348652   68864 cri.go:89] found id: ""
	I0501 03:44:23.348661   68864 logs.go:276] 1 containers: [e3c74de489af3d78a4b0cf620ed16e1fba7c9ce1c7021a15c5b4bd241b7a934a]
	I0501 03:44:23.348717   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:23.355801   68864 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:23.355896   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:23.409428   68864 cri.go:89] found id: "1813f35574f4f0398e7d482fe77c5d7f8490726666a277035bda0a5cff80af6e"
	I0501 03:44:23.409461   68864 cri.go:89] found id: ""
	I0501 03:44:23.409471   68864 logs.go:276] 1 containers: [1813f35574f4f0398e7d482fe77c5d7f8490726666a277035bda0a5cff80af6e]
	I0501 03:44:23.409530   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:23.416480   68864 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:23.416560   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:23.466642   68864 cri.go:89] found id: "94afdb03c382269209154b2e73edbf200662e89960c248ba775a0ef2b8fbe6b1"
	I0501 03:44:23.466672   68864 cri.go:89] found id: ""
	I0501 03:44:23.466681   68864 logs.go:276] 1 containers: [94afdb03c382269209154b2e73edbf200662e89960c248ba775a0ef2b8fbe6b1]
	I0501 03:44:23.466739   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:23.472831   68864 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:23.472906   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:23.524815   68864 cri.go:89] found id: "7e7158f7ff3922e97ba5ee97108acc1d0ba0350dff2f9aa56c2f71519108791c"
	I0501 03:44:23.524841   68864 cri.go:89] found id: ""
	I0501 03:44:23.524850   68864 logs.go:276] 1 containers: [7e7158f7ff3922e97ba5ee97108acc1d0ba0350dff2f9aa56c2f71519108791c]
	I0501 03:44:23.524902   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:23.532092   68864 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:23.532161   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:23.577262   68864 cri.go:89] found id: ""
	I0501 03:44:23.577292   68864 logs.go:276] 0 containers: []
	W0501 03:44:23.577305   68864 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:23.577312   68864 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0501 03:44:23.577374   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0501 03:44:23.623597   68864 cri.go:89] found id: "f9a8d2f0f9453969b410fb7f880f6cf4c7e6e2f3c17a687583906efe4181dd17"
	I0501 03:44:23.623626   68864 cri.go:89] found id: "aaae36261c5ce1accf3bfb5993be5a256a037a640d5f6f30de8384900dda06b2"
	I0501 03:44:23.623632   68864 cri.go:89] found id: ""
	I0501 03:44:23.623640   68864 logs.go:276] 2 containers: [f9a8d2f0f9453969b410fb7f880f6cf4c7e6e2f3c17a687583906efe4181dd17 aaae36261c5ce1accf3bfb5993be5a256a037a640d5f6f30de8384900dda06b2]
	I0501 03:44:23.623702   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:23.630189   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:23.635673   68864 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:23.635694   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:22.012084   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:24.511736   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:23.203031   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:23.219964   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:23.220043   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:23.264287   69580 cri.go:89] found id: ""
	I0501 03:44:23.264315   69580 logs.go:276] 0 containers: []
	W0501 03:44:23.264323   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:23.264328   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:23.264395   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:23.310337   69580 cri.go:89] found id: ""
	I0501 03:44:23.310366   69580 logs.go:276] 0 containers: []
	W0501 03:44:23.310375   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:23.310383   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:23.310461   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:23.364550   69580 cri.go:89] found id: ""
	I0501 03:44:23.364577   69580 logs.go:276] 0 containers: []
	W0501 03:44:23.364588   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:23.364596   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:23.364676   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:23.412620   69580 cri.go:89] found id: ""
	I0501 03:44:23.412647   69580 logs.go:276] 0 containers: []
	W0501 03:44:23.412657   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:23.412665   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:23.412726   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:23.461447   69580 cri.go:89] found id: ""
	I0501 03:44:23.461477   69580 logs.go:276] 0 containers: []
	W0501 03:44:23.461488   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:23.461496   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:23.461558   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:23.514868   69580 cri.go:89] found id: ""
	I0501 03:44:23.514896   69580 logs.go:276] 0 containers: []
	W0501 03:44:23.514915   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:23.514924   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:23.514984   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:23.559171   69580 cri.go:89] found id: ""
	I0501 03:44:23.559200   69580 logs.go:276] 0 containers: []
	W0501 03:44:23.559210   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:23.559218   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:23.559284   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:23.601713   69580 cri.go:89] found id: ""
	I0501 03:44:23.601740   69580 logs.go:276] 0 containers: []
	W0501 03:44:23.601749   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:23.601760   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:23.601772   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:23.656147   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:23.656187   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:23.673507   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:23.673545   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:23.771824   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:23.771846   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:23.771861   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:23.861128   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:23.861161   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:26.406507   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:26.421836   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:26.421894   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:26.462758   69580 cri.go:89] found id: ""
	I0501 03:44:26.462785   69580 logs.go:276] 0 containers: []
	W0501 03:44:26.462796   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:26.462804   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:26.462860   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:24.346534   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:26.847370   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:24.220047   68864 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:24.220087   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:24.279596   68864 logs.go:123] Gathering logs for kube-apiserver [a96815c49ac45b34c359d7171b30be5c25cee80fae08d947fc004d479693df00] ...
	I0501 03:44:24.279633   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a96815c49ac45b34c359d7171b30be5c25cee80fae08d947fc004d479693df00"
	I0501 03:44:24.336092   68864 logs.go:123] Gathering logs for etcd [d109948ffbbddd2e38484d83d2260690afce3c51e16abff0fe8713e844bfdcb3] ...
	I0501 03:44:24.336128   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d109948ffbbddd2e38484d83d2260690afce3c51e16abff0fe8713e844bfdcb3"
	I0501 03:44:24.396117   68864 logs.go:123] Gathering logs for storage-provisioner [f9a8d2f0f9453969b410fb7f880f6cf4c7e6e2f3c17a687583906efe4181dd17] ...
	I0501 03:44:24.396145   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9a8d2f0f9453969b410fb7f880f6cf4c7e6e2f3c17a687583906efe4181dd17"
	I0501 03:44:24.443608   68864 logs.go:123] Gathering logs for storage-provisioner [aaae36261c5ce1accf3bfb5993be5a256a037a640d5f6f30de8384900dda06b2] ...
	I0501 03:44:24.443644   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aaae36261c5ce1accf3bfb5993be5a256a037a640d5f6f30de8384900dda06b2"
	I0501 03:44:24.499533   68864 logs.go:123] Gathering logs for kube-controller-manager [7e7158f7ff3922e97ba5ee97108acc1d0ba0350dff2f9aa56c2f71519108791c] ...
	I0501 03:44:24.499560   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e7158f7ff3922e97ba5ee97108acc1d0ba0350dff2f9aa56c2f71519108791c"
	I0501 03:44:24.562990   68864 logs.go:123] Gathering logs for container status ...
	I0501 03:44:24.563028   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:24.622630   68864 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:24.622671   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:24.641106   68864 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:24.641145   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0501 03:44:24.781170   68864 logs.go:123] Gathering logs for coredns [e3c74de489af3d78a4b0cf620ed16e1fba7c9ce1c7021a15c5b4bd241b7a934a] ...
	I0501 03:44:24.781203   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3c74de489af3d78a4b0cf620ed16e1fba7c9ce1c7021a15c5b4bd241b7a934a"
	I0501 03:44:24.824616   68864 logs.go:123] Gathering logs for kube-scheduler [1813f35574f4f0398e7d482fe77c5d7f8490726666a277035bda0a5cff80af6e] ...
	I0501 03:44:24.824643   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1813f35574f4f0398e7d482fe77c5d7f8490726666a277035bda0a5cff80af6e"
	I0501 03:44:24.871956   68864 logs.go:123] Gathering logs for kube-proxy [94afdb03c382269209154b2e73edbf200662e89960c248ba775a0ef2b8fbe6b1] ...
	I0501 03:44:24.871992   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94afdb03c382269209154b2e73edbf200662e89960c248ba775a0ef2b8fbe6b1"
	I0501 03:44:27.424582   68864 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:27.447490   68864 api_server.go:72] duration metric: took 4m19.445111196s to wait for apiserver process to appear ...
	I0501 03:44:27.447522   68864 api_server.go:88] waiting for apiserver healthz status ...
	I0501 03:44:27.447555   68864 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:27.447601   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:27.494412   68864 cri.go:89] found id: "a96815c49ac45b34c359d7171b30be5c25cee80fae08d947fc004d479693df00"
	I0501 03:44:27.494437   68864 cri.go:89] found id: ""
	I0501 03:44:27.494445   68864 logs.go:276] 1 containers: [a96815c49ac45b34c359d7171b30be5c25cee80fae08d947fc004d479693df00]
	I0501 03:44:27.494490   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:27.503782   68864 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:27.503853   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:27.550991   68864 cri.go:89] found id: "d109948ffbbddd2e38484d83d2260690afce3c51e16abff0fe8713e844bfdcb3"
	I0501 03:44:27.551018   68864 cri.go:89] found id: ""
	I0501 03:44:27.551026   68864 logs.go:276] 1 containers: [d109948ffbbddd2e38484d83d2260690afce3c51e16abff0fe8713e844bfdcb3]
	I0501 03:44:27.551073   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:27.556919   68864 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:27.556983   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:27.606005   68864 cri.go:89] found id: "e3c74de489af3d78a4b0cf620ed16e1fba7c9ce1c7021a15c5b4bd241b7a934a"
	I0501 03:44:27.606033   68864 cri.go:89] found id: ""
	I0501 03:44:27.606042   68864 logs.go:276] 1 containers: [e3c74de489af3d78a4b0cf620ed16e1fba7c9ce1c7021a15c5b4bd241b7a934a]
	I0501 03:44:27.606100   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:27.611639   68864 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:27.611706   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:27.661151   68864 cri.go:89] found id: "1813f35574f4f0398e7d482fe77c5d7f8490726666a277035bda0a5cff80af6e"
	I0501 03:44:27.661172   68864 cri.go:89] found id: ""
	I0501 03:44:27.661179   68864 logs.go:276] 1 containers: [1813f35574f4f0398e7d482fe77c5d7f8490726666a277035bda0a5cff80af6e]
	I0501 03:44:27.661278   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:27.666443   68864 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:27.666514   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:27.712387   68864 cri.go:89] found id: "94afdb03c382269209154b2e73edbf200662e89960c248ba775a0ef2b8fbe6b1"
	I0501 03:44:27.712416   68864 cri.go:89] found id: ""
	I0501 03:44:27.712424   68864 logs.go:276] 1 containers: [94afdb03c382269209154b2e73edbf200662e89960c248ba775a0ef2b8fbe6b1]
	I0501 03:44:27.712480   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:27.717280   68864 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:27.717342   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:27.767124   68864 cri.go:89] found id: "7e7158f7ff3922e97ba5ee97108acc1d0ba0350dff2f9aa56c2f71519108791c"
	I0501 03:44:27.767154   68864 cri.go:89] found id: ""
	I0501 03:44:27.767163   68864 logs.go:276] 1 containers: [7e7158f7ff3922e97ba5ee97108acc1d0ba0350dff2f9aa56c2f71519108791c]
	I0501 03:44:27.767215   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:27.773112   68864 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:27.773183   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:27.829966   68864 cri.go:89] found id: ""
	I0501 03:44:27.829991   68864 logs.go:276] 0 containers: []
	W0501 03:44:27.829999   68864 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:27.830005   68864 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0501 03:44:27.830056   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0501 03:44:27.873391   68864 cri.go:89] found id: "f9a8d2f0f9453969b410fb7f880f6cf4c7e6e2f3c17a687583906efe4181dd17"
	I0501 03:44:27.873415   68864 cri.go:89] found id: "aaae36261c5ce1accf3bfb5993be5a256a037a640d5f6f30de8384900dda06b2"
	I0501 03:44:27.873419   68864 cri.go:89] found id: ""
	I0501 03:44:27.873426   68864 logs.go:276] 2 containers: [f9a8d2f0f9453969b410fb7f880f6cf4c7e6e2f3c17a687583906efe4181dd17 aaae36261c5ce1accf3bfb5993be5a256a037a640d5f6f30de8384900dda06b2]
	I0501 03:44:27.873473   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:27.878537   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:27.883518   68864 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:27.883543   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0501 03:44:28.012337   68864 logs.go:123] Gathering logs for kube-apiserver [a96815c49ac45b34c359d7171b30be5c25cee80fae08d947fc004d479693df00] ...
	I0501 03:44:28.012377   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a96815c49ac45b34c359d7171b30be5c25cee80fae08d947fc004d479693df00"
	I0501 03:44:28.063686   68864 logs.go:123] Gathering logs for etcd [d109948ffbbddd2e38484d83d2260690afce3c51e16abff0fe8713e844bfdcb3] ...
	I0501 03:44:28.063715   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d109948ffbbddd2e38484d83d2260690afce3c51e16abff0fe8713e844bfdcb3"
	I0501 03:44:28.116507   68864 logs.go:123] Gathering logs for storage-provisioner [aaae36261c5ce1accf3bfb5993be5a256a037a640d5f6f30de8384900dda06b2] ...
	I0501 03:44:28.116535   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aaae36261c5ce1accf3bfb5993be5a256a037a640d5f6f30de8384900dda06b2"
	I0501 03:44:28.165593   68864 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:28.165636   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:28.595278   68864 logs.go:123] Gathering logs for container status ...
	I0501 03:44:28.595333   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:28.645790   68864 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:28.645836   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:28.662952   68864 logs.go:123] Gathering logs for coredns [e3c74de489af3d78a4b0cf620ed16e1fba7c9ce1c7021a15c5b4bd241b7a934a] ...
	I0501 03:44:28.662984   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3c74de489af3d78a4b0cf620ed16e1fba7c9ce1c7021a15c5b4bd241b7a934a"
	I0501 03:44:28.710273   68864 logs.go:123] Gathering logs for kube-scheduler [1813f35574f4f0398e7d482fe77c5d7f8490726666a277035bda0a5cff80af6e] ...
	I0501 03:44:28.710302   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1813f35574f4f0398e7d482fe77c5d7f8490726666a277035bda0a5cff80af6e"
	I0501 03:44:28.761838   68864 logs.go:123] Gathering logs for kube-proxy [94afdb03c382269209154b2e73edbf200662e89960c248ba775a0ef2b8fbe6b1] ...
	I0501 03:44:28.761872   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94afdb03c382269209154b2e73edbf200662e89960c248ba775a0ef2b8fbe6b1"
	I0501 03:44:28.810775   68864 logs.go:123] Gathering logs for kube-controller-manager [7e7158f7ff3922e97ba5ee97108acc1d0ba0350dff2f9aa56c2f71519108791c] ...
	I0501 03:44:28.810808   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e7158f7ff3922e97ba5ee97108acc1d0ba0350dff2f9aa56c2f71519108791c"
	I0501 03:44:27.012119   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:29.510651   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:26.505067   69580 cri.go:89] found id: ""
	I0501 03:44:26.505098   69580 logs.go:276] 0 containers: []
	W0501 03:44:26.505110   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:26.505121   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:26.505182   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:26.544672   69580 cri.go:89] found id: ""
	I0501 03:44:26.544699   69580 logs.go:276] 0 containers: []
	W0501 03:44:26.544711   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:26.544717   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:26.544764   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:26.590579   69580 cri.go:89] found id: ""
	I0501 03:44:26.590605   69580 logs.go:276] 0 containers: []
	W0501 03:44:26.590614   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:26.590620   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:26.590670   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:26.637887   69580 cri.go:89] found id: ""
	I0501 03:44:26.637920   69580 logs.go:276] 0 containers: []
	W0501 03:44:26.637930   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:26.637939   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:26.637998   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:26.686778   69580 cri.go:89] found id: ""
	I0501 03:44:26.686807   69580 logs.go:276] 0 containers: []
	W0501 03:44:26.686815   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:26.686821   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:26.686882   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:26.729020   69580 cri.go:89] found id: ""
	I0501 03:44:26.729045   69580 logs.go:276] 0 containers: []
	W0501 03:44:26.729054   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:26.729060   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:26.729124   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:26.769022   69580 cri.go:89] found id: ""
	I0501 03:44:26.769043   69580 logs.go:276] 0 containers: []
	W0501 03:44:26.769051   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:26.769059   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:26.769073   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:26.854985   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:26.855011   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:26.855024   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:26.937031   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:26.937063   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:27.006267   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:27.006301   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:27.080503   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:27.080545   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:29.598176   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:29.614465   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:29.614523   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:29.662384   69580 cri.go:89] found id: ""
	I0501 03:44:29.662421   69580 logs.go:276] 0 containers: []
	W0501 03:44:29.662433   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:29.662439   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:29.662483   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:29.705262   69580 cri.go:89] found id: ""
	I0501 03:44:29.705286   69580 logs.go:276] 0 containers: []
	W0501 03:44:29.705295   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:29.705300   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:29.705345   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:29.752308   69580 cri.go:89] found id: ""
	I0501 03:44:29.752335   69580 logs.go:276] 0 containers: []
	W0501 03:44:29.752343   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:29.752349   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:29.752403   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:29.802702   69580 cri.go:89] found id: ""
	I0501 03:44:29.802729   69580 logs.go:276] 0 containers: []
	W0501 03:44:29.802741   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:29.802749   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:29.802814   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:29.854112   69580 cri.go:89] found id: ""
	I0501 03:44:29.854138   69580 logs.go:276] 0 containers: []
	W0501 03:44:29.854149   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:29.854157   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:29.854217   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:29.898447   69580 cri.go:89] found id: ""
	I0501 03:44:29.898470   69580 logs.go:276] 0 containers: []
	W0501 03:44:29.898480   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:29.898486   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:29.898545   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:29.938832   69580 cri.go:89] found id: ""
	I0501 03:44:29.938862   69580 logs.go:276] 0 containers: []
	W0501 03:44:29.938873   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:29.938881   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:29.938948   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:29.987697   69580 cri.go:89] found id: ""
	I0501 03:44:29.987721   69580 logs.go:276] 0 containers: []
	W0501 03:44:29.987730   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:29.987738   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:29.987753   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:30.042446   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:30.042473   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:30.095358   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:30.095389   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:30.110745   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:30.110782   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:30.190923   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:30.190951   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:30.190965   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:29.346013   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:31.347513   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:28.868838   68864 logs.go:123] Gathering logs for storage-provisioner [f9a8d2f0f9453969b410fb7f880f6cf4c7e6e2f3c17a687583906efe4181dd17] ...
	I0501 03:44:28.868876   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9a8d2f0f9453969b410fb7f880f6cf4c7e6e2f3c17a687583906efe4181dd17"
	I0501 03:44:28.912436   68864 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:28.912474   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:31.469456   68864 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I0501 03:44:31.478498   68864 api_server.go:279] https://192.168.50.218:8443/healthz returned 200:
	ok
	I0501 03:44:31.479838   68864 api_server.go:141] control plane version: v1.30.0
	I0501 03:44:31.479861   68864 api_server.go:131] duration metric: took 4.032331979s to wait for apiserver health ...
	I0501 03:44:31.479869   68864 system_pods.go:43] waiting for kube-system pods to appear ...
	I0501 03:44:31.479889   68864 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:31.479930   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:31.531068   68864 cri.go:89] found id: "a96815c49ac45b34c359d7171b30be5c25cee80fae08d947fc004d479693df00"
	I0501 03:44:31.531088   68864 cri.go:89] found id: ""
	I0501 03:44:31.531095   68864 logs.go:276] 1 containers: [a96815c49ac45b34c359d7171b30be5c25cee80fae08d947fc004d479693df00]
	I0501 03:44:31.531137   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:31.536216   68864 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:31.536292   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:31.584155   68864 cri.go:89] found id: "d109948ffbbddd2e38484d83d2260690afce3c51e16abff0fe8713e844bfdcb3"
	I0501 03:44:31.584183   68864 cri.go:89] found id: ""
	I0501 03:44:31.584194   68864 logs.go:276] 1 containers: [d109948ffbbddd2e38484d83d2260690afce3c51e16abff0fe8713e844bfdcb3]
	I0501 03:44:31.584250   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:31.589466   68864 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:31.589528   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:31.639449   68864 cri.go:89] found id: "e3c74de489af3d78a4b0cf620ed16e1fba7c9ce1c7021a15c5b4bd241b7a934a"
	I0501 03:44:31.639476   68864 cri.go:89] found id: ""
	I0501 03:44:31.639484   68864 logs.go:276] 1 containers: [e3c74de489af3d78a4b0cf620ed16e1fba7c9ce1c7021a15c5b4bd241b7a934a]
	I0501 03:44:31.639535   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:31.644684   68864 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:31.644750   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:31.702095   68864 cri.go:89] found id: "1813f35574f4f0398e7d482fe77c5d7f8490726666a277035bda0a5cff80af6e"
	I0501 03:44:31.702119   68864 cri.go:89] found id: ""
	I0501 03:44:31.702125   68864 logs.go:276] 1 containers: [1813f35574f4f0398e7d482fe77c5d7f8490726666a277035bda0a5cff80af6e]
	I0501 03:44:31.702173   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:31.707443   68864 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:31.707508   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:31.758582   68864 cri.go:89] found id: "94afdb03c382269209154b2e73edbf200662e89960c248ba775a0ef2b8fbe6b1"
	I0501 03:44:31.758603   68864 cri.go:89] found id: ""
	I0501 03:44:31.758610   68864 logs.go:276] 1 containers: [94afdb03c382269209154b2e73edbf200662e89960c248ba775a0ef2b8fbe6b1]
	I0501 03:44:31.758656   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:31.764261   68864 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:31.764325   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:31.813385   68864 cri.go:89] found id: "7e7158f7ff3922e97ba5ee97108acc1d0ba0350dff2f9aa56c2f71519108791c"
	I0501 03:44:31.813407   68864 cri.go:89] found id: ""
	I0501 03:44:31.813414   68864 logs.go:276] 1 containers: [7e7158f7ff3922e97ba5ee97108acc1d0ba0350dff2f9aa56c2f71519108791c]
	I0501 03:44:31.813457   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:31.818289   68864 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:31.818348   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:31.862788   68864 cri.go:89] found id: ""
	I0501 03:44:31.862814   68864 logs.go:276] 0 containers: []
	W0501 03:44:31.862824   68864 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:31.862832   68864 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0501 03:44:31.862890   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0501 03:44:31.912261   68864 cri.go:89] found id: "f9a8d2f0f9453969b410fb7f880f6cf4c7e6e2f3c17a687583906efe4181dd17"
	I0501 03:44:31.912284   68864 cri.go:89] found id: "aaae36261c5ce1accf3bfb5993be5a256a037a640d5f6f30de8384900dda06b2"
	I0501 03:44:31.912298   68864 cri.go:89] found id: ""
	I0501 03:44:31.912312   68864 logs.go:276] 2 containers: [f9a8d2f0f9453969b410fb7f880f6cf4c7e6e2f3c17a687583906efe4181dd17 aaae36261c5ce1accf3bfb5993be5a256a037a640d5f6f30de8384900dda06b2]
	I0501 03:44:31.912367   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:31.917696   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:31.922432   68864 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:31.922450   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:32.332797   68864 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:32.332836   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:32.396177   68864 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:32.396214   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0501 03:44:32.511915   68864 logs.go:123] Gathering logs for etcd [d109948ffbbddd2e38484d83d2260690afce3c51e16abff0fe8713e844bfdcb3] ...
	I0501 03:44:32.511953   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d109948ffbbddd2e38484d83d2260690afce3c51e16abff0fe8713e844bfdcb3"
	I0501 03:44:32.564447   68864 logs.go:123] Gathering logs for kube-proxy [94afdb03c382269209154b2e73edbf200662e89960c248ba775a0ef2b8fbe6b1] ...
	I0501 03:44:32.564475   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94afdb03c382269209154b2e73edbf200662e89960c248ba775a0ef2b8fbe6b1"
	I0501 03:44:32.610196   68864 logs.go:123] Gathering logs for kube-controller-manager [7e7158f7ff3922e97ba5ee97108acc1d0ba0350dff2f9aa56c2f71519108791c] ...
	I0501 03:44:32.610235   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e7158f7ff3922e97ba5ee97108acc1d0ba0350dff2f9aa56c2f71519108791c"
	I0501 03:44:32.665262   68864 logs.go:123] Gathering logs for storage-provisioner [aaae36261c5ce1accf3bfb5993be5a256a037a640d5f6f30de8384900dda06b2] ...
	I0501 03:44:32.665314   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aaae36261c5ce1accf3bfb5993be5a256a037a640d5f6f30de8384900dda06b2"
	I0501 03:44:32.707346   68864 logs.go:123] Gathering logs for container status ...
	I0501 03:44:32.707377   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:32.757693   68864 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:32.757726   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:32.775720   68864 logs.go:123] Gathering logs for kube-apiserver [a96815c49ac45b34c359d7171b30be5c25cee80fae08d947fc004d479693df00] ...
	I0501 03:44:32.775759   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a96815c49ac45b34c359d7171b30be5c25cee80fae08d947fc004d479693df00"
	I0501 03:44:32.831002   68864 logs.go:123] Gathering logs for coredns [e3c74de489af3d78a4b0cf620ed16e1fba7c9ce1c7021a15c5b4bd241b7a934a] ...
	I0501 03:44:32.831039   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3c74de489af3d78a4b0cf620ed16e1fba7c9ce1c7021a15c5b4bd241b7a934a"
	I0501 03:44:32.878365   68864 logs.go:123] Gathering logs for kube-scheduler [1813f35574f4f0398e7d482fe77c5d7f8490726666a277035bda0a5cff80af6e] ...
	I0501 03:44:32.878416   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1813f35574f4f0398e7d482fe77c5d7f8490726666a277035bda0a5cff80af6e"
	I0501 03:44:32.935752   68864 logs.go:123] Gathering logs for storage-provisioner [f9a8d2f0f9453969b410fb7f880f6cf4c7e6e2f3c17a687583906efe4181dd17] ...
	I0501 03:44:32.935791   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9a8d2f0f9453969b410fb7f880f6cf4c7e6e2f3c17a687583906efe4181dd17"
	I0501 03:44:35.492575   68864 system_pods.go:59] 8 kube-system pods found
	I0501 03:44:35.492603   68864 system_pods.go:61] "coredns-7db6d8ff4d-sjplt" [6701ee8e-0630-4332-b01c-26741ed3a7b7] Running
	I0501 03:44:35.492607   68864 system_pods.go:61] "etcd-embed-certs-277128" [744d7481-dd80-4435-90da-685fc16a76a4] Running
	I0501 03:44:35.492612   68864 system_pods.go:61] "kube-apiserver-embed-certs-277128" [2f1d0f09-b270-4cdc-af3c-e17e3a244bbf] Running
	I0501 03:44:35.492616   68864 system_pods.go:61] "kube-controller-manager-embed-certs-277128" [729aa138-dc0d-4616-97d4-1592453971b8] Running
	I0501 03:44:35.492619   68864 system_pods.go:61] "kube-proxy-phx7x" [56c0381e-c140-4f69-bbe4-09d393db8b23] Running
	I0501 03:44:35.492621   68864 system_pods.go:61] "kube-scheduler-embed-certs-277128" [cc56e9bc-7f21-4f57-a65a-73a10ffd7145] Running
	I0501 03:44:35.492627   68864 system_pods.go:61] "metrics-server-569cc877fc-p8j59" [f8ad6c24-dd5d-4515-9052-c9aca7412b55] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0501 03:44:35.492631   68864 system_pods.go:61] "storage-provisioner" [785be666-58d5-4b9d-92fd-bcacdbdebeb2] Running
	I0501 03:44:35.492638   68864 system_pods.go:74] duration metric: took 4.012764043s to wait for pod list to return data ...
	I0501 03:44:35.492644   68864 default_sa.go:34] waiting for default service account to be created ...
	I0501 03:44:35.494580   68864 default_sa.go:45] found service account: "default"
	I0501 03:44:35.494599   68864 default_sa.go:55] duration metric: took 1.949121ms for default service account to be created ...
	I0501 03:44:35.494606   68864 system_pods.go:116] waiting for k8s-apps to be running ...
	I0501 03:44:35.499484   68864 system_pods.go:86] 8 kube-system pods found
	I0501 03:44:35.499507   68864 system_pods.go:89] "coredns-7db6d8ff4d-sjplt" [6701ee8e-0630-4332-b01c-26741ed3a7b7] Running
	I0501 03:44:35.499514   68864 system_pods.go:89] "etcd-embed-certs-277128" [744d7481-dd80-4435-90da-685fc16a76a4] Running
	I0501 03:44:35.499519   68864 system_pods.go:89] "kube-apiserver-embed-certs-277128" [2f1d0f09-b270-4cdc-af3c-e17e3a244bbf] Running
	I0501 03:44:35.499523   68864 system_pods.go:89] "kube-controller-manager-embed-certs-277128" [729aa138-dc0d-4616-97d4-1592453971b8] Running
	I0501 03:44:35.499526   68864 system_pods.go:89] "kube-proxy-phx7x" [56c0381e-c140-4f69-bbe4-09d393db8b23] Running
	I0501 03:44:35.499531   68864 system_pods.go:89] "kube-scheduler-embed-certs-277128" [cc56e9bc-7f21-4f57-a65a-73a10ffd7145] Running
	I0501 03:44:35.499537   68864 system_pods.go:89] "metrics-server-569cc877fc-p8j59" [f8ad6c24-dd5d-4515-9052-c9aca7412b55] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0501 03:44:35.499544   68864 system_pods.go:89] "storage-provisioner" [785be666-58d5-4b9d-92fd-bcacdbdebeb2] Running
	I0501 03:44:35.499550   68864 system_pods.go:126] duration metric: took 4.939659ms to wait for k8s-apps to be running ...
	I0501 03:44:35.499559   68864 system_svc.go:44] waiting for kubelet service to be running ....
	I0501 03:44:35.499599   68864 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 03:44:35.518471   68864 system_svc.go:56] duration metric: took 18.902776ms WaitForService to wait for kubelet
	I0501 03:44:35.518498   68864 kubeadm.go:576] duration metric: took 4m27.516125606s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0501 03:44:35.518521   68864 node_conditions.go:102] verifying NodePressure condition ...
	I0501 03:44:35.521936   68864 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 03:44:35.521956   68864 node_conditions.go:123] node cpu capacity is 2
	I0501 03:44:35.521966   68864 node_conditions.go:105] duration metric: took 3.439997ms to run NodePressure ...
	I0501 03:44:35.521976   68864 start.go:240] waiting for startup goroutines ...
	I0501 03:44:35.521983   68864 start.go:245] waiting for cluster config update ...
	I0501 03:44:35.521994   68864 start.go:254] writing updated cluster config ...
	I0501 03:44:35.522311   68864 ssh_runner.go:195] Run: rm -f paused
	I0501 03:44:35.572130   68864 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0501 03:44:35.573709   68864 out.go:177] * Done! kubectl is now configured to use "embed-certs-277128" cluster and "default" namespace by default
	I0501 03:44:31.512755   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:34.011892   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:32.772208   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:32.791063   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:32.791145   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:32.856883   69580 cri.go:89] found id: ""
	I0501 03:44:32.856909   69580 logs.go:276] 0 containers: []
	W0501 03:44:32.856920   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:32.856927   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:32.856988   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:32.928590   69580 cri.go:89] found id: ""
	I0501 03:44:32.928625   69580 logs.go:276] 0 containers: []
	W0501 03:44:32.928637   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:32.928644   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:32.928707   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:32.978068   69580 cri.go:89] found id: ""
	I0501 03:44:32.978100   69580 logs.go:276] 0 containers: []
	W0501 03:44:32.978113   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:32.978120   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:32.978184   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:33.018873   69580 cri.go:89] found id: ""
	I0501 03:44:33.018897   69580 logs.go:276] 0 containers: []
	W0501 03:44:33.018905   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:33.018911   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:33.018970   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:33.060633   69580 cri.go:89] found id: ""
	I0501 03:44:33.060661   69580 logs.go:276] 0 containers: []
	W0501 03:44:33.060673   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:33.060681   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:33.060735   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:33.099862   69580 cri.go:89] found id: ""
	I0501 03:44:33.099891   69580 logs.go:276] 0 containers: []
	W0501 03:44:33.099900   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:33.099906   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:33.099953   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:33.139137   69580 cri.go:89] found id: ""
	I0501 03:44:33.139163   69580 logs.go:276] 0 containers: []
	W0501 03:44:33.139171   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:33.139177   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:33.139224   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:33.178800   69580 cri.go:89] found id: ""
	I0501 03:44:33.178826   69580 logs.go:276] 0 containers: []
	W0501 03:44:33.178834   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:33.178842   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:33.178856   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:33.233811   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:33.233842   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:33.248931   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:33.248958   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:33.325530   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:33.325551   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:33.325563   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:33.412071   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:33.412103   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:35.954706   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:35.970256   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:35.970333   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:36.010417   69580 cri.go:89] found id: ""
	I0501 03:44:36.010443   69580 logs.go:276] 0 containers: []
	W0501 03:44:36.010452   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:36.010459   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:36.010524   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:36.051571   69580 cri.go:89] found id: ""
	I0501 03:44:36.051600   69580 logs.go:276] 0 containers: []
	W0501 03:44:36.051611   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:36.051619   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:36.051683   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:36.092148   69580 cri.go:89] found id: ""
	I0501 03:44:36.092176   69580 logs.go:276] 0 containers: []
	W0501 03:44:36.092185   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:36.092190   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:36.092247   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:36.136243   69580 cri.go:89] found id: ""
	I0501 03:44:36.136282   69580 logs.go:276] 0 containers: []
	W0501 03:44:36.136290   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:36.136296   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:36.136342   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:36.178154   69580 cri.go:89] found id: ""
	I0501 03:44:36.178183   69580 logs.go:276] 0 containers: []
	W0501 03:44:36.178193   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:36.178200   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:36.178264   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:36.217050   69580 cri.go:89] found id: ""
	I0501 03:44:36.217077   69580 logs.go:276] 0 containers: []
	W0501 03:44:36.217089   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:36.217096   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:36.217172   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:36.260438   69580 cri.go:89] found id: ""
	I0501 03:44:36.260470   69580 logs.go:276] 0 containers: []
	W0501 03:44:36.260481   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:36.260488   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:36.260546   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:36.303410   69580 cri.go:89] found id: ""
	I0501 03:44:36.303436   69580 logs.go:276] 0 containers: []
	W0501 03:44:36.303448   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:36.303459   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:36.303475   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:36.390427   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:36.390468   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:36.433631   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:36.433663   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:33.845863   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:35.847896   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:36.012448   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:38.510722   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:39.005005   69237 pod_ready.go:81] duration metric: took 4m0.000783466s for pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace to be "Ready" ...
	E0501 03:44:39.005036   69237 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace to be "Ready" (will not retry!)
	I0501 03:44:39.005057   69237 pod_ready.go:38] duration metric: took 4m8.020392425s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 03:44:39.005089   69237 kubeadm.go:591] duration metric: took 4m17.941775807s to restartPrimaryControlPlane
	W0501 03:44:39.005175   69237 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0501 03:44:39.005208   69237 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0501 03:44:36.486334   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:36.486365   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:36.502145   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:36.502175   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:36.586733   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:39.087607   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:39.102475   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:39.102552   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:39.141916   69580 cri.go:89] found id: ""
	I0501 03:44:39.141947   69580 logs.go:276] 0 containers: []
	W0501 03:44:39.141958   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:39.141964   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:39.142012   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:39.188472   69580 cri.go:89] found id: ""
	I0501 03:44:39.188501   69580 logs.go:276] 0 containers: []
	W0501 03:44:39.188512   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:39.188520   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:39.188582   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:39.243282   69580 cri.go:89] found id: ""
	I0501 03:44:39.243306   69580 logs.go:276] 0 containers: []
	W0501 03:44:39.243313   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:39.243318   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:39.243377   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:39.288254   69580 cri.go:89] found id: ""
	I0501 03:44:39.288284   69580 logs.go:276] 0 containers: []
	W0501 03:44:39.288296   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:39.288304   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:39.288379   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:39.330846   69580 cri.go:89] found id: ""
	I0501 03:44:39.330879   69580 logs.go:276] 0 containers: []
	W0501 03:44:39.330892   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:39.330901   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:39.330969   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:39.377603   69580 cri.go:89] found id: ""
	I0501 03:44:39.377632   69580 logs.go:276] 0 containers: []
	W0501 03:44:39.377642   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:39.377649   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:39.377710   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:39.421545   69580 cri.go:89] found id: ""
	I0501 03:44:39.421574   69580 logs.go:276] 0 containers: []
	W0501 03:44:39.421585   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:39.421594   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:39.421653   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:39.463394   69580 cri.go:89] found id: ""
	I0501 03:44:39.463424   69580 logs.go:276] 0 containers: []
	W0501 03:44:39.463435   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:39.463447   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:39.463464   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:39.552196   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:39.552218   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:39.552229   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:39.648509   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:39.648549   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:39.702829   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:39.702866   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:39.757712   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:39.757746   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:38.347120   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:40.355310   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:42.847346   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:42.273443   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:42.289788   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:42.289856   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:42.336802   69580 cri.go:89] found id: ""
	I0501 03:44:42.336833   69580 logs.go:276] 0 containers: []
	W0501 03:44:42.336846   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:42.336854   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:42.336919   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:42.387973   69580 cri.go:89] found id: ""
	I0501 03:44:42.388017   69580 logs.go:276] 0 containers: []
	W0501 03:44:42.388028   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:42.388036   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:42.388103   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:42.444866   69580 cri.go:89] found id: ""
	I0501 03:44:42.444895   69580 logs.go:276] 0 containers: []
	W0501 03:44:42.444906   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:42.444914   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:42.444987   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:42.493647   69580 cri.go:89] found id: ""
	I0501 03:44:42.493676   69580 logs.go:276] 0 containers: []
	W0501 03:44:42.493686   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:42.493692   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:42.493748   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:42.535046   69580 cri.go:89] found id: ""
	I0501 03:44:42.535075   69580 logs.go:276] 0 containers: []
	W0501 03:44:42.535086   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:42.535093   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:42.535161   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:42.579453   69580 cri.go:89] found id: ""
	I0501 03:44:42.579486   69580 logs.go:276] 0 containers: []
	W0501 03:44:42.579499   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:42.579507   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:42.579568   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:42.621903   69580 cri.go:89] found id: ""
	I0501 03:44:42.621931   69580 logs.go:276] 0 containers: []
	W0501 03:44:42.621942   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:42.621950   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:42.622009   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:42.666202   69580 cri.go:89] found id: ""
	I0501 03:44:42.666232   69580 logs.go:276] 0 containers: []
	W0501 03:44:42.666243   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:42.666257   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:42.666272   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:42.736032   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:42.736078   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:42.750773   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:42.750799   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:42.836942   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:42.836975   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:42.836997   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:42.930660   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:42.930695   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:45.479619   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:45.495112   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:45.495174   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:45.536693   69580 cri.go:89] found id: ""
	I0501 03:44:45.536722   69580 logs.go:276] 0 containers: []
	W0501 03:44:45.536730   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:45.536737   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:45.536785   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:45.577838   69580 cri.go:89] found id: ""
	I0501 03:44:45.577866   69580 logs.go:276] 0 containers: []
	W0501 03:44:45.577876   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:45.577894   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:45.577958   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:45.615842   69580 cri.go:89] found id: ""
	I0501 03:44:45.615868   69580 logs.go:276] 0 containers: []
	W0501 03:44:45.615879   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:45.615892   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:45.615953   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:45.654948   69580 cri.go:89] found id: ""
	I0501 03:44:45.654972   69580 logs.go:276] 0 containers: []
	W0501 03:44:45.654980   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:45.654986   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:45.655042   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:45.695104   69580 cri.go:89] found id: ""
	I0501 03:44:45.695129   69580 logs.go:276] 0 containers: []
	W0501 03:44:45.695138   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:45.695145   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:45.695212   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:45.737609   69580 cri.go:89] found id: ""
	I0501 03:44:45.737633   69580 logs.go:276] 0 containers: []
	W0501 03:44:45.737641   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:45.737647   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:45.737693   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:45.778655   69580 cri.go:89] found id: ""
	I0501 03:44:45.778685   69580 logs.go:276] 0 containers: []
	W0501 03:44:45.778696   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:45.778702   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:45.778781   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:45.819430   69580 cri.go:89] found id: ""
	I0501 03:44:45.819452   69580 logs.go:276] 0 containers: []
	W0501 03:44:45.819460   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:45.819469   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:45.819485   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:45.875879   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:45.875911   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:45.892035   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:45.892062   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:45.975803   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:45.975836   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:45.975853   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:46.058183   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:46.058222   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:45.345465   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:47.346947   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:48.604991   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:48.621226   69580 kubeadm.go:591] duration metric: took 4m4.888665162s to restartPrimaryControlPlane
	W0501 03:44:48.621351   69580 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0501 03:44:48.621407   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0501 03:44:49.654748   69580 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.033320548s)
	I0501 03:44:49.654838   69580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 03:44:49.671511   69580 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0501 03:44:49.684266   69580 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0501 03:44:49.697079   69580 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0501 03:44:49.697101   69580 kubeadm.go:156] found existing configuration files:
	
	I0501 03:44:49.697159   69580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0501 03:44:49.710609   69580 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0501 03:44:49.710692   69580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0501 03:44:49.723647   69580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0501 03:44:49.736855   69580 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0501 03:44:49.737023   69580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0501 03:44:49.748842   69580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0501 03:44:49.760856   69580 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0501 03:44:49.760923   69580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0501 03:44:49.772685   69580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0501 03:44:49.784035   69580 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0501 03:44:49.784114   69580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0501 03:44:49.795699   69580 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0501 03:44:49.869387   69580 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0501 03:44:49.869481   69580 kubeadm.go:309] [preflight] Running pre-flight checks
	I0501 03:44:50.028858   69580 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0501 03:44:50.028999   69580 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0501 03:44:50.029182   69580 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0501 03:44:50.242773   69580 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0501 03:44:50.244816   69580 out.go:204]   - Generating certificates and keys ...
	I0501 03:44:50.244918   69580 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0501 03:44:50.245008   69580 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0501 03:44:50.245111   69580 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0501 03:44:50.245216   69580 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0501 03:44:50.245331   69580 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0501 03:44:50.245424   69580 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0501 03:44:50.245490   69580 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0501 03:44:50.245556   69580 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0501 03:44:50.245629   69580 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0501 03:44:50.245724   69580 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0501 03:44:50.245784   69580 kubeadm.go:309] [certs] Using the existing "sa" key
	I0501 03:44:50.245877   69580 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0501 03:44:50.501955   69580 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0501 03:44:50.683749   69580 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0501 03:44:50.905745   69580 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0501 03:44:51.005912   69580 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0501 03:44:51.025470   69580 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0501 03:44:51.029411   69580 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0501 03:44:51.029859   69580 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0501 03:44:51.181498   69580 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0501 03:44:51.183222   69580 out.go:204]   - Booting up control plane ...
	I0501 03:44:51.183334   69580 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0501 03:44:51.200394   69580 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0501 03:44:51.201612   69580 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0501 03:44:51.202445   69580 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0501 03:44:51.204681   69580 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0501 03:44:49.847629   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:52.345383   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:54.346479   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:56.348560   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:58.846207   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:01.345790   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:03.847746   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:06.346172   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:08.346693   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:10.846797   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:11.778923   69237 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.773690939s)
	I0501 03:45:11.778992   69237 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 03:45:11.796337   69237 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0501 03:45:11.810167   69237 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0501 03:45:11.822425   69237 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0501 03:45:11.822457   69237 kubeadm.go:156] found existing configuration files:
	
	I0501 03:45:11.822514   69237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0501 03:45:11.834539   69237 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0501 03:45:11.834596   69237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0501 03:45:11.848336   69237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0501 03:45:11.860459   69237 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0501 03:45:11.860535   69237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0501 03:45:11.873903   69237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0501 03:45:11.887353   69237 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0501 03:45:11.887427   69237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0501 03:45:11.900805   69237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0501 03:45:11.912512   69237 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0501 03:45:11.912572   69237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0501 03:45:11.924870   69237 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0501 03:45:12.149168   69237 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
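This preflight warning is non-fatal: kubeadm starts the kubelet for the init run, but the unit will not come back after a reboot until it is enabled. The remedy is the one the warning itself names:

	sudo systemctl enable kubelet.service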
	I0501 03:45:13.348651   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:15.847148   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:20.882309   69237 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0501 03:45:20.882382   69237 kubeadm.go:309] [preflight] Running pre-flight checks
	I0501 03:45:20.882472   69237 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0501 03:45:20.882602   69237 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0501 03:45:20.882741   69237 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0501 03:45:20.882836   69237 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0501 03:45:20.884733   69237 out.go:204]   - Generating certificates and keys ...
	I0501 03:45:20.884837   69237 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0501 03:45:20.884894   69237 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0501 03:45:20.884996   69237 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0501 03:45:20.885106   69237 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0501 03:45:20.885209   69237 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0501 03:45:20.885316   69237 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0501 03:45:20.885400   69237 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0501 03:45:20.885483   69237 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0501 03:45:20.885590   69237 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0501 03:45:20.885702   69237 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0501 03:45:20.885759   69237 kubeadm.go:309] [certs] Using the existing "sa" key
	I0501 03:45:20.885838   69237 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0501 03:45:20.885915   69237 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0501 03:45:20.885996   69237 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0501 03:45:20.886074   69237 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0501 03:45:20.886164   69237 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0501 03:45:20.886233   69237 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0501 03:45:20.886362   69237 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0501 03:45:20.886492   69237 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0501 03:45:20.888113   69237 out.go:204]   - Booting up control plane ...
	I0501 03:45:20.888194   69237 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0501 03:45:20.888264   69237 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0501 03:45:20.888329   69237 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0501 03:45:20.888458   69237 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0501 03:45:20.888570   69237 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0501 03:45:20.888627   69237 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0501 03:45:20.888777   69237 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0501 03:45:20.888863   69237 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0501 03:45:20.888964   69237 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 502.867448ms
	I0501 03:45:20.889080   69237 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0501 03:45:20.889177   69237 kubeadm.go:309] [api-check] The API server is healthy after 5.503139909s
	I0501 03:45:20.889335   69237 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0501 03:45:20.889506   69237 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0501 03:45:20.889579   69237 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0501 03:45:20.889817   69237 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-715118 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0501 03:45:20.889868   69237 kubeadm.go:309] [bootstrap-token] Using token: 2vhvw6.gdesonhc2twrukzt
	I0501 03:45:20.892253   69237 out.go:204]   - Configuring RBAC rules ...
	I0501 03:45:20.892395   69237 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0501 03:45:20.892475   69237 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0501 03:45:20.892652   69237 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0501 03:45:20.892812   69237 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0501 03:45:20.892931   69237 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0501 03:45:20.893040   69237 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0501 03:45:20.893201   69237 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0501 03:45:20.893264   69237 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0501 03:45:20.893309   69237 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0501 03:45:20.893319   69237 kubeadm.go:309] 
	I0501 03:45:20.893367   69237 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0501 03:45:20.893373   69237 kubeadm.go:309] 
	I0501 03:45:20.893450   69237 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0501 03:45:20.893458   69237 kubeadm.go:309] 
	I0501 03:45:20.893481   69237 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0501 03:45:20.893544   69237 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0501 03:45:20.893591   69237 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0501 03:45:20.893597   69237 kubeadm.go:309] 
	I0501 03:45:20.893643   69237 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0501 03:45:20.893650   69237 kubeadm.go:309] 
	I0501 03:45:20.893685   69237 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0501 03:45:20.893690   69237 kubeadm.go:309] 
	I0501 03:45:20.893741   69237 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0501 03:45:20.893805   69237 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0501 03:45:20.893858   69237 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0501 03:45:20.893863   69237 kubeadm.go:309] 
	I0501 03:45:20.893946   69237 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0501 03:45:20.894035   69237 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0501 03:45:20.894045   69237 kubeadm.go:309] 
	I0501 03:45:20.894139   69237 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token 2vhvw6.gdesonhc2twrukzt \
	I0501 03:45:20.894267   69237 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:bd94cc6acfb95fe36f70b12c6a1f2980601b8235e7c4dea65cebdf35eb514754 \
	I0501 03:45:20.894294   69237 kubeadm.go:309] 	--control-plane 
	I0501 03:45:20.894301   69237 kubeadm.go:309] 
	I0501 03:45:20.894368   69237 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0501 03:45:20.894375   69237 kubeadm.go:309] 
	I0501 03:45:20.894498   69237 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token 2vhvw6.gdesonhc2twrukzt \
	I0501 03:45:20.894605   69237 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:bd94cc6acfb95fe36f70b12c6a1f2980601b8235e7c4dea65cebdf35eb514754 
	I0501 03:45:20.894616   69237 cni.go:84] Creating CNI manager for ""
	I0501 03:45:20.894623   69237 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0501 03:45:20.896151   69237 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0501 03:45:18.346276   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:20.846958   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:20.897443   69237 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0501 03:45:20.911935   69237 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0501 03:45:20.941109   69237 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0501 03:45:20.941193   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:20.941249   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-715118 minikube.k8s.io/updated_at=2024_05_01T03_45_20_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e minikube.k8s.io/name=default-k8s-diff-port-715118 minikube.k8s.io/primary=true
	I0501 03:45:20.971300   69237 ops.go:34] apiserver oom_adj: -16
	I0501 03:45:21.143744   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:21.643800   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:22.144096   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:22.643852   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:23.144726   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:23.644174   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:24.143735   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:24.643947   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:25.143871   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:25.644557   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:23.345774   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:25.346189   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:27.348026   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:26.144443   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:26.643761   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:27.144691   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:27.644445   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:28.144006   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:28.643904   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:29.144077   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:29.644690   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:30.144692   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:30.644604   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:31.207553   69580 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0501 03:45:31.208328   69580 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0501 03:45:31.208516   69580 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
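For the v1.20.0 cluster (pid 69580) the kubelet health check is failing: nothing answers on 127.0.0.1:10248, so the static control-plane pods never come up and kubeadm will eventually time out waiting for them. Useful checks at this point, assuming shell access to the node (the journal command appears earlier in this log, and the curl is the exact probe kubeadm reports):

	sudo systemctl is-active kubelet
	sudo journalctl -u kubelet -n 400
	curl -sSL http://localhost:10248/healthz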
	I0501 03:45:29.845785   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:32.348020   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:31.144517   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:31.644673   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:32.143793   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:32.644380   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:33.144729   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:33.644415   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:33.752056   69237 kubeadm.go:1107] duration metric: took 12.810918189s to wait for elevateKubeSystemPrivileges
	W0501 03:45:33.752096   69237 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0501 03:45:33.752105   69237 kubeadm.go:393] duration metric: took 5m12.753721662s to StartCluster
	I0501 03:45:33.752124   69237 settings.go:142] acquiring lock: {Name:mkcfe08bcadd45d99c00ed1eaf4e9c226a489fe2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:45:33.752219   69237 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18779-13391/kubeconfig
	I0501 03:45:33.753829   69237 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/kubeconfig: {Name:mk5d1131557f885800877622151c0915337cb23a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:45:33.754094   69237 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.158 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0501 03:45:33.755764   69237 out.go:177] * Verifying Kubernetes components...
	I0501 03:45:33.754178   69237 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0501 03:45:33.754310   69237 config.go:182] Loaded profile config "default-k8s-diff-port-715118": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 03:45:33.757144   69237 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:45:33.757151   69237 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-715118"
	I0501 03:45:33.757172   69237 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-715118"
	I0501 03:45:33.757189   69237 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-715118"
	I0501 03:45:33.757213   69237 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-715118"
	I0501 03:45:33.757221   69237 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-715118"
	W0501 03:45:33.757230   69237 addons.go:243] addon metrics-server should already be in state true
	I0501 03:45:33.757264   69237 host.go:66] Checking if "default-k8s-diff-port-715118" exists ...
	I0501 03:45:33.757180   69237 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-715118"
	W0501 03:45:33.757295   69237 addons.go:243] addon storage-provisioner should already be in state true
	I0501 03:45:33.757355   69237 host.go:66] Checking if "default-k8s-diff-port-715118" exists ...
	I0501 03:45:33.757596   69237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:45:33.757624   69237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:45:33.757630   69237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:45:33.757762   69237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:45:33.757808   69237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:45:33.757662   69237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:45:33.773846   69237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44313
	I0501 03:45:33.774442   69237 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:45:33.775002   69237 main.go:141] libmachine: Using API Version  1
	I0501 03:45:33.775024   69237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:45:33.775438   69237 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:45:33.776086   69237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:45:33.776117   69237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:45:33.777715   69237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37079
	I0501 03:45:33.777835   69237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38097
	I0501 03:45:33.778170   69237 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:45:33.778240   69237 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:45:33.778701   69237 main.go:141] libmachine: Using API Version  1
	I0501 03:45:33.778734   69237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:45:33.778778   69237 main.go:141] libmachine: Using API Version  1
	I0501 03:45:33.778795   69237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:45:33.779142   69237 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:45:33.779150   69237 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:45:33.779427   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetState
	I0501 03:45:33.779721   69237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:45:33.779769   69237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:45:33.783493   69237 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-715118"
	W0501 03:45:33.783519   69237 addons.go:243] addon default-storageclass should already be in state true
	I0501 03:45:33.783551   69237 host.go:66] Checking if "default-k8s-diff-port-715118" exists ...
	I0501 03:45:33.783922   69237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:45:33.783965   69237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:45:33.795373   69237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43491
	I0501 03:45:33.795988   69237 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:45:33.796557   69237 main.go:141] libmachine: Using API Version  1
	I0501 03:45:33.796579   69237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:45:33.796931   69237 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:45:33.797093   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetState
	I0501 03:45:33.797155   69237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46755
	I0501 03:45:33.797806   69237 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:45:33.798383   69237 main.go:141] libmachine: Using API Version  1
	I0501 03:45:33.798442   69237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:45:33.798848   69237 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:45:33.799052   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetState
	I0501 03:45:33.799105   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .DriverName
	I0501 03:45:33.801809   69237 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0501 03:45:33.800600   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .DriverName
	I0501 03:45:33.803752   69237 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0501 03:45:33.803779   69237 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0501 03:45:33.803800   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHHostname
	I0501 03:45:33.805235   69237 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 03:45:33.804172   69237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35709
	I0501 03:45:33.806635   69237 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0501 03:45:33.806651   69237 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0501 03:45:33.806670   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHHostname
	I0501 03:45:33.806889   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:45:33.806967   69237 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:45:33.807292   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:45:33.807426   69237 main.go:141] libmachine: Using API Version  1
	I0501 03:45:33.807428   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHPort
	I0501 03:45:33.807437   69237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:45:33.807449   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:45:33.807578   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:45:33.807680   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHUsername
	I0501 03:45:33.807799   69237 sshutil.go:53] new ssh client: &{IP:192.168.72.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/default-k8s-diff-port-715118/id_rsa Username:docker}
	I0501 03:45:33.808171   69237 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:45:33.808625   69237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:45:33.808660   69237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:45:33.810668   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:45:33.811266   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:45:33.811297   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:45:33.811595   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHPort
	I0501 03:45:33.811794   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:45:33.811983   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHUsername
	I0501 03:45:33.812124   69237 sshutil.go:53] new ssh client: &{IP:192.168.72.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/default-k8s-diff-port-715118/id_rsa Username:docker}
	I0501 03:45:33.825315   69237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35461
	I0501 03:45:33.825891   69237 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:45:33.826334   69237 main.go:141] libmachine: Using API Version  1
	I0501 03:45:33.826351   69237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:45:33.826679   69237 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:45:33.826912   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetState
	I0501 03:45:33.828659   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .DriverName
	I0501 03:45:33.828931   69237 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0501 03:45:33.828946   69237 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0501 03:45:33.828963   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHHostname
	I0501 03:45:33.832151   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:45:33.832632   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:45:33.832656   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:45:33.832863   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHPort
	I0501 03:45:33.833010   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:45:33.833146   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHUsername
	I0501 03:45:33.833302   69237 sshutil.go:53] new ssh client: &{IP:192.168.72.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/default-k8s-diff-port-715118/id_rsa Username:docker}
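	The sshutil.go lines above are where minikube opens a key-based SSH connection to the guest VM; the subsequent "Run:" commands are executed over that connection. As a rough, hypothetical illustration of the pattern only (not minikube's actual sshutil implementation), a minimal Go client built on golang.org/x/crypto/ssh could look like the sketch below; the key path and address are placeholders echoing the log.

    // Illustrative sketch only: key-based SSH client running one remote command.
    package main

    import (
    	"log"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	// Placeholder path and address taken from the log lines above.
    	key, err := os.ReadFile("/home/jenkins/minikube-integration/18779-13391/.minikube/machines/default-k8s-diff-port-715118/id_rsa")
    	if err != nil {
    		log.Fatal(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		log.Fatal(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for throwaway test VMs
    	}
    	client, err := ssh.Dial("tcp", "192.168.72.158:22", cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()

    	// Run a single command over the connection, as the ssh_runner lines do.
    	sess, err := client.NewSession()
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer sess.Close()
    	out, err := sess.CombinedOutput("sudo systemctl is-active kubelet")
    	if err != nil {
    		log.Printf("remote command failed: %v", err)
    	}
    	log.Printf("%s", out)
    }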
	I0501 03:45:34.014287   69237 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 03:45:34.047199   69237 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-715118" to be "Ready" ...
	I0501 03:45:34.069000   69237 node_ready.go:49] node "default-k8s-diff-port-715118" has status "Ready":"True"
	I0501 03:45:34.069023   69237 node_ready.go:38] duration metric: took 21.790599ms for node "default-k8s-diff-port-715118" to be "Ready" ...
	I0501 03:45:34.069033   69237 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 03:45:34.077182   69237 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-bg755" in "kube-system" namespace to be "Ready" ...
	I0501 03:45:34.151001   69237 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0501 03:45:34.166362   69237 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0501 03:45:34.166385   69237 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0501 03:45:34.214624   69237 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0501 03:45:34.329110   69237 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0501 03:45:34.329133   69237 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0501 03:45:34.436779   69237 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0501 03:45:34.436804   69237 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0501 03:45:34.611410   69237 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0501 03:45:34.698997   69237 main.go:141] libmachine: Making call to close driver server
	I0501 03:45:34.699026   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .Close
	I0501 03:45:34.699321   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | Closing plugin on server side
	I0501 03:45:34.699389   69237 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:45:34.699408   69237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:45:34.699423   69237 main.go:141] libmachine: Making call to close driver server
	I0501 03:45:34.699437   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .Close
	I0501 03:45:34.699684   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | Closing plugin on server side
	I0501 03:45:34.699726   69237 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:45:34.699734   69237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:45:34.708143   69237 main.go:141] libmachine: Making call to close driver server
	I0501 03:45:34.708171   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .Close
	I0501 03:45:34.708438   69237 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:45:34.708457   69237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:45:34.708463   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | Closing plugin on server side
	I0501 03:45:35.510225   69237 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.295555956s)
	I0501 03:45:35.510274   69237 main.go:141] libmachine: Making call to close driver server
	I0501 03:45:35.510286   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .Close
	I0501 03:45:35.510700   69237 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:45:35.510721   69237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:45:35.510732   69237 main.go:141] libmachine: Making call to close driver server
	I0501 03:45:35.510728   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | Closing plugin on server side
	I0501 03:45:35.510740   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .Close
	I0501 03:45:35.510961   69237 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:45:35.510979   69237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:45:35.510983   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | Closing plugin on server side
	I0501 03:45:35.845633   69237 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.234178466s)
	I0501 03:45:35.845691   69237 main.go:141] libmachine: Making call to close driver server
	I0501 03:45:35.845708   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .Close
	I0501 03:45:35.845997   69237 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:45:35.846017   69237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:45:35.846027   69237 main.go:141] libmachine: Making call to close driver server
	I0501 03:45:35.846026   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | Closing plugin on server side
	I0501 03:45:35.846036   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .Close
	I0501 03:45:35.847736   69237 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:45:35.847767   69237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:45:35.847781   69237 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-715118"
	I0501 03:45:35.847786   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | Closing plugin on server side
	I0501 03:45:35.849438   69237 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0501 03:45:36.209029   69580 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0501 03:45:36.209300   69580 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0501 03:45:34.848699   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:37.338985   68640 pod_ready.go:81] duration metric: took 4m0.000306796s for pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace to be "Ready" ...
	E0501 03:45:37.339010   68640 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace to be "Ready" (will not retry!)
	I0501 03:45:37.339029   68640 pod_ready.go:38] duration metric: took 4m9.062496127s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 03:45:37.339089   68640 kubeadm.go:591] duration metric: took 4m19.268153875s to restartPrimaryControlPlane
	W0501 03:45:37.339148   68640 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0501 03:45:37.339176   68640 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0501 03:45:35.851156   69237 addons.go:505] duration metric: took 2.096980743s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0501 03:45:36.085176   69237 pod_ready.go:102] pod "coredns-7db6d8ff4d-bg755" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:36.585390   69237 pod_ready.go:92] pod "coredns-7db6d8ff4d-bg755" in "kube-system" namespace has status "Ready":"True"
	I0501 03:45:36.585415   69237 pod_ready.go:81] duration metric: took 2.508204204s for pod "coredns-7db6d8ff4d-bg755" in "kube-system" namespace to be "Ready" ...
	I0501 03:45:36.585428   69237 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-mp6f5" in "kube-system" namespace to be "Ready" ...
	I0501 03:45:36.594575   69237 pod_ready.go:92] pod "coredns-7db6d8ff4d-mp6f5" in "kube-system" namespace has status "Ready":"True"
	I0501 03:45:36.594600   69237 pod_ready.go:81] duration metric: took 9.163923ms for pod "coredns-7db6d8ff4d-mp6f5" in "kube-system" namespace to be "Ready" ...
	I0501 03:45:36.594613   69237 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	I0501 03:45:36.606784   69237 pod_ready.go:92] pod "etcd-default-k8s-diff-port-715118" in "kube-system" namespace has status "Ready":"True"
	I0501 03:45:36.606807   69237 pod_ready.go:81] duration metric: took 12.186129ms for pod "etcd-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	I0501 03:45:36.606819   69237 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	I0501 03:45:36.617373   69237 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-715118" in "kube-system" namespace has status "Ready":"True"
	I0501 03:45:36.617394   69237 pod_ready.go:81] duration metric: took 10.566278ms for pod "kube-apiserver-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	I0501 03:45:36.617404   69237 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	I0501 03:45:36.622441   69237 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-715118" in "kube-system" namespace has status "Ready":"True"
	I0501 03:45:36.622460   69237 pod_ready.go:81] duration metric: took 5.049948ms for pod "kube-controller-manager-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	I0501 03:45:36.622469   69237 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2knrp" in "kube-system" namespace to be "Ready" ...
	I0501 03:45:36.981490   69237 pod_ready.go:92] pod "kube-proxy-2knrp" in "kube-system" namespace has status "Ready":"True"
	I0501 03:45:36.981513   69237 pod_ready.go:81] duration metric: took 359.038927ms for pod "kube-proxy-2knrp" in "kube-system" namespace to be "Ready" ...
	I0501 03:45:36.981523   69237 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	I0501 03:45:37.381970   69237 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-715118" in "kube-system" namespace has status "Ready":"True"
	I0501 03:45:37.381999   69237 pod_ready.go:81] duration metric: took 400.468372ms for pod "kube-scheduler-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	I0501 03:45:37.382011   69237 pod_ready.go:38] duration metric: took 3.312967983s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 03:45:37.382028   69237 api_server.go:52] waiting for apiserver process to appear ...
	I0501 03:45:37.382091   69237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:45:37.401961   69237 api_server.go:72] duration metric: took 3.647829991s to wait for apiserver process to appear ...
	I0501 03:45:37.401992   69237 api_server.go:88] waiting for apiserver healthz status ...
	I0501 03:45:37.402016   69237 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8444/healthz ...
	I0501 03:45:37.407177   69237 api_server.go:279] https://192.168.72.158:8444/healthz returned 200:
	ok
	I0501 03:45:37.408020   69237 api_server.go:141] control plane version: v1.30.0
	I0501 03:45:37.408037   69237 api_server.go:131] duration metric: took 6.036621ms to wait for apiserver health ...
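	The api_server.go lines above poll the apiserver's /healthz endpoint over HTTPS until it answers 200 "ok". A minimal sketch of that polling loop, written as an assumption-heavy illustration rather than minikube's own code (8444 is the non-default apiserver port this profile uses; a real client would trust the cluster CA instead of skipping verification):

    // Illustrative sketch only: poll an HTTPS /healthz endpoint until it returns 200.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    		Timeout:   2 * time.Second,
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Printf("%s returned 200: %s\n", url, body)
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("timed out waiting for %s", url)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.72.158:8444/healthz", 30*time.Second); err != nil {
    		fmt.Println(err)
    	}
    }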
	I0501 03:45:37.408046   69237 system_pods.go:43] waiting for kube-system pods to appear ...
	I0501 03:45:37.586052   69237 system_pods.go:59] 9 kube-system pods found
	I0501 03:45:37.586081   69237 system_pods.go:61] "coredns-7db6d8ff4d-bg755" [884d489a-bc1e-442c-8e00-4616f983d3e9] Running
	I0501 03:45:37.586085   69237 system_pods.go:61] "coredns-7db6d8ff4d-mp6f5" [4c8550d0-0029-48f1-a892-1800f6639c75] Running
	I0501 03:45:37.586090   69237 system_pods.go:61] "etcd-default-k8s-diff-port-715118" [12be9bec-1d84-49ee-898c-499ff75a8026] Running
	I0501 03:45:37.586094   69237 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-715118" [ae9a476b-03cf-4d4d-9990-5e760db82e60] Running
	I0501 03:45:37.586098   69237 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-715118" [542bbe50-58b6-40fb-b81b-0cc2444a3401] Running
	I0501 03:45:37.586101   69237 system_pods.go:61] "kube-proxy-2knrp" [cf1406ff-8a6e-49bb-b180-1e72f4b54fbf] Running
	I0501 03:45:37.586104   69237 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-715118" [d24f02a2-67a9-4f28-9acc-445e0e74a68d] Running
	I0501 03:45:37.586109   69237 system_pods.go:61] "metrics-server-569cc877fc-xwxx9" [a66f5df4-355c-47f0-8b6e-da29e1c4394e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0501 03:45:37.586113   69237 system_pods.go:61] "storage-provisioner" [debb3a59-143a-46d3-87da-c2403e264861] Running
	I0501 03:45:37.586123   69237 system_pods.go:74] duration metric: took 178.07045ms to wait for pod list to return data ...
	I0501 03:45:37.586132   69237 default_sa.go:34] waiting for default service account to be created ...
	I0501 03:45:37.780696   69237 default_sa.go:45] found service account: "default"
	I0501 03:45:37.780720   69237 default_sa.go:55] duration metric: took 194.582743ms for default service account to be created ...
	I0501 03:45:37.780728   69237 system_pods.go:116] waiting for k8s-apps to be running ...
	I0501 03:45:37.985342   69237 system_pods.go:86] 9 kube-system pods found
	I0501 03:45:37.985368   69237 system_pods.go:89] "coredns-7db6d8ff4d-bg755" [884d489a-bc1e-442c-8e00-4616f983d3e9] Running
	I0501 03:45:37.985374   69237 system_pods.go:89] "coredns-7db6d8ff4d-mp6f5" [4c8550d0-0029-48f1-a892-1800f6639c75] Running
	I0501 03:45:37.985378   69237 system_pods.go:89] "etcd-default-k8s-diff-port-715118" [12be9bec-1d84-49ee-898c-499ff75a8026] Running
	I0501 03:45:37.985383   69237 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-715118" [ae9a476b-03cf-4d4d-9990-5e760db82e60] Running
	I0501 03:45:37.985387   69237 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-715118" [542bbe50-58b6-40fb-b81b-0cc2444a3401] Running
	I0501 03:45:37.985391   69237 system_pods.go:89] "kube-proxy-2knrp" [cf1406ff-8a6e-49bb-b180-1e72f4b54fbf] Running
	I0501 03:45:37.985395   69237 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-715118" [d24f02a2-67a9-4f28-9acc-445e0e74a68d] Running
	I0501 03:45:37.985401   69237 system_pods.go:89] "metrics-server-569cc877fc-xwxx9" [a66f5df4-355c-47f0-8b6e-da29e1c4394e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0501 03:45:37.985405   69237 system_pods.go:89] "storage-provisioner" [debb3a59-143a-46d3-87da-c2403e264861] Running
	I0501 03:45:37.985412   69237 system_pods.go:126] duration metric: took 204.679545ms to wait for k8s-apps to be running ...
	I0501 03:45:37.985418   69237 system_svc.go:44] waiting for kubelet service to be running ....
	I0501 03:45:37.985463   69237 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 03:45:38.002421   69237 system_svc.go:56] duration metric: took 16.992346ms WaitForService to wait for kubelet
	I0501 03:45:38.002458   69237 kubeadm.go:576] duration metric: took 4.248332952s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0501 03:45:38.002477   69237 node_conditions.go:102] verifying NodePressure condition ...
	I0501 03:45:38.181465   69237 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 03:45:38.181496   69237 node_conditions.go:123] node cpu capacity is 2
	I0501 03:45:38.181510   69237 node_conditions.go:105] duration metric: took 179.027834ms to run NodePressure ...
	I0501 03:45:38.181524   69237 start.go:240] waiting for startup goroutines ...
	I0501 03:45:38.181534   69237 start.go:245] waiting for cluster config update ...
	I0501 03:45:38.181547   69237 start.go:254] writing updated cluster config ...
	I0501 03:45:38.181810   69237 ssh_runner.go:195] Run: rm -f paused
	I0501 03:45:38.244075   69237 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0501 03:45:38.246261   69237 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-715118" cluster and "default" namespace by default
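	At this point the kubeconfig has been updated and the "default-k8s-diff-port-715118" context is active. A hypothetical way to double-check the result programmatically, assuming client-go is available and using the kubeconfig path from the log as a placeholder:

    // Illustrative sketch only: list cluster nodes via client-go using the freshly written kubeconfig.
    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Placeholder path matching the Updating kubeconfig line earlier in the log.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18779-13391/kubeconfig")
    	if err != nil {
    		log.Fatal(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		log.Fatal(err)
    	}
    	for _, n := range nodes.Items {
    		fmt.Printf("node %s: kubelet %s\n", n.Name, n.Status.NodeInfo.KubeletVersion)
    	}
    }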
	I0501 03:45:46.209837   69580 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0501 03:45:46.210120   69580 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0501 03:46:06.211471   69580 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0501 03:46:06.211673   69580 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0501 03:46:09.967666   68640 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.628454657s)
	I0501 03:46:09.967737   68640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 03:46:09.985802   68640 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0501 03:46:09.996494   68640 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0501 03:46:10.006956   68640 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0501 03:46:10.006979   68640 kubeadm.go:156] found existing configuration files:
	
	I0501 03:46:10.007025   68640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0501 03:46:10.017112   68640 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0501 03:46:10.017174   68640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0501 03:46:10.027747   68640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0501 03:46:10.037853   68640 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0501 03:46:10.037910   68640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0501 03:46:10.048023   68640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0501 03:46:10.057354   68640 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0501 03:46:10.057408   68640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0501 03:46:10.067352   68640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0501 03:46:10.076696   68640 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0501 03:46:10.076741   68640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0501 03:46:10.086799   68640 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0501 03:46:10.150816   68640 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0501 03:46:10.150871   68640 kubeadm.go:309] [preflight] Running pre-flight checks
	I0501 03:46:10.325430   68640 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0501 03:46:10.325546   68640 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0501 03:46:10.325669   68640 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0501 03:46:10.581934   68640 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0501 03:46:10.585119   68640 out.go:204]   - Generating certificates and keys ...
	I0501 03:46:10.585221   68640 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0501 03:46:10.585319   68640 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0501 03:46:10.585416   68640 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0501 03:46:10.585522   68640 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0501 03:46:10.585620   68640 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0501 03:46:10.585695   68640 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0501 03:46:10.585781   68640 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0501 03:46:10.585861   68640 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0501 03:46:10.585959   68640 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0501 03:46:10.586064   68640 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0501 03:46:10.586116   68640 kubeadm.go:309] [certs] Using the existing "sa" key
	I0501 03:46:10.586208   68640 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0501 03:46:10.789482   68640 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0501 03:46:10.991219   68640 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0501 03:46:11.194897   68640 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0501 03:46:11.411926   68640 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0501 03:46:11.994791   68640 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0501 03:46:11.995468   68640 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0501 03:46:11.998463   68640 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0501 03:46:12.000394   68640 out.go:204]   - Booting up control plane ...
	I0501 03:46:12.000521   68640 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0501 03:46:12.000632   68640 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0501 03:46:12.000721   68640 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0501 03:46:12.022371   68640 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0501 03:46:12.023628   68640 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0501 03:46:12.023709   68640 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0501 03:46:12.178475   68640 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0501 03:46:12.178615   68640 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0501 03:46:12.680307   68640 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 502.179909ms
	I0501 03:46:12.680409   68640 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0501 03:46:18.182830   68640 kubeadm.go:309] [api-check] The API server is healthy after 5.502227274s
	I0501 03:46:18.197822   68640 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0501 03:46:18.217282   68640 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0501 03:46:18.247591   68640 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0501 03:46:18.247833   68640 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-892672 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0501 03:46:18.259687   68640 kubeadm.go:309] [bootstrap-token] Using token: 8rc6kt.ele1oeavg6hezahw
	I0501 03:46:18.261204   68640 out.go:204]   - Configuring RBAC rules ...
	I0501 03:46:18.261333   68640 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0501 03:46:18.272461   68640 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0501 03:46:18.284615   68640 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0501 03:46:18.288686   68640 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0501 03:46:18.292005   68640 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0501 03:46:18.295772   68640 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0501 03:46:18.591035   68640 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0501 03:46:19.028299   68640 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0501 03:46:19.598192   68640 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0501 03:46:19.598219   68640 kubeadm.go:309] 
	I0501 03:46:19.598323   68640 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0501 03:46:19.598337   68640 kubeadm.go:309] 
	I0501 03:46:19.598490   68640 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0501 03:46:19.598514   68640 kubeadm.go:309] 
	I0501 03:46:19.598542   68640 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0501 03:46:19.598604   68640 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0501 03:46:19.598648   68640 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0501 03:46:19.598673   68640 kubeadm.go:309] 
	I0501 03:46:19.598771   68640 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0501 03:46:19.598784   68640 kubeadm.go:309] 
	I0501 03:46:19.598850   68640 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0501 03:46:19.598860   68640 kubeadm.go:309] 
	I0501 03:46:19.598963   68640 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0501 03:46:19.599069   68640 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0501 03:46:19.599167   68640 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0501 03:46:19.599183   68640 kubeadm.go:309] 
	I0501 03:46:19.599283   68640 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0501 03:46:19.599389   68640 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0501 03:46:19.599400   68640 kubeadm.go:309] 
	I0501 03:46:19.599500   68640 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 8rc6kt.ele1oeavg6hezahw \
	I0501 03:46:19.599626   68640 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:bd94cc6acfb95fe36f70b12c6a1f2980601b8235e7c4dea65cebdf35eb514754 \
	I0501 03:46:19.599666   68640 kubeadm.go:309] 	--control-plane 
	I0501 03:46:19.599676   68640 kubeadm.go:309] 
	I0501 03:46:19.599779   68640 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0501 03:46:19.599807   68640 kubeadm.go:309] 
	I0501 03:46:19.599931   68640 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 8rc6kt.ele1oeavg6hezahw \
	I0501 03:46:19.600079   68640 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:bd94cc6acfb95fe36f70b12c6a1f2980601b8235e7c4dea65cebdf35eb514754 
	I0501 03:46:19.600763   68640 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0501 03:46:19.600786   68640 cni.go:84] Creating CNI manager for ""
	I0501 03:46:19.600792   68640 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0501 03:46:19.602473   68640 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0501 03:46:19.603816   68640 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0501 03:46:19.621706   68640 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0501 03:46:19.649643   68640 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0501 03:46:19.649762   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:19.649787   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-892672 minikube.k8s.io/updated_at=2024_05_01T03_46_19_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e minikube.k8s.io/name=no-preload-892672 minikube.k8s.io/primary=true
	I0501 03:46:19.892482   68640 ops.go:34] apiserver oom_adj: -16
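	The ops.go line above reports the value read by the earlier "cat /proc/$(pgrep kube-apiserver)/oom_adj" command; -16 tells the kernel OOM killer to strongly prefer other processes. An assumed-equivalent Go sketch of that check (not the actual ops.go code):

    // Illustrative sketch only: find the kube-apiserver PID and read its oom_adj.
    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	out, err := exec.Command("pgrep", "kube-apiserver").Output()
    	if err != nil {
    		log.Fatalf("pgrep: %v", err) // pgrep exits non-zero if no process matches
    	}
    	pid := strings.Fields(string(out))[0] // take the first matching PID
    	raw, err := os.ReadFile("/proc/" + pid + "/oom_adj")
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Printf("apiserver oom_adj: %s\n", strings.TrimSpace(string(raw)))
    }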
	I0501 03:46:19.892631   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:20.393436   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:20.893412   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:21.393634   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:21.893273   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:22.393031   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:22.893498   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:23.393599   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:23.893024   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:24.393544   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:24.893431   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:25.393290   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:25.892718   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:26.392928   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:26.893101   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:27.393045   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:27.892722   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:28.393102   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:28.892871   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:29.392650   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:29.893034   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:30.393561   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:30.893661   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:31.393235   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:31.892889   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:32.393263   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:32.893427   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:33.046965   68640 kubeadm.go:1107] duration metric: took 13.397277159s to wait for elevateKubeSystemPrivileges
	W0501 03:46:33.047010   68640 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0501 03:46:33.047020   68640 kubeadm.go:393] duration metric: took 5m15.038324633s to StartCluster
	I0501 03:46:33.047042   68640 settings.go:142] acquiring lock: {Name:mkcfe08bcadd45d99c00ed1eaf4e9c226a489fe2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:46:33.047126   68640 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18779-13391/kubeconfig
	I0501 03:46:33.048731   68640 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/kubeconfig: {Name:mk5d1131557f885800877622151c0915337cb23a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:46:33.048988   68640 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.144 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0501 03:46:33.050376   68640 out.go:177] * Verifying Kubernetes components...
	I0501 03:46:33.049030   68640 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0501 03:46:33.049253   68640 config.go:182] Loaded profile config "no-preload-892672": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 03:46:33.051595   68640 addons.go:69] Setting storage-provisioner=true in profile "no-preload-892672"
	I0501 03:46:33.051616   68640 addons.go:69] Setting metrics-server=true in profile "no-preload-892672"
	I0501 03:46:33.051639   68640 addons.go:234] Setting addon storage-provisioner=true in "no-preload-892672"
	I0501 03:46:33.051644   68640 addons.go:234] Setting addon metrics-server=true in "no-preload-892672"
	W0501 03:46:33.051649   68640 addons.go:243] addon storage-provisioner should already be in state true
	W0501 03:46:33.051653   68640 addons.go:243] addon metrics-server should already be in state true
	I0501 03:46:33.051675   68640 host.go:66] Checking if "no-preload-892672" exists ...
	I0501 03:46:33.051680   68640 host.go:66] Checking if "no-preload-892672" exists ...
	I0501 03:46:33.051599   68640 addons.go:69] Setting default-storageclass=true in profile "no-preload-892672"
	I0501 03:46:33.051760   68640 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-892672"
	I0501 03:46:33.051600   68640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:46:33.052016   68640 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:46:33.052047   68640 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:46:33.052064   68640 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:46:33.052095   68640 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:46:33.052110   68640 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:46:33.052135   68640 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:46:33.068515   68640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46407
	I0501 03:46:33.069115   68640 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:46:33.069702   68640 main.go:141] libmachine: Using API Version  1
	I0501 03:46:33.069728   68640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:46:33.070085   68640 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:46:33.070731   68640 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:46:33.070763   68640 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:46:33.072166   68640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46609
	I0501 03:46:33.072179   68640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46109
	I0501 03:46:33.072632   68640 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:46:33.072770   68640 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:46:33.073161   68640 main.go:141] libmachine: Using API Version  1
	I0501 03:46:33.073180   68640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:46:33.073318   68640 main.go:141] libmachine: Using API Version  1
	I0501 03:46:33.073333   68640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:46:33.073467   68640 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:46:33.073893   68640 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:46:33.074056   68640 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:46:33.074065   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetState
	I0501 03:46:33.074092   68640 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:46:33.077976   68640 addons.go:234] Setting addon default-storageclass=true in "no-preload-892672"
	W0501 03:46:33.077997   68640 addons.go:243] addon default-storageclass should already be in state true
	I0501 03:46:33.078110   68640 host.go:66] Checking if "no-preload-892672" exists ...
	I0501 03:46:33.078535   68640 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:46:33.078566   68640 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:46:33.092605   68640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39707
	I0501 03:46:33.092996   68640 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:46:33.093578   68640 main.go:141] libmachine: Using API Version  1
	I0501 03:46:33.093597   68640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:46:33.093602   68640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36671
	I0501 03:46:33.093778   68640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34727
	I0501 03:46:33.093893   68640 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:46:33.094117   68640 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:46:33.094169   68640 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:46:33.094250   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetState
	I0501 03:46:33.094577   68640 main.go:141] libmachine: Using API Version  1
	I0501 03:46:33.094602   68640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:46:33.094986   68640 main.go:141] libmachine: Using API Version  1
	I0501 03:46:33.095004   68640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:46:33.095062   68640 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:46:33.095389   68640 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:46:33.096401   68640 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:46:33.096423   68640 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:46:33.096665   68640 main.go:141] libmachine: (no-preload-892672) Calling .DriverName
	I0501 03:46:33.096678   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetState
	I0501 03:46:33.098465   68640 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 03:46:33.099842   68640 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0501 03:46:33.099861   68640 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0501 03:46:33.099879   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:46:33.098734   68640 main.go:141] libmachine: (no-preload-892672) Calling .DriverName
	I0501 03:46:33.101305   68640 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0501 03:46:33.102491   68640 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0501 03:46:33.102512   68640 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0501 03:46:33.102531   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:46:33.103006   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:46:33.103617   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:46:33.103641   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:46:33.103799   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHPort
	I0501 03:46:33.103977   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:46:33.104143   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHUsername
	I0501 03:46:33.104272   68640 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/no-preload-892672/id_rsa Username:docker}
	I0501 03:46:33.105452   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:46:33.105795   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:46:33.105821   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:46:33.106142   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHPort
	I0501 03:46:33.106290   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:46:33.106392   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHUsername
	I0501 03:46:33.106511   68640 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/no-preload-892672/id_rsa Username:docker}
	I0501 03:46:33.113012   68640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42615
	I0501 03:46:33.113365   68640 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:46:33.113813   68640 main.go:141] libmachine: Using API Version  1
	I0501 03:46:33.113834   68640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:46:33.114127   68640 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:46:33.114304   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetState
	I0501 03:46:33.115731   68640 main.go:141] libmachine: (no-preload-892672) Calling .DriverName
	I0501 03:46:33.115997   68640 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0501 03:46:33.116010   68640 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0501 03:46:33.116023   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:46:33.119272   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:46:33.119644   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:46:33.119661   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:46:33.119845   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHPort
	I0501 03:46:33.120223   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:46:33.120358   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHUsername
	I0501 03:46:33.120449   68640 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/no-preload-892672/id_rsa Username:docker}
	I0501 03:46:33.296711   68640 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 03:46:33.342215   68640 node_ready.go:35] waiting up to 6m0s for node "no-preload-892672" to be "Ready" ...
	I0501 03:46:33.355677   68640 node_ready.go:49] node "no-preload-892672" has status "Ready":"True"
	I0501 03:46:33.355707   68640 node_ready.go:38] duration metric: took 13.392381ms for node "no-preload-892672" to be "Ready" ...
	I0501 03:46:33.355718   68640 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 03:46:33.367706   68640 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-57k52" in "kube-system" namespace to be "Ready" ...
	I0501 03:46:33.413444   68640 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0501 03:46:33.418869   68640 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0501 03:46:33.439284   68640 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0501 03:46:33.439318   68640 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0501 03:46:33.512744   68640 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0501 03:46:33.512768   68640 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0501 03:46:33.594777   68640 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0501 03:46:33.594798   68640 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0501 03:46:33.658506   68640 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0501 03:46:34.013890   68640 main.go:141] libmachine: Making call to close driver server
	I0501 03:46:34.013919   68640 main.go:141] libmachine: (no-preload-892672) Calling .Close
	I0501 03:46:34.014023   68640 main.go:141] libmachine: Making call to close driver server
	I0501 03:46:34.014056   68640 main.go:141] libmachine: (no-preload-892672) Calling .Close
	I0501 03:46:34.014250   68640 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:46:34.014284   68640 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:46:34.014297   68640 main.go:141] libmachine: Making call to close driver server
	I0501 03:46:34.014306   68640 main.go:141] libmachine: (no-preload-892672) Calling .Close
	I0501 03:46:34.014353   68640 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:46:34.014370   68640 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:46:34.014383   68640 main.go:141] libmachine: Making call to close driver server
	I0501 03:46:34.014393   68640 main.go:141] libmachine: (no-preload-892672) Calling .Close
	I0501 03:46:34.014642   68640 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:46:34.014664   68640 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:46:34.016263   68640 main.go:141] libmachine: (no-preload-892672) DBG | Closing plugin on server side
	I0501 03:46:34.016263   68640 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:46:34.016288   68640 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:46:34.031961   68640 main.go:141] libmachine: Making call to close driver server
	I0501 03:46:34.031996   68640 main.go:141] libmachine: (no-preload-892672) Calling .Close
	I0501 03:46:34.032303   68640 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:46:34.032324   68640 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:46:34.260154   68640 main.go:141] libmachine: Making call to close driver server
	I0501 03:46:34.260180   68640 main.go:141] libmachine: (no-preload-892672) Calling .Close
	I0501 03:46:34.260600   68640 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:46:34.260629   68640 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:46:34.260641   68640 main.go:141] libmachine: Making call to close driver server
	I0501 03:46:34.260650   68640 main.go:141] libmachine: (no-preload-892672) Calling .Close
	I0501 03:46:34.260876   68640 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:46:34.260888   68640 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:46:34.260899   68640 addons.go:470] Verifying addon metrics-server=true in "no-preload-892672"
	I0501 03:46:34.262520   68640 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0501 03:46:34.264176   68640 addons.go:505] duration metric: took 1.215147486s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0501 03:46:35.384910   68640 pod_ready.go:102] pod "coredns-7db6d8ff4d-57k52" in "kube-system" namespace has status "Ready":"False"
	I0501 03:46:36.377298   68640 pod_ready.go:92] pod "coredns-7db6d8ff4d-57k52" in "kube-system" namespace has status "Ready":"True"
	I0501 03:46:36.377321   68640 pod_ready.go:81] duration metric: took 3.009581117s for pod "coredns-7db6d8ff4d-57k52" in "kube-system" namespace to be "Ready" ...
	I0501 03:46:36.377331   68640 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-c6lnj" in "kube-system" namespace to be "Ready" ...
	I0501 03:46:36.383022   68640 pod_ready.go:92] pod "coredns-7db6d8ff4d-c6lnj" in "kube-system" namespace has status "Ready":"True"
	I0501 03:46:36.383042   68640 pod_ready.go:81] duration metric: took 5.704691ms for pod "coredns-7db6d8ff4d-c6lnj" in "kube-system" namespace to be "Ready" ...
	I0501 03:46:36.383051   68640 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	I0501 03:46:36.387456   68640 pod_ready.go:92] pod "etcd-no-preload-892672" in "kube-system" namespace has status "Ready":"True"
	I0501 03:46:36.387476   68640 pod_ready.go:81] duration metric: took 4.418883ms for pod "etcd-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	I0501 03:46:36.387485   68640 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	I0501 03:46:36.392348   68640 pod_ready.go:92] pod "kube-apiserver-no-preload-892672" in "kube-system" namespace has status "Ready":"True"
	I0501 03:46:36.392366   68640 pod_ready.go:81] duration metric: took 4.874928ms for pod "kube-apiserver-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	I0501 03:46:36.392375   68640 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	I0501 03:46:36.397155   68640 pod_ready.go:92] pod "kube-controller-manager-no-preload-892672" in "kube-system" namespace has status "Ready":"True"
	I0501 03:46:36.397175   68640 pod_ready.go:81] duration metric: took 4.794583ms for pod "kube-controller-manager-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	I0501 03:46:36.397185   68640 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-czsqz" in "kube-system" namespace to be "Ready" ...
	I0501 03:46:36.774003   68640 pod_ready.go:92] pod "kube-proxy-czsqz" in "kube-system" namespace has status "Ready":"True"
	I0501 03:46:36.774025   68640 pod_ready.go:81] duration metric: took 376.83321ms for pod "kube-proxy-czsqz" in "kube-system" namespace to be "Ready" ...
	I0501 03:46:36.774036   68640 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	I0501 03:46:37.171504   68640 pod_ready.go:92] pod "kube-scheduler-no-preload-892672" in "kube-system" namespace has status "Ready":"True"
	I0501 03:46:37.171526   68640 pod_ready.go:81] duration metric: took 397.484706ms for pod "kube-scheduler-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	I0501 03:46:37.171535   68640 pod_ready.go:38] duration metric: took 3.815806043s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 03:46:37.171549   68640 api_server.go:52] waiting for apiserver process to appear ...
	I0501 03:46:37.171609   68640 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:46:37.189446   68640 api_server.go:72] duration metric: took 4.140414812s to wait for apiserver process to appear ...
	I0501 03:46:37.189473   68640 api_server.go:88] waiting for apiserver healthz status ...
	I0501 03:46:37.189494   68640 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0501 03:46:37.195052   68640 api_server.go:279] https://192.168.39.144:8443/healthz returned 200:
	ok
	I0501 03:46:37.196163   68640 api_server.go:141] control plane version: v1.30.0
	I0501 03:46:37.196183   68640 api_server.go:131] duration metric: took 6.703804ms to wait for apiserver health ...
	I0501 03:46:37.196191   68640 system_pods.go:43] waiting for kube-system pods to appear ...
	I0501 03:46:37.375742   68640 system_pods.go:59] 9 kube-system pods found
	I0501 03:46:37.375775   68640 system_pods.go:61] "coredns-7db6d8ff4d-57k52" [f98cb358-71ba-49c5-8213-0f3160c6e38b] Running
	I0501 03:46:37.375784   68640 system_pods.go:61] "coredns-7db6d8ff4d-c6lnj" [f8b8c1f1-7696-43f2-98be-339f99963e7c] Running
	I0501 03:46:37.375789   68640 system_pods.go:61] "etcd-no-preload-892672" [5f92eb1b-6611-4663-95f0-8c071a3a37c9] Running
	I0501 03:46:37.375796   68640 system_pods.go:61] "kube-apiserver-no-preload-892672" [90bcaa82-61b0-49d5-b50c-76288b099683] Running
	I0501 03:46:37.375804   68640 system_pods.go:61] "kube-controller-manager-no-preload-892672" [f80af654-aa81-4cd2-b5ce-4f31f6e49e9f] Running
	I0501 03:46:37.375809   68640 system_pods.go:61] "kube-proxy-czsqz" [4254b019-b6c8-4ff9-a361-c96eaf20dc65] Running
	I0501 03:46:37.375813   68640 system_pods.go:61] "kube-scheduler-no-preload-892672" [6753a5df-86d1-47bf-9514-6b8352acf969] Running
	I0501 03:46:37.375824   68640 system_pods.go:61] "metrics-server-569cc877fc-5m5qf" [a1ec3e6c-fe90-4168-b0ec-54f82f17b46d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0501 03:46:37.375830   68640 system_pods.go:61] "storage-provisioner" [b55b7e8b-4de0-40f8-96ff-bf0b550699d1] Running
	I0501 03:46:37.375841   68640 system_pods.go:74] duration metric: took 179.642731ms to wait for pod list to return data ...
	I0501 03:46:37.375857   68640 default_sa.go:34] waiting for default service account to be created ...
	I0501 03:46:37.572501   68640 default_sa.go:45] found service account: "default"
	I0501 03:46:37.572530   68640 default_sa.go:55] duration metric: took 196.664812ms for default service account to be created ...
	I0501 03:46:37.572542   68640 system_pods.go:116] waiting for k8s-apps to be running ...
	I0501 03:46:37.778012   68640 system_pods.go:86] 9 kube-system pods found
	I0501 03:46:37.778053   68640 system_pods.go:89] "coredns-7db6d8ff4d-57k52" [f98cb358-71ba-49c5-8213-0f3160c6e38b] Running
	I0501 03:46:37.778062   68640 system_pods.go:89] "coredns-7db6d8ff4d-c6lnj" [f8b8c1f1-7696-43f2-98be-339f99963e7c] Running
	I0501 03:46:37.778068   68640 system_pods.go:89] "etcd-no-preload-892672" [5f92eb1b-6611-4663-95f0-8c071a3a37c9] Running
	I0501 03:46:37.778075   68640 system_pods.go:89] "kube-apiserver-no-preload-892672" [90bcaa82-61b0-49d5-b50c-76288b099683] Running
	I0501 03:46:37.778082   68640 system_pods.go:89] "kube-controller-manager-no-preload-892672" [f80af654-aa81-4cd2-b5ce-4f31f6e49e9f] Running
	I0501 03:46:37.778088   68640 system_pods.go:89] "kube-proxy-czsqz" [4254b019-b6c8-4ff9-a361-c96eaf20dc65] Running
	I0501 03:46:37.778094   68640 system_pods.go:89] "kube-scheduler-no-preload-892672" [6753a5df-86d1-47bf-9514-6b8352acf969] Running
	I0501 03:46:37.778104   68640 system_pods.go:89] "metrics-server-569cc877fc-5m5qf" [a1ec3e6c-fe90-4168-b0ec-54f82f17b46d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0501 03:46:37.778112   68640 system_pods.go:89] "storage-provisioner" [b55b7e8b-4de0-40f8-96ff-bf0b550699d1] Running
	I0501 03:46:37.778127   68640 system_pods.go:126] duration metric: took 205.578312ms to wait for k8s-apps to be running ...
	I0501 03:46:37.778148   68640 system_svc.go:44] waiting for kubelet service to be running ....
	I0501 03:46:37.778215   68640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 03:46:37.794660   68640 system_svc.go:56] duration metric: took 16.509214ms WaitForService to wait for kubelet
	I0501 03:46:37.794694   68640 kubeadm.go:576] duration metric: took 4.745668881s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0501 03:46:37.794721   68640 node_conditions.go:102] verifying NodePressure condition ...
	I0501 03:46:37.972621   68640 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 03:46:37.972647   68640 node_conditions.go:123] node cpu capacity is 2
	I0501 03:46:37.972660   68640 node_conditions.go:105] duration metric: took 177.933367ms to run NodePressure ...
	I0501 03:46:37.972676   68640 start.go:240] waiting for startup goroutines ...
	I0501 03:46:37.972684   68640 start.go:245] waiting for cluster config update ...
	I0501 03:46:37.972699   68640 start.go:254] writing updated cluster config ...
	I0501 03:46:37.972951   68640 ssh_runner.go:195] Run: rm -f paused
	I0501 03:46:38.023054   68640 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0501 03:46:38.025098   68640 out.go:177] * Done! kubectl is now configured to use "no-preload-892672" cluster and "default" namespace by default
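	For reference, the readiness checks logged above can be reproduced by hand. A minimal sketch, assuming the same apiserver endpoint (https://192.168.39.144:8443) and profile name ("no-preload-892672") that appear in this log; the curl call mirrors the healthz probe at 03:46:37 and the kubectl calls mirror the final context switch:
	
		# probe apiserver health directly (self-signed cert, hence -k), as api_server.go does above
		curl -k https://192.168.39.144:8443/healthz
		# confirm kubectl now targets the no-preload-892672 cluster and list kube-system pods
		kubectl config current-context
		kubectl get pods -n kube-system
	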
	I0501 03:46:46.214470   69580 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0501 03:46:46.214695   69580 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0501 03:46:46.214721   69580 kubeadm.go:309] 
	I0501 03:46:46.214770   69580 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0501 03:46:46.214837   69580 kubeadm.go:309] 		timed out waiting for the condition
	I0501 03:46:46.214875   69580 kubeadm.go:309] 
	I0501 03:46:46.214936   69580 kubeadm.go:309] 	This error is likely caused by:
	I0501 03:46:46.214983   69580 kubeadm.go:309] 		- The kubelet is not running
	I0501 03:46:46.215076   69580 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0501 03:46:46.215084   69580 kubeadm.go:309] 
	I0501 03:46:46.215169   69580 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0501 03:46:46.215201   69580 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0501 03:46:46.215233   69580 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0501 03:46:46.215239   69580 kubeadm.go:309] 
	I0501 03:46:46.215380   69580 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0501 03:46:46.215489   69580 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0501 03:46:46.215505   69580 kubeadm.go:309] 
	I0501 03:46:46.215657   69580 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0501 03:46:46.215782   69580 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0501 03:46:46.215882   69580 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0501 03:46:46.215972   69580 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0501 03:46:46.215984   69580 kubeadm.go:309] 
	I0501 03:46:46.217243   69580 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0501 03:46:46.217352   69580 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0501 03:46:46.217426   69580 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0501 03:46:46.217550   69580 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0501 03:46:46.217611   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0501 03:46:47.375634   69580 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.157990231s)
	I0501 03:46:47.375723   69580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 03:46:47.392333   69580 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0501 03:46:47.404983   69580 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0501 03:46:47.405007   69580 kubeadm.go:156] found existing configuration files:
	
	I0501 03:46:47.405054   69580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0501 03:46:47.417437   69580 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0501 03:46:47.417501   69580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0501 03:46:47.429929   69580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0501 03:46:47.441141   69580 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0501 03:46:47.441215   69580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0501 03:46:47.453012   69580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0501 03:46:47.463702   69580 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0501 03:46:47.463759   69580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0501 03:46:47.474783   69580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0501 03:46:47.485793   69580 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0501 03:46:47.485853   69580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
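	The cleanup above amounts to: keep each kubeconfig under /etc/kubernetes only if it already points at https://control-plane.minikube.internal:8443, otherwise remove it before retrying kubeadm init. A minimal shell sketch of that check, using the same four files and grep target as the log (illustrative only, not minikube's actual code):
	
		for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
		  # drop the file when the expected control-plane endpoint is absent (or the file is missing)
		  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
		    || sudo rm -f "/etc/kubernetes/$f"
		done
	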
	I0501 03:46:47.497706   69580 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0501 03:46:47.588221   69580 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0501 03:46:47.588340   69580 kubeadm.go:309] [preflight] Running pre-flight checks
	I0501 03:46:47.759631   69580 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0501 03:46:47.759801   69580 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0501 03:46:47.759949   69580 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0501 03:46:47.978077   69580 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0501 03:46:47.980130   69580 out.go:204]   - Generating certificates and keys ...
	I0501 03:46:47.980240   69580 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0501 03:46:47.980323   69580 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0501 03:46:47.980455   69580 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0501 03:46:47.980579   69580 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0501 03:46:47.980679   69580 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0501 03:46:47.980771   69580 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0501 03:46:47.980864   69580 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0501 03:46:47.981256   69580 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0501 03:46:47.981616   69580 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0501 03:46:47.981858   69580 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0501 03:46:47.981907   69580 kubeadm.go:309] [certs] Using the existing "sa" key
	I0501 03:46:47.981991   69580 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0501 03:46:48.100377   69580 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0501 03:46:48.463892   69580 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0501 03:46:48.521991   69580 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0501 03:46:48.735222   69580 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0501 03:46:48.753098   69580 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0501 03:46:48.756950   69580 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0501 03:46:48.757379   69580 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0501 03:46:48.937039   69580 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0501 03:46:48.939065   69580 out.go:204]   - Booting up control plane ...
	I0501 03:46:48.939183   69580 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0501 03:46:48.961380   69580 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0501 03:46:48.962890   69580 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0501 03:46:48.963978   69580 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0501 03:46:48.971754   69580 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0501 03:47:28.974873   69580 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0501 03:47:28.975296   69580 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0501 03:47:28.975545   69580 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0501 03:47:33.976469   69580 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0501 03:47:33.976699   69580 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0501 03:47:43.977443   69580 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0501 03:47:43.977663   69580 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0501 03:48:03.979113   69580 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0501 03:48:03.979409   69580 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0501 03:48:43.982479   69580 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0501 03:48:43.982781   69580 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0501 03:48:43.983363   69580 kubeadm.go:309] 
	I0501 03:48:43.983427   69580 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0501 03:48:43.983484   69580 kubeadm.go:309] 		timed out waiting for the condition
	I0501 03:48:43.983490   69580 kubeadm.go:309] 
	I0501 03:48:43.983520   69580 kubeadm.go:309] 	This error is likely caused by:
	I0501 03:48:43.983547   69580 kubeadm.go:309] 		- The kubelet is not running
	I0501 03:48:43.983633   69580 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0501 03:48:43.983637   69580 kubeadm.go:309] 
	I0501 03:48:43.983721   69580 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0501 03:48:43.983748   69580 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0501 03:48:43.983774   69580 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0501 03:48:43.983778   69580 kubeadm.go:309] 
	I0501 03:48:43.983861   69580 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0501 03:48:43.983928   69580 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0501 03:48:43.983932   69580 kubeadm.go:309] 
	I0501 03:48:43.984023   69580 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0501 03:48:43.984094   69580 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0501 03:48:43.984155   69580 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0501 03:48:43.984212   69580 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0501 03:48:43.984216   69580 kubeadm.go:309] 
	I0501 03:48:43.985577   69580 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0501 03:48:43.985777   69580 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0501 03:48:43.985875   69580 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0501 03:48:43.985971   69580 kubeadm.go:393] duration metric: took 8m0.315126498s to StartCluster
	I0501 03:48:43.986025   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:48:43.986092   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:48:44.038296   69580 cri.go:89] found id: ""
	I0501 03:48:44.038328   69580 logs.go:276] 0 containers: []
	W0501 03:48:44.038339   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:48:44.038346   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:48:44.038426   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:48:44.081855   69580 cri.go:89] found id: ""
	I0501 03:48:44.081891   69580 logs.go:276] 0 containers: []
	W0501 03:48:44.081904   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:48:44.081913   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:48:44.081996   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:48:44.131400   69580 cri.go:89] found id: ""
	I0501 03:48:44.131435   69580 logs.go:276] 0 containers: []
	W0501 03:48:44.131445   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:48:44.131451   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:48:44.131519   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:48:44.178274   69580 cri.go:89] found id: ""
	I0501 03:48:44.178302   69580 logs.go:276] 0 containers: []
	W0501 03:48:44.178310   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:48:44.178316   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:48:44.178376   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:48:44.223087   69580 cri.go:89] found id: ""
	I0501 03:48:44.223115   69580 logs.go:276] 0 containers: []
	W0501 03:48:44.223125   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:48:44.223133   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:48:44.223196   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:48:44.266093   69580 cri.go:89] found id: ""
	I0501 03:48:44.266122   69580 logs.go:276] 0 containers: []
	W0501 03:48:44.266135   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:48:44.266143   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:48:44.266204   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:48:44.307766   69580 cri.go:89] found id: ""
	I0501 03:48:44.307795   69580 logs.go:276] 0 containers: []
	W0501 03:48:44.307806   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:48:44.307813   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:48:44.307876   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:48:44.348548   69580 cri.go:89] found id: ""
	I0501 03:48:44.348576   69580 logs.go:276] 0 containers: []
	W0501 03:48:44.348585   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:48:44.348594   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:48:44.348614   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:48:44.394160   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:48:44.394209   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:48:44.449845   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:48:44.449879   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:48:44.467663   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:48:44.467694   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:48:44.556150   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:48:44.556183   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:48:44.556199   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0501 03:48:44.661110   69580 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0501 03:48:44.661169   69580 out.go:239] * 
	W0501 03:48:44.661226   69580 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0501 03:48:44.661246   69580 out.go:239] * 
	W0501 03:48:44.662064   69580 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0501 03:48:44.665608   69580 out.go:177] 
	W0501 03:48:44.666799   69580 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0501 03:48:44.666851   69580 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0501 03:48:44.666870   69580 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0501 03:48:44.668487   69580 out.go:177] 
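	The failure above ends with kubeadm's troubleshooting hints and minikube's cgroup-driver suggestion. Gathered in one place as a hedged checklist (the first three commands are quoted from the output above; the final retry line is an assumption that reuses the --extra-config hint and the profile name "old-k8s-version-503971" seen in the CRI-O log below):
	
		# check kubelet health, as suggested by kubeadm
		systemctl status kubelet
		journalctl -xeu kubelet
		# look for crashed control-plane containers via CRI-O
		crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
		# possible retry with the suggested kubelet cgroup driver
		minikube start -p old-k8s-version-503971 --extra-config=kubelet.cgroup-driver=systemd
	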
	
	
	==> CRI-O <==
	May 01 03:57:50 old-k8s-version-503971 crio[647]: time="2024-05-01 03:57:50.089736777Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714535870089707758,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7e511cbf-4e80-4017-b4c9-10ba8a5d87aa name=/runtime.v1.ImageService/ImageFsInfo
	May 01 03:57:50 old-k8s-version-503971 crio[647]: time="2024-05-01 03:57:50.090468936Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b4a4883c-4159-4fb6-ae65-453c402e5355 name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:57:50 old-k8s-version-503971 crio[647]: time="2024-05-01 03:57:50.090557547Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b4a4883c-4159-4fb6-ae65-453c402e5355 name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:57:50 old-k8s-version-503971 crio[647]: time="2024-05-01 03:57:50.090601086Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=b4a4883c-4159-4fb6-ae65-453c402e5355 name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:57:50 old-k8s-version-503971 crio[647]: time="2024-05-01 03:57:50.130595361Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d5cefbe4-270d-4733-b4f4-cc07e6e93e29 name=/runtime.v1.RuntimeService/Version
	May 01 03:57:50 old-k8s-version-503971 crio[647]: time="2024-05-01 03:57:50.130759171Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d5cefbe4-270d-4733-b4f4-cc07e6e93e29 name=/runtime.v1.RuntimeService/Version
	May 01 03:57:50 old-k8s-version-503971 crio[647]: time="2024-05-01 03:57:50.133083676Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=41209132-9b35-4bfa-ad56-6d3ac0641b45 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 03:57:50 old-k8s-version-503971 crio[647]: time="2024-05-01 03:57:50.133647768Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714535870133623230,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=41209132-9b35-4bfa-ad56-6d3ac0641b45 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 03:57:50 old-k8s-version-503971 crio[647]: time="2024-05-01 03:57:50.134194615Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cfc2ed4f-cb59-4aa0-94f8-395932634940 name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:57:50 old-k8s-version-503971 crio[647]: time="2024-05-01 03:57:50.134275675Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cfc2ed4f-cb59-4aa0-94f8-395932634940 name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:57:50 old-k8s-version-503971 crio[647]: time="2024-05-01 03:57:50.134315625Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=cfc2ed4f-cb59-4aa0-94f8-395932634940 name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:57:50 old-k8s-version-503971 crio[647]: time="2024-05-01 03:57:50.169021470Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bfc4fd89-4355-430d-b998-ee06b97ef1c0 name=/runtime.v1.RuntimeService/Version
	May 01 03:57:50 old-k8s-version-503971 crio[647]: time="2024-05-01 03:57:50.169209834Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bfc4fd89-4355-430d-b998-ee06b97ef1c0 name=/runtime.v1.RuntimeService/Version
	May 01 03:57:50 old-k8s-version-503971 crio[647]: time="2024-05-01 03:57:50.170494066Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=71be7e96-1f05-4949-9bed-e11ef86fba30 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 03:57:50 old-k8s-version-503971 crio[647]: time="2024-05-01 03:57:50.170968720Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714535870170936909,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=71be7e96-1f05-4949-9bed-e11ef86fba30 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 03:57:50 old-k8s-version-503971 crio[647]: time="2024-05-01 03:57:50.171600950Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=609c5518-b322-4ede-8f42-36ecb2d3fa0c name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:57:50 old-k8s-version-503971 crio[647]: time="2024-05-01 03:57:50.171683902Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=609c5518-b322-4ede-8f42-36ecb2d3fa0c name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:57:50 old-k8s-version-503971 crio[647]: time="2024-05-01 03:57:50.171753652Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=609c5518-b322-4ede-8f42-36ecb2d3fa0c name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:57:50 old-k8s-version-503971 crio[647]: time="2024-05-01 03:57:50.211935346Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7b426bf3-939c-424f-849c-99fcb4780497 name=/runtime.v1.RuntimeService/Version
	May 01 03:57:50 old-k8s-version-503971 crio[647]: time="2024-05-01 03:57:50.212046914Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7b426bf3-939c-424f-849c-99fcb4780497 name=/runtime.v1.RuntimeService/Version
	May 01 03:57:50 old-k8s-version-503971 crio[647]: time="2024-05-01 03:57:50.213175335Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e46d75e8-cd8d-413f-84d1-17a1e45b8225 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 03:57:50 old-k8s-version-503971 crio[647]: time="2024-05-01 03:57:50.213645866Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714535870213617756,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e46d75e8-cd8d-413f-84d1-17a1e45b8225 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 03:57:50 old-k8s-version-503971 crio[647]: time="2024-05-01 03:57:50.214386175Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=364171ee-c9e8-414a-ba03-3268aca49c2a name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:57:50 old-k8s-version-503971 crio[647]: time="2024-05-01 03:57:50.214466935Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=364171ee-c9e8-414a-ba03-3268aca49c2a name=/runtime.v1.RuntimeService/ListContainers
	May 01 03:57:50 old-k8s-version-503971 crio[647]: time="2024-05-01 03:57:50.214531494Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=364171ee-c9e8-414a-ba03-3268aca49c2a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[May 1 03:40] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.055665] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.051850] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.015816] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.551540] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.720618] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.127424] systemd-fstab-generator[566]: Ignoring "noauto" option for root device
	[  +0.059671] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.072683] systemd-fstab-generator[579]: Ignoring "noauto" option for root device
	[  +0.239117] systemd-fstab-generator[592]: Ignoring "noauto" option for root device
	[  +0.162286] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +0.321649] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +7.891142] systemd-fstab-generator[842]: Ignoring "noauto" option for root device
	[  +0.068807] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.309273] systemd-fstab-generator[969]: Ignoring "noauto" option for root device
	[ +12.277413] kauditd_printk_skb: 46 callbacks suppressed
	[May 1 03:44] systemd-fstab-generator[5009]: Ignoring "noauto" option for root device
	[May 1 03:46] systemd-fstab-generator[5290]: Ignoring "noauto" option for root device
	[  +0.082733] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 03:57:50 up 17 min,  0 users,  load average: 0.00, 0.04, 0.06
	Linux old-k8s-version-503971 5.10.207 #1 SMP Tue Apr 30 22:38:43 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	May 01 03:57:45 old-k8s-version-503971 kubelet[6461]:         /usr/local/go/src/net/dial.go:580 +0x5e5
	May 01 03:57:45 old-k8s-version-503971 kubelet[6461]: net.(*sysDialer).dialSerial(0xc000c68880, 0x4f7fe40, 0xc000cd0180, 0xc000cf0930, 0x1, 0x1, 0x0, 0x0, 0x0, 0x0)
	May 01 03:57:45 old-k8s-version-503971 kubelet[6461]:         /usr/local/go/src/net/dial.go:548 +0x152
	May 01 03:57:45 old-k8s-version-503971 kubelet[6461]: net.(*Dialer).DialContext(0xc000b88780, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000c62b10, 0x24, 0x0, 0x0, 0x0, ...)
	May 01 03:57:45 old-k8s-version-503971 kubelet[6461]:         /usr/local/go/src/net/dial.go:425 +0x6e5
	May 01 03:57:45 old-k8s-version-503971 kubelet[6461]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc000b94960, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000c62b10, 0x24, 0x60, 0x7f83043704d8, 0x118, ...)
	May 01 03:57:45 old-k8s-version-503971 kubelet[6461]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	May 01 03:57:45 old-k8s-version-503971 kubelet[6461]: net/http.(*Transport).dial(0xc000abda40, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000c62b10, 0x24, 0x0, 0x0, 0x0, ...)
	May 01 03:57:45 old-k8s-version-503971 kubelet[6461]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	May 01 03:57:45 old-k8s-version-503971 kubelet[6461]: net/http.(*Transport).dialConn(0xc000abda40, 0x4f7fe00, 0xc000052030, 0x0, 0xc000c90960, 0x5, 0xc000c62b10, 0x24, 0x0, 0xc000bf2b40, ...)
	May 01 03:57:45 old-k8s-version-503971 kubelet[6461]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	May 01 03:57:45 old-k8s-version-503971 kubelet[6461]: net/http.(*Transport).dialConnFor(0xc000abda40, 0xc000bf4420)
	May 01 03:57:45 old-k8s-version-503971 kubelet[6461]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	May 01 03:57:45 old-k8s-version-503971 kubelet[6461]: created by net/http.(*Transport).queueForDial
	May 01 03:57:45 old-k8s-version-503971 kubelet[6461]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	May 01 03:57:45 old-k8s-version-503971 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	May 01 03:57:45 old-k8s-version-503971 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	May 01 03:57:46 old-k8s-version-503971 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 113.
	May 01 03:57:46 old-k8s-version-503971 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	May 01 03:57:46 old-k8s-version-503971 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	May 01 03:57:46 old-k8s-version-503971 kubelet[6470]: I0501 03:57:46.808859    6470 server.go:416] Version: v1.20.0
	May 01 03:57:46 old-k8s-version-503971 kubelet[6470]: I0501 03:57:46.809311    6470 server.go:837] Client rotation is on, will bootstrap in background
	May 01 03:57:46 old-k8s-version-503971 kubelet[6470]: I0501 03:57:46.811449    6470 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	May 01 03:57:46 old-k8s-version-503971 kubelet[6470]: I0501 03:57:46.812791    6470 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	May 01 03:57:46 old-k8s-version-503971 kubelet[6470]: W0501 03:57:46.812822    6470 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-503971 -n old-k8s-version-503971
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-503971 -n old-k8s-version-503971: exit status 2 (278.858941ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-503971" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.63s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (400.96s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-277128 -n embed-certs-277128
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-05-01 04:00:18.359922368 +0000 UTC m=+6793.465838664
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-277128 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-277128 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.771µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-277128 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-277128 -n embed-certs-277128
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-277128 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-277128 logs -n 25: (1.529413378s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p no-preload-892672                                   | no-preload-892672            | jenkins | v1.33.0 | 01 May 24 03:30 UTC | 01 May 24 03:32 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-277128                                  | embed-certs-277128           | jenkins | v1.33.0 | 01 May 24 03:30 UTC | 01 May 24 03:32 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-046243                           | kubernetes-upgrade-046243    | jenkins | v1.33.0 | 01 May 24 03:30 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-046243                           | kubernetes-upgrade-046243    | jenkins | v1.33.0 | 01 May 24 03:30 UTC | 01 May 24 03:32 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-046243                           | kubernetes-upgrade-046243    | jenkins | v1.33.0 | 01 May 24 03:32 UTC | 01 May 24 03:32 UTC |
	| delete  | -p                                                     | disable-driver-mounts-483221 | jenkins | v1.33.0 | 01 May 24 03:32 UTC | 01 May 24 03:32 UTC |
	|         | disable-driver-mounts-483221                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-715118 | jenkins | v1.33.0 | 01 May 24 03:32 UTC | 01 May 24 03:33 UTC |
	|         | default-k8s-diff-port-715118                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-892672             | no-preload-892672            | jenkins | v1.33.0 | 01 May 24 03:32 UTC | 01 May 24 03:32 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-892672                                   | no-preload-892672            | jenkins | v1.33.0 | 01 May 24 03:32 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-277128            | embed-certs-277128           | jenkins | v1.33.0 | 01 May 24 03:32 UTC | 01 May 24 03:32 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-277128                                  | embed-certs-277128           | jenkins | v1.33.0 | 01 May 24 03:32 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-715118  | default-k8s-diff-port-715118 | jenkins | v1.33.0 | 01 May 24 03:33 UTC | 01 May 24 03:33 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-715118 | jenkins | v1.33.0 | 01 May 24 03:33 UTC |                     |
	|         | default-k8s-diff-port-715118                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-892672                  | no-preload-892672            | jenkins | v1.33.0 | 01 May 24 03:34 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-277128                 | embed-certs-277128           | jenkins | v1.33.0 | 01 May 24 03:34 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-892672                                   | no-preload-892672            | jenkins | v1.33.0 | 01 May 24 03:34 UTC | 01 May 24 03:46 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-503971        | old-k8s-version-503971       | jenkins | v1.33.0 | 01 May 24 03:34 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-277128                                  | embed-certs-277128           | jenkins | v1.33.0 | 01 May 24 03:34 UTC | 01 May 24 03:44 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-715118       | default-k8s-diff-port-715118 | jenkins | v1.33.0 | 01 May 24 03:35 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-715118 | jenkins | v1.33.0 | 01 May 24 03:35 UTC | 01 May 24 03:45 UTC |
	|         | default-k8s-diff-port-715118                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-503971                              | old-k8s-version-503971       | jenkins | v1.33.0 | 01 May 24 03:36 UTC | 01 May 24 03:36 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-503971             | old-k8s-version-503971       | jenkins | v1.33.0 | 01 May 24 03:36 UTC | 01 May 24 03:36 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-503971                              | old-k8s-version-503971       | jenkins | v1.33.0 | 01 May 24 03:36 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-503971                              | old-k8s-version-503971       | jenkins | v1.33.0 | 01 May 24 04:00 UTC | 01 May 24 04:00 UTC |
	| start   | -p newest-cni-906018 --memory=2200 --alsologtostderr   | newest-cni-906018            | jenkins | v1.33.0 | 01 May 24 04:00 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/01 04:00:17
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0501 04:00:17.117291   75701 out.go:291] Setting OutFile to fd 1 ...
	I0501 04:00:17.117426   75701 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 04:00:17.117435   75701 out.go:304] Setting ErrFile to fd 2...
	I0501 04:00:17.117439   75701 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 04:00:17.117634   75701 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18779-13391/.minikube/bin
	I0501 04:00:17.118209   75701 out.go:298] Setting JSON to false
	I0501 04:00:17.119147   75701 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":9760,"bootTime":1714526257,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0501 04:00:17.119202   75701 start.go:139] virtualization: kvm guest
	I0501 04:00:17.121577   75701 out.go:177] * [newest-cni-906018] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0501 04:00:17.123158   75701 notify.go:220] Checking for updates...
	I0501 04:00:17.123166   75701 out.go:177]   - MINIKUBE_LOCATION=18779
	I0501 04:00:17.124550   75701 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0501 04:00:17.125663   75701 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18779-13391/kubeconfig
	I0501 04:00:17.126851   75701 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18779-13391/.minikube
	I0501 04:00:17.127982   75701 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0501 04:00:17.129175   75701 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0501 04:00:17.130986   75701 config.go:182] Loaded profile config "default-k8s-diff-port-715118": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 04:00:17.131133   75701 config.go:182] Loaded profile config "embed-certs-277128": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 04:00:17.131276   75701 config.go:182] Loaded profile config "no-preload-892672": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 04:00:17.131390   75701 driver.go:392] Setting default libvirt URI to qemu:///system
	I0501 04:00:17.168580   75701 out.go:177] * Using the kvm2 driver based on user configuration
	I0501 04:00:17.169984   75701 start.go:297] selected driver: kvm2
	I0501 04:00:17.169999   75701 start.go:901] validating driver "kvm2" against <nil>
	I0501 04:00:17.170009   75701 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0501 04:00:17.170837   75701 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0501 04:00:17.170904   75701 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18779-13391/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0501 04:00:17.185826   75701 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0501 04:00:17.185878   75701 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0501 04:00:17.185901   75701 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0501 04:00:17.186105   75701 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0501 04:00:17.186164   75701 cni.go:84] Creating CNI manager for ""
	I0501 04:00:17.186176   75701 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0501 04:00:17.186188   75701 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0501 04:00:17.186235   75701 start.go:340] cluster config:
	{Name:newest-cni-906018 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:newest-cni-906018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false C
ustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 04:00:17.186324   75701 iso.go:125] acquiring lock: {Name:mkcd2496aadb29931e179193d707c731bfda98b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0501 04:00:17.188002   75701 out.go:177] * Starting "newest-cni-906018" primary control-plane node in "newest-cni-906018" cluster
	I0501 04:00:17.189196   75701 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0501 04:00:17.189226   75701 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0501 04:00:17.189233   75701 cache.go:56] Caching tarball of preloaded images
	I0501 04:00:17.189316   75701 preload.go:173] Found /home/jenkins/minikube-integration/18779-13391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0501 04:00:17.189326   75701 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0501 04:00:17.189410   75701 profile.go:143] Saving config to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/newest-cni-906018/config.json ...
	I0501 04:00:17.189426   75701 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/newest-cni-906018/config.json: {Name:mk36e297e787aa320875d4c2133eb9c1395184fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 04:00:17.189535   75701 start.go:360] acquireMachinesLock for newest-cni-906018: {Name:mkc4548e5afe1d4b0a833f6c522103562b0cefff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0501 04:00:17.189561   75701 start.go:364] duration metric: took 14.659µs to acquireMachinesLock for "newest-cni-906018"
	I0501 04:00:17.189585   75701 start.go:93] Provisioning new machine with config: &{Name:newest-cni-906018 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.30.0 ClusterName:newest-cni-906018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount
9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0501 04:00:17.189666   75701 start.go:125] createHost starting for "" (driver="kvm2")
	
	
	==> CRI-O <==
	May 01 04:00:19 embed-certs-277128 crio[724]: time="2024-05-01 04:00:19.168997146Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f9a8d2f0f9453969b410fb7f880f6cf4c7e6e2f3c17a687583906efe4181dd17,PodSandboxId:547fe01dd31038fc019087a6db1fa7e7b44ea15a157584ffbd0aa6cdd3cf08b4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714534836531788505,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 785be666-58d5-4b9d-92fd-bcacdbdebeb2,},Annotations:map[string]string{io.kubernetes.container.hash: c88b4da4,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3c74de489af3d78a4b0cf620ed16e1fba7c9ce1c7021a15c5b4bd241b7a934a,PodSandboxId:aa9d355e603c7861a2f071569dfb4a7cb20ec2430f8bdd0246d00adc0e5ec201,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714534822085371460,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sjplt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6701ee8e-0630-4332-b01c-26741ed3a7b7,},Annotations:map[string]string{io.kubernetes.container.hash: c52dc745,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eaaa2457f3825d23c9124baf727b248c8ae44a540669b26c888b887edb6e6096,PodSandboxId:b316d2fa718c57ee546cd0e7c6676cf7f048c4b01def7a73cbb35a78db72fc65,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1714534816519344878,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kuberne
tes.pod.uid: ff71ad09-63da-4dd0-99c7-9ffb0f4eeae3,},Annotations:map[string]string{io.kubernetes.container.hash: 85b1f6f5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94afdb03c382269209154b2e73edbf200662e89960c248ba775a0ef2b8fbe6b1,PodSandboxId:19d7e38955886efeca25c599f334336ad453e231add7410a16e538399ce6da41,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714534805698551271,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-phx7x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56c0381e-c140-4f69-b
be4-09d393db8b23,},Annotations:map[string]string{io.kubernetes.container.hash: e5799dc5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aaae36261c5ce1accf3bfb5993be5a256a037a640d5f6f30de8384900dda06b2,PodSandboxId:547fe01dd31038fc019087a6db1fa7e7b44ea15a157584ffbd0aa6cdd3cf08b4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714534805694050491,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 785be666-58d5-4b9d-92fd-bcacdbdeb
eb2,},Annotations:map[string]string{io.kubernetes.container.hash: c88b4da4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1813f35574f4f0398e7d482fe77c5d7f8490726666a277035bda0a5cff80af6e,PodSandboxId:7212f5087ba09b79f58f15756887b1d9e38cf5501f38802286314c3be8daf914,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714534801956494338,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-277128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d93b953fdcb3197a925f72d5f1925818,},Annota
tions:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e7158f7ff3922e97ba5ee97108acc1d0ba0350dff2f9aa56c2f71519108791c,PodSandboxId:cd226d9eea9632ec815202404544eb5687a36a3097cab2af50e23979f4fc5026,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714534801936348492,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-277128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e21a7c0a2e06e5de26960a82e6
6d8e6d,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a96815c49ac45b34c359d7171b30be5c25cee80fae08d947fc004d479693df00,PodSandboxId:464a0acb133488889f9601dcdece2117c4eb53e229a62c35b942da265898373e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714534801919027276,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-277128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6dee1fba7311ab90adf2d7b6467002b,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7e88cfe1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d109948ffbbddd2e38484d83d2260690afce3c51e16abff0fe8713e844bfdcb3,PodSandboxId:f6338b841057be5ce903e4539b40e972adb0d1a022af422482cc77db570d5486,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714534801887944145,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-277128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68d692155d566ac180b3b7676623c918,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 5b59e402,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=440a9837-cff3-446c-bc34-fcafce0d8d5d name=/runtime.v1.RuntimeService/ListContainers
	May 01 04:00:19 embed-certs-277128 crio[724]: time="2024-05-01 04:00:19.231084506Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4c6012b3-c9ca-4d4a-ac91-cd5d2443b48d name=/runtime.v1.RuntimeService/Version
	May 01 04:00:19 embed-certs-277128 crio[724]: time="2024-05-01 04:00:19.231253106Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4c6012b3-c9ca-4d4a-ac91-cd5d2443b48d name=/runtime.v1.RuntimeService/Version
	May 01 04:00:19 embed-certs-277128 crio[724]: time="2024-05-01 04:00:19.233249581Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=21d39741-22d1-4b4f-97cc-dd54d62577f9 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 04:00:19 embed-certs-277128 crio[724]: time="2024-05-01 04:00:19.233690962Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714536019233667731,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133261,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=21d39741-22d1-4b4f-97cc-dd54d62577f9 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 04:00:19 embed-certs-277128 crio[724]: time="2024-05-01 04:00:19.234334146Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dc1b229f-f769-43a5-a1a2-8ea78b25c03c name=/runtime.v1.RuntimeService/ListContainers
	May 01 04:00:19 embed-certs-277128 crio[724]: time="2024-05-01 04:00:19.234395484Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dc1b229f-f769-43a5-a1a2-8ea78b25c03c name=/runtime.v1.RuntimeService/ListContainers
	May 01 04:00:19 embed-certs-277128 crio[724]: time="2024-05-01 04:00:19.234596037Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f9a8d2f0f9453969b410fb7f880f6cf4c7e6e2f3c17a687583906efe4181dd17,PodSandboxId:547fe01dd31038fc019087a6db1fa7e7b44ea15a157584ffbd0aa6cdd3cf08b4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714534836531788505,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 785be666-58d5-4b9d-92fd-bcacdbdebeb2,},Annotations:map[string]string{io.kubernetes.container.hash: c88b4da4,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3c74de489af3d78a4b0cf620ed16e1fba7c9ce1c7021a15c5b4bd241b7a934a,PodSandboxId:aa9d355e603c7861a2f071569dfb4a7cb20ec2430f8bdd0246d00adc0e5ec201,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714534822085371460,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sjplt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6701ee8e-0630-4332-b01c-26741ed3a7b7,},Annotations:map[string]string{io.kubernetes.container.hash: c52dc745,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eaaa2457f3825d23c9124baf727b248c8ae44a540669b26c888b887edb6e6096,PodSandboxId:b316d2fa718c57ee546cd0e7c6676cf7f048c4b01def7a73cbb35a78db72fc65,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1714534816519344878,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kuberne
tes.pod.uid: ff71ad09-63da-4dd0-99c7-9ffb0f4eeae3,},Annotations:map[string]string{io.kubernetes.container.hash: 85b1f6f5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94afdb03c382269209154b2e73edbf200662e89960c248ba775a0ef2b8fbe6b1,PodSandboxId:19d7e38955886efeca25c599f334336ad453e231add7410a16e538399ce6da41,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714534805698551271,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-phx7x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56c0381e-c140-4f69-b
be4-09d393db8b23,},Annotations:map[string]string{io.kubernetes.container.hash: e5799dc5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aaae36261c5ce1accf3bfb5993be5a256a037a640d5f6f30de8384900dda06b2,PodSandboxId:547fe01dd31038fc019087a6db1fa7e7b44ea15a157584ffbd0aa6cdd3cf08b4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714534805694050491,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 785be666-58d5-4b9d-92fd-bcacdbdeb
eb2,},Annotations:map[string]string{io.kubernetes.container.hash: c88b4da4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1813f35574f4f0398e7d482fe77c5d7f8490726666a277035bda0a5cff80af6e,PodSandboxId:7212f5087ba09b79f58f15756887b1d9e38cf5501f38802286314c3be8daf914,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714534801956494338,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-277128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d93b953fdcb3197a925f72d5f1925818,},Annota
tions:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e7158f7ff3922e97ba5ee97108acc1d0ba0350dff2f9aa56c2f71519108791c,PodSandboxId:cd226d9eea9632ec815202404544eb5687a36a3097cab2af50e23979f4fc5026,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714534801936348492,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-277128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e21a7c0a2e06e5de26960a82e6
6d8e6d,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a96815c49ac45b34c359d7171b30be5c25cee80fae08d947fc004d479693df00,PodSandboxId:464a0acb133488889f9601dcdece2117c4eb53e229a62c35b942da265898373e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714534801919027276,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-277128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6dee1fba7311ab90adf2d7b6467002b,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7e88cfe1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d109948ffbbddd2e38484d83d2260690afce3c51e16abff0fe8713e844bfdcb3,PodSandboxId:f6338b841057be5ce903e4539b40e972adb0d1a022af422482cc77db570d5486,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714534801887944145,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-277128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68d692155d566ac180b3b7676623c918,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 5b59e402,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dc1b229f-f769-43a5-a1a2-8ea78b25c03c name=/runtime.v1.RuntimeService/ListContainers
	May 01 04:00:19 embed-certs-277128 crio[724]: time="2024-05-01 04:00:19.258222901Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4fc14478-d03a-41a2-b845-5e0c38ad5072 name=/runtime.v1.RuntimeService/ListPodSandbox
	May 01 04:00:19 embed-certs-277128 crio[724]: time="2024-05-01 04:00:19.258544581Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:aa9d355e603c7861a2f071569dfb4a7cb20ec2430f8bdd0246d00adc0e5ec201,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-sjplt,Uid:6701ee8e-0630-4332-b01c-26741ed3a7b7,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714534821070279493,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-sjplt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6701ee8e-0630-4332-b01c-26741ed3a7b7,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-01T03:40:05.200714348Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3ff6149bcc0b583f486fa5553cf289e0c372e2830b328c7100b28319d89ac5d3,Metadata:&PodSandboxMetadata{Name:metrics-server-569cc877fc-p8j59,Uid:f8ad6c24-dd5d-4515-9052-c9aca7412b55,Namespace
:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714534813269746380,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-569cc877fc-p8j59,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8ad6c24-dd5d-4515-9052-c9aca7412b55,k8s-app: metrics-server,pod-template-hash: 569cc877fc,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-01T03:40:05.200712886Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b316d2fa718c57ee546cd0e7c6676cf7f048c4b01def7a73cbb35a78db72fc65,Metadata:&PodSandboxMetadata{Name:busybox,Uid:ff71ad09-63da-4dd0-99c7-9ffb0f4eeae3,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714534813168862549,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ff71ad09-63da-4dd0-99c7-9ffb0f4eeae3,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-01T03:40:05.
200720680Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:547fe01dd31038fc019087a6db1fa7e7b44ea15a157584ffbd0aa6cdd3cf08b4,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:785be666-58d5-4b9d-92fd-bcacdbdebeb2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714534805516892406,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 785be666-58d5-4b9d-92fd-bcacdbdebeb2,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-
minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-05-01T03:40:05.200719356Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:19d7e38955886efeca25c599f334336ad453e231add7410a16e538399ce6da41,Metadata:&PodSandboxMetadata{Name:kube-proxy-phx7x,Uid:56c0381e-c140-4f69-bbe4-09d393db8b23,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714534805514361799,Labels:map[string]string{controller-revision-hash: 79cf874c65,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-phx7x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56c0381e-c140-4f69-bbe4-09d393db8b23,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.i
o/config.seen: 2024-05-01T03:40:05.200718231Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7212f5087ba09b79f58f15756887b1d9e38cf5501f38802286314c3be8daf914,Metadata:&PodSandboxMetadata{Name:kube-scheduler-embed-certs-277128,Uid:d93b953fdcb3197a925f72d5f1925818,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714534801691370453,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-embed-certs-277128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d93b953fdcb3197a925f72d5f1925818,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: d93b953fdcb3197a925f72d5f1925818,kubernetes.io/config.seen: 2024-05-01T03:40:01.200300685Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:464a0acb133488889f9601dcdece2117c4eb53e229a62c35b942da265898373e,Metadata:&PodSandboxMetadata{Name:kube-apiserver-embed-certs-277128,Uid:c6dee1fba7311ab90adf2d7b6467002b,Namespa
ce:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714534801690045398,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-embed-certs-277128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6dee1fba7311ab90adf2d7b6467002b,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.218:8443,kubernetes.io/config.hash: c6dee1fba7311ab90adf2d7b6467002b,kubernetes.io/config.seen: 2024-05-01T03:40:01.200305006Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:cd226d9eea9632ec815202404544eb5687a36a3097cab2af50e23979f4fc5026,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-embed-certs-277128,Uid:e21a7c0a2e06e5de26960a82e66d8e6d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714534801688361521,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ku
be-controller-manager-embed-certs-277128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e21a7c0a2e06e5de26960a82e66d8e6d,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: e21a7c0a2e06e5de26960a82e66d8e6d,kubernetes.io/config.seen: 2024-05-01T03:40:01.200306326Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f6338b841057be5ce903e4539b40e972adb0d1a022af422482cc77db570d5486,Metadata:&PodSandboxMetadata{Name:etcd-embed-certs-277128,Uid:68d692155d566ac180b3b7676623c918,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714534801685093177,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-embed-certs-277128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68d692155d566ac180b3b7676623c918,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.218:2379,kubernetes.io/config.hash: 68d692155d566ac180b3b76766
23c918,kubernetes.io/config.seen: 2024-05-01T03:40:01.245559236Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=4fc14478-d03a-41a2-b845-5e0c38ad5072 name=/runtime.v1.RuntimeService/ListPodSandbox
	May 01 04:00:19 embed-certs-277128 crio[724]: time="2024-05-01 04:00:19.260267164Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=80df6ca6-2e2f-4e17-a91f-09294913137f name=/runtime.v1.RuntimeService/ListContainers
	May 01 04:00:19 embed-certs-277128 crio[724]: time="2024-05-01 04:00:19.260354112Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=80df6ca6-2e2f-4e17-a91f-09294913137f name=/runtime.v1.RuntimeService/ListContainers
	May 01 04:00:19 embed-certs-277128 crio[724]: time="2024-05-01 04:00:19.260686817Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f9a8d2f0f9453969b410fb7f880f6cf4c7e6e2f3c17a687583906efe4181dd17,PodSandboxId:547fe01dd31038fc019087a6db1fa7e7b44ea15a157584ffbd0aa6cdd3cf08b4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714534836531788505,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 785be666-58d5-4b9d-92fd-bcacdbdebeb2,},Annotations:map[string]string{io.kubernetes.container.hash: c88b4da4,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3c74de489af3d78a4b0cf620ed16e1fba7c9ce1c7021a15c5b4bd241b7a934a,PodSandboxId:aa9d355e603c7861a2f071569dfb4a7cb20ec2430f8bdd0246d00adc0e5ec201,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714534822085371460,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sjplt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6701ee8e-0630-4332-b01c-26741ed3a7b7,},Annotations:map[string]string{io.kubernetes.container.hash: c52dc745,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eaaa2457f3825d23c9124baf727b248c8ae44a540669b26c888b887edb6e6096,PodSandboxId:b316d2fa718c57ee546cd0e7c6676cf7f048c4b01def7a73cbb35a78db72fc65,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1714534816519344878,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kuberne
tes.pod.uid: ff71ad09-63da-4dd0-99c7-9ffb0f4eeae3,},Annotations:map[string]string{io.kubernetes.container.hash: 85b1f6f5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94afdb03c382269209154b2e73edbf200662e89960c248ba775a0ef2b8fbe6b1,PodSandboxId:19d7e38955886efeca25c599f334336ad453e231add7410a16e538399ce6da41,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714534805698551271,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-phx7x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56c0381e-c140-4f69-b
be4-09d393db8b23,},Annotations:map[string]string{io.kubernetes.container.hash: e5799dc5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1813f35574f4f0398e7d482fe77c5d7f8490726666a277035bda0a5cff80af6e,PodSandboxId:7212f5087ba09b79f58f15756887b1d9e38cf5501f38802286314c3be8daf914,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714534801956494338,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-277128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d93b953fdcb3197a925f72d5f192
5818,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e7158f7ff3922e97ba5ee97108acc1d0ba0350dff2f9aa56c2f71519108791c,PodSandboxId:cd226d9eea9632ec815202404544eb5687a36a3097cab2af50e23979f4fc5026,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714534801936348492,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-277128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e21a7c0a2e06e
5de26960a82e66d8e6d,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a96815c49ac45b34c359d7171b30be5c25cee80fae08d947fc004d479693df00,PodSandboxId:464a0acb133488889f9601dcdece2117c4eb53e229a62c35b942da265898373e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714534801919027276,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-277128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6dee1fba7311ab90adf2d7b6
467002b,},Annotations:map[string]string{io.kubernetes.container.hash: 7e88cfe1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d109948ffbbddd2e38484d83d2260690afce3c51e16abff0fe8713e844bfdcb3,PodSandboxId:f6338b841057be5ce903e4539b40e972adb0d1a022af422482cc77db570d5486,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714534801887944145,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-277128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68d692155d566ac180b3b7676623c918,},Annotations:map[string]string{io
.kubernetes.container.hash: 5b59e402,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=80df6ca6-2e2f-4e17-a91f-09294913137f name=/runtime.v1.RuntimeService/ListContainers
	May 01 04:00:19 embed-certs-277128 crio[724]: time="2024-05-01 04:00:19.279046652Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a981a671-3b0d-4786-bf7b-021f21606508 name=/runtime.v1.RuntimeService/Version
	May 01 04:00:19 embed-certs-277128 crio[724]: time="2024-05-01 04:00:19.279118190Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a981a671-3b0d-4786-bf7b-021f21606508 name=/runtime.v1.RuntimeService/Version
	May 01 04:00:19 embed-certs-277128 crio[724]: time="2024-05-01 04:00:19.281812828Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=89d1a116-cedc-4d34-b2de-b7cebefe9e23 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 04:00:19 embed-certs-277128 crio[724]: time="2024-05-01 04:00:19.282394587Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714536019282360652,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133261,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=89d1a116-cedc-4d34-b2de-b7cebefe9e23 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 04:00:19 embed-certs-277128 crio[724]: time="2024-05-01 04:00:19.283706425Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9a911a61-501b-43dc-970b-0b2049c8611c name=/runtime.v1.RuntimeService/ListContainers
	May 01 04:00:19 embed-certs-277128 crio[724]: time="2024-05-01 04:00:19.283783993Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9a911a61-501b-43dc-970b-0b2049c8611c name=/runtime.v1.RuntimeService/ListContainers
	May 01 04:00:19 embed-certs-277128 crio[724]: time="2024-05-01 04:00:19.284016808Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f9a8d2f0f9453969b410fb7f880f6cf4c7e6e2f3c17a687583906efe4181dd17,PodSandboxId:547fe01dd31038fc019087a6db1fa7e7b44ea15a157584ffbd0aa6cdd3cf08b4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714534836531788505,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 785be666-58d5-4b9d-92fd-bcacdbdebeb2,},Annotations:map[string]string{io.kubernetes.container.hash: c88b4da4,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3c74de489af3d78a4b0cf620ed16e1fba7c9ce1c7021a15c5b4bd241b7a934a,PodSandboxId:aa9d355e603c7861a2f071569dfb4a7cb20ec2430f8bdd0246d00adc0e5ec201,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714534822085371460,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sjplt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6701ee8e-0630-4332-b01c-26741ed3a7b7,},Annotations:map[string]string{io.kubernetes.container.hash: c52dc745,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eaaa2457f3825d23c9124baf727b248c8ae44a540669b26c888b887edb6e6096,PodSandboxId:b316d2fa718c57ee546cd0e7c6676cf7f048c4b01def7a73cbb35a78db72fc65,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1714534816519344878,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kuberne
tes.pod.uid: ff71ad09-63da-4dd0-99c7-9ffb0f4eeae3,},Annotations:map[string]string{io.kubernetes.container.hash: 85b1f6f5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94afdb03c382269209154b2e73edbf200662e89960c248ba775a0ef2b8fbe6b1,PodSandboxId:19d7e38955886efeca25c599f334336ad453e231add7410a16e538399ce6da41,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714534805698551271,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-phx7x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56c0381e-c140-4f69-b
be4-09d393db8b23,},Annotations:map[string]string{io.kubernetes.container.hash: e5799dc5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aaae36261c5ce1accf3bfb5993be5a256a037a640d5f6f30de8384900dda06b2,PodSandboxId:547fe01dd31038fc019087a6db1fa7e7b44ea15a157584ffbd0aa6cdd3cf08b4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714534805694050491,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 785be666-58d5-4b9d-92fd-bcacdbdeb
eb2,},Annotations:map[string]string{io.kubernetes.container.hash: c88b4da4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1813f35574f4f0398e7d482fe77c5d7f8490726666a277035bda0a5cff80af6e,PodSandboxId:7212f5087ba09b79f58f15756887b1d9e38cf5501f38802286314c3be8daf914,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714534801956494338,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-277128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d93b953fdcb3197a925f72d5f1925818,},Annota
tions:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e7158f7ff3922e97ba5ee97108acc1d0ba0350dff2f9aa56c2f71519108791c,PodSandboxId:cd226d9eea9632ec815202404544eb5687a36a3097cab2af50e23979f4fc5026,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714534801936348492,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-277128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e21a7c0a2e06e5de26960a82e6
6d8e6d,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a96815c49ac45b34c359d7171b30be5c25cee80fae08d947fc004d479693df00,PodSandboxId:464a0acb133488889f9601dcdece2117c4eb53e229a62c35b942da265898373e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714534801919027276,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-277128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6dee1fba7311ab90adf2d7b6467002b,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7e88cfe1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d109948ffbbddd2e38484d83d2260690afce3c51e16abff0fe8713e844bfdcb3,PodSandboxId:f6338b841057be5ce903e4539b40e972adb0d1a022af422482cc77db570d5486,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714534801887944145,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-277128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68d692155d566ac180b3b7676623c918,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 5b59e402,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9a911a61-501b-43dc-970b-0b2049c8611c name=/runtime.v1.RuntimeService/ListContainers
	May 01 04:00:19 embed-certs-277128 crio[724]: time="2024-05-01 04:00:19.284607795Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=3ce0af1d-4249-45df-8ccc-9932c7eba96e name=/runtime.v1.RuntimeService/ListPodSandbox
	May 01 04:00:19 embed-certs-277128 crio[724]: time="2024-05-01 04:00:19.284826158Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:aa9d355e603c7861a2f071569dfb4a7cb20ec2430f8bdd0246d00adc0e5ec201,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-sjplt,Uid:6701ee8e-0630-4332-b01c-26741ed3a7b7,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714534821070279493,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-sjplt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6701ee8e-0630-4332-b01c-26741ed3a7b7,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-01T03:40:05.200714348Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3ff6149bcc0b583f486fa5553cf289e0c372e2830b328c7100b28319d89ac5d3,Metadata:&PodSandboxMetadata{Name:metrics-server-569cc877fc-p8j59,Uid:f8ad6c24-dd5d-4515-9052-c9aca7412b55,Namespace
:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714534813269746380,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-569cc877fc-p8j59,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8ad6c24-dd5d-4515-9052-c9aca7412b55,k8s-app: metrics-server,pod-template-hash: 569cc877fc,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-01T03:40:05.200712886Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b316d2fa718c57ee546cd0e7c6676cf7f048c4b01def7a73cbb35a78db72fc65,Metadata:&PodSandboxMetadata{Name:busybox,Uid:ff71ad09-63da-4dd0-99c7-9ffb0f4eeae3,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714534813168862549,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ff71ad09-63da-4dd0-99c7-9ffb0f4eeae3,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-01T03:40:05.
200720680Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:547fe01dd31038fc019087a6db1fa7e7b44ea15a157584ffbd0aa6cdd3cf08b4,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:785be666-58d5-4b9d-92fd-bcacdbdebeb2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714534805516892406,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 785be666-58d5-4b9d-92fd-bcacdbdebeb2,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-
minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-05-01T03:40:05.200719356Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:19d7e38955886efeca25c599f334336ad453e231add7410a16e538399ce6da41,Metadata:&PodSandboxMetadata{Name:kube-proxy-phx7x,Uid:56c0381e-c140-4f69-bbe4-09d393db8b23,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714534805514361799,Labels:map[string]string{controller-revision-hash: 79cf874c65,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-phx7x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56c0381e-c140-4f69-bbe4-09d393db8b23,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.i
o/config.seen: 2024-05-01T03:40:05.200718231Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7212f5087ba09b79f58f15756887b1d9e38cf5501f38802286314c3be8daf914,Metadata:&PodSandboxMetadata{Name:kube-scheduler-embed-certs-277128,Uid:d93b953fdcb3197a925f72d5f1925818,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714534801691370453,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-embed-certs-277128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d93b953fdcb3197a925f72d5f1925818,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: d93b953fdcb3197a925f72d5f1925818,kubernetes.io/config.seen: 2024-05-01T03:40:01.200300685Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:464a0acb133488889f9601dcdece2117c4eb53e229a62c35b942da265898373e,Metadata:&PodSandboxMetadata{Name:kube-apiserver-embed-certs-277128,Uid:c6dee1fba7311ab90adf2d7b6467002b,Namespa
ce:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714534801690045398,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-embed-certs-277128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6dee1fba7311ab90adf2d7b6467002b,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.218:8443,kubernetes.io/config.hash: c6dee1fba7311ab90adf2d7b6467002b,kubernetes.io/config.seen: 2024-05-01T03:40:01.200305006Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:cd226d9eea9632ec815202404544eb5687a36a3097cab2af50e23979f4fc5026,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-embed-certs-277128,Uid:e21a7c0a2e06e5de26960a82e66d8e6d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714534801688361521,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ku
be-controller-manager-embed-certs-277128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e21a7c0a2e06e5de26960a82e66d8e6d,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: e21a7c0a2e06e5de26960a82e66d8e6d,kubernetes.io/config.seen: 2024-05-01T03:40:01.200306326Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f6338b841057be5ce903e4539b40e972adb0d1a022af422482cc77db570d5486,Metadata:&PodSandboxMetadata{Name:etcd-embed-certs-277128,Uid:68d692155d566ac180b3b7676623c918,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714534801685093177,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-embed-certs-277128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68d692155d566ac180b3b7676623c918,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.218:2379,kubernetes.io/config.hash: 68d692155d566ac180b3b76766
23c918,kubernetes.io/config.seen: 2024-05-01T03:40:01.245559236Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=3ce0af1d-4249-45df-8ccc-9932c7eba96e name=/runtime.v1.RuntimeService/ListPodSandbox
	May 01 04:00:19 embed-certs-277128 crio[724]: time="2024-05-01 04:00:19.285524396Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=49bfc293-a7bf-4821-8394-ccf5a912a237 name=/runtime.v1.RuntimeService/ListContainers
	May 01 04:00:19 embed-certs-277128 crio[724]: time="2024-05-01 04:00:19.285601757Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=49bfc293-a7bf-4821-8394-ccf5a912a237 name=/runtime.v1.RuntimeService/ListContainers
	May 01 04:00:19 embed-certs-277128 crio[724]: time="2024-05-01 04:00:19.285772838Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f9a8d2f0f9453969b410fb7f880f6cf4c7e6e2f3c17a687583906efe4181dd17,PodSandboxId:547fe01dd31038fc019087a6db1fa7e7b44ea15a157584ffbd0aa6cdd3cf08b4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714534836531788505,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 785be666-58d5-4b9d-92fd-bcacdbdebeb2,},Annotations:map[string]string{io.kubernetes.container.hash: c88b4da4,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3c74de489af3d78a4b0cf620ed16e1fba7c9ce1c7021a15c5b4bd241b7a934a,PodSandboxId:aa9d355e603c7861a2f071569dfb4a7cb20ec2430f8bdd0246d00adc0e5ec201,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714534822085371460,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sjplt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6701ee8e-0630-4332-b01c-26741ed3a7b7,},Annotations:map[string]string{io.kubernetes.container.hash: c52dc745,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eaaa2457f3825d23c9124baf727b248c8ae44a540669b26c888b887edb6e6096,PodSandboxId:b316d2fa718c57ee546cd0e7c6676cf7f048c4b01def7a73cbb35a78db72fc65,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1714534816519344878,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kuberne
tes.pod.uid: ff71ad09-63da-4dd0-99c7-9ffb0f4eeae3,},Annotations:map[string]string{io.kubernetes.container.hash: 85b1f6f5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94afdb03c382269209154b2e73edbf200662e89960c248ba775a0ef2b8fbe6b1,PodSandboxId:19d7e38955886efeca25c599f334336ad453e231add7410a16e538399ce6da41,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714534805698551271,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-phx7x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56c0381e-c140-4f69-b
be4-09d393db8b23,},Annotations:map[string]string{io.kubernetes.container.hash: e5799dc5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aaae36261c5ce1accf3bfb5993be5a256a037a640d5f6f30de8384900dda06b2,PodSandboxId:547fe01dd31038fc019087a6db1fa7e7b44ea15a157584ffbd0aa6cdd3cf08b4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714534805694050491,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 785be666-58d5-4b9d-92fd-bcacdbdeb
eb2,},Annotations:map[string]string{io.kubernetes.container.hash: c88b4da4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1813f35574f4f0398e7d482fe77c5d7f8490726666a277035bda0a5cff80af6e,PodSandboxId:7212f5087ba09b79f58f15756887b1d9e38cf5501f38802286314c3be8daf914,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714534801956494338,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-277128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d93b953fdcb3197a925f72d5f1925818,},Annota
tions:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e7158f7ff3922e97ba5ee97108acc1d0ba0350dff2f9aa56c2f71519108791c,PodSandboxId:cd226d9eea9632ec815202404544eb5687a36a3097cab2af50e23979f4fc5026,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714534801936348492,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-277128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e21a7c0a2e06e5de26960a82e6
6d8e6d,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a96815c49ac45b34c359d7171b30be5c25cee80fae08d947fc004d479693df00,PodSandboxId:464a0acb133488889f9601dcdece2117c4eb53e229a62c35b942da265898373e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714534801919027276,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-277128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6dee1fba7311ab90adf2d7b6467002b,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7e88cfe1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d109948ffbbddd2e38484d83d2260690afce3c51e16abff0fe8713e844bfdcb3,PodSandboxId:f6338b841057be5ce903e4539b40e972adb0d1a022af422482cc77db570d5486,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714534801887944145,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-277128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68d692155d566ac180b3b7676623c918,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 5b59e402,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=49bfc293-a7bf-4821-8394-ccf5a912a237 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f9a8d2f0f9453       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      19 minutes ago      Running             storage-provisioner       2                   547fe01dd3103       storage-provisioner
	e3c74de489af3       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      19 minutes ago      Running             coredns                   1                   aa9d355e603c7       coredns-7db6d8ff4d-sjplt
	eaaa2457f3825       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   20 minutes ago      Running             busybox                   1                   b316d2fa718c5       busybox
	94afdb03c3822       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      20 minutes ago      Running             kube-proxy                1                   19d7e38955886       kube-proxy-phx7x
	aaae36261c5ce       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Exited              storage-provisioner       1                   547fe01dd3103       storage-provisioner
	1813f35574f4f       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      20 minutes ago      Running             kube-scheduler            1                   7212f5087ba09       kube-scheduler-embed-certs-277128
	7e7158f7ff392       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      20 minutes ago      Running             kube-controller-manager   1                   cd226d9eea963       kube-controller-manager-embed-certs-277128
	a96815c49ac45       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      20 minutes ago      Running             kube-apiserver            1                   464a0acb13348       kube-apiserver-embed-certs-277128
	d109948ffbbdd       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      20 minutes ago      Running             etcd                      1                   f6338b841057b       etcd-embed-certs-277128
	
	
	==> coredns [e3c74de489af3d78a4b0cf620ed16e1fba7c9ce1c7021a15c5b4bd241b7a934a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:60045 - 41276 "HINFO IN 8860685169335691977.5956143156893298464. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015607388s
	
	
	==> describe nodes <==
	Name:               embed-certs-277128
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-277128
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e
	                    minikube.k8s.io/name=embed-certs-277128
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_01T03_31_53_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 May 2024 03:31:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-277128
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 May 2024 04:00:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 May 2024 03:55:54 +0000   Wed, 01 May 2024 03:31:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 May 2024 03:55:54 +0000   Wed, 01 May 2024 03:31:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 May 2024 03:55:54 +0000   Wed, 01 May 2024 03:31:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 May 2024 03:55:54 +0000   Wed, 01 May 2024 03:40:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.218
	  Hostname:    embed-certs-277128
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ad39f4832e9c4708b4e1c4cd2dd491e3
	  System UUID:                ad39f483-2e9c-4708-b4e1-c4cd2dd491e3
	  Boot ID:                    84ceacf6-d21b-4d8e-bbd3-e4c7ef6a03f1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-7db6d8ff4d-sjplt                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-embed-certs-277128                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28m
	  kube-system                 kube-apiserver-embed-certs-277128             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-embed-certs-277128    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-phx7x                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-embed-certs-277128             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 metrics-server-569cc877fc-p8j59               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         27m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 20m                kube-proxy       
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     28m                kubelet          Node embed-certs-277128 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  28m                kubelet          Node embed-certs-277128 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m                kubelet          Node embed-certs-277128 status is now: NodeHasNoDiskPressure
	  Normal  NodeReady                28m                kubelet          Node embed-certs-277128 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           28m                node-controller  Node embed-certs-277128 event: Registered Node embed-certs-277128 in Controller
	  Normal  Starting                 20m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  20m (x8 over 20m)  kubelet          Node embed-certs-277128 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20m (x8 over 20m)  kubelet          Node embed-certs-277128 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20m (x7 over 20m)  kubelet          Node embed-certs-277128 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           20m                node-controller  Node embed-certs-277128 event: Registered Node embed-certs-277128 in Controller
	
	
	==> dmesg <==
	[May 1 03:39] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052425] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.044249] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.622609] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.580831] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.514186] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.078345] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.056973] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.072552] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +0.211948] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +0.139896] systemd-fstab-generator[679]: Ignoring "noauto" option for root device
	[  +0.358137] systemd-fstab-generator[709]: Ignoring "noauto" option for root device
	[  +4.954162] systemd-fstab-generator[806]: Ignoring "noauto" option for root device
	[  +0.059456] kauditd_printk_skb: 130 callbacks suppressed
	[May 1 03:40] systemd-fstab-generator[930]: Ignoring "noauto" option for root device
	[  +4.594794] kauditd_printk_skb: 97 callbacks suppressed
	[  +2.497015] systemd-fstab-generator[1546]: Ignoring "noauto" option for root device
	[  +5.102827] kauditd_printk_skb: 62 callbacks suppressed
	[  +5.083710] kauditd_printk_skb: 26 callbacks suppressed
	[ +18.323503] kauditd_printk_skb: 15 callbacks suppressed
	
	
	==> etcd [d109948ffbbddd2e38484d83d2260690afce3c51e16abff0fe8713e844bfdcb3] <==
	{"level":"info","ts":"2024-05-01T03:40:03.543593Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.218:2379"}
	{"level":"info","ts":"2024-05-01T03:40:03.543593Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-05-01T03:40:22.08801Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"349.079844ms","expected-duration":"100ms","prefix":"","request":"header:<ID:14030277660075336088 > lease_revoke:<id:42b58f323ce3a071>","response":"size:29"}
	{"level":"info","ts":"2024-05-01T03:40:22.0883Z","caller":"traceutil/trace.go:171","msg":"trace[1584344752] linearizableReadLoop","detail":"{readStateIndex:594; appliedIndex:593; }","duration":"354.900577ms","start":"2024-05-01T03:40:21.733358Z","end":"2024-05-01T03:40:22.088259Z","steps":["trace[1584344752] 'read index received'  (duration: 5.278129ms)","trace[1584344752] 'applied index is now lower than readState.Index'  (duration: 349.621321ms)"],"step_count":2}
	{"level":"warn","ts":"2024-05-01T03:40:22.088482Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"355.084306ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-7db6d8ff4d-sjplt\" ","response":"range_response_count:1 size:4822"}
	{"level":"info","ts":"2024-05-01T03:40:22.088506Z","caller":"traceutil/trace.go:171","msg":"trace[390937185] range","detail":"{range_begin:/registry/pods/kube-system/coredns-7db6d8ff4d-sjplt; range_end:; response_count:1; response_revision:553; }","duration":"355.161958ms","start":"2024-05-01T03:40:21.733334Z","end":"2024-05-01T03:40:22.088496Z","steps":["trace[390937185] 'agreement among raft nodes before linearized reading'  (duration: 355.007274ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-01T03:40:22.088542Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-01T03:40:21.733316Z","time spent":"355.214129ms","remote":"127.0.0.1:58990","response type":"/etcdserverpb.KV/Range","request count":0,"request size":53,"response count":1,"response size":4846,"request content":"key:\"/registry/pods/kube-system/coredns-7db6d8ff4d-sjplt\" "}
	{"level":"warn","ts":"2024-05-01T03:40:41.815299Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"132.795428ms","expected-duration":"100ms","prefix":"","request":"header:<ID:14030277660075336256 > lease_revoke:<id:42b58f323ce3a1cd>","response":"size:29"}
	{"level":"info","ts":"2024-05-01T03:40:41.815398Z","caller":"traceutil/trace.go:171","msg":"trace[788791491] linearizableReadLoop","detail":"{readStateIndex:625; appliedIndex:624; }","duration":"176.609199ms","start":"2024-05-01T03:40:41.638777Z","end":"2024-05-01T03:40:41.815386Z","steps":["trace[788791491] 'read index received'  (duration: 43.673822ms)","trace[788791491] 'applied index is now lower than readState.Index'  (duration: 132.934532ms)"],"step_count":2}
	{"level":"warn","ts":"2024-05-01T03:40:41.815521Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"176.729563ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-569cc877fc-p8j59\" ","response":"range_response_count:1 size:4239"}
	{"level":"info","ts":"2024-05-01T03:40:41.815549Z","caller":"traceutil/trace.go:171","msg":"trace[1709848795] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-569cc877fc-p8j59; range_end:; response_count:1; response_revision:580; }","duration":"176.783608ms","start":"2024-05-01T03:40:41.638753Z","end":"2024-05-01T03:40:41.815537Z","steps":["trace[1709848795] 'agreement among raft nodes before linearized reading'  (duration: 176.663763ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-01T03:41:09.494558Z","caller":"traceutil/trace.go:171","msg":"trace[799975887] transaction","detail":"{read_only:false; response_revision:606; number_of_response:1; }","duration":"222.980256ms","start":"2024-05-01T03:41:09.271563Z","end":"2024-05-01T03:41:09.494544Z","steps":["trace[799975887] 'process raft request'  (duration: 221.005416ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-01T03:41:09.497836Z","caller":"traceutil/trace.go:171","msg":"trace[1236022152] linearizableReadLoop","detail":"{readStateIndex:658; appliedIndex:656; }","duration":"113.979463ms","start":"2024-05-01T03:41:09.38384Z","end":"2024-05-01T03:41:09.49782Z","steps":["trace[1236022152] 'read index received'  (duration: 108.804286ms)","trace[1236022152] 'applied index is now lower than readState.Index'  (duration: 5.174581ms)"],"step_count":2}
	{"level":"info","ts":"2024-05-01T03:41:09.499999Z","caller":"traceutil/trace.go:171","msg":"trace[460494178] transaction","detail":"{read_only:false; response_revision:607; number_of_response:1; }","duration":"146.621872ms","start":"2024-05-01T03:41:09.35335Z","end":"2024-05-01T03:41:09.499972Z","steps":["trace[460494178] 'process raft request'  (duration: 144.372229ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-01T03:41:09.501508Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"117.673586ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/flowschemas/\" range_end:\"/registry/flowschemas0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-05-01T03:41:09.501654Z","caller":"traceutil/trace.go:171","msg":"trace[1194557245] range","detail":"{range_begin:/registry/flowschemas/; range_end:/registry/flowschemas0; response_count:0; response_revision:607; }","duration":"117.845466ms","start":"2024-05-01T03:41:09.383784Z","end":"2024-05-01T03:41:09.50163Z","steps":["trace[1194557245] 'agreement among raft nodes before linearized reading'  (duration: 114.132729ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-01T03:50:03.582928Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":806}
	{"level":"info","ts":"2024-05-01T03:50:03.592675Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":806,"took":"9.375644ms","hash":416217357,"current-db-size-bytes":2666496,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":2666496,"current-db-size-in-use":"2.7 MB"}
	{"level":"info","ts":"2024-05-01T03:50:03.592746Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":416217357,"revision":806,"compact-revision":-1}
	{"level":"info","ts":"2024-05-01T03:55:03.590956Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1049}
	{"level":"info","ts":"2024-05-01T03:55:03.595906Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1049,"took":"4.10541ms","hash":3376694056,"current-db-size-bytes":2666496,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":1638400,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-05-01T03:55:03.596002Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3376694056,"revision":1049,"compact-revision":806}
	{"level":"info","ts":"2024-05-01T04:00:03.598467Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1292}
	{"level":"info","ts":"2024-05-01T04:00:03.603047Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1292,"took":"4.189088ms","hash":364171432,"current-db-size-bytes":2666496,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":1638400,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-05-01T04:00:03.60311Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":364171432,"revision":1292,"compact-revision":1049}
	
	
	==> kernel <==
	 04:00:19 up 20 min,  0 users,  load average: 1.49, 0.56, 0.25
	Linux embed-certs-277128 5.10.207 #1 SMP Tue Apr 30 22:38:43 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [a96815c49ac45b34c359d7171b30be5c25cee80fae08d947fc004d479693df00] <==
	I0501 03:55:05.922193       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0501 03:56:05.921406       1 handler_proxy.go:93] no RequestInfo found in the context
	E0501 03:56:05.921592       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0501 03:56:05.921626       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0501 03:56:05.922553       1 handler_proxy.go:93] no RequestInfo found in the context
	E0501 03:56:05.922606       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0501 03:56:05.922738       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0501 03:58:05.922439       1 handler_proxy.go:93] no RequestInfo found in the context
	E0501 03:58:05.922771       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0501 03:58:05.922813       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0501 03:58:05.922909       1 handler_proxy.go:93] no RequestInfo found in the context
	E0501 03:58:05.922946       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0501 03:58:05.924930       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0501 04:00:04.925822       1 handler_proxy.go:93] no RequestInfo found in the context
	E0501 04:00:04.926041       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0501 04:00:05.926807       1 handler_proxy.go:93] no RequestInfo found in the context
	E0501 04:00:05.926879       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0501 04:00:05.926890       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0501 04:00:05.926957       1 handler_proxy.go:93] no RequestInfo found in the context
	E0501 04:00:05.927003       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0501 04:00:05.928333       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [7e7158f7ff3922e97ba5ee97108acc1d0ba0350dff2f9aa56c2f71519108791c] <==
	I0501 03:54:49.108622       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0501 03:55:18.520418       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0501 03:55:19.117430       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0501 03:55:48.526616       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0501 03:55:49.125459       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0501 03:56:15.285045       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="631.237µs"
	E0501 03:56:18.533057       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0501 03:56:19.133716       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0501 03:56:27.276853       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="93.927µs"
	E0501 03:56:48.538778       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0501 03:56:49.141487       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0501 03:57:18.548997       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0501 03:57:19.150395       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0501 03:57:48.554436       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0501 03:57:49.161768       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0501 03:58:18.560801       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0501 03:58:19.170308       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0501 03:58:48.566611       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0501 03:58:49.180469       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0501 03:59:18.572054       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0501 03:59:19.190867       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0501 03:59:48.577603       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0501 03:59:49.198950       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0501 04:00:18.588113       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0501 04:00:19.219701       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [94afdb03c382269209154b2e73edbf200662e89960c248ba775a0ef2b8fbe6b1] <==
	I0501 03:40:05.952100       1 server_linux.go:69] "Using iptables proxy"
	I0501 03:40:05.974921       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.218"]
	I0501 03:40:06.079352       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0501 03:40:06.080620       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0501 03:40:06.080724       1 server_linux.go:165] "Using iptables Proxier"
	I0501 03:40:06.093010       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0501 03:40:06.093276       1 server.go:872] "Version info" version="v1.30.0"
	I0501 03:40:06.093854       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 03:40:06.095284       1 config.go:192] "Starting service config controller"
	I0501 03:40:06.095346       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0501 03:40:06.095387       1 config.go:101] "Starting endpoint slice config controller"
	I0501 03:40:06.095404       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0501 03:40:06.098349       1 config.go:319] "Starting node config controller"
	I0501 03:40:06.098431       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0501 03:40:06.196450       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0501 03:40:06.196500       1 shared_informer.go:320] Caches are synced for service config
	I0501 03:40:06.198572       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [1813f35574f4f0398e7d482fe77c5d7f8490726666a277035bda0a5cff80af6e] <==
	I0501 03:40:03.016766       1 serving.go:380] Generated self-signed cert in-memory
	W0501 03:40:04.842816       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0501 03:40:04.842942       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0501 03:40:04.843080       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0501 03:40:04.843114       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0501 03:40:04.907947       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0501 03:40:04.908035       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 03:40:04.909802       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0501 03:40:04.909874       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0501 03:40:04.909992       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0501 03:40:04.910064       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0501 03:40:05.011267       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 01 03:58:01 embed-certs-277128 kubelet[937]: E0501 03:58:01.308383     937 iptables.go:577] "Could not set up iptables canary" err=<
	May 01 03:58:01 embed-certs-277128 kubelet[937]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 01 03:58:01 embed-certs-277128 kubelet[937]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 01 03:58:01 embed-certs-277128 kubelet[937]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 01 03:58:01 embed-certs-277128 kubelet[937]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 01 03:58:07 embed-certs-277128 kubelet[937]: E0501 03:58:07.258985     937 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-p8j59" podUID="f8ad6c24-dd5d-4515-9052-c9aca7412b55"
	May 01 03:58:21 embed-certs-277128 kubelet[937]: E0501 03:58:21.259629     937 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-p8j59" podUID="f8ad6c24-dd5d-4515-9052-c9aca7412b55"
	May 01 03:58:36 embed-certs-277128 kubelet[937]: E0501 03:58:36.256984     937 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-p8j59" podUID="f8ad6c24-dd5d-4515-9052-c9aca7412b55"
	May 01 03:58:51 embed-certs-277128 kubelet[937]: E0501 03:58:51.259273     937 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-p8j59" podUID="f8ad6c24-dd5d-4515-9052-c9aca7412b55"
	May 01 03:59:01 embed-certs-277128 kubelet[937]: E0501 03:59:01.309394     937 iptables.go:577] "Could not set up iptables canary" err=<
	May 01 03:59:01 embed-certs-277128 kubelet[937]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 01 03:59:01 embed-certs-277128 kubelet[937]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 01 03:59:01 embed-certs-277128 kubelet[937]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 01 03:59:01 embed-certs-277128 kubelet[937]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 01 03:59:05 embed-certs-277128 kubelet[937]: E0501 03:59:05.258329     937 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-p8j59" podUID="f8ad6c24-dd5d-4515-9052-c9aca7412b55"
	May 01 03:59:17 embed-certs-277128 kubelet[937]: E0501 03:59:17.259374     937 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-p8j59" podUID="f8ad6c24-dd5d-4515-9052-c9aca7412b55"
	May 01 03:59:28 embed-certs-277128 kubelet[937]: E0501 03:59:28.256943     937 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-p8j59" podUID="f8ad6c24-dd5d-4515-9052-c9aca7412b55"
	May 01 03:59:43 embed-certs-277128 kubelet[937]: E0501 03:59:43.257908     937 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-p8j59" podUID="f8ad6c24-dd5d-4515-9052-c9aca7412b55"
	May 01 03:59:58 embed-certs-277128 kubelet[937]: E0501 03:59:58.257627     937 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-p8j59" podUID="f8ad6c24-dd5d-4515-9052-c9aca7412b55"
	May 01 04:00:01 embed-certs-277128 kubelet[937]: E0501 04:00:01.308004     937 iptables.go:577] "Could not set up iptables canary" err=<
	May 01 04:00:01 embed-certs-277128 kubelet[937]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 01 04:00:01 embed-certs-277128 kubelet[937]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 01 04:00:01 embed-certs-277128 kubelet[937]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 01 04:00:01 embed-certs-277128 kubelet[937]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 01 04:00:09 embed-certs-277128 kubelet[937]: E0501 04:00:09.261636     937 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-p8j59" podUID="f8ad6c24-dd5d-4515-9052-c9aca7412b55"
	
	
	==> storage-provisioner [aaae36261c5ce1accf3bfb5993be5a256a037a640d5f6f30de8384900dda06b2] <==
	I0501 03:40:05.904319       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0501 03:40:35.908215       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [f9a8d2f0f9453969b410fb7f880f6cf4c7e6e2f3c17a687583906efe4181dd17] <==
	I0501 03:40:36.685379       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0501 03:40:36.700336       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0501 03:40:36.700528       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0501 03:40:54.110988       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0501 03:40:54.111690       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9e13eae9-b179-487f-bd34-653ce075558a", APIVersion:"v1", ResourceVersion:"587", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-277128_2eb83f7d-a184-4cb0-9be5-8cfdad84d7a9 became leader
	I0501 03:40:54.111951       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-277128_2eb83f7d-a184-4cb0-9be5-8cfdad84d7a9!
	I0501 03:40:54.213181       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-277128_2eb83f7d-a184-4cb0-9be5-8cfdad84d7a9!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-277128 -n embed-certs-277128
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-277128 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-p8j59
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-277128 describe pod metrics-server-569cc877fc-p8j59
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-277128 describe pod metrics-server-569cc877fc-p8j59: exit status 1 (73.331328ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-p8j59" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-277128 describe pod metrics-server-569cc877fc-p8j59: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (400.96s)
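For manual triage of a failure like this one, a couple of kubectl commands can confirm whether the metrics-server pod ever became Ready and why. This is an illustrative sketch, not part of the test harness; the label selector k8s-app=metrics-server is assumed from the upstream metrics-server manifest that the minikube addon ships:

	kubectl --context embed-certs-277128 -n kube-system get pods -l k8s-app=metrics-server -o wide
	kubectl --context embed-certs-277128 -n kube-system describe pod -l k8s-app=metrics-server

The describe events would surface the same ImagePullBackOff for fake.domain/registry.k8s.io/echoserver:1.4 that the kubelet log above reports; that unreachable registry is presumably injected by the test's earlier "addons enable metrics-server ... --registries=MetricsServer=fake.domain" step (visible for other profiles in the Audit table below), so the pod is not expected to pull its image successfully.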

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (451.57s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-715118 -n default-k8s-diff-port-715118
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-05-01 04:02:09.938246818 +0000 UTC m=+6905.044163120
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-715118 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-715118 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.499µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-715118 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
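The expectation checked here comes from enabling the dashboard addon with --images=MetricsScraper=registry.k8s.io/echoserver:1.4 (see the Audit table below), so the dashboard-metrics-scraper Deployment should reference that image. A minimal manual check, outside the harness, using the same deployment name the test's describe command targets:

	kubectl --context default-k8s-diff-port-715118 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'
	kubectl --context default-k8s-diff-port-715118 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard

In this run the image check never got that far: the 9m0s wait for k8s-app=kubernetes-dashboard pods consumed the test's context deadline, so the follow-up describe at start_stop_delete_test.go:291 failed immediately with "context deadline exceeded" rather than reporting a wrong image.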
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-715118 -n default-k8s-diff-port-715118
E0501 04:02:10.087466   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/no-preload-892672/client.crt: no such file or directory
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-715118 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-715118 logs -n 25: (3.62068323s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p                                                     | default-k8s-diff-port-715118 | jenkins | v1.33.0 | 01 May 24 03:33 UTC |                     |
	|         | default-k8s-diff-port-715118                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-892672                  | no-preload-892672            | jenkins | v1.33.0 | 01 May 24 03:34 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-277128                 | embed-certs-277128           | jenkins | v1.33.0 | 01 May 24 03:34 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-892672                                   | no-preload-892672            | jenkins | v1.33.0 | 01 May 24 03:34 UTC | 01 May 24 03:46 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-503971        | old-k8s-version-503971       | jenkins | v1.33.0 | 01 May 24 03:34 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-277128                                  | embed-certs-277128           | jenkins | v1.33.0 | 01 May 24 03:34 UTC | 01 May 24 03:44 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-715118       | default-k8s-diff-port-715118 | jenkins | v1.33.0 | 01 May 24 03:35 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-715118 | jenkins | v1.33.0 | 01 May 24 03:35 UTC | 01 May 24 03:45 UTC |
	|         | default-k8s-diff-port-715118                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-503971                              | old-k8s-version-503971       | jenkins | v1.33.0 | 01 May 24 03:36 UTC | 01 May 24 03:36 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-503971             | old-k8s-version-503971       | jenkins | v1.33.0 | 01 May 24 03:36 UTC | 01 May 24 03:36 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-503971                              | old-k8s-version-503971       | jenkins | v1.33.0 | 01 May 24 03:36 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-503971                              | old-k8s-version-503971       | jenkins | v1.33.0 | 01 May 24 04:00 UTC | 01 May 24 04:00 UTC |
	| start   | -p newest-cni-906018 --memory=2200 --alsologtostderr   | newest-cni-906018            | jenkins | v1.33.0 | 01 May 24 04:00 UTC | 01 May 24 04:01 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| delete  | -p no-preload-892672                                   | no-preload-892672            | jenkins | v1.33.0 | 01 May 24 04:00 UTC | 01 May 24 04:00 UTC |
	| start   | -p auto-731347 --memory=3072                           | auto-731347                  | jenkins | v1.33.0 | 01 May 24 04:00 UTC | 01 May 24 04:01 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p embed-certs-277128                                  | embed-certs-277128           | jenkins | v1.33.0 | 01 May 24 04:00 UTC | 01 May 24 04:00 UTC |
	| start   | -p kindnet-731347                                      | kindnet-731347               | jenkins | v1.33.0 | 01 May 24 04:00 UTC |                     |
	|         | --memory=3072                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --cni=kindnet --driver=kvm2                            |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-906018             | newest-cni-906018            | jenkins | v1.33.0 | 01 May 24 04:01 UTC | 01 May 24 04:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-906018                                   | newest-cni-906018            | jenkins | v1.33.0 | 01 May 24 04:01 UTC | 01 May 24 04:01 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-906018                  | newest-cni-906018            | jenkins | v1.33.0 | 01 May 24 04:01 UTC | 01 May 24 04:01 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-906018 --memory=2200 --alsologtostderr   | newest-cni-906018            | jenkins | v1.33.0 | 01 May 24 04:01 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| ssh     | -p auto-731347 pgrep -a                                | auto-731347                  | jenkins | v1.33.0 | 01 May 24 04:01 UTC | 01 May 24 04:01 UTC |
	|         | kubelet                                                |                              |         |         |                     |                     |
	| ssh     | -p auto-731347 sudo cat                                | auto-731347                  | jenkins | v1.33.0 | 01 May 24 04:02 UTC | 01 May 24 04:02 UTC |
	|         | /etc/nsswitch.conf                                     |                              |         |         |                     |                     |
	| ssh     | -p auto-731347 sudo cat                                | auto-731347                  | jenkins | v1.33.0 | 01 May 24 04:02 UTC | 01 May 24 04:02 UTC |
	|         | /etc/hosts                                             |                              |         |         |                     |                     |
	| ssh     | -p auto-731347 sudo cat                                | auto-731347                  | jenkins | v1.33.0 | 01 May 24 04:02 UTC |                     |
	|         | /etc/resolv.conf                                       |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/01 04:01:29
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0501 04:01:29.809240   77015 out.go:291] Setting OutFile to fd 1 ...
	I0501 04:01:29.809568   77015 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 04:01:29.809585   77015 out.go:304] Setting ErrFile to fd 2...
	I0501 04:01:29.809591   77015 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 04:01:29.809908   77015 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18779-13391/.minikube/bin
	I0501 04:01:29.810702   77015 out.go:298] Setting JSON to false
	I0501 04:01:29.812510   77015 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":9833,"bootTime":1714526257,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0501 04:01:29.812706   77015 start.go:139] virtualization: kvm guest
	I0501 04:01:29.814460   77015 out.go:177] * [newest-cni-906018] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0501 04:01:29.816337   77015 out.go:177]   - MINIKUBE_LOCATION=18779
	I0501 04:01:29.816298   77015 notify.go:220] Checking for updates...
	I0501 04:01:29.817754   77015 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0501 04:01:29.819079   77015 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18779-13391/kubeconfig
	I0501 04:01:29.820540   77015 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18779-13391/.minikube
	I0501 04:01:29.821725   77015 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0501 04:01:29.822868   77015 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0501 04:01:29.824695   77015 config.go:182] Loaded profile config "newest-cni-906018": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 04:01:29.825226   77015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 04:01:29.825274   77015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 04:01:29.841121   77015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39271
	I0501 04:01:29.841568   77015 main.go:141] libmachine: () Calling .GetVersion
	I0501 04:01:29.842153   77015 main.go:141] libmachine: Using API Version  1
	I0501 04:01:29.842184   77015 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 04:01:29.842590   77015 main.go:141] libmachine: () Calling .GetMachineName
	I0501 04:01:29.842794   77015 main.go:141] libmachine: (newest-cni-906018) Calling .DriverName
	I0501 04:01:29.843077   77015 driver.go:392] Setting default libvirt URI to qemu:///system
	I0501 04:01:29.843488   77015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 04:01:29.843559   77015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 04:01:29.859213   77015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46079
	I0501 04:01:29.859684   77015 main.go:141] libmachine: () Calling .GetVersion
	I0501 04:01:29.860225   77015 main.go:141] libmachine: Using API Version  1
	I0501 04:01:29.860249   77015 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 04:01:29.860611   77015 main.go:141] libmachine: () Calling .GetMachineName
	I0501 04:01:29.860822   77015 main.go:141] libmachine: (newest-cni-906018) Calling .DriverName
	I0501 04:01:29.901122   77015 out.go:177] * Using the kvm2 driver based on existing profile
	I0501 04:01:29.902410   77015 start.go:297] selected driver: kvm2
	I0501 04:01:29.902428   77015 start.go:901] validating driver "kvm2" against &{Name:newest-cni-906018 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.0 ClusterName:newest-cni-906018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.183 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] St
artHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 04:01:29.902579   77015 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0501 04:01:29.903329   77015 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0501 04:01:29.903428   77015 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18779-13391/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0501 04:01:29.920348   77015 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0501 04:01:29.920742   77015 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0501 04:01:29.920798   77015 cni.go:84] Creating CNI manager for ""
	I0501 04:01:29.920808   77015 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0501 04:01:29.920843   77015 start.go:340] cluster config:
	{Name:newest-cni-906018 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:newest-cni-906018 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.183 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network
: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 04:01:29.920948   77015 iso.go:125] acquiring lock: {Name:mkcd2496aadb29931e179193d707c731bfda98b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0501 04:01:29.923536   77015 out.go:177] * Starting "newest-cni-906018" primary control-plane node in "newest-cni-906018" cluster
	I0501 04:01:31.811447   76145 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0501 04:01:31.811527   76145 kubeadm.go:309] [preflight] Running pre-flight checks
	I0501 04:01:31.811598   76145 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0501 04:01:31.811689   76145 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0501 04:01:31.811819   76145 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0501 04:01:31.811907   76145 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0501 04:01:31.813440   76145 out.go:204]   - Generating certificates and keys ...
	I0501 04:01:31.813505   76145 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0501 04:01:31.813582   76145 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0501 04:01:31.813675   76145 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0501 04:01:31.813750   76145 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0501 04:01:31.813822   76145 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0501 04:01:31.813895   76145 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0501 04:01:31.813982   76145 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0501 04:01:31.814164   76145 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [auto-731347 localhost] and IPs [192.168.39.152 127.0.0.1 ::1]
	I0501 04:01:31.814235   76145 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0501 04:01:31.814331   76145 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [auto-731347 localhost] and IPs [192.168.39.152 127.0.0.1 ::1]
	I0501 04:01:31.814390   76145 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0501 04:01:31.814462   76145 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0501 04:01:31.814510   76145 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0501 04:01:31.814585   76145 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0501 04:01:31.814661   76145 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0501 04:01:31.814740   76145 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0501 04:01:31.814816   76145 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0501 04:01:31.814904   76145 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0501 04:01:31.814981   76145 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0501 04:01:31.815107   76145 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0501 04:01:31.815167   76145 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0501 04:01:31.816499   76145 out.go:204]   - Booting up control plane ...
	I0501 04:01:31.816591   76145 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0501 04:01:31.816662   76145 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0501 04:01:31.816725   76145 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0501 04:01:31.816821   76145 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0501 04:01:31.816915   76145 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0501 04:01:31.816980   76145 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0501 04:01:31.817113   76145 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0501 04:01:31.817213   76145 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0501 04:01:31.817305   76145 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.090332ms
	I0501 04:01:31.817415   76145 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0501 04:01:31.817466   76145 kubeadm.go:309] [api-check] The API server is healthy after 5.504812712s
	I0501 04:01:31.817599   76145 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0501 04:01:31.817743   76145 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0501 04:01:31.817801   76145 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0501 04:01:31.817936   76145 kubeadm.go:309] [mark-control-plane] Marking the node auto-731347 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0501 04:01:31.817990   76145 kubeadm.go:309] [bootstrap-token] Using token: dkyf3k.ko3yfmtthnl0mor7
	I0501 04:01:31.819225   76145 out.go:204]   - Configuring RBAC rules ...
	I0501 04:01:31.819353   76145 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0501 04:01:31.819438   76145 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0501 04:01:31.819563   76145 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0501 04:01:31.819708   76145 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0501 04:01:31.819860   76145 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0501 04:01:31.819972   76145 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0501 04:01:31.820091   76145 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0501 04:01:31.820134   76145 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0501 04:01:31.820180   76145 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0501 04:01:31.820187   76145 kubeadm.go:309] 
	I0501 04:01:31.820244   76145 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0501 04:01:31.820253   76145 kubeadm.go:309] 
	I0501 04:01:31.820319   76145 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0501 04:01:31.820326   76145 kubeadm.go:309] 
	I0501 04:01:31.820355   76145 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0501 04:01:31.820408   76145 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0501 04:01:31.820466   76145 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0501 04:01:31.820479   76145 kubeadm.go:309] 
	I0501 04:01:31.820533   76145 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0501 04:01:31.820539   76145 kubeadm.go:309] 
	I0501 04:01:31.820595   76145 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0501 04:01:31.820605   76145 kubeadm.go:309] 
	I0501 04:01:31.820644   76145 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0501 04:01:31.820702   76145 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0501 04:01:31.820756   76145 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0501 04:01:31.820761   76145 kubeadm.go:309] 
	I0501 04:01:31.820827   76145 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0501 04:01:31.820893   76145 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0501 04:01:31.820903   76145 kubeadm.go:309] 
	I0501 04:01:31.820969   76145 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token dkyf3k.ko3yfmtthnl0mor7 \
	I0501 04:01:31.821058   76145 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:bd94cc6acfb95fe36f70b12c6a1f2980601b8235e7c4dea65cebdf35eb514754 \
	I0501 04:01:31.821084   76145 kubeadm.go:309] 	--control-plane 
	I0501 04:01:31.821094   76145 kubeadm.go:309] 
	I0501 04:01:31.821173   76145 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0501 04:01:31.821179   76145 kubeadm.go:309] 
	I0501 04:01:31.821247   76145 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token dkyf3k.ko3yfmtthnl0mor7 \
	I0501 04:01:31.821338   76145 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:bd94cc6acfb95fe36f70b12c6a1f2980601b8235e7c4dea65cebdf35eb514754 
	I0501 04:01:31.821359   76145 cni.go:84] Creating CNI manager for ""
	I0501 04:01:31.821369   76145 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0501 04:01:31.822677   76145 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
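
The kubeadm init output above ends with the standard join commands and a --discovery-token-ca-cert-hash value. If that hash ever needs to be cross-checked against the cluster CA while triaging a failure (not something this test does; shown only as a reference, using the documented kubeadm procedure), it can be recomputed on the control-plane node:

    # Recompute the CA public-key hash that `kubeadm join` pins (run on the control plane).
    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
    # The result should match the sha256:... value printed in the log above.
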
	I0501 04:01:29.460554   76250 main.go:141] libmachine: (kindnet-731347) DBG | domain kindnet-731347 has defined MAC address 52:54:00:73:09:76 in network mk-kindnet-731347
	I0501 04:01:29.461065   76250 main.go:141] libmachine: (kindnet-731347) DBG | unable to find current IP address of domain kindnet-731347 in network mk-kindnet-731347
	I0501 04:01:29.461096   76250 main.go:141] libmachine: (kindnet-731347) DBG | I0501 04:01:29.461030   76628 retry.go:31] will retry after 5.438275864s: waiting for machine to come up
	I0501 04:01:29.924831   77015 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0501 04:01:29.924874   77015 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0501 04:01:29.924882   77015 cache.go:56] Caching tarball of preloaded images
	I0501 04:01:29.924984   77015 preload.go:173] Found /home/jenkins/minikube-integration/18779-13391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0501 04:01:29.924999   77015 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0501 04:01:29.925124   77015 profile.go:143] Saving config to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/newest-cni-906018/config.json ...
	I0501 04:01:29.925301   77015 start.go:360] acquireMachinesLock for newest-cni-906018: {Name:mkc4548e5afe1d4b0a833f6c522103562b0cefff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0501 04:01:31.823756   76145 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0501 04:01:31.835148   76145 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
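
Here minikube copies its bridge CNI configuration into the guest as /etc/cni/net.d/1-k8s.conflist. When a networking step later fails, a quick way to confirm the file landed and that cri-o is still healthy is the following sketch (run via `minikube -p auto-731347 ssh` or a direct ssh like the ones this log already uses):

    # Inspect the CNI config minikube just wrote and check that cri-o is running.
    sudo ls -l /etc/cni/net.d/
    sudo cat /etc/cni/net.d/1-k8s.conflist
    sudo systemctl status crio --no-pager
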
	I0501 04:01:31.855691   76145 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0501 04:01:31.855783   76145 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 04:01:31.855805   76145 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-731347 minikube.k8s.io/updated_at=2024_05_01T04_01_31_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e minikube.k8s.io/name=auto-731347 minikube.k8s.io/primary=true
	I0501 04:01:32.045934   76145 ops.go:34] apiserver oom_adj: -16
	I0501 04:01:32.045942   76145 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 04:01:32.546007   76145 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 04:01:33.046022   76145 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 04:01:33.546188   76145 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 04:01:34.046041   76145 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 04:01:34.546686   76145 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 04:01:35.046009   76145 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 04:01:35.546361   76145 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 04:01:36.046201   76145 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
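
The repeated `kubectl get sa default` calls above are minikube polling until the controller-manager has created the default ServiceAccount in the default namespace. The equivalent manual wait, using the same kubectl binary and kubeconfig shown in the log (a sketch, not part of the test itself), would be:

    # Poll until the default ServiceAccount has been created by the controller-manager.
    until sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done
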
	I0501 04:01:34.900718   76250 main.go:141] libmachine: (kindnet-731347) DBG | domain kindnet-731347 has defined MAC address 52:54:00:73:09:76 in network mk-kindnet-731347
	I0501 04:01:34.901254   76250 main.go:141] libmachine: (kindnet-731347) Found IP for machine: 192.168.50.20
	I0501 04:01:34.901278   76250 main.go:141] libmachine: (kindnet-731347) DBG | domain kindnet-731347 has current primary IP address 192.168.50.20 and MAC address 52:54:00:73:09:76 in network mk-kindnet-731347
	I0501 04:01:34.901287   76250 main.go:141] libmachine: (kindnet-731347) Reserving static IP address...
	I0501 04:01:34.901614   76250 main.go:141] libmachine: (kindnet-731347) DBG | unable to find host DHCP lease matching {name: "kindnet-731347", mac: "52:54:00:73:09:76", ip: "192.168.50.20"} in network mk-kindnet-731347
	I0501 04:01:34.978370   76250 main.go:141] libmachine: (kindnet-731347) Reserved static IP address: 192.168.50.20
	I0501 04:01:34.978418   76250 main.go:141] libmachine: (kindnet-731347) DBG | Getting to WaitForSSH function...
	I0501 04:01:34.978428   76250 main.go:141] libmachine: (kindnet-731347) Waiting for SSH to be available...
	I0501 04:01:34.981254   76250 main.go:141] libmachine: (kindnet-731347) DBG | domain kindnet-731347 has defined MAC address 52:54:00:73:09:76 in network mk-kindnet-731347
	I0501 04:01:34.981568   76250 main.go:141] libmachine: (kindnet-731347) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:73:09:76", ip: ""} in network mk-kindnet-731347
	I0501 04:01:34.981594   76250 main.go:141] libmachine: (kindnet-731347) DBG | unable to find defined IP address of network mk-kindnet-731347 interface with MAC address 52:54:00:73:09:76
	I0501 04:01:34.981763   76250 main.go:141] libmachine: (kindnet-731347) DBG | Using SSH client type: external
	I0501 04:01:34.981793   76250 main.go:141] libmachine: (kindnet-731347) DBG | Using SSH private key: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/kindnet-731347/id_rsa (-rw-------)
	I0501 04:01:34.981827   76250 main.go:141] libmachine: (kindnet-731347) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18779-13391/.minikube/machines/kindnet-731347/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0501 04:01:34.981845   76250 main.go:141] libmachine: (kindnet-731347) DBG | About to run SSH command:
	I0501 04:01:34.981863   76250 main.go:141] libmachine: (kindnet-731347) DBG | exit 0
	I0501 04:01:34.985503   76250 main.go:141] libmachine: (kindnet-731347) DBG | SSH cmd err, output: exit status 255: 
	I0501 04:01:34.985529   76250 main.go:141] libmachine: (kindnet-731347) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0501 04:01:34.985539   76250 main.go:141] libmachine: (kindnet-731347) DBG | command : exit 0
	I0501 04:01:34.985548   76250 main.go:141] libmachine: (kindnet-731347) DBG | err     : exit status 255
	I0501 04:01:34.985573   76250 main.go:141] libmachine: (kindnet-731347) DBG | output  : 
	I0501 04:01:39.531927   77015 start.go:364] duration metric: took 9.606597102s to acquireMachinesLock for "newest-cni-906018"
	I0501 04:01:39.531993   77015 start.go:96] Skipping create...Using existing machine configuration
	I0501 04:01:39.532001   77015 fix.go:54] fixHost starting: 
	I0501 04:01:39.532421   77015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 04:01:39.532480   77015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 04:01:39.550194   77015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44537
	I0501 04:01:39.550704   77015 main.go:141] libmachine: () Calling .GetVersion
	I0501 04:01:39.551267   77015 main.go:141] libmachine: Using API Version  1
	I0501 04:01:39.551294   77015 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 04:01:39.551660   77015 main.go:141] libmachine: () Calling .GetMachineName
	I0501 04:01:39.551854   77015 main.go:141] libmachine: (newest-cni-906018) Calling .DriverName
	I0501 04:01:39.552004   77015 main.go:141] libmachine: (newest-cni-906018) Calling .GetState
	I0501 04:01:39.553657   77015 fix.go:112] recreateIfNeeded on newest-cni-906018: state=Stopped err=<nil>
	I0501 04:01:39.553684   77015 main.go:141] libmachine: (newest-cni-906018) Calling .DriverName
	W0501 04:01:39.553834   77015 fix.go:138] unexpected machine state, will restart: <nil>
	I0501 04:01:39.556039   77015 out.go:177] * Restarting existing kvm2 VM for "newest-cni-906018" ...
	I0501 04:01:39.557410   77015 main.go:141] libmachine: (newest-cni-906018) Calling .Start
	I0501 04:01:39.557583   77015 main.go:141] libmachine: (newest-cni-906018) Ensuring networks are active...
	I0501 04:01:39.558466   77015 main.go:141] libmachine: (newest-cni-906018) Ensuring network default is active
	I0501 04:01:39.558876   77015 main.go:141] libmachine: (newest-cni-906018) Ensuring network mk-newest-cni-906018 is active
	I0501 04:01:39.559281   77015 main.go:141] libmachine: (newest-cni-906018) Getting domain xml...
	I0501 04:01:39.560075   77015 main.go:141] libmachine: (newest-cni-906018) Creating domain...
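
At this point the 77015 run is restarting the stopped newest-cni-906018 libvirt domain rather than creating a new one, after ensuring the default and mk-newest-cni-906018 networks are active. If a restart like this hangs, the domain and its networks can be inspected on the Jenkins host with the libvirt CLI, assuming the client tools are installed there (a debugging sketch, not something the test runs):

    # List the libvirt domains and networks the kvm2 driver manages.
    sudo virsh list --all
    sudo virsh net-list --all
    sudo virsh dominfo newest-cni-906018
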
	I0501 04:01:37.985921   76250 main.go:141] libmachine: (kindnet-731347) DBG | Getting to WaitForSSH function...
	I0501 04:01:37.988414   76250 main.go:141] libmachine: (kindnet-731347) DBG | domain kindnet-731347 has defined MAC address 52:54:00:73:09:76 in network mk-kindnet-731347
	I0501 04:01:37.988828   76250 main.go:141] libmachine: (kindnet-731347) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:09:76", ip: ""} in network mk-kindnet-731347: {Iface:virbr2 ExpiryTime:2024-05-01 05:01:29 +0000 UTC Type:0 Mac:52:54:00:73:09:76 Iaid: IPaddr:192.168.50.20 Prefix:24 Hostname:kindnet-731347 Clientid:01:52:54:00:73:09:76}
	I0501 04:01:37.988853   76250 main.go:141] libmachine: (kindnet-731347) DBG | domain kindnet-731347 has defined IP address 192.168.50.20 and MAC address 52:54:00:73:09:76 in network mk-kindnet-731347
	I0501 04:01:37.988993   76250 main.go:141] libmachine: (kindnet-731347) DBG | Using SSH client type: external
	I0501 04:01:37.989013   76250 main.go:141] libmachine: (kindnet-731347) DBG | Using SSH private key: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/kindnet-731347/id_rsa (-rw-------)
	I0501 04:01:37.989064   76250 main.go:141] libmachine: (kindnet-731347) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.20 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18779-13391/.minikube/machines/kindnet-731347/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0501 04:01:37.989092   76250 main.go:141] libmachine: (kindnet-731347) DBG | About to run SSH command:
	I0501 04:01:37.989109   76250 main.go:141] libmachine: (kindnet-731347) DBG | exit 0
	I0501 04:01:38.114554   76250 main.go:141] libmachine: (kindnet-731347) DBG | SSH cmd err, output: <nil>: 
	I0501 04:01:38.114857   76250 main.go:141] libmachine: (kindnet-731347) KVM machine creation complete!
	I0501 04:01:38.115152   76250 main.go:141] libmachine: (kindnet-731347) Calling .GetConfigRaw
	I0501 04:01:38.115737   76250 main.go:141] libmachine: (kindnet-731347) Calling .DriverName
	I0501 04:01:38.115961   76250 main.go:141] libmachine: (kindnet-731347) Calling .DriverName
	I0501 04:01:38.116131   76250 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0501 04:01:38.116149   76250 main.go:141] libmachine: (kindnet-731347) Calling .GetState
	I0501 04:01:38.117347   76250 main.go:141] libmachine: Detecting operating system of created instance...
	I0501 04:01:38.117362   76250 main.go:141] libmachine: Waiting for SSH to be available...
	I0501 04:01:38.117370   76250 main.go:141] libmachine: Getting to WaitForSSH function...
	I0501 04:01:38.117379   76250 main.go:141] libmachine: (kindnet-731347) Calling .GetSSHHostname
	I0501 04:01:38.119615   76250 main.go:141] libmachine: (kindnet-731347) DBG | domain kindnet-731347 has defined MAC address 52:54:00:73:09:76 in network mk-kindnet-731347
	I0501 04:01:38.120068   76250 main.go:141] libmachine: (kindnet-731347) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:09:76", ip: ""} in network mk-kindnet-731347: {Iface:virbr2 ExpiryTime:2024-05-01 05:01:29 +0000 UTC Type:0 Mac:52:54:00:73:09:76 Iaid: IPaddr:192.168.50.20 Prefix:24 Hostname:kindnet-731347 Clientid:01:52:54:00:73:09:76}
	I0501 04:01:38.120096   76250 main.go:141] libmachine: (kindnet-731347) DBG | domain kindnet-731347 has defined IP address 192.168.50.20 and MAC address 52:54:00:73:09:76 in network mk-kindnet-731347
	I0501 04:01:38.120260   76250 main.go:141] libmachine: (kindnet-731347) Calling .GetSSHPort
	I0501 04:01:38.120437   76250 main.go:141] libmachine: (kindnet-731347) Calling .GetSSHKeyPath
	I0501 04:01:38.120582   76250 main.go:141] libmachine: (kindnet-731347) Calling .GetSSHKeyPath
	I0501 04:01:38.120697   76250 main.go:141] libmachine: (kindnet-731347) Calling .GetSSHUsername
	I0501 04:01:38.120870   76250 main.go:141] libmachine: Using SSH client type: native
	I0501 04:01:38.121108   76250 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.20 22 <nil> <nil>}
	I0501 04:01:38.121122   76250 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0501 04:01:38.221939   76250 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0501 04:01:38.221969   76250 main.go:141] libmachine: Detecting the provisioner...
	I0501 04:01:38.221981   76250 main.go:141] libmachine: (kindnet-731347) Calling .GetSSHHostname
	I0501 04:01:38.224532   76250 main.go:141] libmachine: (kindnet-731347) DBG | domain kindnet-731347 has defined MAC address 52:54:00:73:09:76 in network mk-kindnet-731347
	I0501 04:01:38.224932   76250 main.go:141] libmachine: (kindnet-731347) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:09:76", ip: ""} in network mk-kindnet-731347: {Iface:virbr2 ExpiryTime:2024-05-01 05:01:29 +0000 UTC Type:0 Mac:52:54:00:73:09:76 Iaid: IPaddr:192.168.50.20 Prefix:24 Hostname:kindnet-731347 Clientid:01:52:54:00:73:09:76}
	I0501 04:01:38.224965   76250 main.go:141] libmachine: (kindnet-731347) DBG | domain kindnet-731347 has defined IP address 192.168.50.20 and MAC address 52:54:00:73:09:76 in network mk-kindnet-731347
	I0501 04:01:38.225163   76250 main.go:141] libmachine: (kindnet-731347) Calling .GetSSHPort
	I0501 04:01:38.225366   76250 main.go:141] libmachine: (kindnet-731347) Calling .GetSSHKeyPath
	I0501 04:01:38.225549   76250 main.go:141] libmachine: (kindnet-731347) Calling .GetSSHKeyPath
	I0501 04:01:38.225722   76250 main.go:141] libmachine: (kindnet-731347) Calling .GetSSHUsername
	I0501 04:01:38.225896   76250 main.go:141] libmachine: Using SSH client type: native
	I0501 04:01:38.226064   76250 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.20 22 <nil> <nil>}
	I0501 04:01:38.226076   76250 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0501 04:01:38.327816   76250 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0501 04:01:38.327910   76250 main.go:141] libmachine: found compatible host: buildroot
	I0501 04:01:38.327925   76250 main.go:141] libmachine: Provisioning with buildroot...
	I0501 04:01:38.327937   76250 main.go:141] libmachine: (kindnet-731347) Calling .GetMachineName
	I0501 04:01:38.328260   76250 buildroot.go:166] provisioning hostname "kindnet-731347"
	I0501 04:01:38.328305   76250 main.go:141] libmachine: (kindnet-731347) Calling .GetMachineName
	I0501 04:01:38.328509   76250 main.go:141] libmachine: (kindnet-731347) Calling .GetSSHHostname
	I0501 04:01:38.331376   76250 main.go:141] libmachine: (kindnet-731347) DBG | domain kindnet-731347 has defined MAC address 52:54:00:73:09:76 in network mk-kindnet-731347
	I0501 04:01:38.331763   76250 main.go:141] libmachine: (kindnet-731347) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:09:76", ip: ""} in network mk-kindnet-731347: {Iface:virbr2 ExpiryTime:2024-05-01 05:01:29 +0000 UTC Type:0 Mac:52:54:00:73:09:76 Iaid: IPaddr:192.168.50.20 Prefix:24 Hostname:kindnet-731347 Clientid:01:52:54:00:73:09:76}
	I0501 04:01:38.331802   76250 main.go:141] libmachine: (kindnet-731347) DBG | domain kindnet-731347 has defined IP address 192.168.50.20 and MAC address 52:54:00:73:09:76 in network mk-kindnet-731347
	I0501 04:01:38.331959   76250 main.go:141] libmachine: (kindnet-731347) Calling .GetSSHPort
	I0501 04:01:38.332126   76250 main.go:141] libmachine: (kindnet-731347) Calling .GetSSHKeyPath
	I0501 04:01:38.332301   76250 main.go:141] libmachine: (kindnet-731347) Calling .GetSSHKeyPath
	I0501 04:01:38.332439   76250 main.go:141] libmachine: (kindnet-731347) Calling .GetSSHUsername
	I0501 04:01:38.332578   76250 main.go:141] libmachine: Using SSH client type: native
	I0501 04:01:38.332737   76250 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.20 22 <nil> <nil>}
	I0501 04:01:38.332749   76250 main.go:141] libmachine: About to run SSH command:
	sudo hostname kindnet-731347 && echo "kindnet-731347" | sudo tee /etc/hostname
	I0501 04:01:38.450257   76250 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-731347
	
	I0501 04:01:38.450319   76250 main.go:141] libmachine: (kindnet-731347) Calling .GetSSHHostname
	I0501 04:01:38.452935   76250 main.go:141] libmachine: (kindnet-731347) DBG | domain kindnet-731347 has defined MAC address 52:54:00:73:09:76 in network mk-kindnet-731347
	I0501 04:01:38.453366   76250 main.go:141] libmachine: (kindnet-731347) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:09:76", ip: ""} in network mk-kindnet-731347: {Iface:virbr2 ExpiryTime:2024-05-01 05:01:29 +0000 UTC Type:0 Mac:52:54:00:73:09:76 Iaid: IPaddr:192.168.50.20 Prefix:24 Hostname:kindnet-731347 Clientid:01:52:54:00:73:09:76}
	I0501 04:01:38.453400   76250 main.go:141] libmachine: (kindnet-731347) DBG | domain kindnet-731347 has defined IP address 192.168.50.20 and MAC address 52:54:00:73:09:76 in network mk-kindnet-731347
	I0501 04:01:38.453616   76250 main.go:141] libmachine: (kindnet-731347) Calling .GetSSHPort
	I0501 04:01:38.453803   76250 main.go:141] libmachine: (kindnet-731347) Calling .GetSSHKeyPath
	I0501 04:01:38.453993   76250 main.go:141] libmachine: (kindnet-731347) Calling .GetSSHKeyPath
	I0501 04:01:38.454174   76250 main.go:141] libmachine: (kindnet-731347) Calling .GetSSHUsername
	I0501 04:01:38.454415   76250 main.go:141] libmachine: Using SSH client type: native
	I0501 04:01:38.454590   76250 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.20 22 <nil> <nil>}
	I0501 04:01:38.454608   76250 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-731347' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-731347/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-731347' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0501 04:01:38.567005   76250 main.go:141] libmachine: SSH cmd err, output: <nil>: 
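
The two SSH commands above set the guest's hostname to kindnet-731347 (both the live hostname and /etc/hostname) and pin it to 127.0.1.1 in /etc/hosts. A quick sanity check over the same SSH session would be (a sketch):

    # Verify the hostname and the 127.0.1.1 mapping the provisioner just wrote.
    hostname
    grep kindnet-731347 /etc/hostname /etc/hosts
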
	I0501 04:01:38.567039   76250 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18779-13391/.minikube CaCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18779-13391/.minikube}
	I0501 04:01:38.567064   76250 buildroot.go:174] setting up certificates
	I0501 04:01:38.567077   76250 provision.go:84] configureAuth start
	I0501 04:01:38.567093   76250 main.go:141] libmachine: (kindnet-731347) Calling .GetMachineName
	I0501 04:01:38.567418   76250 main.go:141] libmachine: (kindnet-731347) Calling .GetIP
	I0501 04:01:38.570257   76250 main.go:141] libmachine: (kindnet-731347) DBG | domain kindnet-731347 has defined MAC address 52:54:00:73:09:76 in network mk-kindnet-731347
	I0501 04:01:38.570623   76250 main.go:141] libmachine: (kindnet-731347) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:09:76", ip: ""} in network mk-kindnet-731347: {Iface:virbr2 ExpiryTime:2024-05-01 05:01:29 +0000 UTC Type:0 Mac:52:54:00:73:09:76 Iaid: IPaddr:192.168.50.20 Prefix:24 Hostname:kindnet-731347 Clientid:01:52:54:00:73:09:76}
	I0501 04:01:38.570650   76250 main.go:141] libmachine: (kindnet-731347) DBG | domain kindnet-731347 has defined IP address 192.168.50.20 and MAC address 52:54:00:73:09:76 in network mk-kindnet-731347
	I0501 04:01:38.570751   76250 main.go:141] libmachine: (kindnet-731347) Calling .GetSSHHostname
	I0501 04:01:38.573051   76250 main.go:141] libmachine: (kindnet-731347) DBG | domain kindnet-731347 has defined MAC address 52:54:00:73:09:76 in network mk-kindnet-731347
	I0501 04:01:38.573376   76250 main.go:141] libmachine: (kindnet-731347) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:09:76", ip: ""} in network mk-kindnet-731347: {Iface:virbr2 ExpiryTime:2024-05-01 05:01:29 +0000 UTC Type:0 Mac:52:54:00:73:09:76 Iaid: IPaddr:192.168.50.20 Prefix:24 Hostname:kindnet-731347 Clientid:01:52:54:00:73:09:76}
	I0501 04:01:38.573417   76250 main.go:141] libmachine: (kindnet-731347) DBG | domain kindnet-731347 has defined IP address 192.168.50.20 and MAC address 52:54:00:73:09:76 in network mk-kindnet-731347
	I0501 04:01:38.573514   76250 provision.go:143] copyHostCerts
	I0501 04:01:38.573572   76250 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem, removing ...
	I0501 04:01:38.573587   76250 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem
	I0501 04:01:38.573662   76250 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem (1078 bytes)
	I0501 04:01:38.573797   76250 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem, removing ...
	I0501 04:01:38.573812   76250 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem
	I0501 04:01:38.573843   76250 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem (1123 bytes)
	I0501 04:01:38.573923   76250 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem, removing ...
	I0501 04:01:38.573937   76250 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem
	I0501 04:01:38.573963   76250 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem (1675 bytes)
	I0501 04:01:38.574016   76250 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem org=jenkins.kindnet-731347 san=[127.0.0.1 192.168.50.20 kindnet-731347 localhost minikube]
	I0501 04:01:38.820511   76250 provision.go:177] copyRemoteCerts
	I0501 04:01:38.820563   76250 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0501 04:01:38.820588   76250 main.go:141] libmachine: (kindnet-731347) Calling .GetSSHHostname
	I0501 04:01:38.823310   76250 main.go:141] libmachine: (kindnet-731347) DBG | domain kindnet-731347 has defined MAC address 52:54:00:73:09:76 in network mk-kindnet-731347
	I0501 04:01:38.823652   76250 main.go:141] libmachine: (kindnet-731347) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:09:76", ip: ""} in network mk-kindnet-731347: {Iface:virbr2 ExpiryTime:2024-05-01 05:01:29 +0000 UTC Type:0 Mac:52:54:00:73:09:76 Iaid: IPaddr:192.168.50.20 Prefix:24 Hostname:kindnet-731347 Clientid:01:52:54:00:73:09:76}
	I0501 04:01:38.823683   76250 main.go:141] libmachine: (kindnet-731347) DBG | domain kindnet-731347 has defined IP address 192.168.50.20 and MAC address 52:54:00:73:09:76 in network mk-kindnet-731347
	I0501 04:01:38.823829   76250 main.go:141] libmachine: (kindnet-731347) Calling .GetSSHPort
	I0501 04:01:38.824122   76250 main.go:141] libmachine: (kindnet-731347) Calling .GetSSHKeyPath
	I0501 04:01:38.824308   76250 main.go:141] libmachine: (kindnet-731347) Calling .GetSSHUsername
	I0501 04:01:38.824450   76250 sshutil.go:53] new ssh client: &{IP:192.168.50.20 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/kindnet-731347/id_rsa Username:docker}
	I0501 04:01:38.906556   76250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0501 04:01:38.938334   76250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0501 04:01:38.966197   76250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0501 04:01:38.992761   76250 provision.go:87] duration metric: took 425.670046ms to configureAuth
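
configureAuth generated a machine server certificate with the SANs listed above (127.0.0.1, 192.168.50.20, kindnet-731347, localhost, minikube) and copied ca.pem, server.pem and server-key.pem into /etc/docker on the guest. If TLS to the machine later fails, the SANs and expiry of the generated certificate can be checked on the host; a sketch, using the path shown in the scp lines above:

    # Inspect the freshly generated machine server certificate.
    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem \
      | grep -E 'Not After|DNS:|IP Address'
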
	I0501 04:01:38.992789   76250 buildroot.go:189] setting minikube options for container-runtime
	I0501 04:01:38.992994   76250 config.go:182] Loaded profile config "kindnet-731347": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 04:01:38.993073   76250 main.go:141] libmachine: (kindnet-731347) Calling .GetSSHHostname
	I0501 04:01:38.995696   76250 main.go:141] libmachine: (kindnet-731347) DBG | domain kindnet-731347 has defined MAC address 52:54:00:73:09:76 in network mk-kindnet-731347
	I0501 04:01:38.995998   76250 main.go:141] libmachine: (kindnet-731347) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:09:76", ip: ""} in network mk-kindnet-731347: {Iface:virbr2 ExpiryTime:2024-05-01 05:01:29 +0000 UTC Type:0 Mac:52:54:00:73:09:76 Iaid: IPaddr:192.168.50.20 Prefix:24 Hostname:kindnet-731347 Clientid:01:52:54:00:73:09:76}
	I0501 04:01:38.996029   76250 main.go:141] libmachine: (kindnet-731347) DBG | domain kindnet-731347 has defined IP address 192.168.50.20 and MAC address 52:54:00:73:09:76 in network mk-kindnet-731347
	I0501 04:01:38.996222   76250 main.go:141] libmachine: (kindnet-731347) Calling .GetSSHPort
	I0501 04:01:38.996449   76250 main.go:141] libmachine: (kindnet-731347) Calling .GetSSHKeyPath
	I0501 04:01:38.996607   76250 main.go:141] libmachine: (kindnet-731347) Calling .GetSSHKeyPath
	I0501 04:01:38.996766   76250 main.go:141] libmachine: (kindnet-731347) Calling .GetSSHUsername
	I0501 04:01:38.996924   76250 main.go:141] libmachine: Using SSH client type: native
	I0501 04:01:38.997122   76250 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.20 22 <nil> <nil>}
	I0501 04:01:38.997146   76250 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0501 04:01:39.287844   76250 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
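
The `%!s(MISSING)` in the command above appears to be a logging artifact of how minikube prints this command template; the content echoed back by tee shows the substitution succeeded on the guest, which then restarted cri-o with the insecure-registry option. To confirm the runtime picked it up (a sketch):

    # Check the sysconfig drop-in and that cri-o came back up after the restart.
    cat /etc/sysconfig/crio.minikube
    systemctl is-active crio
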
	
	I0501 04:01:39.287881   76250 main.go:141] libmachine: Checking connection to Docker...
	I0501 04:01:39.287893   76250 main.go:141] libmachine: (kindnet-731347) Calling .GetURL
	I0501 04:01:39.289243   76250 main.go:141] libmachine: (kindnet-731347) DBG | Using libvirt version 6000000
	I0501 04:01:39.291533   76250 main.go:141] libmachine: (kindnet-731347) DBG | domain kindnet-731347 has defined MAC address 52:54:00:73:09:76 in network mk-kindnet-731347
	I0501 04:01:39.291932   76250 main.go:141] libmachine: (kindnet-731347) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:09:76", ip: ""} in network mk-kindnet-731347: {Iface:virbr2 ExpiryTime:2024-05-01 05:01:29 +0000 UTC Type:0 Mac:52:54:00:73:09:76 Iaid: IPaddr:192.168.50.20 Prefix:24 Hostname:kindnet-731347 Clientid:01:52:54:00:73:09:76}
	I0501 04:01:39.291968   76250 main.go:141] libmachine: (kindnet-731347) DBG | domain kindnet-731347 has defined IP address 192.168.50.20 and MAC address 52:54:00:73:09:76 in network mk-kindnet-731347
	I0501 04:01:39.292139   76250 main.go:141] libmachine: Docker is up and running!
	I0501 04:01:39.292155   76250 main.go:141] libmachine: Reticulating splines...
	I0501 04:01:39.292163   76250 client.go:171] duration metric: took 27.451702469s to LocalClient.Create
	I0501 04:01:39.292190   76250 start.go:167] duration metric: took 27.451779553s to libmachine.API.Create "kindnet-731347"
	I0501 04:01:39.292202   76250 start.go:293] postStartSetup for "kindnet-731347" (driver="kvm2")
	I0501 04:01:39.292230   76250 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0501 04:01:39.292254   76250 main.go:141] libmachine: (kindnet-731347) Calling .DriverName
	I0501 04:01:39.292520   76250 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0501 04:01:39.292542   76250 main.go:141] libmachine: (kindnet-731347) Calling .GetSSHHostname
	I0501 04:01:39.294841   76250 main.go:141] libmachine: (kindnet-731347) DBG | domain kindnet-731347 has defined MAC address 52:54:00:73:09:76 in network mk-kindnet-731347
	I0501 04:01:39.295171   76250 main.go:141] libmachine: (kindnet-731347) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:09:76", ip: ""} in network mk-kindnet-731347: {Iface:virbr2 ExpiryTime:2024-05-01 05:01:29 +0000 UTC Type:0 Mac:52:54:00:73:09:76 Iaid: IPaddr:192.168.50.20 Prefix:24 Hostname:kindnet-731347 Clientid:01:52:54:00:73:09:76}
	I0501 04:01:39.295206   76250 main.go:141] libmachine: (kindnet-731347) DBG | domain kindnet-731347 has defined IP address 192.168.50.20 and MAC address 52:54:00:73:09:76 in network mk-kindnet-731347
	I0501 04:01:39.295331   76250 main.go:141] libmachine: (kindnet-731347) Calling .GetSSHPort
	I0501 04:01:39.295503   76250 main.go:141] libmachine: (kindnet-731347) Calling .GetSSHKeyPath
	I0501 04:01:39.295652   76250 main.go:141] libmachine: (kindnet-731347) Calling .GetSSHUsername
	I0501 04:01:39.295788   76250 sshutil.go:53] new ssh client: &{IP:192.168.50.20 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/kindnet-731347/id_rsa Username:docker}
	I0501 04:01:39.378967   76250 ssh_runner.go:195] Run: cat /etc/os-release
	I0501 04:01:39.384340   76250 info.go:137] Remote host: Buildroot 2023.02.9
	I0501 04:01:39.384368   76250 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13391/.minikube/addons for local assets ...
	I0501 04:01:39.384457   76250 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13391/.minikube/files for local assets ...
	I0501 04:01:39.384562   76250 filesync.go:149] local asset: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem -> 207242.pem in /etc/ssl/certs
	I0501 04:01:39.384679   76250 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0501 04:01:39.395853   76250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem --> /etc/ssl/certs/207242.pem (1708 bytes)
	I0501 04:01:39.424637   76250 start.go:296] duration metric: took 132.405453ms for postStartSetup
	I0501 04:01:39.424695   76250 main.go:141] libmachine: (kindnet-731347) Calling .GetConfigRaw
	I0501 04:01:39.425252   76250 main.go:141] libmachine: (kindnet-731347) Calling .GetIP
	I0501 04:01:39.427835   76250 main.go:141] libmachine: (kindnet-731347) DBG | domain kindnet-731347 has defined MAC address 52:54:00:73:09:76 in network mk-kindnet-731347
	I0501 04:01:39.428151   76250 main.go:141] libmachine: (kindnet-731347) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:09:76", ip: ""} in network mk-kindnet-731347: {Iface:virbr2 ExpiryTime:2024-05-01 05:01:29 +0000 UTC Type:0 Mac:52:54:00:73:09:76 Iaid: IPaddr:192.168.50.20 Prefix:24 Hostname:kindnet-731347 Clientid:01:52:54:00:73:09:76}
	I0501 04:01:39.428182   76250 main.go:141] libmachine: (kindnet-731347) DBG | domain kindnet-731347 has defined IP address 192.168.50.20 and MAC address 52:54:00:73:09:76 in network mk-kindnet-731347
	I0501 04:01:39.428433   76250 profile.go:143] Saving config to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/kindnet-731347/config.json ...
	I0501 04:01:39.428598   76250 start.go:128] duration metric: took 27.608785105s to createHost
	I0501 04:01:39.428636   76250 main.go:141] libmachine: (kindnet-731347) Calling .GetSSHHostname
	I0501 04:01:39.430777   76250 main.go:141] libmachine: (kindnet-731347) DBG | domain kindnet-731347 has defined MAC address 52:54:00:73:09:76 in network mk-kindnet-731347
	I0501 04:01:39.431109   76250 main.go:141] libmachine: (kindnet-731347) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:09:76", ip: ""} in network mk-kindnet-731347: {Iface:virbr2 ExpiryTime:2024-05-01 05:01:29 +0000 UTC Type:0 Mac:52:54:00:73:09:76 Iaid: IPaddr:192.168.50.20 Prefix:24 Hostname:kindnet-731347 Clientid:01:52:54:00:73:09:76}
	I0501 04:01:39.431134   76250 main.go:141] libmachine: (kindnet-731347) DBG | domain kindnet-731347 has defined IP address 192.168.50.20 and MAC address 52:54:00:73:09:76 in network mk-kindnet-731347
	I0501 04:01:39.431225   76250 main.go:141] libmachine: (kindnet-731347) Calling .GetSSHPort
	I0501 04:01:39.431386   76250 main.go:141] libmachine: (kindnet-731347) Calling .GetSSHKeyPath
	I0501 04:01:39.431545   76250 main.go:141] libmachine: (kindnet-731347) Calling .GetSSHKeyPath
	I0501 04:01:39.431647   76250 main.go:141] libmachine: (kindnet-731347) Calling .GetSSHUsername
	I0501 04:01:39.431790   76250 main.go:141] libmachine: Using SSH client type: native
	I0501 04:01:39.432016   76250 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.20 22 <nil> <nil>}
	I0501 04:01:39.432028   76250 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0501 04:01:39.531765   76250 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714536099.514048016
	
	I0501 04:01:39.531786   76250 fix.go:216] guest clock: 1714536099.514048016
	I0501 04:01:39.531793   76250 fix.go:229] Guest: 2024-05-01 04:01:39.514048016 +0000 UTC Remote: 2024-05-01 04:01:39.428616396 +0000 UTC m=+77.315898210 (delta=85.43162ms)
	I0501 04:01:39.531837   76250 fix.go:200] guest clock delta is within tolerance: 85.43162ms
	I0501 04:01:39.531848   76250 start.go:83] releasing machines lock for "kindnet-731347", held for 27.712290803s
	I0501 04:01:39.531881   76250 main.go:141] libmachine: (kindnet-731347) Calling .DriverName
	I0501 04:01:39.532123   76250 main.go:141] libmachine: (kindnet-731347) Calling .GetIP
	I0501 04:01:39.535323   76250 main.go:141] libmachine: (kindnet-731347) DBG | domain kindnet-731347 has defined MAC address 52:54:00:73:09:76 in network mk-kindnet-731347
	I0501 04:01:39.535754   76250 main.go:141] libmachine: (kindnet-731347) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:09:76", ip: ""} in network mk-kindnet-731347: {Iface:virbr2 ExpiryTime:2024-05-01 05:01:29 +0000 UTC Type:0 Mac:52:54:00:73:09:76 Iaid: IPaddr:192.168.50.20 Prefix:24 Hostname:kindnet-731347 Clientid:01:52:54:00:73:09:76}
	I0501 04:01:39.535785   76250 main.go:141] libmachine: (kindnet-731347) DBG | domain kindnet-731347 has defined IP address 192.168.50.20 and MAC address 52:54:00:73:09:76 in network mk-kindnet-731347
	I0501 04:01:39.535963   76250 main.go:141] libmachine: (kindnet-731347) Calling .DriverName
	I0501 04:01:39.536464   76250 main.go:141] libmachine: (kindnet-731347) Calling .DriverName
	I0501 04:01:39.536624   76250 main.go:141] libmachine: (kindnet-731347) Calling .DriverName
	I0501 04:01:39.536709   76250 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0501 04:01:39.536749   76250 main.go:141] libmachine: (kindnet-731347) Calling .GetSSHHostname
	I0501 04:01:39.536862   76250 ssh_runner.go:195] Run: cat /version.json
	I0501 04:01:39.536891   76250 main.go:141] libmachine: (kindnet-731347) Calling .GetSSHHostname
	I0501 04:01:39.539343   76250 main.go:141] libmachine: (kindnet-731347) DBG | domain kindnet-731347 has defined MAC address 52:54:00:73:09:76 in network mk-kindnet-731347
	I0501 04:01:39.539396   76250 main.go:141] libmachine: (kindnet-731347) DBG | domain kindnet-731347 has defined MAC address 52:54:00:73:09:76 in network mk-kindnet-731347
	I0501 04:01:39.539729   76250 main.go:141] libmachine: (kindnet-731347) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:09:76", ip: ""} in network mk-kindnet-731347: {Iface:virbr2 ExpiryTime:2024-05-01 05:01:29 +0000 UTC Type:0 Mac:52:54:00:73:09:76 Iaid: IPaddr:192.168.50.20 Prefix:24 Hostname:kindnet-731347 Clientid:01:52:54:00:73:09:76}
	I0501 04:01:39.539757   76250 main.go:141] libmachine: (kindnet-731347) DBG | domain kindnet-731347 has defined IP address 192.168.50.20 and MAC address 52:54:00:73:09:76 in network mk-kindnet-731347
	I0501 04:01:39.539781   76250 main.go:141] libmachine: (kindnet-731347) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:09:76", ip: ""} in network mk-kindnet-731347: {Iface:virbr2 ExpiryTime:2024-05-01 05:01:29 +0000 UTC Type:0 Mac:52:54:00:73:09:76 Iaid: IPaddr:192.168.50.20 Prefix:24 Hostname:kindnet-731347 Clientid:01:52:54:00:73:09:76}
	I0501 04:01:39.539793   76250 main.go:141] libmachine: (kindnet-731347) DBG | domain kindnet-731347 has defined IP address 192.168.50.20 and MAC address 52:54:00:73:09:76 in network mk-kindnet-731347
	I0501 04:01:39.539908   76250 main.go:141] libmachine: (kindnet-731347) Calling .GetSSHPort
	I0501 04:01:39.540084   76250 main.go:141] libmachine: (kindnet-731347) Calling .GetSSHKeyPath
	I0501 04:01:39.540096   76250 main.go:141] libmachine: (kindnet-731347) Calling .GetSSHPort
	I0501 04:01:39.540261   76250 main.go:141] libmachine: (kindnet-731347) Calling .GetSSHUsername
	I0501 04:01:39.540268   76250 main.go:141] libmachine: (kindnet-731347) Calling .GetSSHKeyPath
	I0501 04:01:39.540453   76250 sshutil.go:53] new ssh client: &{IP:192.168.50.20 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/kindnet-731347/id_rsa Username:docker}
	I0501 04:01:39.540469   76250 main.go:141] libmachine: (kindnet-731347) Calling .GetSSHUsername
	I0501 04:01:39.540697   76250 sshutil.go:53] new ssh client: &{IP:192.168.50.20 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/kindnet-731347/id_rsa Username:docker}
	I0501 04:01:39.624958   76250 ssh_runner.go:195] Run: systemctl --version
	I0501 04:01:39.649025   76250 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0501 04:01:39.816133   76250 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0501 04:01:39.822965   76250 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0501 04:01:39.823023   76250 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0501 04:01:39.844015   76250 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
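The find/mv step above sidelines any bridge or podman CNI config so only the CNI minikube manages stays active. A rough local Go equivalent of that rename-to-.mk_disabled pass (illustration only; the real code runs the find command over SSH):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableConflictingCNI renames bridge/podman CNI configs to *.mk_disabled,
// mirroring the `find ... -exec mv {} {}.mk_disabled` step in the log.
func disableConflictingCNI(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}

func main() {
	disabled, err := disableConflictingCNI("/etc/cni/net.d")
	if err != nil {
		fmt.Println("error:", err)
	}
	fmt.Println("disabled bridge cni config(s):", disabled)
}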
	I0501 04:01:39.844039   76250 start.go:494] detecting cgroup driver to use...
	I0501 04:01:39.844125   76250 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0501 04:01:39.864189   76250 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 04:01:39.880916   76250 docker.go:217] disabling cri-docker service (if available) ...
	I0501 04:01:39.880978   76250 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0501 04:01:39.897912   76250 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0501 04:01:39.916218   76250 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0501 04:01:40.043133   76250 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0501 04:01:40.230972   76250 docker.go:233] disabling docker service ...
	I0501 04:01:40.231029   76250 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0501 04:01:40.250067   76250 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0501 04:01:40.268746   76250 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0501 04:01:40.435560   76250 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0501 04:01:40.572045   76250 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0501 04:01:40.593012   76250 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 04:01:40.617651   76250 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0501 04:01:40.617734   76250 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 04:01:40.633068   76250 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0501 04:01:40.633153   76250 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 04:01:40.648424   76250 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 04:01:40.662462   76250 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 04:01:40.675917   76250 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0501 04:01:40.689822   76250 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 04:01:40.703116   76250 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 04:01:40.725239   76250 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 04:01:40.738883   76250 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0501 04:01:40.751342   76250 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0501 04:01:40.751408   76250 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0501 04:01:40.768496   76250 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0501 04:01:40.781136   76250 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 04:01:40.921669   76250 ssh_runner.go:195] Run: sudo systemctl restart crio
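The sed commands above patch /etc/crio/crio.conf.d/02-crio.conf before restarting cri-o: pin the pause image, force the cgroupfs cgroup manager, and allow unprivileged low ports. A simplified Go sketch of the same rewrites (regexes adapted from the log; not the actual minikube code):

package main

import (
	"os"
	"regexp"
)

// patchCrioConf applies the same substitutions the log performs with sed:
// pin the pause image, force cgroupfs, and add the unprivileged-port sysctl.
func patchCrioConf(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	conf := string(data)
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	conf = regexp.MustCompile(`(?m)^default_sysctls *= *\[`).
		ReplaceAllString(conf, "default_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",")
	return os.WriteFile(path, []byte(conf), 0o644)
}

func main() {
	if err := patchCrioConf("/etc/crio/crio.conf.d/02-crio.conf"); err != nil {
		panic(err)
	}
}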
	I0501 04:01:41.086906   76250 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0501 04:01:41.087005   76250 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0501 04:01:41.094864   76250 start.go:562] Will wait 60s for crictl version
	I0501 04:01:41.094947   76250 ssh_runner.go:195] Run: which crictl
	I0501 04:01:41.100034   76250 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0501 04:01:41.159351   76250 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0501 04:01:41.159451   76250 ssh_runner.go:195] Run: crio --version
	I0501 04:01:41.196276   76250 ssh_runner.go:195] Run: crio --version
	I0501 04:01:41.234518   76250 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0501 04:01:36.546499   76145 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 04:01:37.046854   76145 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 04:01:37.546121   76145 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 04:01:38.046945   76145 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 04:01:38.546881   76145 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 04:01:39.046876   76145 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 04:01:39.546352   76145 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 04:01:40.046945   76145 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 04:01:40.546339   76145 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 04:01:41.046076   76145 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 04:01:41.235803   76250 main.go:141] libmachine: (kindnet-731347) Calling .GetIP
	I0501 04:01:41.238586   76250 main.go:141] libmachine: (kindnet-731347) DBG | domain kindnet-731347 has defined MAC address 52:54:00:73:09:76 in network mk-kindnet-731347
	I0501 04:01:41.239046   76250 main.go:141] libmachine: (kindnet-731347) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:09:76", ip: ""} in network mk-kindnet-731347: {Iface:virbr2 ExpiryTime:2024-05-01 05:01:29 +0000 UTC Type:0 Mac:52:54:00:73:09:76 Iaid: IPaddr:192.168.50.20 Prefix:24 Hostname:kindnet-731347 Clientid:01:52:54:00:73:09:76}
	I0501 04:01:41.239075   76250 main.go:141] libmachine: (kindnet-731347) DBG | domain kindnet-731347 has defined IP address 192.168.50.20 and MAC address 52:54:00:73:09:76 in network mk-kindnet-731347
	I0501 04:01:41.239293   76250 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0501 04:01:41.244554   76250 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
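The bash one-liner above rewrites /etc/hosts so host.minikube.internal resolves to the gateway IP. A small Go sketch with the same effect (paths and names taken from the log):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry removes any existing line for the host name and appends a
// fresh "IP\tname" record, the same effect as the bash one-liner in the log.
func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(strings.TrimRight(line, " "), "\t"+name) {
			continue // drop the stale record for this name
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.50.1", "host.minikube.internal"); err != nil {
		fmt.Println("error:", err)
	}
}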
	I0501 04:01:41.259363   76250 kubeadm.go:877] updating cluster {Name:kindnet-731347 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.0 ClusterName:kindnet-731347 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.50.20 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort
:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0501 04:01:41.259508   76250 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0501 04:01:41.259575   76250 ssh_runner.go:195] Run: sudo crictl images --output json
	I0501 04:01:41.312052   76250 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0501 04:01:41.312126   76250 ssh_runner.go:195] Run: which lz4
	I0501 04:01:41.317875   76250 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0501 04:01:41.322829   76250 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0501 04:01:41.322865   76250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0501 04:01:41.546369   76145 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 04:01:42.046667   76145 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 04:01:42.546550   76145 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 04:01:43.046303   76145 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 04:01:43.546206   76145 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 04:01:44.046837   76145 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 04:01:44.546809   76145 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 04:01:44.671050   76145 kubeadm.go:1107] duration metric: took 12.815316945s to wait for elevateKubeSystemPrivileges
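The half-second cadence of the repeated "kubectl get sa default" runs above is a plain poll until the default service account appears, which is the signal that kube-system privileges have been elevated. A minimal sketch of that loop (timeout value assumed):

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls `kubectl get sa default` until it succeeds or the
// timeout expires, mirroring the repeated runs in the log above.
func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil // default service account exists
		}
		time.Sleep(500 * time.Millisecond)
	}
	return errors.New("timed out waiting for default service account")
}

func main() {
	err := waitForDefaultSA(
		"/var/lib/minikube/binaries/v1.30.0/kubectl",
		"/var/lib/minikube/kubeconfig",
		2*time.Minute,
	)
	fmt.Println("wait result:", err)
}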
	W0501 04:01:44.671096   76145 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0501 04:01:44.671122   76145 kubeadm.go:393] duration metric: took 24.29240929s to StartCluster
	I0501 04:01:44.671141   76145 settings.go:142] acquiring lock: {Name:mkcfe08bcadd45d99c00ed1eaf4e9c226a489fe2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 04:01:44.671216   76145 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18779-13391/kubeconfig
	I0501 04:01:44.672632   76145 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/kubeconfig: {Name:mk5d1131557f885800877622151c0915337cb23a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 04:01:44.673369   76145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0501 04:01:44.673393   76145 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0501 04:01:44.673364   76145 start.go:234] Will wait 15m0s for node &{Name: IP:192.168.39.152 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0501 04:01:44.673458   76145 addons.go:69] Setting storage-provisioner=true in profile "auto-731347"
	I0501 04:01:44.674949   76145 out.go:177] * Verifying Kubernetes components...
	I0501 04:01:44.673485   76145 addons.go:234] Setting addon storage-provisioner=true in "auto-731347"
	I0501 04:01:44.673493   76145 addons.go:69] Setting default-storageclass=true in profile "auto-731347"
	I0501 04:01:44.673606   76145 config.go:182] Loaded profile config "auto-731347": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 04:01:44.676716   76145 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 04:01:44.676755   76145 host.go:66] Checking if "auto-731347" exists ...
	I0501 04:01:44.676775   76145 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-731347"
	I0501 04:01:44.679011   76145 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 04:01:44.679062   76145 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 04:01:44.679733   76145 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 04:01:44.679797   76145 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 04:01:44.699135   76145 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37713
	I0501 04:01:44.699633   76145 main.go:141] libmachine: () Calling .GetVersion
	I0501 04:01:44.699912   76145 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35023
	I0501 04:01:44.700255   76145 main.go:141] libmachine: Using API Version  1
	I0501 04:01:44.700276   76145 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 04:01:44.700315   76145 main.go:141] libmachine: () Calling .GetVersion
	I0501 04:01:44.700637   76145 main.go:141] libmachine: () Calling .GetMachineName
	I0501 04:01:44.700810   76145 main.go:141] libmachine: Using API Version  1
	I0501 04:01:44.700834   76145 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 04:01:44.701051   76145 main.go:141] libmachine: (auto-731347) Calling .GetState
	I0501 04:01:44.701140   76145 main.go:141] libmachine: () Calling .GetMachineName
	I0501 04:01:44.701681   76145 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 04:01:44.701719   76145 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 04:01:44.704244   76145 addons.go:234] Setting addon default-storageclass=true in "auto-731347"
	I0501 04:01:44.704276   76145 host.go:66] Checking if "auto-731347" exists ...
	I0501 04:01:44.704504   76145 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 04:01:44.704517   76145 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 04:01:44.723065   76145 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42007
	I0501 04:01:44.723767   76145 main.go:141] libmachine: () Calling .GetVersion
	I0501 04:01:44.723942   76145 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33647
	I0501 04:01:44.724385   76145 main.go:141] libmachine: () Calling .GetVersion
	I0501 04:01:44.724871   76145 main.go:141] libmachine: Using API Version  1
	I0501 04:01:44.724891   76145 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 04:01:44.725054   76145 main.go:141] libmachine: Using API Version  1
	I0501 04:01:44.725063   76145 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 04:01:44.725377   76145 main.go:141] libmachine: () Calling .GetMachineName
	I0501 04:01:44.725599   76145 main.go:141] libmachine: () Calling .GetMachineName
	I0501 04:01:44.725992   76145 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 04:01:44.726037   76145 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 04:01:44.726447   76145 main.go:141] libmachine: (auto-731347) Calling .GetState
	I0501 04:01:44.729086   76145 main.go:141] libmachine: (auto-731347) Calling .DriverName
	I0501 04:01:44.731078   76145 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 04:01:40.871092   77015 main.go:141] libmachine: (newest-cni-906018) Waiting to get IP...
	I0501 04:01:40.872256   77015 main.go:141] libmachine: (newest-cni-906018) DBG | domain newest-cni-906018 has defined MAC address 52:54:00:fa:e2:d3 in network mk-newest-cni-906018
	I0501 04:01:40.872802   77015 main.go:141] libmachine: (newest-cni-906018) DBG | unable to find current IP address of domain newest-cni-906018 in network mk-newest-cni-906018
	I0501 04:01:40.872870   77015 main.go:141] libmachine: (newest-cni-906018) DBG | I0501 04:01:40.872770   77110 retry.go:31] will retry after 188.112951ms: waiting for machine to come up
	I0501 04:01:41.062472   77015 main.go:141] libmachine: (newest-cni-906018) DBG | domain newest-cni-906018 has defined MAC address 52:54:00:fa:e2:d3 in network mk-newest-cni-906018
	I0501 04:01:41.063043   77015 main.go:141] libmachine: (newest-cni-906018) DBG | unable to find current IP address of domain newest-cni-906018 in network mk-newest-cni-906018
	I0501 04:01:41.063070   77015 main.go:141] libmachine: (newest-cni-906018) DBG | I0501 04:01:41.063004   77110 retry.go:31] will retry after 286.316876ms: waiting for machine to come up
	I0501 04:01:41.350581   77015 main.go:141] libmachine: (newest-cni-906018) DBG | domain newest-cni-906018 has defined MAC address 52:54:00:fa:e2:d3 in network mk-newest-cni-906018
	I0501 04:01:41.351115   77015 main.go:141] libmachine: (newest-cni-906018) DBG | unable to find current IP address of domain newest-cni-906018 in network mk-newest-cni-906018
	I0501 04:01:41.351140   77015 main.go:141] libmachine: (newest-cni-906018) DBG | I0501 04:01:41.351074   77110 retry.go:31] will retry after 370.654604ms: waiting for machine to come up
	I0501 04:01:41.723856   77015 main.go:141] libmachine: (newest-cni-906018) DBG | domain newest-cni-906018 has defined MAC address 52:54:00:fa:e2:d3 in network mk-newest-cni-906018
	I0501 04:01:41.724673   77015 main.go:141] libmachine: (newest-cni-906018) DBG | unable to find current IP address of domain newest-cni-906018 in network mk-newest-cni-906018
	I0501 04:01:41.724706   77015 main.go:141] libmachine: (newest-cni-906018) DBG | I0501 04:01:41.724621   77110 retry.go:31] will retry after 561.695034ms: waiting for machine to come up
	I0501 04:01:42.288453   77015 main.go:141] libmachine: (newest-cni-906018) DBG | domain newest-cni-906018 has defined MAC address 52:54:00:fa:e2:d3 in network mk-newest-cni-906018
	I0501 04:01:42.289239   77015 main.go:141] libmachine: (newest-cni-906018) DBG | unable to find current IP address of domain newest-cni-906018 in network mk-newest-cni-906018
	I0501 04:01:42.289286   77015 main.go:141] libmachine: (newest-cni-906018) DBG | I0501 04:01:42.289167   77110 retry.go:31] will retry after 734.414449ms: waiting for machine to come up
	I0501 04:01:43.024885   77015 main.go:141] libmachine: (newest-cni-906018) DBG | domain newest-cni-906018 has defined MAC address 52:54:00:fa:e2:d3 in network mk-newest-cni-906018
	I0501 04:01:43.025421   77015 main.go:141] libmachine: (newest-cni-906018) DBG | unable to find current IP address of domain newest-cni-906018 in network mk-newest-cni-906018
	I0501 04:01:43.025443   77015 main.go:141] libmachine: (newest-cni-906018) DBG | I0501 04:01:43.025374   77110 retry.go:31] will retry after 666.945474ms: waiting for machine to come up
	I0501 04:01:43.694338   77015 main.go:141] libmachine: (newest-cni-906018) DBG | domain newest-cni-906018 has defined MAC address 52:54:00:fa:e2:d3 in network mk-newest-cni-906018
	I0501 04:01:43.694818   77015 main.go:141] libmachine: (newest-cni-906018) DBG | unable to find current IP address of domain newest-cni-906018 in network mk-newest-cni-906018
	I0501 04:01:43.694868   77015 main.go:141] libmachine: (newest-cni-906018) DBG | I0501 04:01:43.694784   77110 retry.go:31] will retry after 834.277165ms: waiting for machine to come up
	I0501 04:01:44.530708   77015 main.go:141] libmachine: (newest-cni-906018) DBG | domain newest-cni-906018 has defined MAC address 52:54:00:fa:e2:d3 in network mk-newest-cni-906018
	I0501 04:01:44.531300   77015 main.go:141] libmachine: (newest-cni-906018) DBG | unable to find current IP address of domain newest-cni-906018 in network mk-newest-cni-906018
	I0501 04:01:44.531324   77015 main.go:141] libmachine: (newest-cni-906018) DBG | I0501 04:01:44.531224   77110 retry.go:31] will retry after 1.232367783s: waiting for machine to come up
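The "will retry after Nms: waiting for machine to come up" lines show retry.go backing off between DHCP-lease lookups until the new VM reports an IP. A small sketch of that retry pattern with the lease lookup stubbed out (backoff factors are illustrative, not minikube's exact schedule):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP retries lookupIP with a jittered, growing backoff, similar to the
// retry.go lines in the log while the VM waits for a DHCP lease.
func waitForIP(lookupIP func() (string, error), attempts int) (string, error) {
	backoff := 150 * time.Millisecond
	for i := 0; i < attempts; i++ {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		backoff = backoff * 3 / 2 // grow the base interval each attempt
	}
	return "", errors.New("machine never reported an IP address")
}

func main() {
	calls := 0
	ip, err := waitForIP(func() (string, error) {
		calls++
		if calls < 4 {
			return "", errors.New("no lease yet")
		}
		return "192.168.50.20", nil
	}, 10)
	fmt.Println(ip, err)
}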
	I0501 04:01:44.732430   76145 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0501 04:01:44.732449   76145 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0501 04:01:44.732469   76145 main.go:141] libmachine: (auto-731347) Calling .GetSSHHostname
	I0501 04:01:44.737200   76145 main.go:141] libmachine: (auto-731347) DBG | domain auto-731347 has defined MAC address 52:54:00:56:c9:8b in network mk-auto-731347
	I0501 04:01:44.737772   76145 main.go:141] libmachine: (auto-731347) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:c9:8b", ip: ""} in network mk-auto-731347: {Iface:virbr1 ExpiryTime:2024-05-01 05:01:01 +0000 UTC Type:0 Mac:52:54:00:56:c9:8b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:auto-731347 Clientid:01:52:54:00:56:c9:8b}
	I0501 04:01:44.737796   76145 main.go:141] libmachine: (auto-731347) DBG | domain auto-731347 has defined IP address 192.168.39.152 and MAC address 52:54:00:56:c9:8b in network mk-auto-731347
	I0501 04:01:44.738021   76145 main.go:141] libmachine: (auto-731347) Calling .GetSSHPort
	I0501 04:01:44.738759   76145 main.go:141] libmachine: (auto-731347) Calling .GetSSHKeyPath
	I0501 04:01:44.738975   76145 main.go:141] libmachine: (auto-731347) Calling .GetSSHUsername
	I0501 04:01:44.739137   76145 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/auto-731347/id_rsa Username:docker}
	I0501 04:01:44.745510   76145 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40615
	I0501 04:01:44.745957   76145 main.go:141] libmachine: () Calling .GetVersion
	I0501 04:01:44.746526   76145 main.go:141] libmachine: Using API Version  1
	I0501 04:01:44.746544   76145 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 04:01:44.747009   76145 main.go:141] libmachine: () Calling .GetMachineName
	I0501 04:01:44.747272   76145 main.go:141] libmachine: (auto-731347) Calling .GetState
	I0501 04:01:44.749358   76145 main.go:141] libmachine: (auto-731347) Calling .DriverName
	I0501 04:01:44.749650   76145 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0501 04:01:44.749670   76145 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0501 04:01:44.749689   76145 main.go:141] libmachine: (auto-731347) Calling .GetSSHHostname
	I0501 04:01:44.753348   76145 main.go:141] libmachine: (auto-731347) DBG | domain auto-731347 has defined MAC address 52:54:00:56:c9:8b in network mk-auto-731347
	I0501 04:01:44.753799   76145 main.go:141] libmachine: (auto-731347) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:c9:8b", ip: ""} in network mk-auto-731347: {Iface:virbr1 ExpiryTime:2024-05-01 05:01:01 +0000 UTC Type:0 Mac:52:54:00:56:c9:8b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:auto-731347 Clientid:01:52:54:00:56:c9:8b}
	I0501 04:01:44.753828   76145 main.go:141] libmachine: (auto-731347) DBG | domain auto-731347 has defined IP address 192.168.39.152 and MAC address 52:54:00:56:c9:8b in network mk-auto-731347
	I0501 04:01:44.754107   76145 main.go:141] libmachine: (auto-731347) Calling .GetSSHPort
	I0501 04:01:44.754317   76145 main.go:141] libmachine: (auto-731347) Calling .GetSSHKeyPath
	I0501 04:01:44.754535   76145 main.go:141] libmachine: (auto-731347) Calling .GetSSHUsername
	I0501 04:01:44.754692   76145 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/auto-731347/id_rsa Username:docker}
	I0501 04:01:45.059972   76145 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 04:01:45.060170   76145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0501 04:01:45.143142   76145 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0501 04:01:45.168194   76145 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0501 04:01:45.987791   76145 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0501 04:01:45.987871   76145 main.go:141] libmachine: Making call to close driver server
	I0501 04:01:45.988049   76145 main.go:141] libmachine: (auto-731347) Calling .Close
	I0501 04:01:45.988610   76145 main.go:141] libmachine: Successfully made call to close driver server
	I0501 04:01:45.988631   76145 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 04:01:45.988642   76145 main.go:141] libmachine: Making call to close driver server
	I0501 04:01:45.988654   76145 main.go:141] libmachine: (auto-731347) Calling .Close
	I0501 04:01:45.988987   76145 main.go:141] libmachine: Successfully made call to close driver server
	I0501 04:01:45.989002   76145 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 04:01:45.989175   76145 node_ready.go:35] waiting up to 15m0s for node "auto-731347" to be "Ready" ...
	I0501 04:01:46.013186   76145 node_ready.go:49] node "auto-731347" has status "Ready":"True"
	I0501 04:01:46.013223   76145 node_ready.go:38] duration metric: took 24.014167ms for node "auto-731347" to be "Ready" ...
	I0501 04:01:46.013239   76145 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 04:01:43.142974   76250 crio.go:462] duration metric: took 1.825123515s to copy over tarball
	I0501 04:01:43.143055   76250 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0501 04:01:46.185921   76250 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.042832442s)
	I0501 04:01:46.185960   76250 crio.go:469] duration metric: took 3.042954513s to extract the tarball
	I0501 04:01:46.185970   76250 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0501 04:01:46.227980   76250 ssh_runner.go:195] Run: sudo crictl images --output json
	I0501 04:01:46.292733   76250 crio.go:514] all images are preloaded for cri-o runtime.
	I0501 04:01:46.292765   76250 cache_images.go:84] Images are preloaded, skipping loading
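The preload path above asks crictl for its image list and only extracts the cached tarball when the expected kube images are missing. A condensed sketch of that decision, shelling out to tar over lz4 as the log does (paths taken from the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// ensurePreload extracts the preloaded image tarball only when the expected
// image is not already present in the CRI image store.
func ensurePreload(marker, tarball string) error {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return err
	}
	if strings.Contains(string(out), marker) {
		fmt.Println("all images are preloaded for cri-o runtime.")
		return nil
	}
	fmt.Println("assuming images are not preloaded, extracting", tarball)
	// -I lz4 streams the archive through lz4, matching the command in the log.
	return exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball).Run()
}

func main() {
	if err := ensurePreload("registry.k8s.io/kube-apiserver:v1.30.0", "/preloaded.tar.lz4"); err != nil {
		fmt.Println("preload failed:", err)
	}
}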
	I0501 04:01:46.292774   76250 kubeadm.go:928] updating node { 192.168.50.20 8443 v1.30.0 crio true true} ...
	I0501 04:01:46.292877   76250 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kindnet-731347 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.20
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:kindnet-731347 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
	I0501 04:01:46.292942   76250 ssh_runner.go:195] Run: crio config
	I0501 04:01:46.350062   76250 cni.go:84] Creating CNI manager for "kindnet"
	I0501 04:01:46.350115   76250 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0501 04:01:46.350173   76250 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.20 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-731347 NodeName:kindnet-731347 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.20"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.20 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/ku
bernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0501 04:01:46.350379   76250 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.20
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-731347"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.20
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.20"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0501 04:01:46.350474   76250 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0501 04:01:46.365316   76250 binaries.go:44] Found k8s binaries, skipping transfer
	I0501 04:01:46.365396   76250 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0501 04:01:46.381693   76250 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0501 04:01:46.406925   76250 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0501 04:01:46.430878   76250 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
	I0501 04:01:46.452150   76250 ssh_runner.go:195] Run: grep 192.168.50.20	control-plane.minikube.internal$ /etc/hosts
	I0501 04:01:46.457285   76250 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.20	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0501 04:01:46.473770   76250 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 04:01:46.611556   76250 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 04:01:46.637302   76250 certs.go:68] Setting up /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/kindnet-731347 for IP: 192.168.50.20
	I0501 04:01:46.637325   76250 certs.go:194] generating shared ca certs ...
	I0501 04:01:46.637347   76250 certs.go:226] acquiring lock for ca certs: {Name:mk85a75d14c00780d4bf09cd9bdaf0f96f1b8fce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 04:01:46.637505   76250 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key
	I0501 04:01:46.637565   76250 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key
	I0501 04:01:46.637581   76250 certs.go:256] generating profile certs ...
	I0501 04:01:46.637645   76250 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/kindnet-731347/client.key
	I0501 04:01:46.637664   76250 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/kindnet-731347/client.crt with IP's: []
	I0501 04:01:46.722448   76250 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/kindnet-731347/client.crt ...
	I0501 04:01:46.722497   76250 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/kindnet-731347/client.crt: {Name:mk1ebf597f6ac14562b76eae2be41bd81463178c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 04:01:46.727033   76250 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/kindnet-731347/client.key ...
	I0501 04:01:46.727062   76250 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/kindnet-731347/client.key: {Name:mkacb89e18f8be13565719e36033cb508b738db7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 04:01:46.727184   76250 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/kindnet-731347/apiserver.key.bb2cfc23
	I0501 04:01:46.727201   76250 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/kindnet-731347/apiserver.crt.bb2cfc23 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.20]
	I0501 04:01:46.913889   76250 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/kindnet-731347/apiserver.crt.bb2cfc23 ...
	I0501 04:01:46.913920   76250 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/kindnet-731347/apiserver.crt.bb2cfc23: {Name:mke4e856980b527b02120dae140d664819eab6bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 04:01:46.914083   76250 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/kindnet-731347/apiserver.key.bb2cfc23 ...
	I0501 04:01:46.914096   76250 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/kindnet-731347/apiserver.key.bb2cfc23: {Name:mkda8f3be487bbcea9b00c1820a049520c367e49 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 04:01:46.914165   76250 certs.go:381] copying /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/kindnet-731347/apiserver.crt.bb2cfc23 -> /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/kindnet-731347/apiserver.crt
	I0501 04:01:46.914240   76250 certs.go:385] copying /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/kindnet-731347/apiserver.key.bb2cfc23 -> /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/kindnet-731347/apiserver.key
	I0501 04:01:46.914292   76250 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/kindnet-731347/proxy-client.key
	I0501 04:01:46.914307   76250 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/kindnet-731347/proxy-client.crt with IP's: []
	I0501 04:01:47.127018   76250 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/kindnet-731347/proxy-client.crt ...
	I0501 04:01:47.127058   76250 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/kindnet-731347/proxy-client.crt: {Name:mk2c312b96cc028269aced010d656734e80948f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 04:01:47.127239   76250 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/kindnet-731347/proxy-client.key ...
	I0501 04:01:47.127256   76250 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/kindnet-731347/proxy-client.key: {Name:mkf6811971a6b5d09fd3d23c031593df4f7f648a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
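certs.go above issues profile certificates signed by the shared minikubeCA, with IP SANs for the apiserver cert. A compact crypto/x509 sketch of that signing step; the throwaway in-memory CA here is purely for illustration and stands in for .minikube/ca.key:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

// signedCert issues a certificate signed by the given CA, with optional IP SANs,
// roughly what certs.go does for the profile's client and apiserver certs.
func signedCert(ca *x509.Certificate, caKey *rsa.PrivateKey, cn string, ips []net.IP) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{CommonName: cn},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth, x509.ExtKeyUsageClientAuth},
		IPAddresses:  ips,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), key, nil
}

func main() {
	// Illustration only: a throwaway self-signed CA instead of .minikube/ca.key.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	ca, _ := x509.ParseCertificate(caDER)

	certPEM, _, err := signedCert(ca, caKey, "minikube-user",
		[]net.IP{net.ParseIP("10.96.0.1"), net.ParseIP("192.168.50.20")})
	if err != nil {
		panic(err)
	}
	fmt.Printf("issued %d bytes of PEM\n", len(certPEM))
}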
	I0501 04:01:47.127411   76250 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem (1338 bytes)
	W0501 04:01:47.127448   76250 certs.go:480] ignoring /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724_empty.pem, impossibly tiny 0 bytes
	I0501 04:01:47.127458   76250 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem (1679 bytes)
	I0501 04:01:47.127477   76250 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem (1078 bytes)
	I0501 04:01:47.127500   76250 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem (1123 bytes)
	I0501 04:01:47.127521   76250 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem (1675 bytes)
	I0501 04:01:47.127562   76250 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem (1708 bytes)
	I0501 04:01:47.128181   76250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0501 04:01:47.157796   76250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0501 04:01:47.187215   76250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0501 04:01:47.294930   76250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0501 04:01:47.325576   76250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/kindnet-731347/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0501 04:01:47.355993   76250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/kindnet-731347/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0501 04:01:47.395558   76250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/kindnet-731347/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0501 04:01:47.469752   76250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/kindnet-731347/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0501 04:01:47.500091   76250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem --> /usr/share/ca-certificates/20724.pem (1338 bytes)
	I0501 04:01:47.537484   76250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem --> /usr/share/ca-certificates/207242.pem (1708 bytes)
	I0501 04:01:47.567976   76250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0501 04:01:47.598559   76250 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0501 04:01:47.619145   76250 ssh_runner.go:195] Run: openssl version
	I0501 04:01:47.626048   76250 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/207242.pem && ln -fs /usr/share/ca-certificates/207242.pem /etc/ssl/certs/207242.pem"
	I0501 04:01:47.644405   76250 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/207242.pem
	I0501 04:01:47.651858   76250 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  1 02:20 /usr/share/ca-certificates/207242.pem
	I0501 04:01:47.651921   76250 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/207242.pem
	I0501 04:01:47.661686   76250 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/207242.pem /etc/ssl/certs/3ec20f2e.0"
	I0501 04:01:47.680875   76250 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0501 04:01:47.697871   76250 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0501 04:01:47.704426   76250 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  1 02:08 /usr/share/ca-certificates/minikubeCA.pem
	I0501 04:01:47.704500   76250 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0501 04:01:47.714418   76250 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0501 04:01:47.727874   76250 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20724.pem && ln -fs /usr/share/ca-certificates/20724.pem /etc/ssl/certs/20724.pem"
	I0501 04:01:47.741355   76250 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20724.pem
	I0501 04:01:47.747370   76250 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  1 02:20 /usr/share/ca-certificates/20724.pem
	I0501 04:01:47.747442   76250 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20724.pem
	I0501 04:01:47.754569   76250 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20724.pem /etc/ssl/certs/51391683.0"
	I0501 04:01:47.767426   76250 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0501 04:01:47.772906   76250 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0501 04:01:47.772969   76250 kubeadm.go:391] StartCluster: {Name:kindnet-731347 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0
ClusterName:kindnet-731347 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.50.20 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 04:01:47.773069   76250 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0501 04:01:47.773143   76250 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0501 04:01:47.821722   76250 cri.go:89] found id: ""
	I0501 04:01:47.821800   76250 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0501 04:01:47.834315   76250 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0501 04:01:47.845419   76250 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0501 04:01:47.860843   76250 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0501 04:01:47.860863   76250 kubeadm.go:156] found existing configuration files:
	
	I0501 04:01:47.860920   76250 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0501 04:01:47.872440   76250 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0501 04:01:47.872516   76250 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0501 04:01:47.884443   76250 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0501 04:01:47.897251   76250 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0501 04:01:47.897317   76250 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0501 04:01:47.908540   76250 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0501 04:01:47.919807   76250 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0501 04:01:47.919890   76250 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0501 04:01:47.931340   76250 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0501 04:01:47.942172   76250 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0501 04:01:47.942244   76250 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
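The four grep/rm pairs above are minikube's stale-config cleanup: each expected kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint; otherwise it is removed before `kubeadm init` runs. A minimal Go sketch of that check-and-remove pattern, run against the local filesystem for illustration rather than over minikube's SSH runner (the helper name is hypothetical):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// removeIfStale deletes path unless it exists and references wantURL.
// This mirrors the grep-then-rm pattern in the log, but runs locally
// instead of over SSH (an assumption for illustration).
func removeIfStale(path, wantURL string) error {
	data, err := os.ReadFile(path)
	if err == nil && strings.Contains(string(data), wantURL) {
		return nil // config present and pointing at the right endpoint
	}
	// Missing file or wrong endpoint: treat as stale and remove it.
	if err := os.Remove(path); err != nil && !os.IsNotExist(err) {
		return fmt.Errorf("removing stale %s: %w", path, err)
	}
	return nil
}

func main() {
	want := "https://control-plane.minikube.internal:8443"
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if err := removeIfStale(f, want); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}
```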
	I0501 04:01:47.953824   76250 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0501 04:01:48.019554   76250 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0501 04:01:48.019622   76250 kubeadm.go:309] [preflight] Running pre-flight checks
	I0501 04:01:48.202865   76250 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0501 04:01:48.203032   76250 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0501 04:01:48.203155   76250 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0501 04:01:48.467210   76250 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0501 04:01:46.368562   76145 main.go:141] libmachine: Making call to close driver server
	I0501 04:01:46.382002   76145 main.go:141] libmachine: (auto-731347) Calling .Close
	I0501 04:01:46.372371   76145 pod_ready.go:78] waiting up to 15m0s for pod "coredns-7db6d8ff4d-975mt" in "kube-system" namespace to be "Ready" ...
	I0501 04:01:46.382494   76145 main.go:141] libmachine: (auto-731347) DBG | Closing plugin on server side
	I0501 04:01:46.382568   76145 main.go:141] libmachine: Successfully made call to close driver server
	I0501 04:01:46.382583   76145 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 04:01:47.646471   76145 kapi.go:248] "coredns" deployment in "kube-system" namespace and "auto-731347" context rescaled to 1 replicas
	I0501 04:01:48.396904   76145 pod_ready.go:102] pod "coredns-7db6d8ff4d-975mt" in "kube-system" namespace has status "Ready":"False"
	I0501 04:01:48.923928   76145 pod_ready.go:92] pod "coredns-7db6d8ff4d-975mt" in "kube-system" namespace has status "Ready":"True"
	I0501 04:01:48.923953   76145 pod_ready.go:81] duration metric: took 2.541851139s for pod "coredns-7db6d8ff4d-975mt" in "kube-system" namespace to be "Ready" ...
	I0501 04:01:48.923967   76145 pod_ready.go:78] waiting up to 15m0s for pod "coredns-7db6d8ff4d-xszpw" in "kube-system" namespace to be "Ready" ...
	I0501 04:01:48.935027   76145 pod_ready.go:92] pod "coredns-7db6d8ff4d-xszpw" in "kube-system" namespace has status "Ready":"True"
	I0501 04:01:48.935057   76145 pod_ready.go:81] duration metric: took 11.082063ms for pod "coredns-7db6d8ff4d-xszpw" in "kube-system" namespace to be "Ready" ...
	I0501 04:01:48.935084   76145 pod_ready.go:78] waiting up to 15m0s for pod "etcd-auto-731347" in "kube-system" namespace to be "Ready" ...
	I0501 04:01:48.943819   76145 pod_ready.go:92] pod "etcd-auto-731347" in "kube-system" namespace has status "Ready":"True"
	I0501 04:01:48.943839   76145 pod_ready.go:81] duration metric: took 8.746814ms for pod "etcd-auto-731347" in "kube-system" namespace to be "Ready" ...
	I0501 04:01:48.943850   76145 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-auto-731347" in "kube-system" namespace to be "Ready" ...
	I0501 04:01:48.946327   76145 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.778096324s)
	I0501 04:01:48.946376   76145 main.go:141] libmachine: Making call to close driver server
	I0501 04:01:48.946388   76145 main.go:141] libmachine: (auto-731347) Calling .Close
	I0501 04:01:48.946813   76145 main.go:141] libmachine: Successfully made call to close driver server
	I0501 04:01:48.946873   76145 main.go:141] libmachine: (auto-731347) DBG | Closing plugin on server side
	I0501 04:01:48.946901   76145 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 04:01:48.946932   76145 main.go:141] libmachine: Making call to close driver server
	I0501 04:01:48.946955   76145 main.go:141] libmachine: (auto-731347) Calling .Close
	I0501 04:01:48.948528   76145 main.go:141] libmachine: Successfully made call to close driver server
	I0501 04:01:48.948535   76145 main.go:141] libmachine: (auto-731347) DBG | Closing plugin on server side
	I0501 04:01:48.948546   76145 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 04:01:48.951143   76145 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0501 04:01:45.765897   77015 main.go:141] libmachine: (newest-cni-906018) DBG | domain newest-cni-906018 has defined MAC address 52:54:00:fa:e2:d3 in network mk-newest-cni-906018
	I0501 04:01:45.766425   77015 main.go:141] libmachine: (newest-cni-906018) DBG | unable to find current IP address of domain newest-cni-906018 in network mk-newest-cni-906018
	I0501 04:01:45.766449   77015 main.go:141] libmachine: (newest-cni-906018) DBG | I0501 04:01:45.766375   77110 retry.go:31] will retry after 1.43028842s: waiting for machine to come up
	I0501 04:01:47.197859   77015 main.go:141] libmachine: (newest-cni-906018) DBG | domain newest-cni-906018 has defined MAC address 52:54:00:fa:e2:d3 in network mk-newest-cni-906018
	I0501 04:01:47.198339   77015 main.go:141] libmachine: (newest-cni-906018) DBG | unable to find current IP address of domain newest-cni-906018 in network mk-newest-cni-906018
	I0501 04:01:47.198370   77015 main.go:141] libmachine: (newest-cni-906018) DBG | I0501 04:01:47.198323   77110 retry.go:31] will retry after 1.662226716s: waiting for machine to come up
	I0501 04:01:48.863210   77015 main.go:141] libmachine: (newest-cni-906018) DBG | domain newest-cni-906018 has defined MAC address 52:54:00:fa:e2:d3 in network mk-newest-cni-906018
	I0501 04:01:48.863728   77015 main.go:141] libmachine: (newest-cni-906018) DBG | unable to find current IP address of domain newest-cni-906018 in network mk-newest-cni-906018
	I0501 04:01:48.863761   77015 main.go:141] libmachine: (newest-cni-906018) DBG | I0501 04:01:48.863659   77110 retry.go:31] will retry after 2.555423569s: waiting for machine to come up
	I0501 04:01:48.952359   76145 addons.go:505] duration metric: took 4.278969389s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0501 04:01:48.953127   76145 pod_ready.go:92] pod "kube-apiserver-auto-731347" in "kube-system" namespace has status "Ready":"True"
	I0501 04:01:48.953152   76145 pod_ready.go:81] duration metric: took 9.293211ms for pod "kube-apiserver-auto-731347" in "kube-system" namespace to be "Ready" ...
	I0501 04:01:48.953164   76145 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-auto-731347" in "kube-system" namespace to be "Ready" ...
	I0501 04:01:48.958547   76145 pod_ready.go:92] pod "kube-controller-manager-auto-731347" in "kube-system" namespace has status "Ready":"True"
	I0501 04:01:48.958570   76145 pod_ready.go:81] duration metric: took 5.397231ms for pod "kube-controller-manager-auto-731347" in "kube-system" namespace to be "Ready" ...
	I0501 04:01:48.958582   76145 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-bnw9j" in "kube-system" namespace to be "Ready" ...
	I0501 04:01:49.360435   76145 pod_ready.go:92] pod "kube-proxy-bnw9j" in "kube-system" namespace has status "Ready":"True"
	I0501 04:01:49.360461   76145 pod_ready.go:81] duration metric: took 401.871699ms for pod "kube-proxy-bnw9j" in "kube-system" namespace to be "Ready" ...
	I0501 04:01:49.360473   76145 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-auto-731347" in "kube-system" namespace to be "Ready" ...
	I0501 04:01:49.759733   76145 pod_ready.go:92] pod "kube-scheduler-auto-731347" in "kube-system" namespace has status "Ready":"True"
	I0501 04:01:49.759765   76145 pod_ready.go:81] duration metric: took 399.283547ms for pod "kube-scheduler-auto-731347" in "kube-system" namespace to be "Ready" ...
	I0501 04:01:49.759776   76145 pod_ready.go:38] duration metric: took 3.746523529s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
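The pod_ready lines above poll each system-critical pod until its Ready condition is True (or a 15m timeout expires). A minimal client-go sketch of that polling loop, illustrative only and not minikube's pod_ready.go implementation; the kubeconfig path and pod name are taken from the log:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodReady polls until the pod's Ready condition is True or the timeout expires.
func waitForPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForPodReady(context.Background(), cs, "kube-system", "coredns-7db6d8ff4d-975mt", 15*time.Minute); err != nil {
		panic(err)
	}
}
```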
	I0501 04:01:49.759794   76145 api_server.go:52] waiting for apiserver process to appear ...
	I0501 04:01:49.759858   76145 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 04:01:49.781298   76145 api_server.go:72] duration metric: took 5.107817792s to wait for apiserver process to appear ...
	I0501 04:01:49.781326   76145 api_server.go:88] waiting for apiserver healthz status ...
	I0501 04:01:49.781349   76145 api_server.go:253] Checking apiserver healthz at https://192.168.39.152:8443/healthz ...
	I0501 04:01:49.788916   76145 api_server.go:279] https://192.168.39.152:8443/healthz returned 200:
	ok
	I0501 04:01:49.790030   76145 api_server.go:141] control plane version: v1.30.0
	I0501 04:01:49.790114   76145 api_server.go:131] duration metric: took 8.779918ms to wait for apiserver health ...
	I0501 04:01:49.790128   76145 system_pods.go:43] waiting for kube-system pods to appear ...
	I0501 04:01:49.964779   76145 system_pods.go:59] 8 kube-system pods found
	I0501 04:01:49.964828   76145 system_pods.go:61] "coredns-7db6d8ff4d-975mt" [6cdf6b33-f171-4d88-9357-9108a7828f0b] Running
	I0501 04:01:49.964838   76145 system_pods.go:61] "coredns-7db6d8ff4d-xszpw" [06a1f1b0-938c-44a0-b8ed-dcce884a4927] Running
	I0501 04:01:49.964847   76145 system_pods.go:61] "etcd-auto-731347" [44417657-1be0-49d1-a075-8929e9040150] Running
	I0501 04:01:49.964853   76145 system_pods.go:61] "kube-apiserver-auto-731347" [d5b82aa6-cc0d-4cac-8929-dbe3d37168dd] Running
	I0501 04:01:49.964858   76145 system_pods.go:61] "kube-controller-manager-auto-731347" [681b7046-3a36-480f-88d3-679855a510f2] Running
	I0501 04:01:49.964863   76145 system_pods.go:61] "kube-proxy-bnw9j" [a1017d50-458d-47b0-8b0f-fe79cc260120] Running
	I0501 04:01:49.964868   76145 system_pods.go:61] "kube-scheduler-auto-731347" [f192910c-bcc2-45a6-a9d7-8826916d8b73] Running
	I0501 04:01:49.964877   76145 system_pods.go:61] "storage-provisioner" [507eaa6c-3c07-4371-88f8-ddddf8be0267] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0501 04:01:49.964886   76145 system_pods.go:74] duration metric: took 174.751697ms to wait for pod list to return data ...
	I0501 04:01:49.964898   76145 default_sa.go:34] waiting for default service account to be created ...
	I0501 04:01:50.159696   76145 default_sa.go:45] found service account: "default"
	I0501 04:01:50.159724   76145 default_sa.go:55] duration metric: took 194.818611ms for default service account to be created ...
	I0501 04:01:50.159735   76145 system_pods.go:116] waiting for k8s-apps to be running ...
	I0501 04:01:50.365653   76145 system_pods.go:86] 8 kube-system pods found
	I0501 04:01:50.365686   76145 system_pods.go:89] "coredns-7db6d8ff4d-975mt" [6cdf6b33-f171-4d88-9357-9108a7828f0b] Running
	I0501 04:01:50.365694   76145 system_pods.go:89] "coredns-7db6d8ff4d-xszpw" [06a1f1b0-938c-44a0-b8ed-dcce884a4927] Running
	I0501 04:01:50.365701   76145 system_pods.go:89] "etcd-auto-731347" [44417657-1be0-49d1-a075-8929e9040150] Running
	I0501 04:01:50.365707   76145 system_pods.go:89] "kube-apiserver-auto-731347" [d5b82aa6-cc0d-4cac-8929-dbe3d37168dd] Running
	I0501 04:01:50.365712   76145 system_pods.go:89] "kube-controller-manager-auto-731347" [681b7046-3a36-480f-88d3-679855a510f2] Running
	I0501 04:01:50.365716   76145 system_pods.go:89] "kube-proxy-bnw9j" [a1017d50-458d-47b0-8b0f-fe79cc260120] Running
	I0501 04:01:50.365720   76145 system_pods.go:89] "kube-scheduler-auto-731347" [f192910c-bcc2-45a6-a9d7-8826916d8b73] Running
	I0501 04:01:50.365730   76145 system_pods.go:89] "storage-provisioner" [507eaa6c-3c07-4371-88f8-ddddf8be0267] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0501 04:01:50.365740   76145 system_pods.go:126] duration metric: took 205.99829ms to wait for k8s-apps to be running ...
	I0501 04:01:50.365755   76145 system_svc.go:44] waiting for kubelet service to be running ....
	I0501 04:01:50.365810   76145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 04:01:50.389267   76145 system_svc.go:56] duration metric: took 23.501804ms WaitForService to wait for kubelet
	I0501 04:01:50.389302   76145 kubeadm.go:576] duration metric: took 5.715825009s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0501 04:01:50.389325   76145 node_conditions.go:102] verifying NodePressure condition ...
	I0501 04:01:50.562928   76145 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 04:01:50.562966   76145 node_conditions.go:123] node cpu capacity is 2
	I0501 04:01:50.562980   76145 node_conditions.go:105] duration metric: took 173.649006ms to run NodePressure ...
	I0501 04:01:50.562995   76145 start.go:240] waiting for startup goroutines ...
	I0501 04:01:50.563005   76145 start.go:245] waiting for cluster config update ...
	I0501 04:01:50.563017   76145 start.go:254] writing updated cluster config ...
	I0501 04:01:50.563372   76145 ssh_runner.go:195] Run: rm -f paused
	I0501 04:01:50.630322   76145 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0501 04:01:50.633443   76145 out.go:177] * Done! kubectl is now configured to use "auto-731347" cluster and "default" namespace by default
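Before declaring the auto-731347 cluster done, the api_server.go lines above wait for a 200 from the apiserver's /healthz endpoint. A short sketch of that health probe; skipping TLS verification is an assumption to keep the example small, whereas the real check trusts the cluster CA:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout expires.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// InsecureSkipVerify keeps the sketch short; production code should
		// verify against the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
				return nil
			}
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("%s not healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.152:8443/healthz", 2*time.Minute); err != nil {
		panic(err)
	}
}
```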
	I0501 04:01:48.614502   76250 out.go:204]   - Generating certificates and keys ...
	I0501 04:01:48.614629   76250 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0501 04:01:48.614738   76250 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0501 04:01:48.810011   76250 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0501 04:01:49.197131   76250 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0501 04:01:49.256170   76250 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0501 04:01:49.685589   76250 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0501 04:01:49.940712   76250 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0501 04:01:49.941024   76250 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [kindnet-731347 localhost] and IPs [192.168.50.20 127.0.0.1 ::1]
	I0501 04:01:50.141755   76250 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0501 04:01:50.142138   76250 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [kindnet-731347 localhost] and IPs [192.168.50.20 127.0.0.1 ::1]
	I0501 04:01:50.405937   76250 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0501 04:01:50.673819   76250 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0501 04:01:50.870653   76250 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0501 04:01:50.871019   76250 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0501 04:01:51.207043   76250 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0501 04:01:51.312752   76250 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0501 04:01:51.493731   76250 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0501 04:01:51.737954   76250 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0501 04:01:51.994644   76250 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0501 04:01:51.995684   76250 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0501 04:01:51.999375   76250 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0501 04:01:52.001223   76250 out.go:204]   - Booting up control plane ...
	I0501 04:01:52.001347   76250 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0501 04:01:52.001453   76250 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0501 04:01:52.001849   76250 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0501 04:01:52.027229   76250 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0501 04:01:52.029042   76250 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0501 04:01:52.029151   76250 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0501 04:01:51.421771   77015 main.go:141] libmachine: (newest-cni-906018) DBG | domain newest-cni-906018 has defined MAC address 52:54:00:fa:e2:d3 in network mk-newest-cni-906018
	I0501 04:01:51.422423   77015 main.go:141] libmachine: (newest-cni-906018) DBG | unable to find current IP address of domain newest-cni-906018 in network mk-newest-cni-906018
	I0501 04:01:51.422456   77015 main.go:141] libmachine: (newest-cni-906018) DBG | I0501 04:01:51.422338   77110 retry.go:31] will retry after 3.359471959s: waiting for machine to come up
	I0501 04:01:54.784069   77015 main.go:141] libmachine: (newest-cni-906018) DBG | domain newest-cni-906018 has defined MAC address 52:54:00:fa:e2:d3 in network mk-newest-cni-906018
	I0501 04:01:54.784663   77015 main.go:141] libmachine: (newest-cni-906018) DBG | unable to find current IP address of domain newest-cni-906018 in network mk-newest-cni-906018
	I0501 04:01:54.784737   77015 main.go:141] libmachine: (newest-cni-906018) DBG | I0501 04:01:54.784646   77110 retry.go:31] will retry after 2.816009955s: waiting for machine to come up
	I0501 04:01:52.257538   76250 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0501 04:01:52.257655   76250 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0501 04:01:52.758723   76250 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.473672ms
	I0501 04:01:52.758830   76250 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0501 04:01:58.761525   76250 kubeadm.go:309] [api-check] The API server is healthy after 6.002174352s
	I0501 04:01:58.775420   76250 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0501 04:01:58.789765   76250 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0501 04:01:58.816203   76250 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0501 04:01:58.816407   76250 kubeadm.go:309] [mark-control-plane] Marking the node kindnet-731347 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0501 04:01:58.831517   76250 kubeadm.go:309] [bootstrap-token] Using token: x6ek94.tjzwbz4plffs1qh7
	I0501 04:01:57.603668   77015 main.go:141] libmachine: (newest-cni-906018) DBG | domain newest-cni-906018 has defined MAC address 52:54:00:fa:e2:d3 in network mk-newest-cni-906018
	I0501 04:01:57.604193   77015 main.go:141] libmachine: (newest-cni-906018) DBG | unable to find current IP address of domain newest-cni-906018 in network mk-newest-cni-906018
	I0501 04:01:57.604216   77015 main.go:141] libmachine: (newest-cni-906018) DBG | I0501 04:01:57.604149   77110 retry.go:31] will retry after 5.468074975s: waiting for machine to come up
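The retry.go lines interleaved above show the KVM driver repeatedly asking libvirt for the guest's DHCP lease, sleeping a little longer after each miss. A simplified sketch of that growing, jittered backoff; lookupIP is a placeholder for the real lease query, and the exact growth factor is an assumption:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoLease = errors.New("no DHCP lease yet")

// lookupIP stands in for querying libvirt for the domain's current lease.
func lookupIP(domain string) (string, error) {
	return "", errNoLease // placeholder: real code inspects the network's DHCP leases
}

// waitForIP retries lookupIP with a growing, jittered delay, as in the log.
func waitForIP(domain string, deadline time.Duration) (string, error) {
	wait := time.Second
	end := time.Now().Add(deadline)
	for time.Now().Before(end) {
		if ip, err := lookupIP(domain); err == nil {
			return ip, nil
		}
		// Grow the delay and add jitter so repeated probes spread out.
		wait += time.Duration(rand.Int63n(int64(wait)))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", wait)
		time.Sleep(wait)
	}
	return "", fmt.Errorf("machine %q did not come up within %s", domain, deadline)
}

func main() {
	if _, err := waitForIP("newest-cni-906018", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}
```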
	I0501 04:01:58.833045   76250 out.go:204]   - Configuring RBAC rules ...
	I0501 04:01:58.833207   76250 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0501 04:01:58.841667   76250 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0501 04:01:58.853912   76250 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0501 04:01:58.857895   76250 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0501 04:01:58.863964   76250 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0501 04:01:58.866757   76250 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0501 04:01:59.168761   76250 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0501 04:01:59.644574   76250 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0501 04:02:00.167744   76250 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0501 04:02:00.167769   76250 kubeadm.go:309] 
	I0501 04:02:00.167838   76250 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0501 04:02:00.167849   76250 kubeadm.go:309] 
	I0501 04:02:00.167984   76250 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0501 04:02:00.168007   76250 kubeadm.go:309] 
	I0501 04:02:00.168083   76250 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0501 04:02:00.168173   76250 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0501 04:02:00.168240   76250 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0501 04:02:00.168250   76250 kubeadm.go:309] 
	I0501 04:02:00.168330   76250 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0501 04:02:00.168339   76250 kubeadm.go:309] 
	I0501 04:02:00.168422   76250 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0501 04:02:00.168440   76250 kubeadm.go:309] 
	I0501 04:02:00.168514   76250 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0501 04:02:00.168623   76250 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0501 04:02:00.168719   76250 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0501 04:02:00.168733   76250 kubeadm.go:309] 
	I0501 04:02:00.168858   76250 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0501 04:02:00.168958   76250 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0501 04:02:00.168967   76250 kubeadm.go:309] 
	I0501 04:02:00.169099   76250 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token x6ek94.tjzwbz4plffs1qh7 \
	I0501 04:02:00.169242   76250 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:bd94cc6acfb95fe36f70b12c6a1f2980601b8235e7c4dea65cebdf35eb514754 \
	I0501 04:02:00.169273   76250 kubeadm.go:309] 	--control-plane 
	I0501 04:02:00.169282   76250 kubeadm.go:309] 
	I0501 04:02:00.169380   76250 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0501 04:02:00.169396   76250 kubeadm.go:309] 
	I0501 04:02:00.169480   76250 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token x6ek94.tjzwbz4plffs1qh7 \
	I0501 04:02:00.169608   76250 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:bd94cc6acfb95fe36f70b12c6a1f2980601b8235e7c4dea65cebdf35eb514754 
	I0501 04:02:00.169917   76250 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
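The join commands printed by kubeadm above carry a --discovery-token-ca-cert-hash, which is the SHA-256 of the cluster CA certificate's DER-encoded Subject Public Key Info. A short sketch of recomputing that value from the CA certificate in minikube's cert directory seen earlier in the log:

```go
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA cert.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}
```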
	I0501 04:02:00.169941   76250 cni.go:84] Creating CNI manager for "kindnet"
	I0501 04:02:00.171804   76250 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0501 04:02:00.173229   76250 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0501 04:02:00.179439   76250 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0501 04:02:00.179459   76250 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0501 04:02:00.200542   76250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0501 04:02:00.501891   76250 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0501 04:02:00.502018   76250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 04:02:00.502045   76250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kindnet-731347 minikube.k8s.io/updated_at=2024_05_01T04_02_00_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e minikube.k8s.io/name=kindnet-731347 minikube.k8s.io/primary=true
	I0501 04:02:00.684771   76250 ops.go:34] apiserver oom_adj: -16
	I0501 04:02:00.684979   76250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 04:02:01.185907   76250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 04:02:01.685901   76250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 04:02:03.075409   77015 main.go:141] libmachine: (newest-cni-906018) DBG | domain newest-cni-906018 has defined MAC address 52:54:00:fa:e2:d3 in network mk-newest-cni-906018
	I0501 04:02:03.075892   77015 main.go:141] libmachine: (newest-cni-906018) Found IP for machine: 192.168.61.183
	I0501 04:02:03.075939   77015 main.go:141] libmachine: (newest-cni-906018) DBG | domain newest-cni-906018 has current primary IP address 192.168.61.183 and MAC address 52:54:00:fa:e2:d3 in network mk-newest-cni-906018
	I0501 04:02:03.075950   77015 main.go:141] libmachine: (newest-cni-906018) Reserving static IP address...
	I0501 04:02:03.076289   77015 main.go:141] libmachine: (newest-cni-906018) DBG | found host DHCP lease matching {name: "newest-cni-906018", mac: "52:54:00:fa:e2:d3", ip: "192.168.61.183"} in network mk-newest-cni-906018: {Iface:virbr4 ExpiryTime:2024-05-01 05:01:53 +0000 UTC Type:0 Mac:52:54:00:fa:e2:d3 Iaid: IPaddr:192.168.61.183 Prefix:24 Hostname:newest-cni-906018 Clientid:01:52:54:00:fa:e2:d3}
	I0501 04:02:03.076326   77015 main.go:141] libmachine: (newest-cni-906018) DBG | skip adding static IP to network mk-newest-cni-906018 - found existing host DHCP lease matching {name: "newest-cni-906018", mac: "52:54:00:fa:e2:d3", ip: "192.168.61.183"}
	I0501 04:02:03.076335   77015 main.go:141] libmachine: (newest-cni-906018) Reserved static IP address: 192.168.61.183
	I0501 04:02:03.076344   77015 main.go:141] libmachine: (newest-cni-906018) Waiting for SSH to be available...
	I0501 04:02:03.076351   77015 main.go:141] libmachine: (newest-cni-906018) DBG | Getting to WaitForSSH function...
	I0501 04:02:03.078299   77015 main.go:141] libmachine: (newest-cni-906018) DBG | domain newest-cni-906018 has defined MAC address 52:54:00:fa:e2:d3 in network mk-newest-cni-906018
	I0501 04:02:03.078660   77015 main.go:141] libmachine: (newest-cni-906018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:e2:d3", ip: ""} in network mk-newest-cni-906018: {Iface:virbr4 ExpiryTime:2024-05-01 05:01:53 +0000 UTC Type:0 Mac:52:54:00:fa:e2:d3 Iaid: IPaddr:192.168.61.183 Prefix:24 Hostname:newest-cni-906018 Clientid:01:52:54:00:fa:e2:d3}
	I0501 04:02:03.078693   77015 main.go:141] libmachine: (newest-cni-906018) DBG | domain newest-cni-906018 has defined IP address 192.168.61.183 and MAC address 52:54:00:fa:e2:d3 in network mk-newest-cni-906018
	I0501 04:02:03.078784   77015 main.go:141] libmachine: (newest-cni-906018) DBG | Using SSH client type: external
	I0501 04:02:03.078822   77015 main.go:141] libmachine: (newest-cni-906018) DBG | Using SSH private key: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/newest-cni-906018/id_rsa (-rw-------)
	I0501 04:02:03.078865   77015 main.go:141] libmachine: (newest-cni-906018) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.183 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18779-13391/.minikube/machines/newest-cni-906018/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0501 04:02:03.078884   77015 main.go:141] libmachine: (newest-cni-906018) DBG | About to run SSH command:
	I0501 04:02:03.078900   77015 main.go:141] libmachine: (newest-cni-906018) DBG | exit 0
	I0501 04:02:03.211251   77015 main.go:141] libmachine: (newest-cni-906018) DBG | SSH cmd err, output: <nil>: 
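WaitForSSH above shells out to the system ssh client and runs `exit 0` against the machine until the command succeeds. A minimal os/exec sketch of that probe, with the option list trimmed relative to the full set shown in the log:

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReachable runs `ssh ... exit 0` once and reports whether it succeeded.
func sshReachable(ip, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-i", keyPath,
		"docker@"+ip,
		"exit 0",
	)
	return cmd.Run() == nil
}

func main() {
	ip := "192.168.61.183"
	key := "/home/jenkins/minikube-integration/18779-13391/.minikube/machines/newest-cni-906018/id_rsa"
	for i := 0; i < 30; i++ {
		if sshReachable(ip, key) {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("gave up waiting for SSH")
}
```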
	I0501 04:02:03.211626   77015 main.go:141] libmachine: (newest-cni-906018) Calling .GetConfigRaw
	I0501 04:02:03.212389   77015 main.go:141] libmachine: (newest-cni-906018) Calling .GetIP
	I0501 04:02:03.215223   77015 main.go:141] libmachine: (newest-cni-906018) DBG | domain newest-cni-906018 has defined MAC address 52:54:00:fa:e2:d3 in network mk-newest-cni-906018
	I0501 04:02:03.215608   77015 main.go:141] libmachine: (newest-cni-906018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:e2:d3", ip: ""} in network mk-newest-cni-906018: {Iface:virbr4 ExpiryTime:2024-05-01 05:01:53 +0000 UTC Type:0 Mac:52:54:00:fa:e2:d3 Iaid: IPaddr:192.168.61.183 Prefix:24 Hostname:newest-cni-906018 Clientid:01:52:54:00:fa:e2:d3}
	I0501 04:02:03.215646   77015 main.go:141] libmachine: (newest-cni-906018) DBG | domain newest-cni-906018 has defined IP address 192.168.61.183 and MAC address 52:54:00:fa:e2:d3 in network mk-newest-cni-906018
	I0501 04:02:03.215874   77015 profile.go:143] Saving config to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/newest-cni-906018/config.json ...
	I0501 04:02:03.216116   77015 machine.go:94] provisionDockerMachine start ...
	I0501 04:02:03.216147   77015 main.go:141] libmachine: (newest-cni-906018) Calling .DriverName
	I0501 04:02:03.216384   77015 main.go:141] libmachine: (newest-cni-906018) Calling .GetSSHHostname
	I0501 04:02:03.219205   77015 main.go:141] libmachine: (newest-cni-906018) DBG | domain newest-cni-906018 has defined MAC address 52:54:00:fa:e2:d3 in network mk-newest-cni-906018
	I0501 04:02:03.219516   77015 main.go:141] libmachine: (newest-cni-906018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:e2:d3", ip: ""} in network mk-newest-cni-906018: {Iface:virbr4 ExpiryTime:2024-05-01 05:01:53 +0000 UTC Type:0 Mac:52:54:00:fa:e2:d3 Iaid: IPaddr:192.168.61.183 Prefix:24 Hostname:newest-cni-906018 Clientid:01:52:54:00:fa:e2:d3}
	I0501 04:02:03.219544   77015 main.go:141] libmachine: (newest-cni-906018) DBG | domain newest-cni-906018 has defined IP address 192.168.61.183 and MAC address 52:54:00:fa:e2:d3 in network mk-newest-cni-906018
	I0501 04:02:03.219731   77015 main.go:141] libmachine: (newest-cni-906018) Calling .GetSSHPort
	I0501 04:02:03.219902   77015 main.go:141] libmachine: (newest-cni-906018) Calling .GetSSHKeyPath
	I0501 04:02:03.220036   77015 main.go:141] libmachine: (newest-cni-906018) Calling .GetSSHKeyPath
	I0501 04:02:03.220239   77015 main.go:141] libmachine: (newest-cni-906018) Calling .GetSSHUsername
	I0501 04:02:03.220458   77015 main.go:141] libmachine: Using SSH client type: native
	I0501 04:02:03.220702   77015 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.183 22 <nil> <nil>}
	I0501 04:02:03.220717   77015 main.go:141] libmachine: About to run SSH command:
	hostname
	I0501 04:02:03.352185   77015 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0501 04:02:03.352216   77015 main.go:141] libmachine: (newest-cni-906018) Calling .GetMachineName
	I0501 04:02:03.352472   77015 buildroot.go:166] provisioning hostname "newest-cni-906018"
	I0501 04:02:03.352500   77015 main.go:141] libmachine: (newest-cni-906018) Calling .GetMachineName
	I0501 04:02:03.352697   77015 main.go:141] libmachine: (newest-cni-906018) Calling .GetSSHHostname
	I0501 04:02:03.355595   77015 main.go:141] libmachine: (newest-cni-906018) DBG | domain newest-cni-906018 has defined MAC address 52:54:00:fa:e2:d3 in network mk-newest-cni-906018
	I0501 04:02:03.355967   77015 main.go:141] libmachine: (newest-cni-906018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:e2:d3", ip: ""} in network mk-newest-cni-906018: {Iface:virbr4 ExpiryTime:2024-05-01 05:01:53 +0000 UTC Type:0 Mac:52:54:00:fa:e2:d3 Iaid: IPaddr:192.168.61.183 Prefix:24 Hostname:newest-cni-906018 Clientid:01:52:54:00:fa:e2:d3}
	I0501 04:02:03.355996   77015 main.go:141] libmachine: (newest-cni-906018) DBG | domain newest-cni-906018 has defined IP address 192.168.61.183 and MAC address 52:54:00:fa:e2:d3 in network mk-newest-cni-906018
	I0501 04:02:03.356069   77015 main.go:141] libmachine: (newest-cni-906018) Calling .GetSSHPort
	I0501 04:02:03.356257   77015 main.go:141] libmachine: (newest-cni-906018) Calling .GetSSHKeyPath
	I0501 04:02:03.356434   77015 main.go:141] libmachine: (newest-cni-906018) Calling .GetSSHKeyPath
	I0501 04:02:03.356586   77015 main.go:141] libmachine: (newest-cni-906018) Calling .GetSSHUsername
	I0501 04:02:03.356756   77015 main.go:141] libmachine: Using SSH client type: native
	I0501 04:02:03.356932   77015 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.183 22 <nil> <nil>}
	I0501 04:02:03.356949   77015 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-906018 && echo "newest-cni-906018" | sudo tee /etc/hostname
	I0501 04:02:03.503316   77015 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-906018
	
	I0501 04:02:03.503348   77015 main.go:141] libmachine: (newest-cni-906018) Calling .GetSSHHostname
	I0501 04:02:03.506435   77015 main.go:141] libmachine: (newest-cni-906018) DBG | domain newest-cni-906018 has defined MAC address 52:54:00:fa:e2:d3 in network mk-newest-cni-906018
	I0501 04:02:03.506849   77015 main.go:141] libmachine: (newest-cni-906018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:e2:d3", ip: ""} in network mk-newest-cni-906018: {Iface:virbr4 ExpiryTime:2024-05-01 05:01:53 +0000 UTC Type:0 Mac:52:54:00:fa:e2:d3 Iaid: IPaddr:192.168.61.183 Prefix:24 Hostname:newest-cni-906018 Clientid:01:52:54:00:fa:e2:d3}
	I0501 04:02:03.506880   77015 main.go:141] libmachine: (newest-cni-906018) DBG | domain newest-cni-906018 has defined IP address 192.168.61.183 and MAC address 52:54:00:fa:e2:d3 in network mk-newest-cni-906018
	I0501 04:02:03.507091   77015 main.go:141] libmachine: (newest-cni-906018) Calling .GetSSHPort
	I0501 04:02:03.507300   77015 main.go:141] libmachine: (newest-cni-906018) Calling .GetSSHKeyPath
	I0501 04:02:03.507499   77015 main.go:141] libmachine: (newest-cni-906018) Calling .GetSSHKeyPath
	I0501 04:02:03.507630   77015 main.go:141] libmachine: (newest-cni-906018) Calling .GetSSHUsername
	I0501 04:02:03.507804   77015 main.go:141] libmachine: Using SSH client type: native
	I0501 04:02:03.508047   77015 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.183 22 <nil> <nil>}
	I0501 04:02:03.508071   77015 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-906018' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-906018/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-906018' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0501 04:02:03.645609   77015 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0501 04:02:03.645637   77015 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18779-13391/.minikube CaCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18779-13391/.minikube}
	I0501 04:02:03.645679   77015 buildroot.go:174] setting up certificates
	I0501 04:02:03.645692   77015 provision.go:84] configureAuth start
	I0501 04:02:03.645708   77015 main.go:141] libmachine: (newest-cni-906018) Calling .GetMachineName
	I0501 04:02:03.646050   77015 main.go:141] libmachine: (newest-cni-906018) Calling .GetIP
	I0501 04:02:03.649358   77015 main.go:141] libmachine: (newest-cni-906018) DBG | domain newest-cni-906018 has defined MAC address 52:54:00:fa:e2:d3 in network mk-newest-cni-906018
	I0501 04:02:03.649657   77015 main.go:141] libmachine: (newest-cni-906018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:e2:d3", ip: ""} in network mk-newest-cni-906018: {Iface:virbr4 ExpiryTime:2024-05-01 05:01:53 +0000 UTC Type:0 Mac:52:54:00:fa:e2:d3 Iaid: IPaddr:192.168.61.183 Prefix:24 Hostname:newest-cni-906018 Clientid:01:52:54:00:fa:e2:d3}
	I0501 04:02:03.649686   77015 main.go:141] libmachine: (newest-cni-906018) DBG | domain newest-cni-906018 has defined IP address 192.168.61.183 and MAC address 52:54:00:fa:e2:d3 in network mk-newest-cni-906018
	I0501 04:02:03.649834   77015 main.go:141] libmachine: (newest-cni-906018) Calling .GetSSHHostname
	I0501 04:02:03.652132   77015 main.go:141] libmachine: (newest-cni-906018) DBG | domain newest-cni-906018 has defined MAC address 52:54:00:fa:e2:d3 in network mk-newest-cni-906018
	I0501 04:02:03.652525   77015 main.go:141] libmachine: (newest-cni-906018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:e2:d3", ip: ""} in network mk-newest-cni-906018: {Iface:virbr4 ExpiryTime:2024-05-01 05:01:53 +0000 UTC Type:0 Mac:52:54:00:fa:e2:d3 Iaid: IPaddr:192.168.61.183 Prefix:24 Hostname:newest-cni-906018 Clientid:01:52:54:00:fa:e2:d3}
	I0501 04:02:03.652549   77015 main.go:141] libmachine: (newest-cni-906018) DBG | domain newest-cni-906018 has defined IP address 192.168.61.183 and MAC address 52:54:00:fa:e2:d3 in network mk-newest-cni-906018
	I0501 04:02:03.652708   77015 provision.go:143] copyHostCerts
	I0501 04:02:03.652767   77015 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem, removing ...
	I0501 04:02:03.652782   77015 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem
	I0501 04:02:03.652846   77015 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem (1078 bytes)
	I0501 04:02:03.653020   77015 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem, removing ...
	I0501 04:02:03.653043   77015 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem
	I0501 04:02:03.653075   77015 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem (1123 bytes)
	I0501 04:02:03.653151   77015 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem, removing ...
	I0501 04:02:03.653160   77015 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem
	I0501 04:02:03.653178   77015 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem (1675 bytes)
	I0501 04:02:03.653242   77015 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem org=jenkins.newest-cni-906018 san=[127.0.0.1 192.168.61.183 localhost minikube newest-cni-906018]
	I0501 04:02:03.768531   77015 provision.go:177] copyRemoteCerts
	I0501 04:02:03.768600   77015 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0501 04:02:03.768627   77015 main.go:141] libmachine: (newest-cni-906018) Calling .GetSSHHostname
	I0501 04:02:03.771769   77015 main.go:141] libmachine: (newest-cni-906018) DBG | domain newest-cni-906018 has defined MAC address 52:54:00:fa:e2:d3 in network mk-newest-cni-906018
	I0501 04:02:03.772108   77015 main.go:141] libmachine: (newest-cni-906018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:e2:d3", ip: ""} in network mk-newest-cni-906018: {Iface:virbr4 ExpiryTime:2024-05-01 05:01:53 +0000 UTC Type:0 Mac:52:54:00:fa:e2:d3 Iaid: IPaddr:192.168.61.183 Prefix:24 Hostname:newest-cni-906018 Clientid:01:52:54:00:fa:e2:d3}
	I0501 04:02:03.772138   77015 main.go:141] libmachine: (newest-cni-906018) DBG | domain newest-cni-906018 has defined IP address 192.168.61.183 and MAC address 52:54:00:fa:e2:d3 in network mk-newest-cni-906018
	I0501 04:02:03.772444   77015 main.go:141] libmachine: (newest-cni-906018) Calling .GetSSHPort
	I0501 04:02:03.772639   77015 main.go:141] libmachine: (newest-cni-906018) Calling .GetSSHKeyPath
	I0501 04:02:03.772800   77015 main.go:141] libmachine: (newest-cni-906018) Calling .GetSSHUsername
	I0501 04:02:03.772945   77015 sshutil.go:53] new ssh client: &{IP:192.168.61.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/newest-cni-906018/id_rsa Username:docker}
	I0501 04:02:03.872809   77015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0501 04:02:03.907519   77015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0501 04:02:03.940992   77015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0501 04:02:03.974894   77015 provision.go:87] duration metric: took 329.188854ms to configureAuth
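configureAuth above generates a server certificate whose SAN list covers the machine's IP, loopback, and hostnames. A compact crypto/x509 sketch of producing a certificate with that SAN set; it is self-signed here for brevity, whereas minikube signs server.pem with its ca.pem/ca-key.pem:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-906018"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SAN list taken from the provision.go line in the log above.
		DNSNames:    []string{"localhost", "minikube", "newest-cni-906018"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.183")},
	}
	// Self-signed for brevity; the real flow signs with the minikube CA key.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	out, err := os.Create("server.pem")
	if err != nil {
		panic(err)
	}
	defer out.Close()
	pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```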
	I0501 04:02:03.974922   77015 buildroot.go:189] setting minikube options for container-runtime
	I0501 04:02:03.975126   77015 config.go:182] Loaded profile config "newest-cni-906018": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 04:02:03.975202   77015 main.go:141] libmachine: (newest-cni-906018) Calling .GetSSHHostname
	I0501 04:02:03.977915   77015 main.go:141] libmachine: (newest-cni-906018) DBG | domain newest-cni-906018 has defined MAC address 52:54:00:fa:e2:d3 in network mk-newest-cni-906018
	I0501 04:02:03.978283   77015 main.go:141] libmachine: (newest-cni-906018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:e2:d3", ip: ""} in network mk-newest-cni-906018: {Iface:virbr4 ExpiryTime:2024-05-01 05:01:53 +0000 UTC Type:0 Mac:52:54:00:fa:e2:d3 Iaid: IPaddr:192.168.61.183 Prefix:24 Hostname:newest-cni-906018 Clientid:01:52:54:00:fa:e2:d3}
	I0501 04:02:03.978322   77015 main.go:141] libmachine: (newest-cni-906018) DBG | domain newest-cni-906018 has defined IP address 192.168.61.183 and MAC address 52:54:00:fa:e2:d3 in network mk-newest-cni-906018
	I0501 04:02:03.978486   77015 main.go:141] libmachine: (newest-cni-906018) Calling .GetSSHPort
	I0501 04:02:03.978687   77015 main.go:141] libmachine: (newest-cni-906018) Calling .GetSSHKeyPath
	I0501 04:02:03.978906   77015 main.go:141] libmachine: (newest-cni-906018) Calling .GetSSHKeyPath
	I0501 04:02:03.979084   77015 main.go:141] libmachine: (newest-cni-906018) Calling .GetSSHUsername
	I0501 04:02:03.979273   77015 main.go:141] libmachine: Using SSH client type: native
	I0501 04:02:03.979455   77015 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.183 22 <nil> <nil>}
	I0501 04:02:03.979478   77015 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0501 04:02:04.296499   77015 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0501 04:02:04.296522   77015 machine.go:97] duration metric: took 1.080391381s to provisionDockerMachine
	I0501 04:02:04.296532   77015 start.go:293] postStartSetup for "newest-cni-906018" (driver="kvm2")
	I0501 04:02:04.296542   77015 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0501 04:02:04.296577   77015 main.go:141] libmachine: (newest-cni-906018) Calling .DriverName
	I0501 04:02:04.296907   77015 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0501 04:02:04.296937   77015 main.go:141] libmachine: (newest-cni-906018) Calling .GetSSHHostname
	I0501 04:02:04.299588   77015 main.go:141] libmachine: (newest-cni-906018) DBG | domain newest-cni-906018 has defined MAC address 52:54:00:fa:e2:d3 in network mk-newest-cni-906018
	I0501 04:02:04.299989   77015 main.go:141] libmachine: (newest-cni-906018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:e2:d3", ip: ""} in network mk-newest-cni-906018: {Iface:virbr4 ExpiryTime:2024-05-01 05:01:53 +0000 UTC Type:0 Mac:52:54:00:fa:e2:d3 Iaid: IPaddr:192.168.61.183 Prefix:24 Hostname:newest-cni-906018 Clientid:01:52:54:00:fa:e2:d3}
	I0501 04:02:04.300025   77015 main.go:141] libmachine: (newest-cni-906018) DBG | domain newest-cni-906018 has defined IP address 192.168.61.183 and MAC address 52:54:00:fa:e2:d3 in network mk-newest-cni-906018
	I0501 04:02:04.300267   77015 main.go:141] libmachine: (newest-cni-906018) Calling .GetSSHPort
	I0501 04:02:04.300463   77015 main.go:141] libmachine: (newest-cni-906018) Calling .GetSSHKeyPath
	I0501 04:02:04.300618   77015 main.go:141] libmachine: (newest-cni-906018) Calling .GetSSHUsername
	I0501 04:02:04.300768   77015 sshutil.go:53] new ssh client: &{IP:192.168.61.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/newest-cni-906018/id_rsa Username:docker}
	I0501 04:02:04.391512   77015 ssh_runner.go:195] Run: cat /etc/os-release
	I0501 04:02:04.397592   77015 info.go:137] Remote host: Buildroot 2023.02.9
	I0501 04:02:04.397621   77015 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13391/.minikube/addons for local assets ...
	I0501 04:02:04.397689   77015 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13391/.minikube/files for local assets ...
	I0501 04:02:04.397815   77015 filesync.go:149] local asset: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem -> 207242.pem in /etc/ssl/certs
	I0501 04:02:04.397941   77015 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0501 04:02:04.410079   77015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem --> /etc/ssl/certs/207242.pem (1708 bytes)
	I0501 04:02:04.437474   77015 start.go:296] duration metric: took 140.930513ms for postStartSetup
	I0501 04:02:04.437509   77015 fix.go:56] duration metric: took 24.905509559s for fixHost
	I0501 04:02:04.437530   77015 main.go:141] libmachine: (newest-cni-906018) Calling .GetSSHHostname
	I0501 04:02:04.439920   77015 main.go:141] libmachine: (newest-cni-906018) DBG | domain newest-cni-906018 has defined MAC address 52:54:00:fa:e2:d3 in network mk-newest-cni-906018
	I0501 04:02:04.440381   77015 main.go:141] libmachine: (newest-cni-906018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:e2:d3", ip: ""} in network mk-newest-cni-906018: {Iface:virbr4 ExpiryTime:2024-05-01 05:01:53 +0000 UTC Type:0 Mac:52:54:00:fa:e2:d3 Iaid: IPaddr:192.168.61.183 Prefix:24 Hostname:newest-cni-906018 Clientid:01:52:54:00:fa:e2:d3}
	I0501 04:02:04.440405   77015 main.go:141] libmachine: (newest-cni-906018) DBG | domain newest-cni-906018 has defined IP address 192.168.61.183 and MAC address 52:54:00:fa:e2:d3 in network mk-newest-cni-906018
	I0501 04:02:04.440590   77015 main.go:141] libmachine: (newest-cni-906018) Calling .GetSSHPort
	I0501 04:02:04.440789   77015 main.go:141] libmachine: (newest-cni-906018) Calling .GetSSHKeyPath
	I0501 04:02:04.440963   77015 main.go:141] libmachine: (newest-cni-906018) Calling .GetSSHKeyPath
	I0501 04:02:04.441116   77015 main.go:141] libmachine: (newest-cni-906018) Calling .GetSSHUsername
	I0501 04:02:04.441297   77015 main.go:141] libmachine: Using SSH client type: native
	I0501 04:02:04.441449   77015 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.183 22 <nil> <nil>}
	I0501 04:02:04.441459   77015 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0501 04:02:04.563879   77015 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714536124.551167124
	
	I0501 04:02:04.563904   77015 fix.go:216] guest clock: 1714536124.551167124
	I0501 04:02:04.563913   77015 fix.go:229] Guest: 2024-05-01 04:02:04.551167124 +0000 UTC Remote: 2024-05-01 04:02:04.437512533 +0000 UTC m=+34.691324262 (delta=113.654591ms)
	I0501 04:02:04.563937   77015 fix.go:200] guest clock delta is within tolerance: 113.654591ms
	I0501 04:02:04.563952   77015 start.go:83] releasing machines lock for "newest-cni-906018", held for 25.031982639s
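The fix.go lines above read the guest clock with `date +%s.%N` over SSH and compare it against the host clock, proceeding only when the delta is within tolerance. A tiny sketch of that comparison; the 5s tolerance below is an assumed value for illustration, not minikube's actual constant:

```go
package main

import (
	"fmt"
	"strconv"
	"time"
)

// parseGuestClock turns the `date +%s.%N` output from the guest into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	secs, err := strconv.ParseFloat(out, 64)
	if err != nil {
		return time.Time{}, err
	}
	return time.Unix(0, int64(secs*float64(time.Second))), nil
}

func main() {
	guest, err := parseGuestClock("1714536124.551167124") // value from the log
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 5 * time.Second // assumed tolerance for illustration
	fmt.Printf("guest clock delta is %s (tolerance %s, ok=%v)\n", delta, tolerance, delta <= tolerance)
}
```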
	I0501 04:02:04.563974   77015 main.go:141] libmachine: (newest-cni-906018) Calling .DriverName
	I0501 04:02:04.564196   77015 main.go:141] libmachine: (newest-cni-906018) Calling .GetIP
	I0501 04:02:04.567128   77015 main.go:141] libmachine: (newest-cni-906018) DBG | domain newest-cni-906018 has defined MAC address 52:54:00:fa:e2:d3 in network mk-newest-cni-906018
	I0501 04:02:04.567491   77015 main.go:141] libmachine: (newest-cni-906018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:e2:d3", ip: ""} in network mk-newest-cni-906018: {Iface:virbr4 ExpiryTime:2024-05-01 05:01:53 +0000 UTC Type:0 Mac:52:54:00:fa:e2:d3 Iaid: IPaddr:192.168.61.183 Prefix:24 Hostname:newest-cni-906018 Clientid:01:52:54:00:fa:e2:d3}
	I0501 04:02:04.567522   77015 main.go:141] libmachine: (newest-cni-906018) DBG | domain newest-cni-906018 has defined IP address 192.168.61.183 and MAC address 52:54:00:fa:e2:d3 in network mk-newest-cni-906018
	I0501 04:02:04.567678   77015 main.go:141] libmachine: (newest-cni-906018) Calling .DriverName
	I0501 04:02:04.568157   77015 main.go:141] libmachine: (newest-cni-906018) Calling .DriverName
	I0501 04:02:04.568363   77015 main.go:141] libmachine: (newest-cni-906018) Calling .DriverName
	I0501 04:02:04.568464   77015 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0501 04:02:04.568504   77015 main.go:141] libmachine: (newest-cni-906018) Calling .GetSSHHostname
	I0501 04:02:04.568550   77015 ssh_runner.go:195] Run: cat /version.json
	I0501 04:02:04.568573   77015 main.go:141] libmachine: (newest-cni-906018) Calling .GetSSHHostname
	I0501 04:02:04.571200   77015 main.go:141] libmachine: (newest-cni-906018) DBG | domain newest-cni-906018 has defined MAC address 52:54:00:fa:e2:d3 in network mk-newest-cni-906018
	I0501 04:02:04.571406   77015 main.go:141] libmachine: (newest-cni-906018) DBG | domain newest-cni-906018 has defined MAC address 52:54:00:fa:e2:d3 in network mk-newest-cni-906018
	I0501 04:02:04.571583   77015 main.go:141] libmachine: (newest-cni-906018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:e2:d3", ip: ""} in network mk-newest-cni-906018: {Iface:virbr4 ExpiryTime:2024-05-01 05:01:53 +0000 UTC Type:0 Mac:52:54:00:fa:e2:d3 Iaid: IPaddr:192.168.61.183 Prefix:24 Hostname:newest-cni-906018 Clientid:01:52:54:00:fa:e2:d3}
	I0501 04:02:04.571624   77015 main.go:141] libmachine: (newest-cni-906018) DBG | domain newest-cni-906018 has defined IP address 192.168.61.183 and MAC address 52:54:00:fa:e2:d3 in network mk-newest-cni-906018
	I0501 04:02:04.571755   77015 main.go:141] libmachine: (newest-cni-906018) Calling .GetSSHPort
	I0501 04:02:04.571857   77015 main.go:141] libmachine: (newest-cni-906018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:e2:d3", ip: ""} in network mk-newest-cni-906018: {Iface:virbr4 ExpiryTime:2024-05-01 05:01:53 +0000 UTC Type:0 Mac:52:54:00:fa:e2:d3 Iaid: IPaddr:192.168.61.183 Prefix:24 Hostname:newest-cni-906018 Clientid:01:52:54:00:fa:e2:d3}
	I0501 04:02:04.571884   77015 main.go:141] libmachine: (newest-cni-906018) DBG | domain newest-cni-906018 has defined IP address 192.168.61.183 and MAC address 52:54:00:fa:e2:d3 in network mk-newest-cni-906018
	I0501 04:02:04.571904   77015 main.go:141] libmachine: (newest-cni-906018) Calling .GetSSHKeyPath
	I0501 04:02:04.572066   77015 main.go:141] libmachine: (newest-cni-906018) Calling .GetSSHPort
	I0501 04:02:04.572075   77015 main.go:141] libmachine: (newest-cni-906018) Calling .GetSSHUsername
	I0501 04:02:04.572218   77015 main.go:141] libmachine: (newest-cni-906018) Calling .GetSSHKeyPath
	I0501 04:02:04.572381   77015 main.go:141] libmachine: (newest-cni-906018) Calling .GetSSHUsername
	I0501 04:02:04.572380   77015 sshutil.go:53] new ssh client: &{IP:192.168.61.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/newest-cni-906018/id_rsa Username:docker}
	I0501 04:02:04.572510   77015 sshutil.go:53] new ssh client: &{IP:192.168.61.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/newest-cni-906018/id_rsa Username:docker}
	I0501 04:02:04.656046   77015 ssh_runner.go:195] Run: systemctl --version
	I0501 04:02:04.678489   77015 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0501 04:02:04.834074   77015 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0501 04:02:04.842067   77015 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0501 04:02:04.842143   77015 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0501 04:02:04.863433   77015 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0501 04:02:04.863462   77015 start.go:494] detecting cgroup driver to use...
	I0501 04:02:04.863529   77015 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0501 04:02:04.883243   77015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 04:02:04.901993   77015 docker.go:217] disabling cri-docker service (if available) ...
	I0501 04:02:04.902082   77015 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0501 04:02:04.918426   77015 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0501 04:02:04.934616   77015 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0501 04:02:05.080334   77015 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0501 04:02:05.250062   77015 docker.go:233] disabling docker service ...
	I0501 04:02:05.250187   77015 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0501 04:02:05.266955   77015 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0501 04:02:05.282605   77015 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0501 04:02:05.415803   77015 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0501 04:02:05.548672   77015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0501 04:02:05.565894   77015 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 04:02:05.588218   77015 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0501 04:02:05.588294   77015 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 04:02:05.599641   77015 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0501 04:02:05.599697   77015 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 04:02:05.611042   77015 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 04:02:05.622690   77015 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 04:02:05.635468   77015 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0501 04:02:05.647504   77015 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 04:02:05.658836   77015 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 04:02:05.678849   77015 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 04:02:05.690616   77015 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0501 04:02:05.701049   77015 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0501 04:02:05.701114   77015 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0501 04:02:05.716810   77015 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0501 04:02:05.727734   77015 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 04:02:05.849222   77015 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0501 04:02:06.013261   77015 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0501 04:02:06.013350   77015 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0501 04:02:06.019587   77015 start.go:562] Will wait 60s for crictl version
	I0501 04:02:06.019660   77015 ssh_runner.go:195] Run: which crictl
	I0501 04:02:06.024206   77015 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0501 04:02:06.068853   77015 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0501 04:02:06.068952   77015 ssh_runner.go:195] Run: crio --version
	I0501 04:02:06.101217   77015 ssh_runner.go:195] Run: crio --version
	I0501 04:02:06.144856   77015 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0501 04:02:06.146246   77015 main.go:141] libmachine: (newest-cni-906018) Calling .GetIP
	I0501 04:02:06.148724   77015 main.go:141] libmachine: (newest-cni-906018) DBG | domain newest-cni-906018 has defined MAC address 52:54:00:fa:e2:d3 in network mk-newest-cni-906018
	I0501 04:02:06.149011   77015 main.go:141] libmachine: (newest-cni-906018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:e2:d3", ip: ""} in network mk-newest-cni-906018: {Iface:virbr4 ExpiryTime:2024-05-01 05:01:53 +0000 UTC Type:0 Mac:52:54:00:fa:e2:d3 Iaid: IPaddr:192.168.61.183 Prefix:24 Hostname:newest-cni-906018 Clientid:01:52:54:00:fa:e2:d3}
	I0501 04:02:06.149039   77015 main.go:141] libmachine: (newest-cni-906018) DBG | domain newest-cni-906018 has defined IP address 192.168.61.183 and MAC address 52:54:00:fa:e2:d3 in network mk-newest-cni-906018
	I0501 04:02:06.149300   77015 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0501 04:02:06.154393   77015 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0501 04:02:06.169972   77015 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0501 04:02:02.185562   76250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 04:02:02.685087   76250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 04:02:03.185484   76250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 04:02:03.685993   76250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 04:02:04.185578   76250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 04:02:04.685456   76250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 04:02:05.185331   76250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 04:02:05.685717   76250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 04:02:06.185675   76250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 04:02:06.685630   76250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 04:02:06.171490   77015 kubeadm.go:877] updating cluster {Name:newest-cni-906018 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.0 ClusterName:newest-cni-906018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.183 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:
6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0501 04:02:06.171647   77015 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0501 04:02:06.171743   77015 ssh_runner.go:195] Run: sudo crictl images --output json
	I0501 04:02:06.227972   77015 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0501 04:02:06.228044   77015 ssh_runner.go:195] Run: which lz4
	I0501 04:02:06.233242   77015 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0501 04:02:06.238375   77015 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0501 04:02:06.238432   77015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0501 04:02:08.052417   77015 crio.go:462] duration metric: took 1.819214673s to copy over tarball
	I0501 04:02:08.052499   77015 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
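	The run above disables the competing container runtimes, points crictl at the CRI-O socket, and rewrites /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup driver, conmon cgroup, unprivileged-port sysctl) before restarting crio. As a minimal sketch, assuming the same drop-in path used by the commands logged above, the rewritten fields can be spot-checked as follows; the expected values are taken directly from those sed invocations:
	
	  # Sketch only: confirm the fields rewritten by the sed commands logged above.
	  sudo grep -nE 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	    /etc/crio/crio.conf.d/02-crio.conf
	  # Expected, approximately:
	  #   pause_image = "registry.k8s.io/pause:3.9"
	  #   cgroup_manager = "cgroupfs"
	  #   conmon_cgroup = "pod"
	  #   "net.ipv4.ip_unprivileged_port_start=0",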
	
	
	==> CRI-O <==
	May 01 04:02:10 default-k8s-diff-port-715118 crio[726]: time="2024-05-01 04:02:10.940239438Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=72f99553-f19f-4f0d-b096-250d3ac437b9 name=/runtime.v1.RuntimeService/Version
	May 01 04:02:10 default-k8s-diff-port-715118 crio[726]: time="2024-05-01 04:02:10.945342043Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c642602f-4c2e-4305-81f8-2fb6424e93bd name=/runtime.v1.ImageService/ImageFsInfo
	May 01 04:02:10 default-k8s-diff-port-715118 crio[726]: time="2024-05-01 04:02:10.946941270Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714536130946812559,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133261,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c642602f-4c2e-4305-81f8-2fb6424e93bd name=/runtime.v1.ImageService/ImageFsInfo
	May 01 04:02:10 default-k8s-diff-port-715118 crio[726]: time="2024-05-01 04:02:10.948323567Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f04d6780-173e-4a23-bc89-3bd4828426b9 name=/runtime.v1.RuntimeService/ListContainers
	May 01 04:02:10 default-k8s-diff-port-715118 crio[726]: time="2024-05-01 04:02:10.948452504Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f04d6780-173e-4a23-bc89-3bd4828426b9 name=/runtime.v1.RuntimeService/ListContainers
	May 01 04:02:10 default-k8s-diff-port-715118 crio[726]: time="2024-05-01 04:02:10.948988328Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3d8ba3db0459896edb75b12157ddbf8810613153a6df76d1e4eb406b8f8b6e62,PodSandboxId:edb63b349a081379b6835b92a992c1b2eed12182273639db600a4b3f0b998243,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714535136228055114,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: debb3a59-143a-46d3-87da-c2403e264861,},Annotations:map[string]string{io.kubernetes.container.hash: 16db7a36,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52024cf28376a62381031eca8bea22e44266b9d223f6d5e99cf52755f6f9fa39,PodSandboxId:9f4f9990c585ec803f12bc5ab6e947b4d96f77913e0fe564009d8aeacfbfd70c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714535135125317856,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bg755,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 884d489a-bc1e-442c-8e00-4616f983d3e9,},Annotations:map[string]string{io.kubernetes.container.hash: 5f29ea52,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63bdbf22a7bd7a8493dba1bed9368968feb9bff095b2e32ce5d7867b3f9959c1,PodSandboxId:8cc39213f143b0273112f6f4b226ce4cf187dce36a1c8ab70af09d9314915f48,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714535135082897572,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mp6f5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 4c8550d0-0029-48f1-a892-1800f6639c75,},Annotations:map[string]string{io.kubernetes.container.hash: 60529517,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11308e8bbf31d7b87ceb42faf1dbf32e184d440a9ff0a0138c0aadd47365b83a,PodSandboxId:8f2b32a0f8500606b42c0b7e0e7f154d5e02360b0b774d4044971ebf2fbbb5cb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING
,CreatedAt:1714535134099600477,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2knrp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf1406ff-8a6e-49bb-b180-1e72f4b54fbf,},Annotations:map[string]string{io.kubernetes.container.hash: 5e978535,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ec59f96dc5ca807a780b0898a1ca13ae038a3e83e43df7eef31296e6f297120,PodSandboxId:c9df82a07399452bc24414ebb686eb25279999abd813c3a1bf5b1964ffe6a39a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:171453511463300084
1,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-715118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6cf4f2377aeb7600128ff5c542633ad8,},Annotations:map[string]string{io.kubernetes.container.hash: 96fecfa7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e6542cbc796b648420284c6f298ed9fd813087e54aa092fe7efe6fa2afcecac,PodSandboxId:6bcfeffdf6dc222f0fa4c1489ac1111337f2a5f443f90be27dccdb8dd88e0189,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714535114600435459,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-715118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 292d2020dce8f2017946cc5de9055d9a,},Annotations:map[string]string{io.kubernetes.container.hash: e71e301a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2997ad24c9a671bec035780acc282ba18cf87b144bd77e595a59b06414d29f34,PodSandboxId:1ccda42299061a6f842aa6c71b6980f3047de6f3a4ba1a3cd0b3e30d3f578d36,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714535114563119381,Labels:map[string]string{io.kube
rnetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-715118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: beb6d90258e2ad028130bb1ec0b8d9f6,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec5e0a63d43c5edbf108ba506af6763ae952fa85072a3ceba633eccb0fd4c710,PodSandboxId:9aa295a830c06bc2d5fc7eb2cec630a61f167f47b3afec6d2ed81a9efaf9cb95,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714535114467974977,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-715118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 204b55e4a7dda2d8362d806ee3a56174,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f04d6780-173e-4a23-bc89-3bd4828426b9 name=/runtime.v1.RuntimeService/ListContainers
	May 01 04:02:11 default-k8s-diff-port-715118 crio[726]: time="2024-05-01 04:02:11.020372620Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0ab31889-49c2-4715-aca5-ab05666dc3ba name=/runtime.v1.RuntimeService/Version
	May 01 04:02:11 default-k8s-diff-port-715118 crio[726]: time="2024-05-01 04:02:11.020582056Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0ab31889-49c2-4715-aca5-ab05666dc3ba name=/runtime.v1.RuntimeService/Version
	May 01 04:02:11 default-k8s-diff-port-715118 crio[726]: time="2024-05-01 04:02:11.027346362Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9360433f-2545-4857-8cc6-88e31bcf64ae name=/runtime.v1.ImageService/ImageFsInfo
	May 01 04:02:11 default-k8s-diff-port-715118 crio[726]: time="2024-05-01 04:02:11.027986541Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714536131027949463,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133261,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9360433f-2545-4857-8cc6-88e31bcf64ae name=/runtime.v1.ImageService/ImageFsInfo
	May 01 04:02:11 default-k8s-diff-port-715118 crio[726]: time="2024-05-01 04:02:11.032750605Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=25a45793-d87c-473a-a7b9-ee60dfa6b3b9 name=/runtime.v1.RuntimeService/ListContainers
	May 01 04:02:11 default-k8s-diff-port-715118 crio[726]: time="2024-05-01 04:02:11.033200151Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=25a45793-d87c-473a-a7b9-ee60dfa6b3b9 name=/runtime.v1.RuntimeService/ListContainers
	May 01 04:02:11 default-k8s-diff-port-715118 crio[726]: time="2024-05-01 04:02:11.033865642Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3d8ba3db0459896edb75b12157ddbf8810613153a6df76d1e4eb406b8f8b6e62,PodSandboxId:edb63b349a081379b6835b92a992c1b2eed12182273639db600a4b3f0b998243,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714535136228055114,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: debb3a59-143a-46d3-87da-c2403e264861,},Annotations:map[string]string{io.kubernetes.container.hash: 16db7a36,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52024cf28376a62381031eca8bea22e44266b9d223f6d5e99cf52755f6f9fa39,PodSandboxId:9f4f9990c585ec803f12bc5ab6e947b4d96f77913e0fe564009d8aeacfbfd70c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714535135125317856,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bg755,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 884d489a-bc1e-442c-8e00-4616f983d3e9,},Annotations:map[string]string{io.kubernetes.container.hash: 5f29ea52,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63bdbf22a7bd7a8493dba1bed9368968feb9bff095b2e32ce5d7867b3f9959c1,PodSandboxId:8cc39213f143b0273112f6f4b226ce4cf187dce36a1c8ab70af09d9314915f48,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714535135082897572,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mp6f5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 4c8550d0-0029-48f1-a892-1800f6639c75,},Annotations:map[string]string{io.kubernetes.container.hash: 60529517,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11308e8bbf31d7b87ceb42faf1dbf32e184d440a9ff0a0138c0aadd47365b83a,PodSandboxId:8f2b32a0f8500606b42c0b7e0e7f154d5e02360b0b774d4044971ebf2fbbb5cb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING
,CreatedAt:1714535134099600477,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2knrp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf1406ff-8a6e-49bb-b180-1e72f4b54fbf,},Annotations:map[string]string{io.kubernetes.container.hash: 5e978535,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ec59f96dc5ca807a780b0898a1ca13ae038a3e83e43df7eef31296e6f297120,PodSandboxId:c9df82a07399452bc24414ebb686eb25279999abd813c3a1bf5b1964ffe6a39a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:171453511463300084
1,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-715118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6cf4f2377aeb7600128ff5c542633ad8,},Annotations:map[string]string{io.kubernetes.container.hash: 96fecfa7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e6542cbc796b648420284c6f298ed9fd813087e54aa092fe7efe6fa2afcecac,PodSandboxId:6bcfeffdf6dc222f0fa4c1489ac1111337f2a5f443f90be27dccdb8dd88e0189,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714535114600435459,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-715118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 292d2020dce8f2017946cc5de9055d9a,},Annotations:map[string]string{io.kubernetes.container.hash: e71e301a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2997ad24c9a671bec035780acc282ba18cf87b144bd77e595a59b06414d29f34,PodSandboxId:1ccda42299061a6f842aa6c71b6980f3047de6f3a4ba1a3cd0b3e30d3f578d36,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714535114563119381,Labels:map[string]string{io.kube
rnetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-715118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: beb6d90258e2ad028130bb1ec0b8d9f6,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec5e0a63d43c5edbf108ba506af6763ae952fa85072a3ceba633eccb0fd4c710,PodSandboxId:9aa295a830c06bc2d5fc7eb2cec630a61f167f47b3afec6d2ed81a9efaf9cb95,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714535114467974977,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-715118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 204b55e4a7dda2d8362d806ee3a56174,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=25a45793-d87c-473a-a7b9-ee60dfa6b3b9 name=/runtime.v1.RuntimeService/ListContainers
	May 01 04:02:11 default-k8s-diff-port-715118 crio[726]: time="2024-05-01 04:02:11.089748275Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=06efcdef-3a60-443f-a310-44ec0dca5401 name=/runtime.v1.RuntimeService/Version
	May 01 04:02:11 default-k8s-diff-port-715118 crio[726]: time="2024-05-01 04:02:11.089856087Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=06efcdef-3a60-443f-a310-44ec0dca5401 name=/runtime.v1.RuntimeService/Version
	May 01 04:02:11 default-k8s-diff-port-715118 crio[726]: time="2024-05-01 04:02:11.091199286Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3e029428-43c6-4577-b901-f01bc700e530 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 04:02:11 default-k8s-diff-port-715118 crio[726]: time="2024-05-01 04:02:11.091756993Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714536131091730894,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133261,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3e029428-43c6-4577-b901-f01bc700e530 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 04:02:11 default-k8s-diff-port-715118 crio[726]: time="2024-05-01 04:02:11.092233687Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a05f461f-f012-4bcf-bb2d-1440336172c1 name=/runtime.v1.RuntimeService/ListContainers
	May 01 04:02:11 default-k8s-diff-port-715118 crio[726]: time="2024-05-01 04:02:11.092315680Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a05f461f-f012-4bcf-bb2d-1440336172c1 name=/runtime.v1.RuntimeService/ListContainers
	May 01 04:02:11 default-k8s-diff-port-715118 crio[726]: time="2024-05-01 04:02:11.092607874Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3d8ba3db0459896edb75b12157ddbf8810613153a6df76d1e4eb406b8f8b6e62,PodSandboxId:edb63b349a081379b6835b92a992c1b2eed12182273639db600a4b3f0b998243,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714535136228055114,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: debb3a59-143a-46d3-87da-c2403e264861,},Annotations:map[string]string{io.kubernetes.container.hash: 16db7a36,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52024cf28376a62381031eca8bea22e44266b9d223f6d5e99cf52755f6f9fa39,PodSandboxId:9f4f9990c585ec803f12bc5ab6e947b4d96f77913e0fe564009d8aeacfbfd70c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714535135125317856,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bg755,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 884d489a-bc1e-442c-8e00-4616f983d3e9,},Annotations:map[string]string{io.kubernetes.container.hash: 5f29ea52,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63bdbf22a7bd7a8493dba1bed9368968feb9bff095b2e32ce5d7867b3f9959c1,PodSandboxId:8cc39213f143b0273112f6f4b226ce4cf187dce36a1c8ab70af09d9314915f48,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714535135082897572,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mp6f5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 4c8550d0-0029-48f1-a892-1800f6639c75,},Annotations:map[string]string{io.kubernetes.container.hash: 60529517,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11308e8bbf31d7b87ceb42faf1dbf32e184d440a9ff0a0138c0aadd47365b83a,PodSandboxId:8f2b32a0f8500606b42c0b7e0e7f154d5e02360b0b774d4044971ebf2fbbb5cb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING
,CreatedAt:1714535134099600477,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2knrp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf1406ff-8a6e-49bb-b180-1e72f4b54fbf,},Annotations:map[string]string{io.kubernetes.container.hash: 5e978535,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ec59f96dc5ca807a780b0898a1ca13ae038a3e83e43df7eef31296e6f297120,PodSandboxId:c9df82a07399452bc24414ebb686eb25279999abd813c3a1bf5b1964ffe6a39a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:171453511463300084
1,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-715118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6cf4f2377aeb7600128ff5c542633ad8,},Annotations:map[string]string{io.kubernetes.container.hash: 96fecfa7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e6542cbc796b648420284c6f298ed9fd813087e54aa092fe7efe6fa2afcecac,PodSandboxId:6bcfeffdf6dc222f0fa4c1489ac1111337f2a5f443f90be27dccdb8dd88e0189,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714535114600435459,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-715118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 292d2020dce8f2017946cc5de9055d9a,},Annotations:map[string]string{io.kubernetes.container.hash: e71e301a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2997ad24c9a671bec035780acc282ba18cf87b144bd77e595a59b06414d29f34,PodSandboxId:1ccda42299061a6f842aa6c71b6980f3047de6f3a4ba1a3cd0b3e30d3f578d36,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714535114563119381,Labels:map[string]string{io.kube
rnetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-715118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: beb6d90258e2ad028130bb1ec0b8d9f6,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec5e0a63d43c5edbf108ba506af6763ae952fa85072a3ceba633eccb0fd4c710,PodSandboxId:9aa295a830c06bc2d5fc7eb2cec630a61f167f47b3afec6d2ed81a9efaf9cb95,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714535114467974977,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-715118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 204b55e4a7dda2d8362d806ee3a56174,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a05f461f-f012-4bcf-bb2d-1440336172c1 name=/runtime.v1.RuntimeService/ListContainers
	May 01 04:02:11 default-k8s-diff-port-715118 crio[726]: time="2024-05-01 04:02:11.354574099Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=120ea97b-21cb-4f5a-b41c-a1dc3d9cb2d3 name=/runtime.v1.RuntimeService/ListPodSandbox
	May 01 04:02:11 default-k8s-diff-port-715118 crio[726]: time="2024-05-01 04:02:11.354951040Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:edb63b349a081379b6835b92a992c1b2eed12182273639db600a4b3f0b998243,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:debb3a59-143a-46d3-87da-c2403e264861,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714535136098308929,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: debb3a59-143a-46d3-87da-c2403e264861,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespac
e\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-05-01T03:45:35.488358365Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8df7e185ce8e32c5511e5bb4ceada737bbd26c0b2e2ef5f71291f9afac2e9fbc,Metadata:&PodSandboxMetadata{Name:metrics-server-569cc877fc-xwxx9,Uid:a66f5df4-355c-47f0-8b6e-da29e1c4394e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714535135943999119,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-569cc877fc-xwxx9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a66f5df4-355c-47f0-8b6e-d
a29e1c4394e,k8s-app: metrics-server,pod-template-hash: 569cc877fc,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-01T03:45:35.636901623Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9f4f9990c585ec803f12bc5ab6e947b4d96f77913e0fe564009d8aeacfbfd70c,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-bg755,Uid:884d489a-bc1e-442c-8e00-4616f983d3e9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714535134196638213,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-bg755,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 884d489a-bc1e-442c-8e00-4616f983d3e9,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-01T03:45:33.884413365Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8cc39213f143b0273112f6f4b226ce4cf187dce36a1c8ab70af09d9314915f48,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-mp6f5,Uid:4c8550d0
-0029-48f1-a892-1800f6639c75,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714535134115366998,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-mp6f5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c8550d0-0029-48f1-a892-1800f6639c75,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-01T03:45:33.805851732Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8f2b32a0f8500606b42c0b7e0e7f154d5e02360b0b774d4044971ebf2fbbb5cb,Metadata:&PodSandboxMetadata{Name:kube-proxy-2knrp,Uid:cf1406ff-8a6e-49bb-b180-1e72f4b54fbf,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714535133763212894,Labels:map[string]string{controller-revision-hash: 79cf874c65,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-2knrp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf1406ff-8a6e-49bb-b180-1e72f4b54fbf,k8s-app: kube-pro
xy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-01T03:45:33.448574400Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1ccda42299061a6f842aa6c71b6980f3047de6f3a4ba1a3cd0b3e30d3f578d36,Metadata:&PodSandboxMetadata{Name:kube-scheduler-default-k8s-diff-port-715118,Uid:beb6d90258e2ad028130bb1ec0b8d9f6,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714535114303125918,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-715118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: beb6d90258e2ad028130bb1ec0b8d9f6,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: beb6d90258e2ad028130bb1ec0b8d9f6,kubernetes.io/config.seen: 2024-05-01T03:45:13.811895081Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9aa295a830c06bc2d5fc7eb2cec630a61f167f47b3afec6d2ed81a9efaf9cb95,Metadata:&PodSandb
oxMetadata{Name:kube-controller-manager-default-k8s-diff-port-715118,Uid:204b55e4a7dda2d8362d806ee3a56174,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714535114286427881,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-715118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 204b55e4a7dda2d8362d806ee3a56174,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 204b55e4a7dda2d8362d806ee3a56174,kubernetes.io/config.seen: 2024-05-01T03:45:13.811894267Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c9df82a07399452bc24414ebb686eb25279999abd813c3a1bf5b1964ffe6a39a,Metadata:&PodSandboxMetadata{Name:kube-apiserver-default-k8s-diff-port-715118,Uid:6cf4f2377aeb7600128ff5c542633ad8,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714535114281426271,Labels:map[string]string{component: kube-apiserver,io.kubernete
s.container.name: POD,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-715118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6cf4f2377aeb7600128ff5c542633ad8,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.72.158:8444,kubernetes.io/config.hash: 6cf4f2377aeb7600128ff5c542633ad8,kubernetes.io/config.seen: 2024-05-01T03:45:13.811892954Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:6bcfeffdf6dc222f0fa4c1489ac1111337f2a5f443f90be27dccdb8dd88e0189,Metadata:&PodSandboxMetadata{Name:etcd-default-k8s-diff-port-715118,Uid:292d2020dce8f2017946cc5de9055d9a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714535114269369685,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-default-k8s-diff-port-715118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 292d2020dce8f2017946cc5de9055d9a,tier: control-plane,},Annotat
ions:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.72.158:2379,kubernetes.io/config.hash: 292d2020dce8f2017946cc5de9055d9a,kubernetes.io/config.seen: 2024-05-01T03:45:13.811889372Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=120ea97b-21cb-4f5a-b41c-a1dc3d9cb2d3 name=/runtime.v1.RuntimeService/ListPodSandbox
	May 01 04:02:11 default-k8s-diff-port-715118 crio[726]: time="2024-05-01 04:02:11.356038040Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7bbadd35-35f0-41eb-915b-959b264809b3 name=/runtime.v1.RuntimeService/ListContainers
	May 01 04:02:11 default-k8s-diff-port-715118 crio[726]: time="2024-05-01 04:02:11.356105572Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7bbadd35-35f0-41eb-915b-959b264809b3 name=/runtime.v1.RuntimeService/ListContainers
	May 01 04:02:11 default-k8s-diff-port-715118 crio[726]: time="2024-05-01 04:02:11.356350296Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3d8ba3db0459896edb75b12157ddbf8810613153a6df76d1e4eb406b8f8b6e62,PodSandboxId:edb63b349a081379b6835b92a992c1b2eed12182273639db600a4b3f0b998243,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714535136228055114,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: debb3a59-143a-46d3-87da-c2403e264861,},Annotations:map[string]string{io.kubernetes.container.hash: 16db7a36,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52024cf28376a62381031eca8bea22e44266b9d223f6d5e99cf52755f6f9fa39,PodSandboxId:9f4f9990c585ec803f12bc5ab6e947b4d96f77913e0fe564009d8aeacfbfd70c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714535135125317856,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bg755,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 884d489a-bc1e-442c-8e00-4616f983d3e9,},Annotations:map[string]string{io.kubernetes.container.hash: 5f29ea52,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63bdbf22a7bd7a8493dba1bed9368968feb9bff095b2e32ce5d7867b3f9959c1,PodSandboxId:8cc39213f143b0273112f6f4b226ce4cf187dce36a1c8ab70af09d9314915f48,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714535135082897572,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mp6f5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 4c8550d0-0029-48f1-a892-1800f6639c75,},Annotations:map[string]string{io.kubernetes.container.hash: 60529517,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11308e8bbf31d7b87ceb42faf1dbf32e184d440a9ff0a0138c0aadd47365b83a,PodSandboxId:8f2b32a0f8500606b42c0b7e0e7f154d5e02360b0b774d4044971ebf2fbbb5cb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING
,CreatedAt:1714535134099600477,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2knrp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf1406ff-8a6e-49bb-b180-1e72f4b54fbf,},Annotations:map[string]string{io.kubernetes.container.hash: 5e978535,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ec59f96dc5ca807a780b0898a1ca13ae038a3e83e43df7eef31296e6f297120,PodSandboxId:c9df82a07399452bc24414ebb686eb25279999abd813c3a1bf5b1964ffe6a39a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:171453511463300084
1,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-715118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6cf4f2377aeb7600128ff5c542633ad8,},Annotations:map[string]string{io.kubernetes.container.hash: 96fecfa7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e6542cbc796b648420284c6f298ed9fd813087e54aa092fe7efe6fa2afcecac,PodSandboxId:6bcfeffdf6dc222f0fa4c1489ac1111337f2a5f443f90be27dccdb8dd88e0189,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714535114600435459,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-715118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 292d2020dce8f2017946cc5de9055d9a,},Annotations:map[string]string{io.kubernetes.container.hash: e71e301a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2997ad24c9a671bec035780acc282ba18cf87b144bd77e595a59b06414d29f34,PodSandboxId:1ccda42299061a6f842aa6c71b6980f3047de6f3a4ba1a3cd0b3e30d3f578d36,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714535114563119381,Labels:map[string]string{io.kube
rnetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-715118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: beb6d90258e2ad028130bb1ec0b8d9f6,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec5e0a63d43c5edbf108ba506af6763ae952fa85072a3ceba633eccb0fd4c710,PodSandboxId:9aa295a830c06bc2d5fc7eb2cec630a61f167f47b3afec6d2ed81a9efaf9cb95,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714535114467974977,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-715118,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 204b55e4a7dda2d8362d806ee3a56174,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7bbadd35-35f0-41eb-915b-959b264809b3 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3d8ba3db04598       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 minutes ago      Running             storage-provisioner       0                   edb63b349a081       storage-provisioner
	52024cf28376a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   16 minutes ago      Running             coredns                   0                   9f4f9990c585e       coredns-7db6d8ff4d-bg755
	63bdbf22a7bd7       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   16 minutes ago      Running             coredns                   0                   8cc39213f143b       coredns-7db6d8ff4d-mp6f5
	11308e8bbf31d       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b   16 minutes ago      Running             kube-proxy                0                   8f2b32a0f8500       kube-proxy-2knrp
	4ec59f96dc5ca       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0   16 minutes ago      Running             kube-apiserver            2                   c9df82a073994       kube-apiserver-default-k8s-diff-port-715118
	8e6542cbc796b       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   16 minutes ago      Running             etcd                      2                   6bcfeffdf6dc2       etcd-default-k8s-diff-port-715118
	2997ad24c9a67       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced   16 minutes ago      Running             kube-scheduler            2                   1ccda42299061       kube-scheduler-default-k8s-diff-port-715118
	ec5e0a63d43c5       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b   16 minutes ago      Running             kube-controller-manager   2                   9aa295a830c06       kube-controller-manager-default-k8s-diff-port-715118
	
	
	==> coredns [52024cf28376a62381031eca8bea22e44266b9d223f6d5e99cf52755f6f9fa39] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [63bdbf22a7bd7a8493dba1bed9368968feb9bff095b2e32ce5d7867b3f9959c1] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-715118
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-715118
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e
	                    minikube.k8s.io/name=default-k8s-diff-port-715118
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_01T03_45_20_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 May 2024 03:45:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-715118
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 May 2024 04:02:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 May 2024 04:00:59 +0000   Wed, 01 May 2024 03:45:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 May 2024 04:00:59 +0000   Wed, 01 May 2024 03:45:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 May 2024 04:00:59 +0000   Wed, 01 May 2024 03:45:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 May 2024 04:00:59 +0000   Wed, 01 May 2024 03:45:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.158
	  Hostname:    default-k8s-diff-port-715118
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ca78afd83edb42498001e582216e9753
	  System UUID:                ca78afd8-3edb-4249-8001-e582216e9753
	  Boot ID:                    f24916e9-fc2a-4f3d-a80f-63bee0b9a0aa
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-bg755                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-7db6d8ff4d-mp6f5                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-default-k8s-diff-port-715118                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kube-apiserver-default-k8s-diff-port-715118             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-715118    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-2knrp                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-default-k8s-diff-port-715118             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 metrics-server-569cc877fc-xwxx9                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         16m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 16m   kube-proxy       
	  Normal  Starting                 16m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  16m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  16m   kubelet          Node default-k8s-diff-port-715118 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m   kubelet          Node default-k8s-diff-port-715118 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m   kubelet          Node default-k8s-diff-port-715118 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           16m   node-controller  Node default-k8s-diff-port-715118 event: Registered Node default-k8s-diff-port-715118 in Controller
	
	
	==> dmesg <==
	[  +0.045654] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[May 1 03:40] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.471209] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.570433] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.189641] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.134501] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +0.229895] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +0.135829] systemd-fstab-generator[679]: Ignoring "noauto" option for root device
	[  +0.341667] systemd-fstab-generator[710]: Ignoring "noauto" option for root device
	[  +5.336904] systemd-fstab-generator[807]: Ignoring "noauto" option for root device
	[  +0.061899] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.158285] systemd-fstab-generator[939]: Ignoring "noauto" option for root device
	[  +5.594307] kauditd_printk_skb: 97 callbacks suppressed
	[  +7.351619] kauditd_printk_skb: 50 callbacks suppressed
	[  +6.685649] kauditd_printk_skb: 27 callbacks suppressed
	[May 1 03:45] kauditd_printk_skb: 9 callbacks suppressed
	[  +1.607913] systemd-fstab-generator[3603]: Ignoring "noauto" option for root device
	[  +4.523756] kauditd_printk_skb: 53 callbacks suppressed
	[  +2.035180] systemd-fstab-generator[3925]: Ignoring "noauto" option for root device
	[ +13.932253] systemd-fstab-generator[4145]: Ignoring "noauto" option for root device
	[  +0.130974] kauditd_printk_skb: 14 callbacks suppressed
	[May 1 03:46] kauditd_printk_skb: 86 callbacks suppressed
	[May 1 04:01] hrtimer: interrupt took 2023261 ns
	
	
	==> etcd [8e6542cbc796b648420284c6f298ed9fd813087e54aa092fe7efe6fa2afcecac] <==
	{"level":"info","ts":"2024-05-01T03:45:15.725206Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"244d86dcb1337571","local-member-attributes":"{Name:default-k8s-diff-port-715118 ClientURLs:[https://192.168.72.158:2379]}","request-path":"/0/members/244d86dcb1337571/attributes","cluster-id":"c08228541f5dd967","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-01T03:45:15.725412Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-01T03:45:15.726542Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-01T03:45:15.743519Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-01T03:45:15.743622Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-01T03:45:15.761766Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-05-01T03:45:15.761929Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"c08228541f5dd967","local-member-id":"244d86dcb1337571","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-01T03:45:15.762033Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-01T03:45:15.762087Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-01T03:45:15.76212Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.158:2379"}
	{"level":"info","ts":"2024-05-01T03:55:15.824669Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":684}
	{"level":"info","ts":"2024-05-01T03:55:15.84004Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":684,"took":"15.020517ms","hash":2125099591,"current-db-size-bytes":2248704,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":2248704,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2024-05-01T03:55:15.840138Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2125099591,"revision":684,"compact-revision":-1}
	{"level":"info","ts":"2024-05-01T04:00:15.836289Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":928}
	{"level":"info","ts":"2024-05-01T04:00:15.841914Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":928,"took":"5.086416ms","hash":1322185750,"current-db-size-bytes":2248704,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":1634304,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-05-01T04:00:15.841976Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1322185750,"revision":928,"compact-revision":684}
	{"level":"info","ts":"2024-05-01T04:00:49.650643Z","caller":"traceutil/trace.go:171","msg":"trace[43189824] transaction","detail":"{read_only:false; response_revision:1200; number_of_response:1; }","duration":"111.95254ms","start":"2024-05-01T04:00:49.538632Z","end":"2024-05-01T04:00:49.650585Z","steps":["trace[43189824] 'process raft request'  (duration: 111.710862ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-01T04:00:50.972388Z","caller":"traceutil/trace.go:171","msg":"trace[494523521] transaction","detail":"{read_only:false; response_revision:1201; number_of_response:1; }","duration":"262.275392ms","start":"2024-05-01T04:00:50.710091Z","end":"2024-05-01T04:00:50.972367Z","steps":["trace[494523521] 'process raft request'  (duration: 261.943813ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-01T04:00:51.806967Z","caller":"traceutil/trace.go:171","msg":"trace[1472271950] transaction","detail":"{read_only:false; response_revision:1202; number_of_response:1; }","duration":"145.389684ms","start":"2024-05-01T04:00:51.661554Z","end":"2024-05-01T04:00:51.806944Z","steps":["trace[1472271950] 'process raft request'  (duration: 145.244579ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-01T04:01:19.539254Z","caller":"traceutil/trace.go:171","msg":"trace[849307203] transaction","detail":"{read_only:false; response_revision:1223; number_of_response:1; }","duration":"216.465157ms","start":"2024-05-01T04:01:19.322769Z","end":"2024-05-01T04:01:19.539234Z","steps":["trace[849307203] 'process raft request'  (duration: 154.214115ms)","trace[849307203] 'compare'  (duration: 62.057338ms)"],"step_count":2}
	{"level":"warn","ts":"2024-05-01T04:01:19.7942Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"115.053017ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-05-01T04:01:19.79457Z","caller":"traceutil/trace.go:171","msg":"trace[1710890173] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1223; }","duration":"115.970674ms","start":"2024-05-01T04:01:19.678575Z","end":"2024-05-01T04:01:19.794546Z","steps":["trace[1710890173] 'range keys from in-memory index tree'  (duration: 114.979187ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-01T04:01:20.225967Z","caller":"traceutil/trace.go:171","msg":"trace[789197940] transaction","detail":"{read_only:false; response_revision:1225; number_of_response:1; }","duration":"210.493645ms","start":"2024-05-01T04:01:20.015451Z","end":"2024-05-01T04:01:20.225945Z","steps":["trace[789197940] 'process raft request'  (duration: 209.995949ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-01T04:01:46.677571Z","caller":"traceutil/trace.go:171","msg":"trace[88204615] transaction","detail":"{read_only:false; response_revision:1246; number_of_response:1; }","duration":"299.590182ms","start":"2024-05-01T04:01:46.377858Z","end":"2024-05-01T04:01:46.677448Z","steps":["trace[88204615] 'process raft request'  (duration: 299.413598ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-01T04:02:13.003277Z","caller":"traceutil/trace.go:171","msg":"trace[791773879] transaction","detail":"{read_only:false; response_revision:1269; number_of_response:1; }","duration":"160.997832ms","start":"2024-05-01T04:02:12.842223Z","end":"2024-05-01T04:02:13.003221Z","steps":["trace[791773879] 'process raft request'  (duration: 160.818525ms)"],"step_count":1}
	
	
	==> kernel <==
	 04:02:13 up 22 min,  0 users,  load average: 0.06, 0.16, 0.18
	Linux default-k8s-diff-port-715118 5.10.207 #1 SMP Tue Apr 30 22:38:43 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [4ec59f96dc5ca807a780b0898a1ca13ae038a3e83e43df7eef31296e6f297120] <==
	I0501 03:56:18.435660       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0501 03:58:18.434996       1 handler_proxy.go:93] no RequestInfo found in the context
	E0501 03:58:18.435588       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0501 03:58:18.435659       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0501 03:58:18.435845       1 handler_proxy.go:93] no RequestInfo found in the context
	E0501 03:58:18.435938       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0501 03:58:18.437726       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0501 04:00:17.440251       1 handler_proxy.go:93] no RequestInfo found in the context
	E0501 04:00:17.440608       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0501 04:00:18.441371       1 handler_proxy.go:93] no RequestInfo found in the context
	W0501 04:00:18.441380       1 handler_proxy.go:93] no RequestInfo found in the context
	E0501 04:00:18.441777       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0501 04:00:18.441875       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0501 04:00:18.441867       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0501 04:00:18.443675       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0501 04:01:18.443107       1 handler_proxy.go:93] no RequestInfo found in the context
	E0501 04:01:18.443206       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0501 04:01:18.443220       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0501 04:01:18.444175       1 handler_proxy.go:93] no RequestInfo found in the context
	E0501 04:01:18.444368       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0501 04:01:18.444422       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [ec5e0a63d43c5edbf108ba506af6763ae952fa85072a3ceba633eccb0fd4c710] <==
	I0501 03:56:48.201969       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="87.416µs"
	E0501 03:57:02.911716       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0501 03:57:03.517176       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0501 03:57:32.917115       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0501 03:57:33.525256       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0501 03:58:02.922087       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0501 03:58:03.534754       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0501 03:58:32.928351       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0501 03:58:33.542672       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0501 03:59:02.933094       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0501 03:59:03.552882       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0501 03:59:32.938929       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0501 03:59:33.562757       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0501 04:00:02.944293       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0501 04:00:03.571138       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0501 04:00:32.951103       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0501 04:00:33.580005       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0501 04:01:02.957440       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0501 04:01:03.588623       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0501 04:01:32.961875       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0501 04:01:33.597733       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0501 04:01:38.206390       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="214.023µs"
	I0501 04:01:49.202931       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="181.914µs"
	E0501 04:02:02.967240       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0501 04:02:03.607959       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [11308e8bbf31d7b87ceb42faf1dbf32e184d440a9ff0a0138c0aadd47365b83a] <==
	I0501 03:45:34.508086       1 server_linux.go:69] "Using iptables proxy"
	I0501 03:45:34.538174       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.72.158"]
	I0501 03:45:34.695855       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0501 03:45:34.695917       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0501 03:45:34.695936       1 server_linux.go:165] "Using iptables Proxier"
	I0501 03:45:34.724663       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0501 03:45:34.724968       1 server.go:872] "Version info" version="v1.30.0"
	I0501 03:45:34.725010       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 03:45:34.726059       1 config.go:192] "Starting service config controller"
	I0501 03:45:34.726103       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0501 03:45:34.726140       1 config.go:101] "Starting endpoint slice config controller"
	I0501 03:45:34.726171       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0501 03:45:34.730943       1 config.go:319] "Starting node config controller"
	I0501 03:45:34.730985       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0501 03:45:34.827445       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0501 03:45:34.827540       1 shared_informer.go:320] Caches are synced for service config
	I0501 03:45:34.837410       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [2997ad24c9a671bec035780acc282ba18cf87b144bd77e595a59b06414d29f34] <==
	W0501 03:45:18.303447       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0501 03:45:18.303640       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0501 03:45:18.313080       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0501 03:45:18.313133       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0501 03:45:18.404035       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0501 03:45:18.404093       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0501 03:45:18.540939       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0501 03:45:18.541043       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0501 03:45:18.572743       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0501 03:45:18.572982       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0501 03:45:18.594732       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0501 03:45:18.595047       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0501 03:45:18.596546       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0501 03:45:18.597152       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0501 03:45:18.611656       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0501 03:45:18.611730       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0501 03:45:18.707290       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0501 03:45:18.707778       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0501 03:45:18.708142       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0501 03:45:18.708215       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0501 03:45:18.734199       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0501 03:45:18.734284       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0501 03:45:19.011905       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0501 03:45:19.011959       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0501 03:45:21.457931       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 01 03:59:44 default-k8s-diff-port-715118 kubelet[3932]: E0501 03:59:44.184597    3932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-xwxx9" podUID="a66f5df4-355c-47f0-8b6e-da29e1c4394e"
	May 01 03:59:56 default-k8s-diff-port-715118 kubelet[3932]: E0501 03:59:56.184907    3932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-xwxx9" podUID="a66f5df4-355c-47f0-8b6e-da29e1c4394e"
	May 01 04:00:10 default-k8s-diff-port-715118 kubelet[3932]: E0501 04:00:10.186776    3932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-xwxx9" podUID="a66f5df4-355c-47f0-8b6e-da29e1c4394e"
	May 01 04:00:20 default-k8s-diff-port-715118 kubelet[3932]: E0501 04:00:20.233956    3932 iptables.go:577] "Could not set up iptables canary" err=<
	May 01 04:00:20 default-k8s-diff-port-715118 kubelet[3932]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 01 04:00:20 default-k8s-diff-port-715118 kubelet[3932]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 01 04:00:20 default-k8s-diff-port-715118 kubelet[3932]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 01 04:00:20 default-k8s-diff-port-715118 kubelet[3932]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 01 04:00:25 default-k8s-diff-port-715118 kubelet[3932]: E0501 04:00:25.183677    3932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-xwxx9" podUID="a66f5df4-355c-47f0-8b6e-da29e1c4394e"
	May 01 04:00:39 default-k8s-diff-port-715118 kubelet[3932]: E0501 04:00:39.184594    3932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-xwxx9" podUID="a66f5df4-355c-47f0-8b6e-da29e1c4394e"
	May 01 04:00:51 default-k8s-diff-port-715118 kubelet[3932]: E0501 04:00:51.183896    3932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-xwxx9" podUID="a66f5df4-355c-47f0-8b6e-da29e1c4394e"
	May 01 04:01:02 default-k8s-diff-port-715118 kubelet[3932]: E0501 04:01:02.185581    3932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-xwxx9" podUID="a66f5df4-355c-47f0-8b6e-da29e1c4394e"
	May 01 04:01:14 default-k8s-diff-port-715118 kubelet[3932]: E0501 04:01:14.183717    3932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-xwxx9" podUID="a66f5df4-355c-47f0-8b6e-da29e1c4394e"
	May 01 04:01:20 default-k8s-diff-port-715118 kubelet[3932]: E0501 04:01:20.239665    3932 iptables.go:577] "Could not set up iptables canary" err=<
	May 01 04:01:20 default-k8s-diff-port-715118 kubelet[3932]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 01 04:01:20 default-k8s-diff-port-715118 kubelet[3932]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 01 04:01:20 default-k8s-diff-port-715118 kubelet[3932]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 01 04:01:20 default-k8s-diff-port-715118 kubelet[3932]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 01 04:01:26 default-k8s-diff-port-715118 kubelet[3932]: E0501 04:01:26.208238    3932 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	May 01 04:01:26 default-k8s-diff-port-715118 kubelet[3932]: E0501 04:01:26.208308    3932 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	May 01 04:01:26 default-k8s-diff-port-715118 kubelet[3932]: E0501 04:01:26.209690    3932 kuberuntime_manager.go:1256] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fp9jz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathE
xpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,Stdi
nOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-569cc877fc-xwxx9_kube-system(a66f5df4-355c-47f0-8b6e-da29e1c4394e): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	May 01 04:01:26 default-k8s-diff-port-715118 kubelet[3932]: E0501 04:01:26.209855    3932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-569cc877fc-xwxx9" podUID="a66f5df4-355c-47f0-8b6e-da29e1c4394e"
	May 01 04:01:38 default-k8s-diff-port-715118 kubelet[3932]: E0501 04:01:38.184311    3932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-xwxx9" podUID="a66f5df4-355c-47f0-8b6e-da29e1c4394e"
	May 01 04:01:49 default-k8s-diff-port-715118 kubelet[3932]: E0501 04:01:49.183646    3932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-xwxx9" podUID="a66f5df4-355c-47f0-8b6e-da29e1c4394e"
	May 01 04:02:04 default-k8s-diff-port-715118 kubelet[3932]: E0501 04:02:04.183670    3932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-xwxx9" podUID="a66f5df4-355c-47f0-8b6e-da29e1c4394e"
	
	
	==> storage-provisioner [3d8ba3db0459896edb75b12157ddbf8810613153a6df76d1e4eb406b8f8b6e62] <==
	I0501 03:45:36.325513       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0501 03:45:36.337266       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0501 03:45:36.337545       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0501 03:45:36.349200       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0501 03:45:36.350211       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-715118_c675c8ff-db06-4458-ad4e-38e4966957bd!
	I0501 03:45:36.349957       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c297f778-6158-476d-8a08-666ad6d4f2da", APIVersion:"v1", ResourceVersion:"412", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-715118_c675c8ff-db06-4458-ad4e-38e4966957bd became leader
	I0501 03:45:36.451639       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-715118_c675c8ff-db06-4458-ad4e-38e4966957bd!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-715118 -n default-k8s-diff-port-715118
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-715118 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-xwxx9
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-715118 describe pod metrics-server-569cc877fc-xwxx9
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-715118 describe pod metrics-server-569cc877fc-xwxx9: exit status 1 (74.074665ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-xwxx9" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-715118 describe pod metrics-server-569cc877fc-xwxx9: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (451.57s)
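The kubelet log above shows the metrics-server container stuck in ImagePullBackOff against fake.domain/registry.k8s.io/echoserver:1.4, so the pod never becomes Ready and the v1beta1.metrics.k8s.io APIService keeps returning 503 in the kube-apiserver log. A minimal sketch for confirming this on a live profile, assuming the addon's Deployment is named metrics-server in kube-system (the pod and ReplicaSet names above suggest it is):

	kubectl --context default-k8s-diff-port-715118 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'
	kubectl --context default-k8s-diff-port-715118 get apiservice v1beta1.metrics.k8s.io

The first command prints the image the Deployment actually references (the container spec in the kubelet log shows the fake.domain registry override); the second shows whether the metrics APIService reports Available, which it cannot while the backing pod is not Ready.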

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (277.32s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-892672 -n no-preload-892672
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-05-01 04:00:17.609836813 +0000 UTC m=+6792.715753103
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-892672 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-892672 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.812µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-892672 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
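The deployment info above is empty because the kubectl describe call itself hit the test's context deadline. A minimal sketch for gathering the same information on a live profile, using only the context and namespace already shown in this trace:

	kubectl --context no-preload-892672 -n kubernetes-dashboard get deploy -o wide
	kubectl --context no-preload-892672 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard

get deploy -o wide includes CONTAINERS and IMAGES columns, which is the kind of information the failed assertion at start_stop_delete_test.go:297 was looking for; the pod listing shows whether any pod matching k8s-app=kubernetes-dashboard was created at all within the 9m0s wait.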
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-892672 -n no-preload-892672
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-892672 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-892672 logs -n 25: (1.609040248s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p no-preload-892672                                   | no-preload-892672            | jenkins | v1.33.0 | 01 May 24 03:30 UTC | 01 May 24 03:32 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-277128                                  | embed-certs-277128           | jenkins | v1.33.0 | 01 May 24 03:30 UTC | 01 May 24 03:32 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-046243                           | kubernetes-upgrade-046243    | jenkins | v1.33.0 | 01 May 24 03:30 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-046243                           | kubernetes-upgrade-046243    | jenkins | v1.33.0 | 01 May 24 03:30 UTC | 01 May 24 03:32 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-046243                           | kubernetes-upgrade-046243    | jenkins | v1.33.0 | 01 May 24 03:32 UTC | 01 May 24 03:32 UTC |
	| delete  | -p                                                     | disable-driver-mounts-483221 | jenkins | v1.33.0 | 01 May 24 03:32 UTC | 01 May 24 03:32 UTC |
	|         | disable-driver-mounts-483221                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-715118 | jenkins | v1.33.0 | 01 May 24 03:32 UTC | 01 May 24 03:33 UTC |
	|         | default-k8s-diff-port-715118                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-892672             | no-preload-892672            | jenkins | v1.33.0 | 01 May 24 03:32 UTC | 01 May 24 03:32 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-892672                                   | no-preload-892672            | jenkins | v1.33.0 | 01 May 24 03:32 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-277128            | embed-certs-277128           | jenkins | v1.33.0 | 01 May 24 03:32 UTC | 01 May 24 03:32 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-277128                                  | embed-certs-277128           | jenkins | v1.33.0 | 01 May 24 03:32 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-715118  | default-k8s-diff-port-715118 | jenkins | v1.33.0 | 01 May 24 03:33 UTC | 01 May 24 03:33 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-715118 | jenkins | v1.33.0 | 01 May 24 03:33 UTC |                     |
	|         | default-k8s-diff-port-715118                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-892672                  | no-preload-892672            | jenkins | v1.33.0 | 01 May 24 03:34 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-277128                 | embed-certs-277128           | jenkins | v1.33.0 | 01 May 24 03:34 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-892672                                   | no-preload-892672            | jenkins | v1.33.0 | 01 May 24 03:34 UTC | 01 May 24 03:46 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-503971        | old-k8s-version-503971       | jenkins | v1.33.0 | 01 May 24 03:34 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-277128                                  | embed-certs-277128           | jenkins | v1.33.0 | 01 May 24 03:34 UTC | 01 May 24 03:44 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-715118       | default-k8s-diff-port-715118 | jenkins | v1.33.0 | 01 May 24 03:35 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-715118 | jenkins | v1.33.0 | 01 May 24 03:35 UTC | 01 May 24 03:45 UTC |
	|         | default-k8s-diff-port-715118                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-503971                              | old-k8s-version-503971       | jenkins | v1.33.0 | 01 May 24 03:36 UTC | 01 May 24 03:36 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-503971             | old-k8s-version-503971       | jenkins | v1.33.0 | 01 May 24 03:36 UTC | 01 May 24 03:36 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-503971                              | old-k8s-version-503971       | jenkins | v1.33.0 | 01 May 24 03:36 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-503971                              | old-k8s-version-503971       | jenkins | v1.33.0 | 01 May 24 04:00 UTC | 01 May 24 04:00 UTC |
	| start   | -p newest-cni-906018 --memory=2200 --alsologtostderr   | newest-cni-906018            | jenkins | v1.33.0 | 01 May 24 04:00 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/01 04:00:17
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0501 04:00:17.117291   75701 out.go:291] Setting OutFile to fd 1 ...
	I0501 04:00:17.117426   75701 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 04:00:17.117435   75701 out.go:304] Setting ErrFile to fd 2...
	I0501 04:00:17.117439   75701 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 04:00:17.117634   75701 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18779-13391/.minikube/bin
	I0501 04:00:17.118209   75701 out.go:298] Setting JSON to false
	I0501 04:00:17.119147   75701 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":9760,"bootTime":1714526257,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0501 04:00:17.119202   75701 start.go:139] virtualization: kvm guest
	I0501 04:00:17.121577   75701 out.go:177] * [newest-cni-906018] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0501 04:00:17.123158   75701 notify.go:220] Checking for updates...
	I0501 04:00:17.123166   75701 out.go:177]   - MINIKUBE_LOCATION=18779
	I0501 04:00:17.124550   75701 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0501 04:00:17.125663   75701 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18779-13391/kubeconfig
	I0501 04:00:17.126851   75701 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18779-13391/.minikube
	I0501 04:00:17.127982   75701 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0501 04:00:17.129175   75701 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0501 04:00:17.130986   75701 config.go:182] Loaded profile config "default-k8s-diff-port-715118": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 04:00:17.131133   75701 config.go:182] Loaded profile config "embed-certs-277128": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 04:00:17.131276   75701 config.go:182] Loaded profile config "no-preload-892672": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 04:00:17.131390   75701 driver.go:392] Setting default libvirt URI to qemu:///system
	I0501 04:00:17.168580   75701 out.go:177] * Using the kvm2 driver based on user configuration
	I0501 04:00:17.169984   75701 start.go:297] selected driver: kvm2
	I0501 04:00:17.169999   75701 start.go:901] validating driver "kvm2" against <nil>
	I0501 04:00:17.170009   75701 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0501 04:00:17.170837   75701 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0501 04:00:17.170904   75701 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18779-13391/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0501 04:00:17.185826   75701 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0501 04:00:17.185878   75701 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0501 04:00:17.185901   75701 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0501 04:00:17.186105   75701 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0501 04:00:17.186164   75701 cni.go:84] Creating CNI manager for ""
	I0501 04:00:17.186176   75701 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0501 04:00:17.186188   75701 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0501 04:00:17.186235   75701 start.go:340] cluster config:
	{Name:newest-cni-906018 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:newest-cni-906018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false C
ustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 04:00:17.186324   75701 iso.go:125] acquiring lock: {Name:mkcd2496aadb29931e179193d707c731bfda98b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0501 04:00:17.188002   75701 out.go:177] * Starting "newest-cni-906018" primary control-plane node in "newest-cni-906018" cluster
	I0501 04:00:17.189196   75701 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0501 04:00:17.189226   75701 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0501 04:00:17.189233   75701 cache.go:56] Caching tarball of preloaded images
	I0501 04:00:17.189316   75701 preload.go:173] Found /home/jenkins/minikube-integration/18779-13391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0501 04:00:17.189326   75701 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0501 04:00:17.189410   75701 profile.go:143] Saving config to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/newest-cni-906018/config.json ...
	I0501 04:00:17.189426   75701 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/newest-cni-906018/config.json: {Name:mk36e297e787aa320875d4c2133eb9c1395184fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 04:00:17.189535   75701 start.go:360] acquireMachinesLock for newest-cni-906018: {Name:mkc4548e5afe1d4b0a833f6c522103562b0cefff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0501 04:00:17.189561   75701 start.go:364] duration metric: took 14.659µs to acquireMachinesLock for "newest-cni-906018"
	I0501 04:00:17.189585   75701 start.go:93] Provisioning new machine with config: &{Name:newest-cni-906018 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.30.0 ClusterName:newest-cni-906018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount
9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0501 04:00:17.189666   75701 start.go:125] createHost starting for "" (driver="kvm2")
	
	
	==> CRI-O <==
	May 01 04:00:18 no-preload-892672 crio[731]: time="2024-05-01 04:00:18.385544677Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714536018385524149,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99941,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ee49d0df-dc1d-4ae7-8ffa-ea2762dc71df name=/runtime.v1.ImageService/ImageFsInfo
	May 01 04:00:18 no-preload-892672 crio[731]: time="2024-05-01 04:00:18.386245194Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=768c5735-1a64-420d-8c4e-2c8c16712cac name=/runtime.v1.RuntimeService/ListContainers
	May 01 04:00:18 no-preload-892672 crio[731]: time="2024-05-01 04:00:18.386301159Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=768c5735-1a64-420d-8c4e-2c8c16712cac name=/runtime.v1.RuntimeService/ListContainers
	May 01 04:00:18 no-preload-892672 crio[731]: time="2024-05-01 04:00:18.387042184Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:229139f4b20254ba487deecee0957c02e4f011770d365596c0c3b1a7cb75aafe,PodSandboxId:36d1422b84d96f29c7b5c5c115029f07e361297527c5cda590996788e6df2618,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714535195778139410,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-c6lnj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8b8c1f1-7696-43f2-98be-339f99963e7c,},Annotations:map[string]string{io.kubernetes.container.hash: dd92bd6c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b243772338376823399e57b784e713d89c9c15400f25af5fed738127fe432a08,PodSandboxId:541b7bcfe6dd1ba293905ff34808d1eaae351f7f54d5d8c239bf7fc63d25f7f4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714535195700634320,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-57k52,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: f98cb358-71ba-49c5-8213-0f3160c6e38b,},Annotations:map[string]string{io.kubernetes.container.hash: f7e62959,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04f79e62716c5afcb3b939952b3de3a05e6166f5847ade1fbd8dca444a3fa313,PodSandboxId:a46f8f22e3d4c24f67bf26ccaaca42a528a92292e7031601647d30ba5c57d02e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNIN
G,CreatedAt:1714535195294421620,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-czsqz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4254b019-b6c8-4ff9-a361-c96eaf20dc65,},Annotations:map[string]string{io.kubernetes.container.hash: 3d6570b6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a075f10431025603b7e9b5776296ff25449e3e5d51294564a01819472c4dca0,PodSandboxId:7a62bb2a7d3f6de789b97414c2171f092bb841d9041b3ec47c00196e6d8d1ecc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:171453519514
9151540,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b55b7e8b-4de0-40f8-96ff-bf0b550699d1,},Annotations:map[string]string{io.kubernetes.container.hash: 3f614e11,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbdcf5ff9f94c64a25e5d5db485d98b35e56f58847e7ee075ec3a11b9b03f77e,PodSandboxId:832bd8bc8daecc585f587d113c26ea91219068eef2b7f50c9f3dbf5975a1cd7e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714535173470897940,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-892672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb6628181fb0fd531dcdedce99926112,},Annotations:map[string]string{io.kubernetes.container.hash: 4e95860,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:188801d1d61ccf3dc55289bf9fd5e10246328ef4baecbfa211addd80c00d256a,PodSandboxId:f37b4459a01badcd37acdb1d54d3055b85e5279497875c6733ac116960476f52,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714535173438381585,Labels:map[string]string{io.kubernetes.container.name
: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-892672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 043564780f07ce23cfcadab65c7a3f99,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49c8b9ee369c3f7c9f427f2e761135d1c6b58c7847503aa7a66c55f5046fa31f,PodSandboxId:cc89d396df98699a11f805c34c3d86f49e0908ab5497295df6019472cd74c88d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714535173383895353,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-892672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46dd0fcfa81ec0b723fbe5f0491243d6,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94dcf48a1f2ca610f98afb5926a769111580b5a6b7ac380fe96fde3d9d32804e,PodSandboxId:242654a439354540c50f23961d2d1b4ed7eba4bcb23dc3009c96cb9d447706fb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714535173346150522,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-892672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31f60d94b4d435b7b8a84f622c3f01ba,},Annotations:map[string]string{io.kubernetes.container.hash: 68bdc315,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=768c5735-1a64-420d-8c4e-2c8c16712cac name=/runtime.v1.RuntimeService/ListContainers
	May 01 04:00:18 no-preload-892672 crio[731]: time="2024-05-01 04:00:18.438884277Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b8e60082-450b-4804-a5f0-4bb6e68bb382 name=/runtime.v1.RuntimeService/Version
	May 01 04:00:18 no-preload-892672 crio[731]: time="2024-05-01 04:00:18.438988591Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b8e60082-450b-4804-a5f0-4bb6e68bb382 name=/runtime.v1.RuntimeService/Version
	May 01 04:00:18 no-preload-892672 crio[731]: time="2024-05-01 04:00:18.442449567Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3ec5daa1-71c4-46b4-854f-3f27e2676ea5 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 04:00:18 no-preload-892672 crio[731]: time="2024-05-01 04:00:18.443536133Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714536018443500600,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99941,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3ec5daa1-71c4-46b4-854f-3f27e2676ea5 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 04:00:18 no-preload-892672 crio[731]: time="2024-05-01 04:00:18.444436733Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5cd125c8-5077-481f-a54d-d0423a2c806e name=/runtime.v1.RuntimeService/ListContainers
	May 01 04:00:18 no-preload-892672 crio[731]: time="2024-05-01 04:00:18.444493018Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5cd125c8-5077-481f-a54d-d0423a2c806e name=/runtime.v1.RuntimeService/ListContainers
	May 01 04:00:18 no-preload-892672 crio[731]: time="2024-05-01 04:00:18.444769498Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:229139f4b20254ba487deecee0957c02e4f011770d365596c0c3b1a7cb75aafe,PodSandboxId:36d1422b84d96f29c7b5c5c115029f07e361297527c5cda590996788e6df2618,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714535195778139410,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-c6lnj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8b8c1f1-7696-43f2-98be-339f99963e7c,},Annotations:map[string]string{io.kubernetes.container.hash: dd92bd6c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b243772338376823399e57b784e713d89c9c15400f25af5fed738127fe432a08,PodSandboxId:541b7bcfe6dd1ba293905ff34808d1eaae351f7f54d5d8c239bf7fc63d25f7f4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714535195700634320,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-57k52,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: f98cb358-71ba-49c5-8213-0f3160c6e38b,},Annotations:map[string]string{io.kubernetes.container.hash: f7e62959,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04f79e62716c5afcb3b939952b3de3a05e6166f5847ade1fbd8dca444a3fa313,PodSandboxId:a46f8f22e3d4c24f67bf26ccaaca42a528a92292e7031601647d30ba5c57d02e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNIN
G,CreatedAt:1714535195294421620,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-czsqz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4254b019-b6c8-4ff9-a361-c96eaf20dc65,},Annotations:map[string]string{io.kubernetes.container.hash: 3d6570b6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a075f10431025603b7e9b5776296ff25449e3e5d51294564a01819472c4dca0,PodSandboxId:7a62bb2a7d3f6de789b97414c2171f092bb841d9041b3ec47c00196e6d8d1ecc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:171453519514
9151540,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b55b7e8b-4de0-40f8-96ff-bf0b550699d1,},Annotations:map[string]string{io.kubernetes.container.hash: 3f614e11,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbdcf5ff9f94c64a25e5d5db485d98b35e56f58847e7ee075ec3a11b9b03f77e,PodSandboxId:832bd8bc8daecc585f587d113c26ea91219068eef2b7f50c9f3dbf5975a1cd7e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714535173470897940,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-892672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb6628181fb0fd531dcdedce99926112,},Annotations:map[string]string{io.kubernetes.container.hash: 4e95860,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:188801d1d61ccf3dc55289bf9fd5e10246328ef4baecbfa211addd80c00d256a,PodSandboxId:f37b4459a01badcd37acdb1d54d3055b85e5279497875c6733ac116960476f52,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714535173438381585,Labels:map[string]string{io.kubernetes.container.name
: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-892672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 043564780f07ce23cfcadab65c7a3f99,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49c8b9ee369c3f7c9f427f2e761135d1c6b58c7847503aa7a66c55f5046fa31f,PodSandboxId:cc89d396df98699a11f805c34c3d86f49e0908ab5497295df6019472cd74c88d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714535173383895353,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-892672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46dd0fcfa81ec0b723fbe5f0491243d6,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94dcf48a1f2ca610f98afb5926a769111580b5a6b7ac380fe96fde3d9d32804e,PodSandboxId:242654a439354540c50f23961d2d1b4ed7eba4bcb23dc3009c96cb9d447706fb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714535173346150522,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-892672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31f60d94b4d435b7b8a84f622c3f01ba,},Annotations:map[string]string{io.kubernetes.container.hash: 68bdc315,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5cd125c8-5077-481f-a54d-d0423a2c806e name=/runtime.v1.RuntimeService/ListContainers
	May 01 04:00:18 no-preload-892672 crio[731]: time="2024-05-01 04:00:18.506072334Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fb700e7f-1115-461b-abb9-f46ae9ddef9b name=/runtime.v1.RuntimeService/Version
	May 01 04:00:18 no-preload-892672 crio[731]: time="2024-05-01 04:00:18.506215919Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fb700e7f-1115-461b-abb9-f46ae9ddef9b name=/runtime.v1.RuntimeService/Version
	May 01 04:00:18 no-preload-892672 crio[731]: time="2024-05-01 04:00:18.507654812Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2ba5d9e8-4e7c-4b37-af31-9a51d0f5f36a name=/runtime.v1.ImageService/ImageFsInfo
	May 01 04:00:18 no-preload-892672 crio[731]: time="2024-05-01 04:00:18.508344667Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714536018508319584,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99941,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2ba5d9e8-4e7c-4b37-af31-9a51d0f5f36a name=/runtime.v1.ImageService/ImageFsInfo
	May 01 04:00:18 no-preload-892672 crio[731]: time="2024-05-01 04:00:18.509045262Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e3299e84-47f3-4c0b-9c57-4247cd130799 name=/runtime.v1.RuntimeService/ListContainers
	May 01 04:00:18 no-preload-892672 crio[731]: time="2024-05-01 04:00:18.509162733Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e3299e84-47f3-4c0b-9c57-4247cd130799 name=/runtime.v1.RuntimeService/ListContainers
	May 01 04:00:18 no-preload-892672 crio[731]: time="2024-05-01 04:00:18.509461693Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:229139f4b20254ba487deecee0957c02e4f011770d365596c0c3b1a7cb75aafe,PodSandboxId:36d1422b84d96f29c7b5c5c115029f07e361297527c5cda590996788e6df2618,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714535195778139410,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-c6lnj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8b8c1f1-7696-43f2-98be-339f99963e7c,},Annotations:map[string]string{io.kubernetes.container.hash: dd92bd6c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b243772338376823399e57b784e713d89c9c15400f25af5fed738127fe432a08,PodSandboxId:541b7bcfe6dd1ba293905ff34808d1eaae351f7f54d5d8c239bf7fc63d25f7f4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714535195700634320,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-57k52,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: f98cb358-71ba-49c5-8213-0f3160c6e38b,},Annotations:map[string]string{io.kubernetes.container.hash: f7e62959,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04f79e62716c5afcb3b939952b3de3a05e6166f5847ade1fbd8dca444a3fa313,PodSandboxId:a46f8f22e3d4c24f67bf26ccaaca42a528a92292e7031601647d30ba5c57d02e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNIN
G,CreatedAt:1714535195294421620,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-czsqz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4254b019-b6c8-4ff9-a361-c96eaf20dc65,},Annotations:map[string]string{io.kubernetes.container.hash: 3d6570b6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a075f10431025603b7e9b5776296ff25449e3e5d51294564a01819472c4dca0,PodSandboxId:7a62bb2a7d3f6de789b97414c2171f092bb841d9041b3ec47c00196e6d8d1ecc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:171453519514
9151540,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b55b7e8b-4de0-40f8-96ff-bf0b550699d1,},Annotations:map[string]string{io.kubernetes.container.hash: 3f614e11,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbdcf5ff9f94c64a25e5d5db485d98b35e56f58847e7ee075ec3a11b9b03f77e,PodSandboxId:832bd8bc8daecc585f587d113c26ea91219068eef2b7f50c9f3dbf5975a1cd7e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714535173470897940,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-892672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb6628181fb0fd531dcdedce99926112,},Annotations:map[string]string{io.kubernetes.container.hash: 4e95860,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:188801d1d61ccf3dc55289bf9fd5e10246328ef4baecbfa211addd80c00d256a,PodSandboxId:f37b4459a01badcd37acdb1d54d3055b85e5279497875c6733ac116960476f52,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714535173438381585,Labels:map[string]string{io.kubernetes.container.name
: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-892672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 043564780f07ce23cfcadab65c7a3f99,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49c8b9ee369c3f7c9f427f2e761135d1c6b58c7847503aa7a66c55f5046fa31f,PodSandboxId:cc89d396df98699a11f805c34c3d86f49e0908ab5497295df6019472cd74c88d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714535173383895353,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-892672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46dd0fcfa81ec0b723fbe5f0491243d6,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94dcf48a1f2ca610f98afb5926a769111580b5a6b7ac380fe96fde3d9d32804e,PodSandboxId:242654a439354540c50f23961d2d1b4ed7eba4bcb23dc3009c96cb9d447706fb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714535173346150522,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-892672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31f60d94b4d435b7b8a84f622c3f01ba,},Annotations:map[string]string{io.kubernetes.container.hash: 68bdc315,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e3299e84-47f3-4c0b-9c57-4247cd130799 name=/runtime.v1.RuntimeService/ListContainers
	May 01 04:00:18 no-preload-892672 crio[731]: time="2024-05-01 04:00:18.567938723Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6227ba09-3e2b-458c-a929-f298ac489698 name=/runtime.v1.RuntimeService/Version
	May 01 04:00:18 no-preload-892672 crio[731]: time="2024-05-01 04:00:18.568038928Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6227ba09-3e2b-458c-a929-f298ac489698 name=/runtime.v1.RuntimeService/Version
	May 01 04:00:18 no-preload-892672 crio[731]: time="2024-05-01 04:00:18.569743722Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=42903740-7c88-4b53-822e-9762502abdfb name=/runtime.v1.ImageService/ImageFsInfo
	May 01 04:00:18 no-preload-892672 crio[731]: time="2024-05-01 04:00:18.570275374Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714536018570238159,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99941,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=42903740-7c88-4b53-822e-9762502abdfb name=/runtime.v1.ImageService/ImageFsInfo
	May 01 04:00:18 no-preload-892672 crio[731]: time="2024-05-01 04:00:18.571374535Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0244dd81-23ca-4d78-b4c9-accbbe76aec1 name=/runtime.v1.RuntimeService/ListContainers
	May 01 04:00:18 no-preload-892672 crio[731]: time="2024-05-01 04:00:18.571481601Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0244dd81-23ca-4d78-b4c9-accbbe76aec1 name=/runtime.v1.RuntimeService/ListContainers
	May 01 04:00:18 no-preload-892672 crio[731]: time="2024-05-01 04:00:18.571972182Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:229139f4b20254ba487deecee0957c02e4f011770d365596c0c3b1a7cb75aafe,PodSandboxId:36d1422b84d96f29c7b5c5c115029f07e361297527c5cda590996788e6df2618,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714535195778139410,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-c6lnj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8b8c1f1-7696-43f2-98be-339f99963e7c,},Annotations:map[string]string{io.kubernetes.container.hash: dd92bd6c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b243772338376823399e57b784e713d89c9c15400f25af5fed738127fe432a08,PodSandboxId:541b7bcfe6dd1ba293905ff34808d1eaae351f7f54d5d8c239bf7fc63d25f7f4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714535195700634320,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-57k52,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: f98cb358-71ba-49c5-8213-0f3160c6e38b,},Annotations:map[string]string{io.kubernetes.container.hash: f7e62959,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04f79e62716c5afcb3b939952b3de3a05e6166f5847ade1fbd8dca444a3fa313,PodSandboxId:a46f8f22e3d4c24f67bf26ccaaca42a528a92292e7031601647d30ba5c57d02e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNIN
G,CreatedAt:1714535195294421620,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-czsqz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4254b019-b6c8-4ff9-a361-c96eaf20dc65,},Annotations:map[string]string{io.kubernetes.container.hash: 3d6570b6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a075f10431025603b7e9b5776296ff25449e3e5d51294564a01819472c4dca0,PodSandboxId:7a62bb2a7d3f6de789b97414c2171f092bb841d9041b3ec47c00196e6d8d1ecc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:171453519514
9151540,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b55b7e8b-4de0-40f8-96ff-bf0b550699d1,},Annotations:map[string]string{io.kubernetes.container.hash: 3f614e11,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbdcf5ff9f94c64a25e5d5db485d98b35e56f58847e7ee075ec3a11b9b03f77e,PodSandboxId:832bd8bc8daecc585f587d113c26ea91219068eef2b7f50c9f3dbf5975a1cd7e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714535173470897940,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-892672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb6628181fb0fd531dcdedce99926112,},Annotations:map[string]string{io.kubernetes.container.hash: 4e95860,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:188801d1d61ccf3dc55289bf9fd5e10246328ef4baecbfa211addd80c00d256a,PodSandboxId:f37b4459a01badcd37acdb1d54d3055b85e5279497875c6733ac116960476f52,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714535173438381585,Labels:map[string]string{io.kubernetes.container.name
: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-892672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 043564780f07ce23cfcadab65c7a3f99,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49c8b9ee369c3f7c9f427f2e761135d1c6b58c7847503aa7a66c55f5046fa31f,PodSandboxId:cc89d396df98699a11f805c34c3d86f49e0908ab5497295df6019472cd74c88d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714535173383895353,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-892672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46dd0fcfa81ec0b723fbe5f0491243d6,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94dcf48a1f2ca610f98afb5926a769111580b5a6b7ac380fe96fde3d9d32804e,PodSandboxId:242654a439354540c50f23961d2d1b4ed7eba4bcb23dc3009c96cb9d447706fb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714535173346150522,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-892672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31f60d94b4d435b7b8a84f622c3f01ba,},Annotations:map[string]string{io.kubernetes.container.hash: 68bdc315,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0244dd81-23ca-4d78-b4c9-accbbe76aec1 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	229139f4b2025       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   13 minutes ago      Running             coredns                   0                   36d1422b84d96       coredns-7db6d8ff4d-c6lnj
	b243772338376       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   13 minutes ago      Running             coredns                   0                   541b7bcfe6dd1       coredns-7db6d8ff4d-57k52
	04f79e62716c5       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b   13 minutes ago      Running             kube-proxy                0                   a46f8f22e3d4c       kube-proxy-czsqz
	9a075f1043102       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 minutes ago      Running             storage-provisioner       0                   7a62bb2a7d3f6       storage-provisioner
	fbdcf5ff9f94c       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   14 minutes ago      Running             etcd                      2                   832bd8bc8daec       etcd-no-preload-892672
	188801d1d61cc       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced   14 minutes ago      Running             kube-scheduler            2                   f37b4459a01ba       kube-scheduler-no-preload-892672
	49c8b9ee369c3       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b   14 minutes ago      Running             kube-controller-manager   2                   cc89d396df986       kube-controller-manager-no-preload-892672
	94dcf48a1f2ca       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0   14 minutes ago      Running             kube-apiserver            2                   242654a439354       kube-apiserver-no-preload-892672
	
	
	==> coredns [229139f4b20254ba487deecee0957c02e4f011770d365596c0c3b1a7cb75aafe] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [b243772338376823399e57b784e713d89c9c15400f25af5fed738127fe432a08] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               no-preload-892672
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-892672
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e
	                    minikube.k8s.io/name=no-preload-892672
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_01T03_46_19_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 May 2024 03:46:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-892672
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 May 2024 04:00:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 May 2024 03:56:52 +0000   Wed, 01 May 2024 03:46:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 May 2024 03:56:52 +0000   Wed, 01 May 2024 03:46:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 May 2024 03:56:52 +0000   Wed, 01 May 2024 03:46:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 May 2024 03:56:52 +0000   Wed, 01 May 2024 03:46:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.144
	  Hostname:    no-preload-892672
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c0d4545d9aa14df2be84b492fcdf0657
	  System UUID:                c0d4545d-9aa1-4df2-be84-b492fcdf0657
	  Boot ID:                    17a54706-8e44-454d-a770-5b63194216fc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-57k52                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-7db6d8ff4d-c6lnj                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-no-preload-892672                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kube-apiserver-no-preload-892672             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-no-preload-892672    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-czsqz                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-no-preload-892672             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 metrics-server-569cc877fc-5m5qf              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         13m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 13m   kube-proxy       
	  Normal  Starting                 14m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m   kubelet          Node no-preload-892672 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m   kubelet          Node no-preload-892672 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m   kubelet          Node no-preload-892672 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13m   node-controller  Node no-preload-892672 event: Registered Node no-preload-892672 in Controller
	
	
	==> dmesg <==
	[  +0.046797] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.185471] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.663624] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.733241] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.536645] systemd-fstab-generator[647]: Ignoring "noauto" option for root device
	[  +0.063622] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.070077] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[  +0.198940] systemd-fstab-generator[674]: Ignoring "noauto" option for root device
	[  +0.144815] systemd-fstab-generator[686]: Ignoring "noauto" option for root device
	[  +0.324977] systemd-fstab-generator[715]: Ignoring "noauto" option for root device
	[May 1 03:41] systemd-fstab-generator[1246]: Ignoring "noauto" option for root device
	[  +0.064165] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.066241] systemd-fstab-generator[1372]: Ignoring "noauto" option for root device
	[  +5.563183] kauditd_printk_skb: 94 callbacks suppressed
	[  +7.373870] kauditd_printk_skb: 50 callbacks suppressed
	[  +6.156674] kauditd_printk_skb: 24 callbacks suppressed
	[May 1 03:46] kauditd_printk_skb: 5 callbacks suppressed
	[  +2.503242] systemd-fstab-generator[4049]: Ignoring "noauto" option for root device
	[  +4.664129] kauditd_printk_skb: 57 callbacks suppressed
	[  +1.922428] systemd-fstab-generator[4369]: Ignoring "noauto" option for root device
	[ +14.484031] systemd-fstab-generator[4584]: Ignoring "noauto" option for root device
	[  +0.132829] kauditd_printk_skb: 14 callbacks suppressed
	[May 1 03:47] kauditd_printk_skb: 80 callbacks suppressed
	
	
	==> etcd [fbdcf5ff9f94c64a25e5d5db485d98b35e56f58847e7ee075ec3a11b9b03f77e] <==
	{"level":"info","ts":"2024-05-01T03:46:14.128916Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"42163c43c38ae515","initial-advertise-peer-urls":["https://192.168.39.144:2380"],"listen-peer-urls":["https://192.168.39.144:2380"],"advertise-client-urls":["https://192.168.39.144:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.144:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-05-01T03:46:14.129025Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-05-01T03:46:14.129147Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.144:2380"}
	{"level":"info","ts":"2024-05-01T03:46:14.129189Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.144:2380"}
	{"level":"info","ts":"2024-05-01T03:46:14.172106Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"42163c43c38ae515 is starting a new election at term 1"}
	{"level":"info","ts":"2024-05-01T03:46:14.172283Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"42163c43c38ae515 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-05-01T03:46:14.172423Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"42163c43c38ae515 received MsgPreVoteResp from 42163c43c38ae515 at term 1"}
	{"level":"info","ts":"2024-05-01T03:46:14.172511Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"42163c43c38ae515 became candidate at term 2"}
	{"level":"info","ts":"2024-05-01T03:46:14.172567Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"42163c43c38ae515 received MsgVoteResp from 42163c43c38ae515 at term 2"}
	{"level":"info","ts":"2024-05-01T03:46:14.172598Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"42163c43c38ae515 became leader at term 2"}
	{"level":"info","ts":"2024-05-01T03:46:14.172679Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 42163c43c38ae515 elected leader 42163c43c38ae515 at term 2"}
	{"level":"info","ts":"2024-05-01T03:46:14.174773Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"42163c43c38ae515","local-member-attributes":"{Name:no-preload-892672 ClientURLs:[https://192.168.39.144:2379]}","request-path":"/0/members/42163c43c38ae515/attributes","cluster-id":"b6240fb2000e40e9","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-01T03:46:14.178072Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-01T03:46:14.178613Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-01T03:46:14.178933Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-01T03:46:14.181873Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-01T03:46:14.18192Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-01T03:46:14.187489Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.144:2379"}
	{"level":"info","ts":"2024-05-01T03:46:14.189997Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b6240fb2000e40e9","local-member-id":"42163c43c38ae515","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-01T03:46:14.190171Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-01T03:46:14.191918Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-01T03:46:14.190595Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-05-01T03:56:14.289484Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":682}
	{"level":"info","ts":"2024-05-01T03:56:14.29966Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":682,"took":"9.150674ms","hash":968182765,"current-db-size-bytes":2195456,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":2195456,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2024-05-01T03:56:14.299878Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":968182765,"revision":682,"compact-revision":-1}
	
	
	==> kernel <==
	 04:00:19 up 19 min,  0 users,  load average: 0.14, 0.11, 0.13
	Linux no-preload-892672 5.10.207 #1 SMP Tue Apr 30 22:38:43 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [94dcf48a1f2ca610f98afb5926a769111580b5a6b7ac380fe96fde3d9d32804e] <==
	I0501 03:54:17.343657       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0501 03:56:16.344282       1 handler_proxy.go:93] no RequestInfo found in the context
	E0501 03:56:16.344573       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0501 03:56:17.345554       1 handler_proxy.go:93] no RequestInfo found in the context
	E0501 03:56:17.345690       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0501 03:56:17.345707       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0501 03:56:17.345869       1 handler_proxy.go:93] no RequestInfo found in the context
	E0501 03:56:17.345906       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0501 03:56:17.346906       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0501 03:57:17.346130       1 handler_proxy.go:93] no RequestInfo found in the context
	E0501 03:57:17.346263       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0501 03:57:17.346274       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0501 03:57:17.347330       1 handler_proxy.go:93] no RequestInfo found in the context
	E0501 03:57:17.347423       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0501 03:57:17.347469       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0501 03:59:17.346551       1 handler_proxy.go:93] no RequestInfo found in the context
	E0501 03:59:17.346731       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0501 03:59:17.346740       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0501 03:59:17.347976       1 handler_proxy.go:93] no RequestInfo found in the context
	E0501 03:59:17.348046       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0501 03:59:17.348056       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [49c8b9ee369c3f7c9f427f2e761135d1c6b58c7847503aa7a66c55f5046fa31f] <==
	I0501 03:54:33.651256       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0501 03:55:03.170564       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0501 03:55:03.664149       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0501 03:55:33.177704       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0501 03:55:33.673430       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0501 03:56:03.183351       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0501 03:56:03.684365       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0501 03:56:33.190106       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0501 03:56:33.692365       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0501 03:57:03.196005       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0501 03:57:03.701278       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0501 03:57:33.202515       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0501 03:57:33.710151       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0501 03:57:40.975580       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="87.407µs"
	I0501 03:57:53.974308       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="47.662µs"
	E0501 03:58:03.208200       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0501 03:58:03.718768       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0501 03:58:33.213930       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0501 03:58:33.729499       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0501 03:59:03.219219       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0501 03:59:03.738428       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0501 03:59:33.225657       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0501 03:59:33.746646       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0501 04:00:03.230423       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0501 04:00:03.756276       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [04f79e62716c5afcb3b939952b3de3a05e6166f5847ade1fbd8dca444a3fa313] <==
	I0501 03:46:36.027292       1 server_linux.go:69] "Using iptables proxy"
	I0501 03:46:36.041871       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.144"]
	I0501 03:46:36.164001       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0501 03:46:36.167908       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0501 03:46:36.168215       1 server_linux.go:165] "Using iptables Proxier"
	I0501 03:46:36.185580       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0501 03:46:36.185952       1 server.go:872] "Version info" version="v1.30.0"
	I0501 03:46:36.186055       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 03:46:36.187954       1 config.go:192] "Starting service config controller"
	I0501 03:46:36.188150       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0501 03:46:36.188204       1 config.go:101] "Starting endpoint slice config controller"
	I0501 03:46:36.188221       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0501 03:46:36.188889       1 config.go:319] "Starting node config controller"
	I0501 03:46:36.196508       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0501 03:46:36.288634       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0501 03:46:36.288683       1 shared_informer.go:320] Caches are synced for service config
	I0501 03:46:36.298157       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [188801d1d61ccf3dc55289bf9fd5e10246328ef4baecbfa211addd80c00d256a] <==
	W0501 03:46:16.398323       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0501 03:46:16.398444       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0501 03:46:16.398694       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0501 03:46:16.398750       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0501 03:46:16.402024       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0501 03:46:16.402071       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0501 03:46:17.240287       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0501 03:46:17.240345       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0501 03:46:17.246898       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0501 03:46:17.246990       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0501 03:46:17.267260       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0501 03:46:17.267552       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0501 03:46:17.328359       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0501 03:46:17.330203       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0501 03:46:17.397871       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0501 03:46:17.398034       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0501 03:46:17.461615       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0501 03:46:17.462264       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0501 03:46:17.532705       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0501 03:46:17.532840       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0501 03:46:17.601668       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0501 03:46:17.601964       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0501 03:46:17.802927       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0501 03:46:17.803050       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0501 03:46:20.989941       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 01 03:58:06 no-preload-892672 kubelet[4376]: E0501 03:58:06.963100    4376 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5m5qf" podUID="a1ec3e6c-fe90-4168-b0ec-54f82f17b46d"
	May 01 03:58:18 no-preload-892672 kubelet[4376]: E0501 03:58:18.987333    4376 iptables.go:577] "Could not set up iptables canary" err=<
	May 01 03:58:18 no-preload-892672 kubelet[4376]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 01 03:58:18 no-preload-892672 kubelet[4376]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 01 03:58:18 no-preload-892672 kubelet[4376]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 01 03:58:18 no-preload-892672 kubelet[4376]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 01 03:58:21 no-preload-892672 kubelet[4376]: E0501 03:58:21.956261    4376 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5m5qf" podUID="a1ec3e6c-fe90-4168-b0ec-54f82f17b46d"
	May 01 03:58:36 no-preload-892672 kubelet[4376]: E0501 03:58:36.956419    4376 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5m5qf" podUID="a1ec3e6c-fe90-4168-b0ec-54f82f17b46d"
	May 01 03:58:49 no-preload-892672 kubelet[4376]: E0501 03:58:49.956041    4376 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5m5qf" podUID="a1ec3e6c-fe90-4168-b0ec-54f82f17b46d"
	May 01 03:59:00 no-preload-892672 kubelet[4376]: E0501 03:59:00.955223    4376 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5m5qf" podUID="a1ec3e6c-fe90-4168-b0ec-54f82f17b46d"
	May 01 03:59:11 no-preload-892672 kubelet[4376]: E0501 03:59:11.955625    4376 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5m5qf" podUID="a1ec3e6c-fe90-4168-b0ec-54f82f17b46d"
	May 01 03:59:18 no-preload-892672 kubelet[4376]: E0501 03:59:18.990349    4376 iptables.go:577] "Could not set up iptables canary" err=<
	May 01 03:59:18 no-preload-892672 kubelet[4376]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 01 03:59:18 no-preload-892672 kubelet[4376]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 01 03:59:18 no-preload-892672 kubelet[4376]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 01 03:59:18 no-preload-892672 kubelet[4376]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 01 03:59:22 no-preload-892672 kubelet[4376]: E0501 03:59:22.957263    4376 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5m5qf" podUID="a1ec3e6c-fe90-4168-b0ec-54f82f17b46d"
	May 01 03:59:37 no-preload-892672 kubelet[4376]: E0501 03:59:37.955862    4376 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5m5qf" podUID="a1ec3e6c-fe90-4168-b0ec-54f82f17b46d"
	May 01 03:59:51 no-preload-892672 kubelet[4376]: E0501 03:59:51.955995    4376 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5m5qf" podUID="a1ec3e6c-fe90-4168-b0ec-54f82f17b46d"
	May 01 04:00:05 no-preload-892672 kubelet[4376]: E0501 04:00:05.956570    4376 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5m5qf" podUID="a1ec3e6c-fe90-4168-b0ec-54f82f17b46d"
	May 01 04:00:19 no-preload-892672 kubelet[4376]: E0501 04:00:19.001533    4376 iptables.go:577] "Could not set up iptables canary" err=<
	May 01 04:00:19 no-preload-892672 kubelet[4376]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 01 04:00:19 no-preload-892672 kubelet[4376]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 01 04:00:19 no-preload-892672 kubelet[4376]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 01 04:00:19 no-preload-892672 kubelet[4376]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [9a075f10431025603b7e9b5776296ff25449e3e5d51294564a01819472c4dca0] <==
	I0501 03:46:35.400997       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0501 03:46:35.435610       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0501 03:46:35.436127       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0501 03:46:35.462332       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0501 03:46:35.462545       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-892672_a14caf25-04c4-401c-ab7b-a47f70852afc!
	I0501 03:46:35.468503       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cd682fd8-a3d5-4611-8c6e-a47a39515fc6", APIVersion:"v1", ResourceVersion:"396", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-892672_a14caf25-04c4-401c-ab7b-a47f70852afc became leader
	I0501 03:46:35.593114       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-892672_a14caf25-04c4-401c-ab7b-a47f70852afc!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-892672 -n no-preload-892672
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-892672 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-5m5qf
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-892672 describe pod metrics-server-569cc877fc-5m5qf
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-892672 describe pod metrics-server-569cc877fc-5m5qf: exit status 1 (90.399234ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-5m5qf" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-892672 describe pod metrics-server-569cc877fc-5m5qf: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (277.32s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (144.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
E0501 03:57:59.251055   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/functional-960026/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
[the preceding warning repeated 116 more times]
E0501 03:59:56.198347   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/functional-960026/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.104:8443: connect: connection refused
[the preceding warning repeated 15 more times]
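For context, the pod list that keeps failing above is an ordinary label-selector query against the profile's apiserver; a hand-run sketch using the context name and label selector from the warning (assuming the usual minikube kubeconfig context for this profile) would be:

	kubectl --context old-k8s-version-503971 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard

While the apiserver at 192.168.61.104:8443 is down, this fails with the same "connection refused" error.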
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-503971 -n old-k8s-version-503971
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-503971 -n old-k8s-version-503971: exit status 2 (260.413618ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-503971" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-503971 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-503971 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.922µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-503971 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
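The 9m0s wait that timed out above is a readiness wait on the same selector; a rough manual equivalent (an approximation, not the test helper's exact polling logic) is:

	kubectl --context old-k8s-version-503971 -n kubernetes-dashboard wait pod -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=9m

which can only succeed once the apiserver on 192.168.61.104:8443 is reachable again.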
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-503971 -n old-k8s-version-503971
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-503971 -n old-k8s-version-503971: exit status 2 (256.099367ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-503971 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-503971 logs -n 25: (1.685619654s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p cert-options-582976                                 | cert-options-582976          | jenkins | v1.33.0 | 01 May 24 03:30 UTC | 01 May 24 03:30 UTC |
	| delete  | -p pause-542495                                        | pause-542495                 | jenkins | v1.33.0 | 01 May 24 03:30 UTC | 01 May 24 03:30 UTC |
	| start   | -p no-preload-892672                                   | no-preload-892672            | jenkins | v1.33.0 | 01 May 24 03:30 UTC | 01 May 24 03:32 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-277128                                  | embed-certs-277128           | jenkins | v1.33.0 | 01 May 24 03:30 UTC | 01 May 24 03:32 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-046243                           | kubernetes-upgrade-046243    | jenkins | v1.33.0 | 01 May 24 03:30 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-046243                           | kubernetes-upgrade-046243    | jenkins | v1.33.0 | 01 May 24 03:30 UTC | 01 May 24 03:32 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-046243                           | kubernetes-upgrade-046243    | jenkins | v1.33.0 | 01 May 24 03:32 UTC | 01 May 24 03:32 UTC |
	| delete  | -p                                                     | disable-driver-mounts-483221 | jenkins | v1.33.0 | 01 May 24 03:32 UTC | 01 May 24 03:32 UTC |
	|         | disable-driver-mounts-483221                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-715118 | jenkins | v1.33.0 | 01 May 24 03:32 UTC | 01 May 24 03:33 UTC |
	|         | default-k8s-diff-port-715118                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-892672             | no-preload-892672            | jenkins | v1.33.0 | 01 May 24 03:32 UTC | 01 May 24 03:32 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-892672                                   | no-preload-892672            | jenkins | v1.33.0 | 01 May 24 03:32 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-277128            | embed-certs-277128           | jenkins | v1.33.0 | 01 May 24 03:32 UTC | 01 May 24 03:32 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-277128                                  | embed-certs-277128           | jenkins | v1.33.0 | 01 May 24 03:32 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-715118  | default-k8s-diff-port-715118 | jenkins | v1.33.0 | 01 May 24 03:33 UTC | 01 May 24 03:33 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-715118 | jenkins | v1.33.0 | 01 May 24 03:33 UTC |                     |
	|         | default-k8s-diff-port-715118                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-892672                  | no-preload-892672            | jenkins | v1.33.0 | 01 May 24 03:34 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-277128                 | embed-certs-277128           | jenkins | v1.33.0 | 01 May 24 03:34 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-892672                                   | no-preload-892672            | jenkins | v1.33.0 | 01 May 24 03:34 UTC | 01 May 24 03:46 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-503971        | old-k8s-version-503971       | jenkins | v1.33.0 | 01 May 24 03:34 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-277128                                  | embed-certs-277128           | jenkins | v1.33.0 | 01 May 24 03:34 UTC | 01 May 24 03:44 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-715118       | default-k8s-diff-port-715118 | jenkins | v1.33.0 | 01 May 24 03:35 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-715118 | jenkins | v1.33.0 | 01 May 24 03:35 UTC | 01 May 24 03:45 UTC |
	|         | default-k8s-diff-port-715118                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-503971                              | old-k8s-version-503971       | jenkins | v1.33.0 | 01 May 24 03:36 UTC | 01 May 24 03:36 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-503971             | old-k8s-version-503971       | jenkins | v1.33.0 | 01 May 24 03:36 UTC | 01 May 24 03:36 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-503971                              | old-k8s-version-503971       | jenkins | v1.33.0 | 01 May 24 03:36 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/01 03:36:41
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
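Each entry below follows the klog/glog prefix documented on the line above. For readers who want to slice these logs programmatically, here is a minimal Go sketch (not part of the minikube output; the regexp and field names are illustrative):

package main

import (
	"fmt"
	"regexp"
)

// klogLine captures the documented prefix: severity [IWEF], mmdd, hh:mm:ss.uuuuuu,
// thread id, file:line, then the message.
var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+:\d+)\] (.*)$`)

func main() {
	line := "I0501 03:36:41.470152   69580 out.go:291] Setting OutFile to fd 1 ..."
	m := klogLine.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("not a klog-style line")
		return
	}
	fmt.Printf("severity=%s date=%s time=%s tid=%s source=%s msg=%q\n",
		m[1], m[2], m[3], m[4], m[5], m[6])
}

Grouping entries by the thread id (69580, 68640, 68864, ...) separates the interleaved per-profile starts that follow.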
	I0501 03:36:41.470152   69580 out.go:291] Setting OutFile to fd 1 ...
	I0501 03:36:41.470256   69580 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 03:36:41.470264   69580 out.go:304] Setting ErrFile to fd 2...
	I0501 03:36:41.470268   69580 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 03:36:41.470484   69580 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18779-13391/.minikube/bin
	I0501 03:36:41.470989   69580 out.go:298] Setting JSON to false
	I0501 03:36:41.471856   69580 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8345,"bootTime":1714526257,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0501 03:36:41.471911   69580 start.go:139] virtualization: kvm guest
	I0501 03:36:41.473901   69580 out.go:177] * [old-k8s-version-503971] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0501 03:36:41.474994   69580 out.go:177]   - MINIKUBE_LOCATION=18779
	I0501 03:36:41.475003   69580 notify.go:220] Checking for updates...
	I0501 03:36:41.477150   69580 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0501 03:36:41.478394   69580 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18779-13391/kubeconfig
	I0501 03:36:41.479462   69580 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18779-13391/.minikube
	I0501 03:36:41.480507   69580 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0501 03:36:41.481543   69580 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0501 03:36:41.482907   69580 config.go:182] Loaded profile config "old-k8s-version-503971": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0501 03:36:41.483279   69580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:36:41.483311   69580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:36:41.497758   69580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40255
	I0501 03:36:41.498090   69580 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:36:41.498591   69580 main.go:141] libmachine: Using API Version  1
	I0501 03:36:41.498616   69580 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:36:41.498891   69580 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:36:41.499055   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .DriverName
	I0501 03:36:41.500675   69580 out.go:177] * Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	I0501 03:36:41.501716   69580 driver.go:392] Setting default libvirt URI to qemu:///system
	I0501 03:36:41.501974   69580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:36:41.502024   69580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:36:41.515991   69580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40591
	I0501 03:36:41.516392   69580 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:36:41.516826   69580 main.go:141] libmachine: Using API Version  1
	I0501 03:36:41.516846   69580 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:36:41.517120   69580 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:36:41.517281   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .DriverName
	I0501 03:36:41.551130   69580 out.go:177] * Using the kvm2 driver based on existing profile
	I0501 03:36:41.552244   69580 start.go:297] selected driver: kvm2
	I0501 03:36:41.552253   69580 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-503971 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-503971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.104 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 03:36:41.552369   69580 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0501 03:36:41.553004   69580 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0501 03:36:41.553071   69580 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18779-13391/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0501 03:36:41.567362   69580 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0501 03:36:41.567736   69580 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0501 03:36:41.567815   69580 cni.go:84] Creating CNI manager for ""
	I0501 03:36:41.567832   69580 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0501 03:36:41.567881   69580 start.go:340] cluster config:
	{Name:old-k8s-version-503971 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-503971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.104 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 03:36:41.568012   69580 iso.go:125] acquiring lock: {Name:mkcd2496aadb29931e179193d707c731bfda98b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0501 03:36:41.569791   69580 out.go:177] * Starting "old-k8s-version-503971" primary control-plane node in "old-k8s-version-503971" cluster
	I0501 03:36:38.886755   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:36:41.571352   69580 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0501 03:36:41.571389   69580 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0501 03:36:41.571408   69580 cache.go:56] Caching tarball of preloaded images
	I0501 03:36:41.571478   69580 preload.go:173] Found /home/jenkins/minikube-integration/18779-13391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0501 03:36:41.571490   69580 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0501 03:36:41.571588   69580 profile.go:143] Saving config to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/config.json ...
	I0501 03:36:41.571775   69580 start.go:360] acquireMachinesLock for old-k8s-version-503971: {Name:mkc4548e5afe1d4b0a833f6c522103562b0cefff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0501 03:36:44.966689   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:36:48.038769   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:36:54.118675   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:36:57.190716   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:37:03.270653   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:37:06.342693   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:37:12.422726   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:37:15.494702   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:37:21.574646   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:37:24.646711   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:37:30.726724   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:37:33.798628   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:37:39.878657   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:37:42.950647   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:37:49.030699   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:37:52.102665   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:37:58.182647   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:38:01.254620   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:38:07.334707   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:38:10.406670   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:38:16.486684   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:38:19.558714   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:38:25.638642   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:38:28.710687   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:38:34.790659   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:38:37.862651   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:38:43.942639   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:38:47.014729   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:38:53.094674   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:38:56.166684   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:39:02.246662   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:39:05.318633   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:39:11.398705   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:39:14.470640   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:39:20.550642   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:39:23.622701   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
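The repeated "connect: no route to host" entries above mean the TCP dial to the guest's SSH port (192.168.39.144:22, the no-preload-892672 machine) never reached a listening host, so libmachine keeps retrying until the VM answers. Roughly what such a probe looks like as a standalone Go sketch (the address is taken from the log; the 10s timeout is an arbitrary choice, and this is not minikube's own code):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Attempt a plain TCP connection to the guest's SSH endpoint.
	addr := "192.168.39.144:22"
	conn, err := net.DialTimeout("tcp", addr, 10*time.Second)
	if err != nil {
		fmt.Printf("ssh port not reachable: %v\n", err) // e.g. "connect: no route to host"
		return
	}
	defer conn.Close()
	fmt.Println("ssh port reachable:", conn.RemoteAddr())
}

A "no route to host" result at this layer points at the VM being down or having no usable address, rather than at an SSH authentication problem.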
	I0501 03:39:32.707273   68864 start.go:364] duration metric: took 4m38.787656406s to acquireMachinesLock for "embed-certs-277128"
	I0501 03:39:32.707327   68864 start.go:96] Skipping create...Using existing machine configuration
	I0501 03:39:32.707336   68864 fix.go:54] fixHost starting: 
	I0501 03:39:32.707655   68864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:39:32.707697   68864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:39:32.722689   68864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35015
	I0501 03:39:32.723061   68864 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:39:32.723536   68864 main.go:141] libmachine: Using API Version  1
	I0501 03:39:32.723557   68864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:39:32.723848   68864 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:39:32.724041   68864 main.go:141] libmachine: (embed-certs-277128) Calling .DriverName
	I0501 03:39:32.724164   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetState
	I0501 03:39:32.725542   68864 fix.go:112] recreateIfNeeded on embed-certs-277128: state=Stopped err=<nil>
	I0501 03:39:32.725569   68864 main.go:141] libmachine: (embed-certs-277128) Calling .DriverName
	W0501 03:39:32.725714   68864 fix.go:138] unexpected machine state, will restart: <nil>
	I0501 03:39:32.727403   68864 out.go:177] * Restarting existing kvm2 VM for "embed-certs-277128" ...
	I0501 03:39:29.702654   68640 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.144:22: connect: no route to host
	I0501 03:39:32.704906   68640 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0501 03:39:32.704940   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetMachineName
	I0501 03:39:32.705254   68640 buildroot.go:166] provisioning hostname "no-preload-892672"
	I0501 03:39:32.705278   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetMachineName
	I0501 03:39:32.705499   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:39:32.707128   68640 machine.go:97] duration metric: took 4m44.649178925s to provisionDockerMachine
	I0501 03:39:32.707171   68640 fix.go:56] duration metric: took 4m44.67002247s for fixHost
	I0501 03:39:32.707178   68640 start.go:83] releasing machines lock for "no-preload-892672", held for 4m44.670048235s
	W0501 03:39:32.707201   68640 start.go:713] error starting host: provision: host is not running
	W0501 03:39:32.707293   68640 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0501 03:39:32.707305   68640 start.go:728] Will try again in 5 seconds ...
	I0501 03:39:32.728616   68864 main.go:141] libmachine: (embed-certs-277128) Calling .Start
	I0501 03:39:32.728768   68864 main.go:141] libmachine: (embed-certs-277128) Ensuring networks are active...
	I0501 03:39:32.729434   68864 main.go:141] libmachine: (embed-certs-277128) Ensuring network default is active
	I0501 03:39:32.729789   68864 main.go:141] libmachine: (embed-certs-277128) Ensuring network mk-embed-certs-277128 is active
	I0501 03:39:32.730218   68864 main.go:141] libmachine: (embed-certs-277128) Getting domain xml...
	I0501 03:39:32.730972   68864 main.go:141] libmachine: (embed-certs-277128) Creating domain...
	I0501 03:39:37.711605   68640 start.go:360] acquireMachinesLock for no-preload-892672: {Name:mkc4548e5afe1d4b0a833f6c522103562b0cefff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0501 03:39:33.914124   68864 main.go:141] libmachine: (embed-certs-277128) Waiting to get IP...
	I0501 03:39:33.915022   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:33.915411   68864 main.go:141] libmachine: (embed-certs-277128) DBG | unable to find current IP address of domain embed-certs-277128 in network mk-embed-certs-277128
	I0501 03:39:33.915473   68864 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:39:33.915391   70171 retry.go:31] will retry after 278.418743ms: waiting for machine to come up
	I0501 03:39:34.195933   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:34.196470   68864 main.go:141] libmachine: (embed-certs-277128) DBG | unable to find current IP address of domain embed-certs-277128 in network mk-embed-certs-277128
	I0501 03:39:34.196497   68864 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:39:34.196417   70171 retry.go:31] will retry after 375.593174ms: waiting for machine to come up
	I0501 03:39:34.574178   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:34.574666   68864 main.go:141] libmachine: (embed-certs-277128) DBG | unable to find current IP address of domain embed-certs-277128 in network mk-embed-certs-277128
	I0501 03:39:34.574689   68864 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:39:34.574617   70171 retry.go:31] will retry after 377.853045ms: waiting for machine to come up
	I0501 03:39:34.954022   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:34.954436   68864 main.go:141] libmachine: (embed-certs-277128) DBG | unable to find current IP address of domain embed-certs-277128 in network mk-embed-certs-277128
	I0501 03:39:34.954465   68864 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:39:34.954375   70171 retry.go:31] will retry after 374.024178ms: waiting for machine to come up
	I0501 03:39:35.330087   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:35.330514   68864 main.go:141] libmachine: (embed-certs-277128) DBG | unable to find current IP address of domain embed-certs-277128 in network mk-embed-certs-277128
	I0501 03:39:35.330545   68864 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:39:35.330478   70171 retry.go:31] will retry after 488.296666ms: waiting for machine to come up
	I0501 03:39:35.820177   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:35.820664   68864 main.go:141] libmachine: (embed-certs-277128) DBG | unable to find current IP address of domain embed-certs-277128 in network mk-embed-certs-277128
	I0501 03:39:35.820692   68864 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:39:35.820629   70171 retry.go:31] will retry after 665.825717ms: waiting for machine to come up
	I0501 03:39:36.488492   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:36.488910   68864 main.go:141] libmachine: (embed-certs-277128) DBG | unable to find current IP address of domain embed-certs-277128 in network mk-embed-certs-277128
	I0501 03:39:36.488941   68864 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:39:36.488860   70171 retry.go:31] will retry after 1.04269192s: waiting for machine to come up
	I0501 03:39:37.532622   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:37.533006   68864 main.go:141] libmachine: (embed-certs-277128) DBG | unable to find current IP address of domain embed-certs-277128 in network mk-embed-certs-277128
	I0501 03:39:37.533032   68864 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:39:37.532968   70171 retry.go:31] will retry after 1.348239565s: waiting for machine to come up
	I0501 03:39:38.882895   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:38.883364   68864 main.go:141] libmachine: (embed-certs-277128) DBG | unable to find current IP address of domain embed-certs-277128 in network mk-embed-certs-277128
	I0501 03:39:38.883396   68864 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:39:38.883301   70171 retry.go:31] will retry after 1.718495999s: waiting for machine to come up
	I0501 03:39:40.604329   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:40.604760   68864 main.go:141] libmachine: (embed-certs-277128) DBG | unable to find current IP address of domain embed-certs-277128 in network mk-embed-certs-277128
	I0501 03:39:40.604791   68864 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:39:40.604703   70171 retry.go:31] will retry after 2.237478005s: waiting for machine to come up
	I0501 03:39:42.843398   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:42.843920   68864 main.go:141] libmachine: (embed-certs-277128) DBG | unable to find current IP address of domain embed-certs-277128 in network mk-embed-certs-277128
	I0501 03:39:42.843949   68864 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:39:42.843869   70171 retry.go:31] will retry after 2.618059388s: waiting for machine to come up
	I0501 03:39:45.465576   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:45.465968   68864 main.go:141] libmachine: (embed-certs-277128) DBG | unable to find current IP address of domain embed-certs-277128 in network mk-embed-certs-277128
	I0501 03:39:45.465992   68864 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:39:45.465928   70171 retry.go:31] will retry after 2.895120972s: waiting for machine to come up
	I0501 03:39:48.362239   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:48.362651   68864 main.go:141] libmachine: (embed-certs-277128) DBG | unable to find current IP address of domain embed-certs-277128 in network mk-embed-certs-277128
	I0501 03:39:48.362683   68864 main.go:141] libmachine: (embed-certs-277128) DBG | I0501 03:39:48.362617   70171 retry.go:31] will retry after 2.857441112s: waiting for machine to come up
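The "will retry after ...: waiting for machine to come up" entries show a retry loop whose delay grows between attempts while libmachine waits for the restarted VM to obtain an IP. A self-contained sketch of that pattern in Go (the backoff values and helper name are illustrative, not minikube's actual implementation):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitFor retries check with a jittered, growing delay, in the spirit of the
// retry.go lines above.
func waitFor(check func() error, attempts int) error {
	delay := 250 * time.Millisecond
	for i := 0; i < attempts; i++ {
		if err := check(); err == nil {
			return nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay = delay * 3 / 2 // grow the base delay between attempts
	}
	return errors.New("machine did not come up in time")
}

func main() {
	start := time.Now()
	err := waitFor(func() error {
		if time.Since(start) > 2*time.Second {
			return nil // pretend the VM finally reported an IP
		}
		return errors.New("no IP address yet")
	}, 10)
	fmt.Println("result:", err)
}

Jittered, growing delays keep the repeated DHCP-lease checks from hammering libvirt while the guest boots.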
	I0501 03:39:52.791989   69237 start.go:364] duration metric: took 4m2.036138912s to acquireMachinesLock for "default-k8s-diff-port-715118"
	I0501 03:39:52.792059   69237 start.go:96] Skipping create...Using existing machine configuration
	I0501 03:39:52.792071   69237 fix.go:54] fixHost starting: 
	I0501 03:39:52.792454   69237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:39:52.792492   69237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:39:52.809707   69237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46419
	I0501 03:39:52.810075   69237 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:39:52.810544   69237 main.go:141] libmachine: Using API Version  1
	I0501 03:39:52.810564   69237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:39:52.810881   69237 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:39:52.811067   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .DriverName
	I0501 03:39:52.811217   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetState
	I0501 03:39:52.812787   69237 fix.go:112] recreateIfNeeded on default-k8s-diff-port-715118: state=Stopped err=<nil>
	I0501 03:39:52.812820   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .DriverName
	W0501 03:39:52.812969   69237 fix.go:138] unexpected machine state, will restart: <nil>
	I0501 03:39:52.815136   69237 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-715118" ...
	I0501 03:39:51.223450   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.223938   68864 main.go:141] libmachine: (embed-certs-277128) Found IP for machine: 192.168.50.218
	I0501 03:39:51.223965   68864 main.go:141] libmachine: (embed-certs-277128) Reserving static IP address...
	I0501 03:39:51.223982   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has current primary IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.224375   68864 main.go:141] libmachine: (embed-certs-277128) Reserved static IP address: 192.168.50.218
	I0501 03:39:51.224440   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "embed-certs-277128", mac: "52:54:00:96:11:7d", ip: "192.168.50.218"} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:51.224454   68864 main.go:141] libmachine: (embed-certs-277128) Waiting for SSH to be available...
	I0501 03:39:51.224491   68864 main.go:141] libmachine: (embed-certs-277128) DBG | skip adding static IP to network mk-embed-certs-277128 - found existing host DHCP lease matching {name: "embed-certs-277128", mac: "52:54:00:96:11:7d", ip: "192.168.50.218"}
	I0501 03:39:51.224507   68864 main.go:141] libmachine: (embed-certs-277128) DBG | Getting to WaitForSSH function...
	I0501 03:39:51.226437   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.226733   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:51.226764   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.226863   68864 main.go:141] libmachine: (embed-certs-277128) DBG | Using SSH client type: external
	I0501 03:39:51.226886   68864 main.go:141] libmachine: (embed-certs-277128) DBG | Using SSH private key: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/embed-certs-277128/id_rsa (-rw-------)
	I0501 03:39:51.226917   68864 main.go:141] libmachine: (embed-certs-277128) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.218 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18779-13391/.minikube/machines/embed-certs-277128/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0501 03:39:51.226930   68864 main.go:141] libmachine: (embed-certs-277128) DBG | About to run SSH command:
	I0501 03:39:51.226941   68864 main.go:141] libmachine: (embed-certs-277128) DBG | exit 0
	I0501 03:39:51.354225   68864 main.go:141] libmachine: (embed-certs-277128) DBG | SSH cmd err, output: <nil>: 
	I0501 03:39:51.354641   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetConfigRaw
	I0501 03:39:51.355337   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetIP
	I0501 03:39:51.357934   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.358265   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:51.358302   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.358584   68864 profile.go:143] Saving config to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/embed-certs-277128/config.json ...
	I0501 03:39:51.358753   68864 machine.go:94] provisionDockerMachine start ...
	I0501 03:39:51.358771   68864 main.go:141] libmachine: (embed-certs-277128) Calling .DriverName
	I0501 03:39:51.358940   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHHostname
	I0501 03:39:51.361202   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.361564   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:51.361600   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.361711   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHPort
	I0501 03:39:51.361884   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:39:51.362054   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:39:51.362170   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHUsername
	I0501 03:39:51.362344   68864 main.go:141] libmachine: Using SSH client type: native
	I0501 03:39:51.362572   68864 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.218 22 <nil> <nil>}
	I0501 03:39:51.362586   68864 main.go:141] libmachine: About to run SSH command:
	hostname
	I0501 03:39:51.467448   68864 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0501 03:39:51.467480   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetMachineName
	I0501 03:39:51.467740   68864 buildroot.go:166] provisioning hostname "embed-certs-277128"
	I0501 03:39:51.467772   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetMachineName
	I0501 03:39:51.467953   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHHostname
	I0501 03:39:51.470653   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.471022   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:51.471044   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.471159   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHPort
	I0501 03:39:51.471341   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:39:51.471482   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:39:51.471590   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHUsername
	I0501 03:39:51.471729   68864 main.go:141] libmachine: Using SSH client type: native
	I0501 03:39:51.471913   68864 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.218 22 <nil> <nil>}
	I0501 03:39:51.471934   68864 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-277128 && echo "embed-certs-277128" | sudo tee /etc/hostname
	I0501 03:39:51.594372   68864 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-277128
	
	I0501 03:39:51.594422   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHHostname
	I0501 03:39:51.596978   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.597307   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:51.597334   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.597495   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHPort
	I0501 03:39:51.597710   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:39:51.597865   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:39:51.597971   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHUsername
	I0501 03:39:51.598097   68864 main.go:141] libmachine: Using SSH client type: native
	I0501 03:39:51.598250   68864 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.218 22 <nil> <nil>}
	I0501 03:39:51.598271   68864 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-277128' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-277128/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-277128' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0501 03:39:51.712791   68864 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0501 03:39:51.712825   68864 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18779-13391/.minikube CaCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18779-13391/.minikube}
	I0501 03:39:51.712850   68864 buildroot.go:174] setting up certificates
	I0501 03:39:51.712860   68864 provision.go:84] configureAuth start
	I0501 03:39:51.712869   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetMachineName
	I0501 03:39:51.713158   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetIP
	I0501 03:39:51.715577   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.715885   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:51.715918   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.716040   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHHostname
	I0501 03:39:51.718057   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.718342   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:51.718367   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:51.718550   68864 provision.go:143] copyHostCerts
	I0501 03:39:51.718612   68864 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem, removing ...
	I0501 03:39:51.718622   68864 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem
	I0501 03:39:51.718685   68864 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem (1078 bytes)
	I0501 03:39:51.718790   68864 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem, removing ...
	I0501 03:39:51.718798   68864 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem
	I0501 03:39:51.718823   68864 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem (1123 bytes)
	I0501 03:39:51.718881   68864 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem, removing ...
	I0501 03:39:51.718888   68864 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem
	I0501 03:39:51.718907   68864 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem (1675 bytes)
	I0501 03:39:51.718957   68864 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem org=jenkins.embed-certs-277128 san=[127.0.0.1 192.168.50.218 embed-certs-277128 localhost minikube]
	I0501 03:39:52.100402   68864 provision.go:177] copyRemoteCerts
	I0501 03:39:52.100459   68864 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0501 03:39:52.100494   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHHostname
	I0501 03:39:52.103133   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:52.103363   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:52.103391   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:52.103522   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHPort
	I0501 03:39:52.103694   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:39:52.103790   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHUsername
	I0501 03:39:52.103874   68864 sshutil.go:53] new ssh client: &{IP:192.168.50.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/embed-certs-277128/id_rsa Username:docker}
	I0501 03:39:52.186017   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0501 03:39:52.211959   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0501 03:39:52.237362   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0501 03:39:52.264036   68864 provision.go:87] duration metric: took 551.163591ms to configureAuth
	I0501 03:39:52.264060   68864 buildroot.go:189] setting minikube options for container-runtime
	I0501 03:39:52.264220   68864 config.go:182] Loaded profile config "embed-certs-277128": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 03:39:52.264290   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHHostname
	I0501 03:39:52.266809   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:52.267117   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:52.267140   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:52.267336   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHPort
	I0501 03:39:52.267529   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:39:52.267713   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:39:52.267863   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHUsername
	I0501 03:39:52.268096   68864 main.go:141] libmachine: Using SSH client type: native
	I0501 03:39:52.268273   68864 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.218 22 <nil> <nil>}
	I0501 03:39:52.268290   68864 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0501 03:39:52.543539   68864 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0501 03:39:52.543569   68864 machine.go:97] duration metric: took 1.184800934s to provisionDockerMachine
	I0501 03:39:52.543585   68864 start.go:293] postStartSetup for "embed-certs-277128" (driver="kvm2")
	I0501 03:39:52.543600   68864 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0501 03:39:52.543621   68864 main.go:141] libmachine: (embed-certs-277128) Calling .DriverName
	I0501 03:39:52.543974   68864 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0501 03:39:52.544007   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHHostname
	I0501 03:39:52.546566   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:52.546918   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:52.546955   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:52.547108   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHPort
	I0501 03:39:52.547310   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:39:52.547480   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHUsername
	I0501 03:39:52.547622   68864 sshutil.go:53] new ssh client: &{IP:192.168.50.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/embed-certs-277128/id_rsa Username:docker}
	I0501 03:39:52.636313   68864 ssh_runner.go:195] Run: cat /etc/os-release
	I0501 03:39:52.641408   68864 info.go:137] Remote host: Buildroot 2023.02.9
	I0501 03:39:52.641435   68864 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13391/.minikube/addons for local assets ...
	I0501 03:39:52.641514   68864 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13391/.minikube/files for local assets ...
	I0501 03:39:52.641598   68864 filesync.go:149] local asset: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem -> 207242.pem in /etc/ssl/certs
	I0501 03:39:52.641708   68864 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0501 03:39:52.653421   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem --> /etc/ssl/certs/207242.pem (1708 bytes)
	I0501 03:39:52.681796   68864 start.go:296] duration metric: took 138.197388ms for postStartSetup
	I0501 03:39:52.681840   68864 fix.go:56] duration metric: took 19.974504059s for fixHost
	I0501 03:39:52.681866   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHHostname
	I0501 03:39:52.684189   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:52.684447   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:52.684475   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:52.684691   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHPort
	I0501 03:39:52.684901   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:39:52.685077   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:39:52.685226   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHUsername
	I0501 03:39:52.685393   68864 main.go:141] libmachine: Using SSH client type: native
	I0501 03:39:52.685556   68864 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.218 22 <nil> <nil>}
	I0501 03:39:52.685568   68864 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0501 03:39:52.791802   68864 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714534792.758254619
	
	I0501 03:39:52.791830   68864 fix.go:216] guest clock: 1714534792.758254619
	I0501 03:39:52.791841   68864 fix.go:229] Guest: 2024-05-01 03:39:52.758254619 +0000 UTC Remote: 2024-05-01 03:39:52.681844878 +0000 UTC m=+298.906990848 (delta=76.409741ms)
	I0501 03:39:52.791886   68864 fix.go:200] guest clock delta is within tolerance: 76.409741ms
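
fix.go reads the guest clock with `date +%s.%N`, compares it to the host clock, and only resynchronizes when the delta exceeds a tolerance (76ms is well within it here). A minimal sketch of that comparison (the 1s tolerance and the parseGuestClock helper are assumptions for illustration, not minikube's exact code):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns "1714534792.758254619" (seconds.nanoseconds) into a time.Time.
func parseGuestClock(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		// Right-pad the fraction to 9 digits so "7582546" means 758254600ns.
		frac := (parts[1] + "000000000")[:9]
		nsec, err = strconv.ParseInt(frac, 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	const tolerance = 1 * time.Second // assumed threshold, for illustration only
	guest, err := parseGuestClock("1714534792.758254619")
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	if delta <= tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	}
}
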
	I0501 03:39:52.791892   68864 start.go:83] releasing machines lock for "embed-certs-277128", held for 20.08458366s
	I0501 03:39:52.791918   68864 main.go:141] libmachine: (embed-certs-277128) Calling .DriverName
	I0501 03:39:52.792188   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetIP
	I0501 03:39:52.794820   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:52.795217   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:52.795256   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:52.795427   68864 main.go:141] libmachine: (embed-certs-277128) Calling .DriverName
	I0501 03:39:52.795971   68864 main.go:141] libmachine: (embed-certs-277128) Calling .DriverName
	I0501 03:39:52.796142   68864 main.go:141] libmachine: (embed-certs-277128) Calling .DriverName
	I0501 03:39:52.796235   68864 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0501 03:39:52.796285   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHHostname
	I0501 03:39:52.796324   68864 ssh_runner.go:195] Run: cat /version.json
	I0501 03:39:52.796346   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHHostname
	I0501 03:39:52.799128   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:52.799153   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:52.799536   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:52.799570   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:52.799617   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:52.799647   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:52.799779   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHPort
	I0501 03:39:52.799878   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHPort
	I0501 03:39:52.799961   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:39:52.800048   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:39:52.800117   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHUsername
	I0501 03:39:52.800189   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHUsername
	I0501 03:39:52.800243   68864 sshutil.go:53] new ssh client: &{IP:192.168.50.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/embed-certs-277128/id_rsa Username:docker}
	I0501 03:39:52.800299   68864 sshutil.go:53] new ssh client: &{IP:192.168.50.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/embed-certs-277128/id_rsa Username:docker}
	I0501 03:39:52.901147   68864 ssh_runner.go:195] Run: systemctl --version
	I0501 03:39:52.908399   68864 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0501 03:39:53.065012   68864 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0501 03:39:53.073635   68864 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0501 03:39:53.073724   68864 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0501 03:39:53.096146   68864 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
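
Any pre-existing bridge/podman CNI configuration is renamed with a .mk_disabled suffix so the runtime will not load it alongside the bridge CNI that minikube configures later. A sketch of that rename pass (disableConflictingCNI is a hypothetical helper mirroring the find/mv one-liner above):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableConflictingCNI renames bridge/podman configs so the runtime ignores them,
// mirroring the `find ... -exec mv {} {}.mk_disabled` command in the log.
func disableConflictingCNI(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}

func main() {
	disabled, err := disableConflictingCNI("/etc/cni/net.d")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("disabled configs:", disabled)
}
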
	I0501 03:39:53.096179   68864 start.go:494] detecting cgroup driver to use...
	I0501 03:39:53.096253   68864 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0501 03:39:53.118525   68864 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 03:39:53.136238   68864 docker.go:217] disabling cri-docker service (if available) ...
	I0501 03:39:53.136301   68864 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0501 03:39:53.152535   68864 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0501 03:39:53.171415   68864 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0501 03:39:53.297831   68864 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0501 03:39:53.479469   68864 docker.go:233] disabling docker service ...
	I0501 03:39:53.479552   68864 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0501 03:39:53.497271   68864 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0501 03:39:53.512645   68864 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0501 03:39:53.658448   68864 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0501 03:39:53.787528   68864 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0501 03:39:53.804078   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 03:39:53.836146   68864 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0501 03:39:53.836206   68864 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:39:53.853846   68864 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0501 03:39:53.853915   68864 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:39:53.866319   68864 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:39:53.878410   68864 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:39:53.890304   68864 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0501 03:39:53.903821   68864 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:39:53.916750   68864 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:39:53.938933   68864 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
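
The sed commands above pin the pause image to registry.k8s.io/pause:3.9, switch cgroup_manager to cgroupfs, move conmon into the "pod" cgroup, and open net.ipv4.ip_unprivileged_port_start via default_sysctls, all by editing /etc/crio/crio.conf.d/02-crio.conf in place. A line-oriented sketch of the same kind of edit (the regexes and the rewriteCrioConf helper are illustrative, not minikube's code):

package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

// rewriteCrioConf applies simple line-level substitutions to a CRI-O drop-in,
// e.g. pause_image and cgroup_manager, and returns the rewritten contents.
func rewriteCrioConf(data string) string {
	subs := []struct {
		re  *regexp.Regexp
		out string
	}{
		{regexp.MustCompile(`(?m)^.*pause_image = .*$`), `pause_image = "registry.k8s.io/pause:3.9"`},
		{regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`), `cgroup_manager = "cgroupfs"`},
	}
	for _, s := range subs {
		data = s.re.ReplaceAllString(data, s.out)
	}
	// Ensure conmon runs in the "pod" cgroup right after the cgroup_manager line.
	if !strings.Contains(data, "conmon_cgroup") {
		data = strings.Replace(data,
			`cgroup_manager = "cgroupfs"`,
			"cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"", 1)
	}
	return data
}

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	raw, err := os.ReadFile(path)
	if err != nil {
		fmt.Println("read:", err)
		return
	}
	out := rewriteCrioConf(string(raw))
	if err := os.WriteFile(path, []byte(out), 0o644); err != nil {
		fmt.Println("write:", err)
		return
	}
	fmt.Println("updated", path)
}
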
	I0501 03:39:53.952103   68864 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0501 03:39:53.964833   68864 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0501 03:39:53.964893   68864 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0501 03:39:53.983039   68864 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
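
The sysctl probe fails with status 255 because br_netfilter is not loaded yet, so the run falls back to modprobe and then enables IPv4 forwarding. A sketch of that sequence (must run as root on the guest; ensureNetfilter is a made-up name):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensureNetfilter mirrors the log: verify bridge-nf-call-iptables is readable,
// load br_netfilter if it is not, then enable IPv4 forwarding.
func ensureNetfilter() error {
	const bridgeSysctl = "/proc/sys/net/bridge/bridge-nf-call-iptables"
	if _, err := os.Stat(bridgeSysctl); err != nil {
		// Module not loaded yet; the log runs "sudo modprobe br_netfilter".
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %v: %s", err, out)
		}
	}
	// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644)
}

func main() {
	if err := ensureNetfilter(); err != nil {
		fmt.Println("netfilter setup failed:", err)
		return
	}
	fmt.Println("br_netfilter available and ip_forward enabled")
}
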
	I0501 03:39:53.995830   68864 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:39:54.156748   68864 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0501 03:39:54.306973   68864 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0501 03:39:54.307051   68864 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0501 03:39:54.313515   68864 start.go:562] Will wait 60s for crictl version
	I0501 03:39:54.313569   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:39:54.317943   68864 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0501 03:39:54.356360   68864 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0501 03:39:54.356437   68864 ssh_runner.go:195] Run: crio --version
	I0501 03:39:54.391717   68864 ssh_runner.go:195] Run: crio --version
	I0501 03:39:54.428403   68864 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
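
After restarting CRI-O, the run waits up to 60s for /var/run/crio/crio.sock to appear and then asks crictl for the runtime version (cri-o 1.29.1 here). A sketch of that wait-and-verify step:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// waitForSocket polls for a socket path until it exists or the timeout expires.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	// `crictl version` as run in the log; output includes RuntimeName/RuntimeVersion.
	out, err := exec.Command("crictl", "version").CombinedOutput()
	if err != nil {
		fmt.Printf("crictl version failed: %v\n%s", err, out)
		return
	}
	fmt.Print(string(out))
}
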
	I0501 03:39:52.816428   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .Start
	I0501 03:39:52.816592   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Ensuring networks are active...
	I0501 03:39:52.817317   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Ensuring network default is active
	I0501 03:39:52.817668   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Ensuring network mk-default-k8s-diff-port-715118 is active
	I0501 03:39:52.818040   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Getting domain xml...
	I0501 03:39:52.818777   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Creating domain...
	I0501 03:39:54.069624   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting to get IP...
	I0501 03:39:54.070436   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:39:54.070855   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | unable to find current IP address of domain default-k8s-diff-port-715118 in network mk-default-k8s-diff-port-715118
	I0501 03:39:54.070891   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | I0501 03:39:54.070820   70304 retry.go:31] will retry after 260.072623ms: waiting for machine to come up
	I0501 03:39:54.332646   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:39:54.333077   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | unable to find current IP address of domain default-k8s-diff-port-715118 in network mk-default-k8s-diff-port-715118
	I0501 03:39:54.333115   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | I0501 03:39:54.333047   70304 retry.go:31] will retry after 270.897102ms: waiting for machine to come up
	I0501 03:39:54.605705   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:39:54.606102   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | unable to find current IP address of domain default-k8s-diff-port-715118 in network mk-default-k8s-diff-port-715118
	I0501 03:39:54.606155   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | I0501 03:39:54.606070   70304 retry.go:31] will retry after 417.613249ms: waiting for machine to come up
	I0501 03:39:55.025827   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:39:55.026340   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | unable to find current IP address of domain default-k8s-diff-port-715118 in network mk-default-k8s-diff-port-715118
	I0501 03:39:55.026371   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | I0501 03:39:55.026291   70304 retry.go:31] will retry after 428.515161ms: waiting for machine to come up
	I0501 03:39:55.456828   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:39:55.457443   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | unable to find current IP address of domain default-k8s-diff-port-715118 in network mk-default-k8s-diff-port-715118
	I0501 03:39:55.457480   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | I0501 03:39:55.457405   70304 retry.go:31] will retry after 701.294363ms: waiting for machine to come up
	I0501 03:39:54.429689   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetIP
	I0501 03:39:54.432488   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:54.432817   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:39:54.432858   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:39:54.433039   68864 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0501 03:39:54.437866   68864 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0501 03:39:54.451509   68864 kubeadm.go:877] updating cluster {Name:embed-certs-277128 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.0 ClusterName:embed-certs-277128 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.218 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0501 03:39:54.451615   68864 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0501 03:39:54.451665   68864 ssh_runner.go:195] Run: sudo crictl images --output json
	I0501 03:39:54.494304   68864 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0501 03:39:54.494379   68864 ssh_runner.go:195] Run: which lz4
	I0501 03:39:54.499090   68864 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0501 03:39:54.503970   68864 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0501 03:39:54.503992   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0501 03:39:56.216407   68864 crio.go:462] duration metric: took 1.717351739s to copy over tarball
	I0501 03:39:56.216488   68864 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0501 03:39:58.703133   68864 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.48661051s)
	I0501 03:39:58.703161   68864 crio.go:469] duration metric: took 2.486721448s to extract the tarball
	I0501 03:39:58.703171   68864 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0501 03:39:58.751431   68864 ssh_runner.go:195] Run: sudo crictl images --output json
	I0501 03:39:58.800353   68864 crio.go:514] all images are preloaded for cri-o runtime.
	I0501 03:39:58.800379   68864 cache_images.go:84] Images are preloaded, skipping loading
	I0501 03:39:58.800389   68864 kubeadm.go:928] updating node { 192.168.50.218 8443 v1.30.0 crio true true} ...
	I0501 03:39:58.800516   68864 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-277128 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.218
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:embed-certs-277128 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
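
The kubelet drop-in above is generated from the node's Kubernetes version, hostname override, and IP before being written to the guest. A small text/template sketch of producing such a unit (the template text mirrors the log; the kubeletUnit struct is illustrative):

package main

import (
	"os"
	"text/template"
)

// kubeletUnit holds the few values that vary per node in the generated drop-in.
type kubeletUnit struct {
	KubernetesVersion string
	NodeName          string
	NodeIP            string
}

const unitTmpl = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unitTmpl))
	// Values taken from the embed-certs-277128 run above.
	_ = t.Execute(os.Stdout, kubeletUnit{
		KubernetesVersion: "v1.30.0",
		NodeName:          "embed-certs-277128",
		NodeIP:            "192.168.50.218",
	})
}
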
	I0501 03:39:58.800598   68864 ssh_runner.go:195] Run: crio config
	I0501 03:39:56.159966   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:39:56.160373   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | unable to find current IP address of domain default-k8s-diff-port-715118 in network mk-default-k8s-diff-port-715118
	I0501 03:39:56.160404   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | I0501 03:39:56.160334   70304 retry.go:31] will retry after 774.079459ms: waiting for machine to come up
	I0501 03:39:56.936654   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:39:56.937201   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | unable to find current IP address of domain default-k8s-diff-port-715118 in network mk-default-k8s-diff-port-715118
	I0501 03:39:56.937232   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | I0501 03:39:56.937161   70304 retry.go:31] will retry after 877.420181ms: waiting for machine to come up
	I0501 03:39:57.816002   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:39:57.816467   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | unable to find current IP address of domain default-k8s-diff-port-715118 in network mk-default-k8s-diff-port-715118
	I0501 03:39:57.816501   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | I0501 03:39:57.816425   70304 retry.go:31] will retry after 1.477997343s: waiting for machine to come up
	I0501 03:39:59.296533   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:39:59.296970   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | unable to find current IP address of domain default-k8s-diff-port-715118 in network mk-default-k8s-diff-port-715118
	I0501 03:39:59.296995   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | I0501 03:39:59.296922   70304 retry.go:31] will retry after 1.199617253s: waiting for machine to come up
	I0501 03:40:00.498388   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:00.498817   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | unable to find current IP address of domain default-k8s-diff-port-715118 in network mk-default-k8s-diff-port-715118
	I0501 03:40:00.498845   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | I0501 03:40:00.498770   70304 retry.go:31] will retry after 2.227608697s: waiting for machine to come up
	I0501 03:39:58.855600   68864 cni.go:84] Creating CNI manager for ""
	I0501 03:39:58.855630   68864 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0501 03:39:58.855650   68864 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0501 03:39:58.855686   68864 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.218 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-277128 NodeName:embed-certs-277128 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.218"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.218 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0501 03:39:58.855826   68864 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.218
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-277128"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.218
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.218"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0501 03:39:58.855890   68864 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0501 03:39:58.868074   68864 binaries.go:44] Found k8s binaries, skipping transfer
	I0501 03:39:58.868145   68864 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0501 03:39:58.879324   68864 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0501 03:39:58.897572   68864 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0501 03:39:58.918416   68864 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0501 03:39:58.940317   68864 ssh_runner.go:195] Run: grep 192.168.50.218	control-plane.minikube.internal$ /etc/hosts
	I0501 03:39:58.944398   68864 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.218	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
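
Both host.minikube.internal (earlier) and control-plane.minikube.internal (here) are kept current by stripping any stale /etc/hosts line and appending the new mapping, which is exactly what the grep/echo one-liner does. The same filter-then-append in Go (upsertHostsEntry is a made-up helper):

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHostsEntry removes any existing line for `name` and appends "ip\tname",
// mirroring the `{ grep -v ...; echo ...; } > /tmp/h.$$; cp` one-liner in the log.
func upsertHostsEntry(path, ip, name string) error {
	raw, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(raw), "\n") {
		trimmed := strings.TrimSpace(line)
		if strings.HasSuffix(trimmed, "\t"+name) || strings.HasSuffix(trimmed, " "+name) {
			continue // drop the stale mapping
		}
		kept = append(kept, line)
	}
	// Trim trailing empty elements so we don't accumulate blank lines.
	for len(kept) > 0 && kept[len(kept)-1] == "" {
		kept = kept[:len(kept)-1]
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name), "")
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")), 0o644)
}

func main() {
	if err := upsertHostsEntry("/etc/hosts", "192.168.50.218", "control-plane.minikube.internal"); err != nil {
		fmt.Println("hosts update failed:", err)
	}
}
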
	I0501 03:39:58.959372   68864 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:39:59.094172   68864 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 03:39:59.113612   68864 certs.go:68] Setting up /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/embed-certs-277128 for IP: 192.168.50.218
	I0501 03:39:59.113653   68864 certs.go:194] generating shared ca certs ...
	I0501 03:39:59.113669   68864 certs.go:226] acquiring lock for ca certs: {Name:mk85a75d14c00780d4bf09cd9bdaf0f96f1b8fce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:39:59.113863   68864 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key
	I0501 03:39:59.113919   68864 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key
	I0501 03:39:59.113931   68864 certs.go:256] generating profile certs ...
	I0501 03:39:59.114044   68864 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/embed-certs-277128/client.key
	I0501 03:39:59.114117   68864 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/embed-certs-277128/apiserver.key.65584253
	I0501 03:39:59.114166   68864 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/embed-certs-277128/proxy-client.key
	I0501 03:39:59.114325   68864 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem (1338 bytes)
	W0501 03:39:59.114369   68864 certs.go:480] ignoring /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724_empty.pem, impossibly tiny 0 bytes
	I0501 03:39:59.114383   68864 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem (1679 bytes)
	I0501 03:39:59.114430   68864 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem (1078 bytes)
	I0501 03:39:59.114466   68864 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem (1123 bytes)
	I0501 03:39:59.114497   68864 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem (1675 bytes)
	I0501 03:39:59.114550   68864 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem (1708 bytes)
	I0501 03:39:59.115448   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0501 03:39:59.155890   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0501 03:39:59.209160   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0501 03:39:59.251552   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0501 03:39:59.288100   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/embed-certs-277128/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0501 03:39:59.325437   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/embed-certs-277128/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0501 03:39:59.352593   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/embed-certs-277128/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0501 03:39:59.378992   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/embed-certs-277128/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0501 03:39:59.405517   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem --> /usr/share/ca-certificates/20724.pem (1338 bytes)
	I0501 03:39:59.431253   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem --> /usr/share/ca-certificates/207242.pem (1708 bytes)
	I0501 03:39:59.457155   68864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0501 03:39:59.483696   68864 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0501 03:39:59.502758   68864 ssh_runner.go:195] Run: openssl version
	I0501 03:39:59.509307   68864 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20724.pem && ln -fs /usr/share/ca-certificates/20724.pem /etc/ssl/certs/20724.pem"
	I0501 03:39:59.521438   68864 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20724.pem
	I0501 03:39:59.526658   68864 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  1 02:20 /usr/share/ca-certificates/20724.pem
	I0501 03:39:59.526706   68864 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20724.pem
	I0501 03:39:59.533201   68864 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20724.pem /etc/ssl/certs/51391683.0"
	I0501 03:39:59.546837   68864 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/207242.pem && ln -fs /usr/share/ca-certificates/207242.pem /etc/ssl/certs/207242.pem"
	I0501 03:39:59.560612   68864 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/207242.pem
	I0501 03:39:59.565545   68864 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  1 02:20 /usr/share/ca-certificates/207242.pem
	I0501 03:39:59.565589   68864 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/207242.pem
	I0501 03:39:59.571737   68864 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/207242.pem /etc/ssl/certs/3ec20f2e.0"
	I0501 03:39:59.584602   68864 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0501 03:39:59.599088   68864 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:39:59.604230   68864 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  1 02:08 /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:39:59.604296   68864 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:39:59.610536   68864 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0501 03:39:59.624810   68864 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0501 03:39:59.629692   68864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0501 03:39:59.636209   68864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0501 03:39:59.642907   68864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0501 03:39:59.649491   68864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0501 03:39:59.655702   68864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0501 03:39:59.661884   68864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
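
Each control-plane certificate is checked to still be valid for at least another 24h, which is what `openssl x509 -checkend 86400` answers. The equivalent check with crypto/x509 (expiresWithin is illustrative):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file
// expires within d, matching the semantics of `openssl x509 -checkend`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// A few of the same files checked in the log above.
	for _, p := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	} {
		soon, err := expiresWithin(p, 24*time.Hour)
		if err != nil {
			fmt.Println(p, "error:", err)
			continue
		}
		fmt.Printf("%s expires within 24h: %v\n", p, soon)
	}
}
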
	I0501 03:39:59.668075   68864 kubeadm.go:391] StartCluster: {Name:embed-certs-277128 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.0 ClusterName:embed-certs-277128 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.218 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 03:39:59.668209   68864 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0501 03:39:59.668255   68864 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0501 03:39:59.712172   68864 cri.go:89] found id: ""
	I0501 03:39:59.712262   68864 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0501 03:39:59.723825   68864 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0501 03:39:59.723848   68864 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0501 03:39:59.723854   68864 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0501 03:39:59.723890   68864 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0501 03:39:59.735188   68864 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0501 03:39:59.736670   68864 kubeconfig.go:125] found "embed-certs-277128" server: "https://192.168.50.218:8443"
	I0501 03:39:59.739665   68864 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0501 03:39:59.750292   68864 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.218
	I0501 03:39:59.750329   68864 kubeadm.go:1154] stopping kube-system containers ...
	I0501 03:39:59.750339   68864 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0501 03:39:59.750388   68864 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0501 03:39:59.791334   68864 cri.go:89] found id: ""
	I0501 03:39:59.791436   68864 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0501 03:39:59.809162   68864 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0501 03:39:59.820979   68864 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0501 03:39:59.821013   68864 kubeadm.go:156] found existing configuration files:
	
	I0501 03:39:59.821072   68864 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0501 03:39:59.832368   68864 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0501 03:39:59.832443   68864 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0501 03:39:59.843920   68864 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0501 03:39:59.855489   68864 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0501 03:39:59.855562   68864 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0501 03:39:59.867337   68864 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0501 03:39:59.878582   68864 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0501 03:39:59.878659   68864 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0501 03:39:59.890049   68864 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0501 03:39:59.901054   68864 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0501 03:39:59.901110   68864 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0501 03:39:59.912900   68864 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0501 03:39:59.925358   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:00.065105   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:00.861756   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:01.089790   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:01.158944   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
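
Because existing configuration files were found, the restart path re-runs individual `kubeadm init phase` steps (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated /var/tmp/minikube/kubeadm.yaml rather than doing a full init. A sketch of sequencing those phases with os/exec (the log runs them under sudo with the minikube-provided binaries first on PATH; runPhase is a made-up helper):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// runPhase executes one `kubeadm init phase ...` against the minikube-generated
// config, with the bundled binaries directory prepended to PATH.
func runPhase(phase ...string) error {
	args := append([]string{"init", "phase"}, phase...)
	args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
	cmd := exec.Command("kubeadm", args...)
	// The later duplicate PATH entry wins (Go 1.19+ deduplicates keeping the last value).
	cmd.Env = append(os.Environ(),
		"PATH=/var/lib/minikube/binaries/v1.30.0:"+os.Getenv("PATH"))
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, p := range phases {
		if err := runPhase(p...); err != nil {
			fmt.Println("phase", p, "failed:", err)
			return
		}
	}
	fmt.Println("all init phases completed")
}
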
	I0501 03:40:01.249842   68864 api_server.go:52] waiting for apiserver process to appear ...
	I0501 03:40:01.250063   68864 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:01.750273   68864 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:02.250155   68864 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:02.291774   68864 api_server.go:72] duration metric: took 1.041932793s to wait for apiserver process to appear ...
	I0501 03:40:02.291807   68864 api_server.go:88] waiting for apiserver healthz status ...
	I0501 03:40:02.291831   68864 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I0501 03:40:02.292377   68864 api_server.go:269] stopped: https://192.168.50.218:8443/healthz: Get "https://192.168.50.218:8443/healthz": dial tcp 192.168.50.218:8443: connect: connection refused
	I0501 03:40:02.792584   68864 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I0501 03:40:02.727799   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:02.728314   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | unable to find current IP address of domain default-k8s-diff-port-715118 in network mk-default-k8s-diff-port-715118
	I0501 03:40:02.728347   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | I0501 03:40:02.728270   70304 retry.go:31] will retry after 1.844071576s: waiting for machine to come up
	I0501 03:40:04.574870   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:04.575326   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | unable to find current IP address of domain default-k8s-diff-port-715118 in network mk-default-k8s-diff-port-715118
	I0501 03:40:04.575349   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | I0501 03:40:04.575278   70304 retry.go:31] will retry after 2.989286916s: waiting for machine to come up
	I0501 03:40:04.843311   68864 api_server.go:279] https://192.168.50.218:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0501 03:40:04.843360   68864 api_server.go:103] status: https://192.168.50.218:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0501 03:40:04.843377   68864 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I0501 03:40:04.899616   68864 api_server.go:279] https://192.168.50.218:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0501 03:40:04.899655   68864 api_server.go:103] status: https://192.168.50.218:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0501 03:40:05.292097   68864 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I0501 03:40:05.300803   68864 api_server.go:279] https://192.168.50.218:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:40:05.300843   68864 api_server.go:103] status: https://192.168.50.218:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:40:05.792151   68864 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I0501 03:40:05.797124   68864 api_server.go:279] https://192.168.50.218:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:40:05.797158   68864 api_server.go:103] status: https://192.168.50.218:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:40:06.292821   68864 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I0501 03:40:06.297912   68864 api_server.go:279] https://192.168.50.218:8443/healthz returned 200:
	ok
	I0501 03:40:06.305165   68864 api_server.go:141] control plane version: v1.30.0
	I0501 03:40:06.305199   68864 api_server.go:131] duration metric: took 4.013383351s to wait for apiserver health ...
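The retries above poll the apiserver's /healthz endpoint roughly every 500ms until it stops returning 500. A minimal Go sketch of that polling pattern (illustrative only: TLS verification is skipped here, whereas the real check authenticates with the cluster's client certificates):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls an apiserver /healthz endpoint until it returns 200
// or the timeout expires. Sketch only: certificate verification is disabled.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver reports "ok"
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms retry cadence in the log
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.50.218:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}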
	I0501 03:40:06.305211   68864 cni.go:84] Creating CNI manager for ""
	I0501 03:40:06.305220   68864 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0501 03:40:06.306925   68864 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0501 03:40:06.308450   68864 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0501 03:40:06.325186   68864 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0501 03:40:06.380997   68864 system_pods.go:43] waiting for kube-system pods to appear ...
	I0501 03:40:06.394134   68864 system_pods.go:59] 8 kube-system pods found
	I0501 03:40:06.394178   68864 system_pods.go:61] "coredns-7db6d8ff4d-sjplt" [6701ee8e-0630-4332-b01c-26741ed3a7b7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0501 03:40:06.394191   68864 system_pods.go:61] "etcd-embed-certs-277128" [744d7481-dd80-4435-90da-685fc16a76a4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0501 03:40:06.394206   68864 system_pods.go:61] "kube-apiserver-embed-certs-277128" [2f1d0f09-b270-4cdc-af3c-e17e3a244bbf] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0501 03:40:06.394215   68864 system_pods.go:61] "kube-controller-manager-embed-certs-277128" [729aa138-dc0d-4616-97d4-1592453971b8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0501 03:40:06.394222   68864 system_pods.go:61] "kube-proxy-phx7x" [56c0381e-c140-4f69-bbe4-09d393db8b23] Running
	I0501 03:40:06.394232   68864 system_pods.go:61] "kube-scheduler-embed-certs-277128" [cc56e9bc-7f21-4f57-a65a-73a10ffd7145] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0501 03:40:06.394253   68864 system_pods.go:61] "metrics-server-569cc877fc-p8j59" [f8ad6c24-dd5d-4515-9052-c9aca7412b55] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0501 03:40:06.394258   68864 system_pods.go:61] "storage-provisioner" [785be666-58d5-4b9d-92fd-bcacdbdebeb2] Running
	I0501 03:40:06.394273   68864 system_pods.go:74] duration metric: took 13.25246ms to wait for pod list to return data ...
	I0501 03:40:06.394293   68864 node_conditions.go:102] verifying NodePressure condition ...
	I0501 03:40:06.399912   68864 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 03:40:06.399950   68864 node_conditions.go:123] node cpu capacity is 2
	I0501 03:40:06.399974   68864 node_conditions.go:105] duration metric: took 5.664461ms to run NodePressure ...
	I0501 03:40:06.399996   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:06.675573   68864 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0501 03:40:06.680567   68864 kubeadm.go:733] kubelet initialised
	I0501 03:40:06.680591   68864 kubeadm.go:734] duration metric: took 4.987942ms waiting for restarted kubelet to initialise ...
	I0501 03:40:06.680598   68864 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 03:40:06.687295   68864 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-sjplt" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:06.692224   68864 pod_ready.go:97] node "embed-certs-277128" hosting pod "coredns-7db6d8ff4d-sjplt" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:06.692248   68864 pod_ready.go:81] duration metric: took 4.930388ms for pod "coredns-7db6d8ff4d-sjplt" in "kube-system" namespace to be "Ready" ...
	E0501 03:40:06.692258   68864 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-277128" hosting pod "coredns-7db6d8ff4d-sjplt" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:06.692266   68864 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:06.699559   68864 pod_ready.go:97] node "embed-certs-277128" hosting pod "etcd-embed-certs-277128" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:06.699591   68864 pod_ready.go:81] duration metric: took 7.309622ms for pod "etcd-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	E0501 03:40:06.699602   68864 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-277128" hosting pod "etcd-embed-certs-277128" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:06.699613   68864 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:06.705459   68864 pod_ready.go:97] node "embed-certs-277128" hosting pod "kube-apiserver-embed-certs-277128" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:06.705485   68864 pod_ready.go:81] duration metric: took 5.86335ms for pod "kube-apiserver-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	E0501 03:40:06.705497   68864 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-277128" hosting pod "kube-apiserver-embed-certs-277128" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:06.705504   68864 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:06.786157   68864 pod_ready.go:97] node "embed-certs-277128" hosting pod "kube-controller-manager-embed-certs-277128" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:06.786186   68864 pod_ready.go:81] duration metric: took 80.673223ms for pod "kube-controller-manager-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	E0501 03:40:06.786198   68864 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-277128" hosting pod "kube-controller-manager-embed-certs-277128" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:06.786205   68864 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-phx7x" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:07.184262   68864 pod_ready.go:97] node "embed-certs-277128" hosting pod "kube-proxy-phx7x" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:07.184297   68864 pod_ready.go:81] duration metric: took 398.081204ms for pod "kube-proxy-phx7x" in "kube-system" namespace to be "Ready" ...
	E0501 03:40:07.184309   68864 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-277128" hosting pod "kube-proxy-phx7x" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:07.184319   68864 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:07.584569   68864 pod_ready.go:97] node "embed-certs-277128" hosting pod "kube-scheduler-embed-certs-277128" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:07.584607   68864 pod_ready.go:81] duration metric: took 400.279023ms for pod "kube-scheduler-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	E0501 03:40:07.584620   68864 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-277128" hosting pod "kube-scheduler-embed-certs-277128" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:07.584630   68864 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:07.984376   68864 pod_ready.go:97] node "embed-certs-277128" hosting pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:07.984408   68864 pod_ready.go:81] duration metric: took 399.766342ms for pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace to be "Ready" ...
	E0501 03:40:07.984419   68864 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-277128" hosting pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:07.984428   68864 pod_ready.go:38] duration metric: took 1.303821777s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
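The "skipping" messages come from checking the hosting node's Ready condition before waiting on each system-critical pod. A hedged client-go sketch of that check, reusing the kubeconfig path and node name from the log; the helper name is illustrative:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether a node's Ready condition is True, i.e. the check
// behind the node "embed-certs-277128" has status "Ready":"False" messages.
func nodeReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18779-13391/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()

	node, err := cs.CoreV1().Nodes().Get(ctx, "embed-certs-277128", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if !nodeReady(node) {
		fmt.Println("node not Ready yet; skip waiting on its pods")
		return
	}

	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s: %s\n", p.Name, p.Status.Phase)
	}
}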
	I0501 03:40:07.984448   68864 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0501 03:40:08.000370   68864 ops.go:34] apiserver oom_adj: -16
	I0501 03:40:08.000391   68864 kubeadm.go:591] duration metric: took 8.276531687s to restartPrimaryControlPlane
	I0501 03:40:08.000401   68864 kubeadm.go:393] duration metric: took 8.332343707s to StartCluster
	I0501 03:40:08.000416   68864 settings.go:142] acquiring lock: {Name:mkcfe08bcadd45d99c00ed1eaf4e9c226a489fe2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:40:08.000482   68864 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18779-13391/kubeconfig
	I0501 03:40:08.002013   68864 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/kubeconfig: {Name:mk5d1131557f885800877622151c0915337cb23a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:40:08.002343   68864 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.218 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0501 03:40:08.004301   68864 out.go:177] * Verifying Kubernetes components...
	I0501 03:40:08.002423   68864 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0501 03:40:08.002582   68864 config.go:182] Loaded profile config "embed-certs-277128": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 03:40:08.005608   68864 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-277128"
	I0501 03:40:08.005624   68864 addons.go:69] Setting metrics-server=true in profile "embed-certs-277128"
	I0501 03:40:08.005658   68864 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-277128"
	W0501 03:40:08.005670   68864 addons.go:243] addon storage-provisioner should already be in state true
	I0501 03:40:08.005609   68864 addons.go:69] Setting default-storageclass=true in profile "embed-certs-277128"
	I0501 03:40:08.005785   68864 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-277128"
	I0501 03:40:08.005659   68864 addons.go:234] Setting addon metrics-server=true in "embed-certs-277128"
	W0501 03:40:08.005819   68864 addons.go:243] addon metrics-server should already be in state true
	I0501 03:40:08.005851   68864 host.go:66] Checking if "embed-certs-277128" exists ...
	I0501 03:40:08.005613   68864 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:40:08.005695   68864 host.go:66] Checking if "embed-certs-277128" exists ...
	I0501 03:40:08.006230   68864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:40:08.006258   68864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:40:08.006291   68864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:40:08.006310   68864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:40:08.006326   68864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:40:08.006378   68864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:40:08.021231   68864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35583
	I0501 03:40:08.021276   68864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44971
	I0501 03:40:08.021621   68864 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:40:08.021673   68864 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:40:08.022126   68864 main.go:141] libmachine: Using API Version  1
	I0501 03:40:08.022146   68864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:40:08.022353   68864 main.go:141] libmachine: Using API Version  1
	I0501 03:40:08.022390   68864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:40:08.022537   68864 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:40:08.022730   68864 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:40:08.022904   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetState
	I0501 03:40:08.023118   68864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:40:08.023165   68864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:40:08.024792   68864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33047
	I0501 03:40:08.025226   68864 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:40:08.025734   68864 main.go:141] libmachine: Using API Version  1
	I0501 03:40:08.025761   68864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:40:08.026090   68864 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:40:08.026569   68864 addons.go:234] Setting addon default-storageclass=true in "embed-certs-277128"
	W0501 03:40:08.026593   68864 addons.go:243] addon default-storageclass should already be in state true
	I0501 03:40:08.026620   68864 host.go:66] Checking if "embed-certs-277128" exists ...
	I0501 03:40:08.026696   68864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:40:08.026730   68864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:40:08.026977   68864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:40:08.027033   68864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:40:08.039119   68864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42975
	I0501 03:40:08.039585   68864 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:40:08.040083   68864 main.go:141] libmachine: Using API Version  1
	I0501 03:40:08.040106   68864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:40:08.040419   68864 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:40:08.040599   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetState
	I0501 03:40:08.042228   68864 main.go:141] libmachine: (embed-certs-277128) Calling .DriverName
	I0501 03:40:08.044289   68864 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 03:40:08.045766   68864 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0501 03:40:08.045787   68864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0501 03:40:08.045804   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHHostname
	I0501 03:40:08.043677   68864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37141
	I0501 03:40:08.045633   68864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41653
	I0501 03:40:08.046247   68864 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:40:08.046326   68864 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:40:08.046989   68864 main.go:141] libmachine: Using API Version  1
	I0501 03:40:08.047012   68864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:40:08.047196   68864 main.go:141] libmachine: Using API Version  1
	I0501 03:40:08.047216   68864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:40:08.047279   68864 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:40:08.047403   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetState
	I0501 03:40:08.047515   68864 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:40:08.048047   68864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:40:08.048081   68864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:40:08.049225   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:40:08.049623   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:40:08.049649   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:40:08.049773   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHPort
	I0501 03:40:08.049915   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:40:08.050096   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHUsername
	I0501 03:40:08.050165   68864 main.go:141] libmachine: (embed-certs-277128) Calling .DriverName
	I0501 03:40:08.050297   68864 sshutil.go:53] new ssh client: &{IP:192.168.50.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/embed-certs-277128/id_rsa Username:docker}
	I0501 03:40:08.052006   68864 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0501 03:40:08.053365   68864 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0501 03:40:08.053380   68864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0501 03:40:08.053394   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHHostname
	I0501 03:40:08.056360   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:40:08.056752   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:40:08.056782   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:40:08.056892   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHPort
	I0501 03:40:08.057074   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:40:08.057215   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHUsername
	I0501 03:40:08.057334   68864 sshutil.go:53] new ssh client: &{IP:192.168.50.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/embed-certs-277128/id_rsa Username:docker}
	I0501 03:40:08.064476   68864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43029
	I0501 03:40:08.064882   68864 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:40:08.065323   68864 main.go:141] libmachine: Using API Version  1
	I0501 03:40:08.065352   68864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:40:08.065696   68864 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:40:08.065895   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetState
	I0501 03:40:08.067420   68864 main.go:141] libmachine: (embed-certs-277128) Calling .DriverName
	I0501 03:40:08.067740   68864 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0501 03:40:08.067762   68864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0501 03:40:08.067774   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHHostname
	I0501 03:40:08.070587   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:40:08.071043   68864 main.go:141] libmachine: (embed-certs-277128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:7d", ip: ""} in network mk-embed-certs-277128: {Iface:virbr2 ExpiryTime:2024-05-01 04:39:45 +0000 UTC Type:0 Mac:52:54:00:96:11:7d Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-277128 Clientid:01:52:54:00:96:11:7d}
	I0501 03:40:08.071073   68864 main.go:141] libmachine: (embed-certs-277128) DBG | domain embed-certs-277128 has defined IP address 192.168.50.218 and MAC address 52:54:00:96:11:7d in network mk-embed-certs-277128
	I0501 03:40:08.071225   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHPort
	I0501 03:40:08.071401   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHKeyPath
	I0501 03:40:08.071554   68864 main.go:141] libmachine: (embed-certs-277128) Calling .GetSSHUsername
	I0501 03:40:08.071688   68864 sshutil.go:53] new ssh client: &{IP:192.168.50.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/embed-certs-277128/id_rsa Username:docker}
	I0501 03:40:08.204158   68864 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 03:40:08.229990   68864 node_ready.go:35] waiting up to 6m0s for node "embed-certs-277128" to be "Ready" ...
	I0501 03:40:08.289511   68864 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0501 03:40:08.289535   68864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0501 03:40:08.301855   68864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0501 03:40:08.311966   68864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0501 03:40:08.330943   68864 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0501 03:40:08.330973   68864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0501 03:40:08.384842   68864 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0501 03:40:08.384867   68864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0501 03:40:08.445206   68864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0501 03:40:09.434390   68864 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.122391479s)
	I0501 03:40:09.434458   68864 main.go:141] libmachine: Making call to close driver server
	I0501 03:40:09.434471   68864 main.go:141] libmachine: (embed-certs-277128) Calling .Close
	I0501 03:40:09.434518   68864 main.go:141] libmachine: Making call to close driver server
	I0501 03:40:09.434541   68864 main.go:141] libmachine: (embed-certs-277128) Calling .Close
	I0501 03:40:09.434567   68864 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.132680542s)
	I0501 03:40:09.434595   68864 main.go:141] libmachine: Making call to close driver server
	I0501 03:40:09.434604   68864 main.go:141] libmachine: (embed-certs-277128) Calling .Close
	I0501 03:40:09.434833   68864 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:40:09.434859   68864 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:40:09.434870   68864 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:40:09.434872   68864 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:40:09.434881   68864 main.go:141] libmachine: Making call to close driver server
	I0501 03:40:09.434882   68864 main.go:141] libmachine: Making call to close driver server
	I0501 03:40:09.434889   68864 main.go:141] libmachine: (embed-certs-277128) Calling .Close
	I0501 03:40:09.434890   68864 main.go:141] libmachine: (embed-certs-277128) Calling .Close
	I0501 03:40:09.434936   68864 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:40:09.434949   68864 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:40:09.434967   68864 main.go:141] libmachine: Making call to close driver server
	I0501 03:40:09.434994   68864 main.go:141] libmachine: (embed-certs-277128) Calling .Close
	I0501 03:40:09.434832   68864 main.go:141] libmachine: (embed-certs-277128) DBG | Closing plugin on server side
	I0501 03:40:09.435072   68864 main.go:141] libmachine: (embed-certs-277128) DBG | Closing plugin on server side
	I0501 03:40:09.437116   68864 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:40:09.437138   68864 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:40:09.437146   68864 main.go:141] libmachine: (embed-certs-277128) DBG | Closing plugin on server side
	I0501 03:40:09.437179   68864 main.go:141] libmachine: (embed-certs-277128) DBG | Closing plugin on server side
	I0501 03:40:09.437194   68864 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:40:09.437215   68864 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:40:09.437297   68864 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:40:09.437342   68864 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:40:09.437359   68864 addons.go:470] Verifying addon metrics-server=true in "embed-certs-277128"
	I0501 03:40:09.445787   68864 main.go:141] libmachine: Making call to close driver server
	I0501 03:40:09.445817   68864 main.go:141] libmachine: (embed-certs-277128) Calling .Close
	I0501 03:40:09.446053   68864 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:40:09.446090   68864 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:40:09.446112   68864 main.go:141] libmachine: (embed-certs-277128) DBG | Closing plugin on server side
	I0501 03:40:09.448129   68864 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
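The addon manifests are copied to /etc/kubernetes/addons and then applied with the bundled kubectl over SSH, as in the Run lines above. A rough local equivalent using os/exec (assumes kubectl is on PATH rather than under /var/lib/minikube/binaries; the manifest paths mirror the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyAddonManifests runs "kubectl apply -f" for each manifest, roughly the
// command minikube executes over SSH above.
func applyAddonManifests(kubeconfig string, manifests ...string) error {
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command("kubectl", args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	return err
}

func main() {
	err := applyAddonManifests(
		"/var/lib/minikube/kubeconfig",
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	)
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}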
	I0501 03:40:07.567551   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:07.567914   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | unable to find current IP address of domain default-k8s-diff-port-715118 in network mk-default-k8s-diff-port-715118
	I0501 03:40:07.567948   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | I0501 03:40:07.567860   70304 retry.go:31] will retry after 4.440791777s: waiting for machine to come up
	I0501 03:40:13.516002   69580 start.go:364] duration metric: took 3m31.9441828s to acquireMachinesLock for "old-k8s-version-503971"
	I0501 03:40:13.516087   69580 start.go:96] Skipping create...Using existing machine configuration
	I0501 03:40:13.516100   69580 fix.go:54] fixHost starting: 
	I0501 03:40:13.516559   69580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:40:13.516601   69580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:40:13.537158   69580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38535
	I0501 03:40:13.537631   69580 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:40:13.538169   69580 main.go:141] libmachine: Using API Version  1
	I0501 03:40:13.538197   69580 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:40:13.538570   69580 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:40:13.538769   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .DriverName
	I0501 03:40:13.538958   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetState
	I0501 03:40:13.540454   69580 fix.go:112] recreateIfNeeded on old-k8s-version-503971: state=Stopped err=<nil>
	I0501 03:40:13.540486   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .DriverName
	W0501 03:40:13.540787   69580 fix.go:138] unexpected machine state, will restart: <nil>
	I0501 03:40:13.542670   69580 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-503971" ...
	I0501 03:40:09.449483   68864 addons.go:505] duration metric: took 1.447068548s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass]
	I0501 03:40:10.233650   68864 node_ready.go:53] node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:12.234270   68864 node_ready.go:53] node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:12.011886   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.012305   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Found IP for machine: 192.168.72.158
	I0501 03:40:12.012335   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has current primary IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.012347   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Reserving static IP address...
	I0501 03:40:12.012759   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-715118", mac: "52:54:00:87:12:31", ip: "192.168.72.158"} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:12.012796   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | skip adding static IP to network mk-default-k8s-diff-port-715118 - found existing host DHCP lease matching {name: "default-k8s-diff-port-715118", mac: "52:54:00:87:12:31", ip: "192.168.72.158"}
	I0501 03:40:12.012809   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Reserved static IP address: 192.168.72.158
	I0501 03:40:12.012828   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Waiting for SSH to be available...
	I0501 03:40:12.012835   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | Getting to WaitForSSH function...
	I0501 03:40:12.014719   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.015044   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:12.015080   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.015193   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | Using SSH client type: external
	I0501 03:40:12.015220   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | Using SSH private key: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/default-k8s-diff-port-715118/id_rsa (-rw-------)
	I0501 03:40:12.015269   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.158 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18779-13391/.minikube/machines/default-k8s-diff-port-715118/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0501 03:40:12.015280   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | About to run SSH command:
	I0501 03:40:12.015289   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | exit 0
	I0501 03:40:12.138881   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | SSH cmd err, output: <nil>: 
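WaitForSSH shells out to the external ssh client shown above and keeps running "exit 0" until it succeeds. A hedged equivalent using golang.org/x/crypto/ssh, with host-key checking disabled to mirror -o StrictHostKeyChecking=no (the real code uses the external client):

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// waitForSSH dials the machine and runs "exit 0" until it succeeds.
func waitForSSH(addr, user, keyPath string, timeout time.Duration) error {
	keyBytes, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no
		Timeout:         10 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		client, err := ssh.Dial("tcp", addr, cfg)
		if err == nil {
			session, serr := client.NewSession()
			if serr == nil {
				runErr := session.Run("exit 0")
				session.Close()
				client.Close()
				if runErr == nil {
					return nil
				}
			} else {
				client.Close()
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("SSH not available on %s after %s", addr, timeout)
}

func main() {
	err := waitForSSH("192.168.72.158:22", "docker",
		"/home/jenkins/minikube-integration/18779-13391/.minikube/machines/default-k8s-diff-port-715118/id_rsa",
		2*time.Minute)
	fmt.Println(err)
}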
	I0501 03:40:12.139286   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetConfigRaw
	I0501 03:40:12.140056   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetIP
	I0501 03:40:12.142869   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.143322   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:12.143353   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.143662   69237 profile.go:143] Saving config to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/default-k8s-diff-port-715118/config.json ...
	I0501 03:40:12.143858   69237 machine.go:94] provisionDockerMachine start ...
	I0501 03:40:12.143876   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .DriverName
	I0501 03:40:12.144117   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHHostname
	I0501 03:40:12.146145   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.146535   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:12.146563   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.146712   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHPort
	I0501 03:40:12.146889   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:40:12.147021   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:40:12.147130   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHUsername
	I0501 03:40:12.147310   69237 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:12.147558   69237 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.158 22 <nil> <nil>}
	I0501 03:40:12.147574   69237 main.go:141] libmachine: About to run SSH command:
	hostname
	I0501 03:40:12.251357   69237 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0501 03:40:12.251387   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetMachineName
	I0501 03:40:12.251629   69237 buildroot.go:166] provisioning hostname "default-k8s-diff-port-715118"
	I0501 03:40:12.251658   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetMachineName
	I0501 03:40:12.251862   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHHostname
	I0501 03:40:12.254582   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.254892   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:12.254924   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.255073   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHPort
	I0501 03:40:12.255276   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:40:12.255435   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:40:12.255575   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHUsername
	I0501 03:40:12.255744   69237 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:12.255905   69237 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.158 22 <nil> <nil>}
	I0501 03:40:12.255917   69237 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-715118 && echo "default-k8s-diff-port-715118" | sudo tee /etc/hostname
	I0501 03:40:12.377588   69237 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-715118
	
	I0501 03:40:12.377628   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHHostname
	I0501 03:40:12.380627   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.380927   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:12.380958   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.381155   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHPort
	I0501 03:40:12.381372   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:40:12.381550   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:40:12.381723   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHUsername
	I0501 03:40:12.381907   69237 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:12.382148   69237 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.158 22 <nil> <nil>}
	I0501 03:40:12.382170   69237 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-715118' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-715118/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-715118' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0501 03:40:12.494424   69237 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0501 03:40:12.494454   69237 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18779-13391/.minikube CaCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18779-13391/.minikube}
	I0501 03:40:12.494484   69237 buildroot.go:174] setting up certificates
	I0501 03:40:12.494493   69237 provision.go:84] configureAuth start
	I0501 03:40:12.494504   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetMachineName
	I0501 03:40:12.494786   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetIP
	I0501 03:40:12.497309   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.497584   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:12.497616   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.497746   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHHostname
	I0501 03:40:12.500010   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.500302   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:12.500322   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.500449   69237 provision.go:143] copyHostCerts
	I0501 03:40:12.500505   69237 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem, removing ...
	I0501 03:40:12.500524   69237 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem
	I0501 03:40:12.500598   69237 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem (1078 bytes)
	I0501 03:40:12.500759   69237 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem, removing ...
	I0501 03:40:12.500772   69237 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem
	I0501 03:40:12.500815   69237 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem (1123 bytes)
	I0501 03:40:12.500891   69237 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem, removing ...
	I0501 03:40:12.500900   69237 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem
	I0501 03:40:12.500925   69237 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem (1675 bytes)
	I0501 03:40:12.500991   69237 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-715118 san=[127.0.0.1 192.168.72.158 default-k8s-diff-port-715118 localhost minikube]
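provision.go issues a server certificate signed by the machine CA with the SANs listed above (127.0.0.1, the VM IP, the hostname, localhost, minikube). A sketch with crypto/x509; for brevity the CA is generated in-process here, whereas the real code signs with the existing ca.pem/ca-key.pem:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func check(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Throwaway CA; the real code loads ca.pem / ca-key.pem from .minikube/certs.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	check(err)
	caCert, err := x509.ParseCertificate(caDER)
	check(err)

	// Server certificate with the SANs from the provision.go line above.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-715118"}},
		DNSNames:     []string{"default-k8s-diff-port-715118", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.158")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	check(err)

	check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}))
}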
	I0501 03:40:12.779037   69237 provision.go:177] copyRemoteCerts
	I0501 03:40:12.779104   69237 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0501 03:40:12.779139   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHHostname
	I0501 03:40:12.781800   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.782159   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:12.782195   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.782356   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHPort
	I0501 03:40:12.782655   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:40:12.782812   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHUsername
	I0501 03:40:12.782946   69237 sshutil.go:53] new ssh client: &{IP:192.168.72.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/default-k8s-diff-port-715118/id_rsa Username:docker}
	I0501 03:40:12.867622   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0501 03:40:12.897105   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0501 03:40:12.926675   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0501 03:40:12.955373   69237 provision.go:87] duration metric: took 460.865556ms to configureAuth
	I0501 03:40:12.955405   69237 buildroot.go:189] setting minikube options for container-runtime
	I0501 03:40:12.955606   69237 config.go:182] Loaded profile config "default-k8s-diff-port-715118": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 03:40:12.955700   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHHostname
	I0501 03:40:12.958286   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.958632   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:12.958670   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:12.958800   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHPort
	I0501 03:40:12.959007   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:40:12.959225   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:40:12.959374   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHUsername
	I0501 03:40:12.959554   69237 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:12.959729   69237 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.158 22 <nil> <nil>}
	I0501 03:40:12.959748   69237 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0501 03:40:13.253328   69237 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0501 03:40:13.253356   69237 machine.go:97] duration metric: took 1.109484866s to provisionDockerMachine
	I0501 03:40:13.253371   69237 start.go:293] postStartSetup for "default-k8s-diff-port-715118" (driver="kvm2")
	I0501 03:40:13.253385   69237 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0501 03:40:13.253405   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .DriverName
	I0501 03:40:13.253753   69237 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0501 03:40:13.253790   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHHostname
	I0501 03:40:13.256734   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:13.257187   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:13.257214   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:13.257345   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHPort
	I0501 03:40:13.257547   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:40:13.257708   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHUsername
	I0501 03:40:13.257856   69237 sshutil.go:53] new ssh client: &{IP:192.168.72.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/default-k8s-diff-port-715118/id_rsa Username:docker}
	I0501 03:40:13.353373   69237 ssh_runner.go:195] Run: cat /etc/os-release
	I0501 03:40:13.359653   69237 info.go:137] Remote host: Buildroot 2023.02.9
	I0501 03:40:13.359679   69237 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13391/.minikube/addons for local assets ...
	I0501 03:40:13.359747   69237 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13391/.minikube/files for local assets ...
	I0501 03:40:13.359854   69237 filesync.go:149] local asset: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem -> 207242.pem in /etc/ssl/certs
	I0501 03:40:13.359964   69237 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0501 03:40:13.370608   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem --> /etc/ssl/certs/207242.pem (1708 bytes)
	I0501 03:40:13.402903   69237 start.go:296] duration metric: took 149.518346ms for postStartSetup
	I0501 03:40:13.402946   69237 fix.go:56] duration metric: took 20.610871873s for fixHost
	I0501 03:40:13.402967   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHHostname
	I0501 03:40:13.406324   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:13.406762   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:13.406792   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:13.407028   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHPort
	I0501 03:40:13.407274   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:40:13.407505   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:40:13.407645   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHUsername
	I0501 03:40:13.407831   69237 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:13.408034   69237 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.158 22 <nil> <nil>}
	I0501 03:40:13.408045   69237 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0501 03:40:13.515775   69237 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714534813.490981768
	
	I0501 03:40:13.515814   69237 fix.go:216] guest clock: 1714534813.490981768
	I0501 03:40:13.515852   69237 fix.go:229] Guest: 2024-05-01 03:40:13.490981768 +0000 UTC Remote: 2024-05-01 03:40:13.402950224 +0000 UTC m=+262.796298359 (delta=88.031544ms)
	I0501 03:40:13.515884   69237 fix.go:200] guest clock delta is within tolerance: 88.031544ms
	I0501 03:40:13.515891   69237 start.go:83] releasing machines lock for "default-k8s-diff-port-715118", held for 20.723857967s
	I0501 03:40:13.515976   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .DriverName
	I0501 03:40:13.516272   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetIP
	I0501 03:40:13.519627   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:13.520098   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:13.520128   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:13.520304   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .DriverName
	I0501 03:40:13.520922   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .DriverName
	I0501 03:40:13.521122   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .DriverName
	I0501 03:40:13.521212   69237 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0501 03:40:13.521292   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHHostname
	I0501 03:40:13.521355   69237 ssh_runner.go:195] Run: cat /version.json
	I0501 03:40:13.521387   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHHostname
	I0501 03:40:13.524292   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:13.524328   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:13.524612   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:13.524672   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:13.524819   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:13.524948   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:13.524989   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHPort
	I0501 03:40:13.525033   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHPort
	I0501 03:40:13.525171   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:40:13.525196   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:40:13.525306   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHUsername
	I0501 03:40:13.525401   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHUsername
	I0501 03:40:13.525490   69237 sshutil.go:53] new ssh client: &{IP:192.168.72.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/default-k8s-diff-port-715118/id_rsa Username:docker}
	I0501 03:40:13.525553   69237 sshutil.go:53] new ssh client: &{IP:192.168.72.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/default-k8s-diff-port-715118/id_rsa Username:docker}
	I0501 03:40:13.628623   69237 ssh_runner.go:195] Run: systemctl --version
	I0501 03:40:13.636013   69237 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0501 03:40:13.787414   69237 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0501 03:40:13.795777   69237 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0501 03:40:13.795867   69237 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0501 03:40:13.822287   69237 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0501 03:40:13.822326   69237 start.go:494] detecting cgroup driver to use...
	I0501 03:40:13.822507   69237 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0501 03:40:13.841310   69237 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 03:40:13.857574   69237 docker.go:217] disabling cri-docker service (if available) ...
	I0501 03:40:13.857645   69237 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0501 03:40:13.872903   69237 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0501 03:40:13.889032   69237 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0501 03:40:14.020563   69237 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0501 03:40:14.222615   69237 docker.go:233] disabling docker service ...
	I0501 03:40:14.222691   69237 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0501 03:40:14.245841   69237 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0501 03:40:14.261001   69237 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0501 03:40:14.385943   69237 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0501 03:40:14.516899   69237 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0501 03:40:14.545138   69237 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 03:40:14.570308   69237 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0501 03:40:14.570373   69237 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:14.586460   69237 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0501 03:40:14.586535   69237 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:14.598947   69237 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:14.617581   69237 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:14.630097   69237 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0501 03:40:14.642379   69237 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:14.653723   69237 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:14.674508   69237 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:14.685890   69237 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0501 03:40:14.696560   69237 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0501 03:40:14.696614   69237 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0501 03:40:14.713050   69237 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0501 03:40:14.723466   69237 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:40:14.884910   69237 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0501 03:40:15.030618   69237 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0501 03:40:15.030689   69237 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0501 03:40:15.036403   69237 start.go:562] Will wait 60s for crictl version
	I0501 03:40:15.036470   69237 ssh_runner.go:195] Run: which crictl
	I0501 03:40:15.040924   69237 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0501 03:40:15.082944   69237 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0501 03:40:15.083037   69237 ssh_runner.go:195] Run: crio --version
	I0501 03:40:15.123492   69237 ssh_runner.go:195] Run: crio --version
	I0501 03:40:15.160739   69237 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0501 03:40:15.162026   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetIP
	I0501 03:40:15.164966   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:15.165378   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:40:15.165417   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:40:15.165621   69237 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0501 03:40:15.171717   69237 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0501 03:40:15.190203   69237 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-715118 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-715118 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.158 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0501 03:40:15.190359   69237 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0501 03:40:15.190439   69237 ssh_runner.go:195] Run: sudo crictl images --output json
	I0501 03:40:15.240549   69237 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0501 03:40:15.240606   69237 ssh_runner.go:195] Run: which lz4
	I0501 03:40:15.246523   69237 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0501 03:40:15.253094   69237 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0501 03:40:15.253139   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0501 03:40:13.544100   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .Start
	I0501 03:40:13.544328   69580 main.go:141] libmachine: (old-k8s-version-503971) Ensuring networks are active...
	I0501 03:40:13.545238   69580 main.go:141] libmachine: (old-k8s-version-503971) Ensuring network default is active
	I0501 03:40:13.545621   69580 main.go:141] libmachine: (old-k8s-version-503971) Ensuring network mk-old-k8s-version-503971 is active
	I0501 03:40:13.546072   69580 main.go:141] libmachine: (old-k8s-version-503971) Getting domain xml...
	I0501 03:40:13.546928   69580 main.go:141] libmachine: (old-k8s-version-503971) Creating domain...
	I0501 03:40:14.858558   69580 main.go:141] libmachine: (old-k8s-version-503971) Waiting to get IP...
	I0501 03:40:14.859690   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:14.860108   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:40:14.860215   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:40:14.860103   70499 retry.go:31] will retry after 294.057322ms: waiting for machine to come up
	I0501 03:40:15.155490   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:15.155922   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:40:15.155954   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:40:15.155870   70499 retry.go:31] will retry after 281.238966ms: waiting for machine to come up
	I0501 03:40:15.439196   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:15.439735   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:40:15.439783   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:40:15.439697   70499 retry.go:31] will retry after 429.353689ms: waiting for machine to come up
	I0501 03:40:15.871266   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:15.871947   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:40:15.871970   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:40:15.871895   70499 retry.go:31] will retry after 478.685219ms: waiting for machine to come up
	I0501 03:40:16.352661   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:16.353125   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:40:16.353161   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:40:16.353087   70499 retry.go:31] will retry after 642.905156ms: waiting for machine to come up
	I0501 03:40:14.235378   68864 node_ready.go:53] node "embed-certs-277128" has status "Ready":"False"
	I0501 03:40:15.735465   68864 node_ready.go:49] node "embed-certs-277128" has status "Ready":"True"
	I0501 03:40:15.735494   68864 node_ready.go:38] duration metric: took 7.50546727s for node "embed-certs-277128" to be "Ready" ...
	I0501 03:40:15.735503   68864 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 03:40:15.743215   68864 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-sjplt" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:17.752821   68864 pod_ready.go:102] pod "coredns-7db6d8ff4d-sjplt" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:17.121023   69237 crio.go:462] duration metric: took 1.874524806s to copy over tarball
	I0501 03:40:17.121097   69237 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0501 03:40:19.792970   69237 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.671840765s)
	I0501 03:40:19.793004   69237 crio.go:469] duration metric: took 2.67194801s to extract the tarball
	I0501 03:40:19.793014   69237 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0501 03:40:19.834845   69237 ssh_runner.go:195] Run: sudo crictl images --output json
	I0501 03:40:19.896841   69237 crio.go:514] all images are preloaded for cri-o runtime.
	I0501 03:40:19.896881   69237 cache_images.go:84] Images are preloaded, skipping loading
	I0501 03:40:19.896892   69237 kubeadm.go:928] updating node { 192.168.72.158 8444 v1.30.0 crio true true} ...
	I0501 03:40:19.897027   69237 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-715118 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.158
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-715118 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0501 03:40:19.897113   69237 ssh_runner.go:195] Run: crio config
	I0501 03:40:19.953925   69237 cni.go:84] Creating CNI manager for ""
	I0501 03:40:19.953956   69237 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0501 03:40:19.953971   69237 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0501 03:40:19.953991   69237 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.158 APIServerPort:8444 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-715118 NodeName:default-k8s-diff-port-715118 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.158"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.158 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0501 03:40:19.954133   69237 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.158
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-715118"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.158
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.158"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0501 03:40:19.954198   69237 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0501 03:40:19.967632   69237 binaries.go:44] Found k8s binaries, skipping transfer
	I0501 03:40:19.967708   69237 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0501 03:40:19.984161   69237 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0501 03:40:20.006540   69237 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0501 03:40:20.029218   69237 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0501 03:40:20.051612   69237 ssh_runner.go:195] Run: grep 192.168.72.158	control-plane.minikube.internal$ /etc/hosts
	I0501 03:40:20.056502   69237 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.158	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0501 03:40:20.071665   69237 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:40:20.194289   69237 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 03:40:20.215402   69237 certs.go:68] Setting up /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/default-k8s-diff-port-715118 for IP: 192.168.72.158
	I0501 03:40:20.215440   69237 certs.go:194] generating shared ca certs ...
	I0501 03:40:20.215471   69237 certs.go:226] acquiring lock for ca certs: {Name:mk85a75d14c00780d4bf09cd9bdaf0f96f1b8fce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:40:20.215698   69237 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key
	I0501 03:40:20.215769   69237 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key
	I0501 03:40:20.215785   69237 certs.go:256] generating profile certs ...
	I0501 03:40:20.215922   69237 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/default-k8s-diff-port-715118/client.key
	I0501 03:40:20.216023   69237 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/default-k8s-diff-port-715118/apiserver.key.91bc3872
	I0501 03:40:20.216094   69237 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/default-k8s-diff-port-715118/proxy-client.key
	I0501 03:40:20.216275   69237 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem (1338 bytes)
	W0501 03:40:20.216321   69237 certs.go:480] ignoring /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724_empty.pem, impossibly tiny 0 bytes
	I0501 03:40:20.216337   69237 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem (1679 bytes)
	I0501 03:40:20.216375   69237 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem (1078 bytes)
	I0501 03:40:20.216439   69237 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem (1123 bytes)
	I0501 03:40:20.216472   69237 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem (1675 bytes)
	I0501 03:40:20.216560   69237 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem (1708 bytes)
	I0501 03:40:20.217306   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0501 03:40:20.256162   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0501 03:40:20.293643   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0501 03:40:20.329175   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0501 03:40:20.367715   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/default-k8s-diff-port-715118/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0501 03:40:20.400024   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/default-k8s-diff-port-715118/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0501 03:40:20.428636   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/default-k8s-diff-port-715118/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0501 03:40:20.458689   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/default-k8s-diff-port-715118/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0501 03:40:20.487619   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem --> /usr/share/ca-certificates/207242.pem (1708 bytes)
	I0501 03:40:20.518140   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0501 03:40:20.547794   69237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem --> /usr/share/ca-certificates/20724.pem (1338 bytes)
	I0501 03:40:20.580453   69237 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0501 03:40:20.605211   69237 ssh_runner.go:195] Run: openssl version
	I0501 03:40:20.612269   69237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/207242.pem && ln -fs /usr/share/ca-certificates/207242.pem /etc/ssl/certs/207242.pem"
	I0501 03:40:20.626575   69237 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/207242.pem
	I0501 03:40:20.632370   69237 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  1 02:20 /usr/share/ca-certificates/207242.pem
	I0501 03:40:20.632439   69237 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/207242.pem
	I0501 03:40:20.639563   69237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/207242.pem /etc/ssl/certs/3ec20f2e.0"
	I0501 03:40:16.997533   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:16.998034   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:40:16.998076   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:40:16.997984   70499 retry.go:31] will retry after 596.56948ms: waiting for machine to come up
	I0501 03:40:17.596671   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:17.597182   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:40:17.597207   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:40:17.597132   70499 retry.go:31] will retry after 770.742109ms: waiting for machine to come up
	I0501 03:40:18.369337   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:18.369833   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:40:18.369864   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:40:18.369780   70499 retry.go:31] will retry after 1.382502808s: waiting for machine to come up
	I0501 03:40:19.753936   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:19.754419   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:40:19.754458   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:40:19.754363   70499 retry.go:31] will retry after 1.344792989s: waiting for machine to come up
	I0501 03:40:21.101047   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:21.101474   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:40:21.101514   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:40:21.101442   70499 retry.go:31] will retry after 1.636964906s: waiting for machine to come up
	I0501 03:40:20.252239   68864 pod_ready.go:102] pod "coredns-7db6d8ff4d-sjplt" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:22.751407   68864 pod_ready.go:92] pod "coredns-7db6d8ff4d-sjplt" in "kube-system" namespace has status "Ready":"True"
	I0501 03:40:22.751431   68864 pod_ready.go:81] duration metric: took 7.008190087s for pod "coredns-7db6d8ff4d-sjplt" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:22.751442   68864 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:22.757104   68864 pod_ready.go:92] pod "etcd-embed-certs-277128" in "kube-system" namespace has status "Ready":"True"
	I0501 03:40:22.757124   68864 pod_ready.go:81] duration metric: took 5.677117ms for pod "etcd-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:22.757141   68864 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:22.763083   68864 pod_ready.go:92] pod "kube-apiserver-embed-certs-277128" in "kube-system" namespace has status "Ready":"True"
	I0501 03:40:22.763107   68864 pod_ready.go:81] duration metric: took 5.958961ms for pod "kube-apiserver-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:22.763119   68864 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:22.768163   68864 pod_ready.go:92] pod "kube-controller-manager-embed-certs-277128" in "kube-system" namespace has status "Ready":"True"
	I0501 03:40:22.768182   68864 pod_ready.go:81] duration metric: took 5.055934ms for pod "kube-controller-manager-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:22.768193   68864 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-phx7x" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:22.772478   68864 pod_ready.go:92] pod "kube-proxy-phx7x" in "kube-system" namespace has status "Ready":"True"
	I0501 03:40:22.772497   68864 pod_ready.go:81] duration metric: took 4.297358ms for pod "kube-proxy-phx7x" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:22.772505   68864 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:23.149692   68864 pod_ready.go:92] pod "kube-scheduler-embed-certs-277128" in "kube-system" namespace has status "Ready":"True"
	I0501 03:40:23.149726   68864 pod_ready.go:81] duration metric: took 377.213314ms for pod "kube-scheduler-embed-certs-277128" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:23.149741   68864 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:20.653202   69237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0501 03:40:20.878582   69237 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:40:20.884671   69237 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  1 02:08 /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:40:20.884755   69237 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:40:20.891633   69237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0501 03:40:20.906032   69237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20724.pem && ln -fs /usr/share/ca-certificates/20724.pem /etc/ssl/certs/20724.pem"
	I0501 03:40:20.924491   69237 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20724.pem
	I0501 03:40:20.931346   69237 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  1 02:20 /usr/share/ca-certificates/20724.pem
	I0501 03:40:20.931421   69237 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20724.pem
	I0501 03:40:20.937830   69237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20724.pem /etc/ssl/certs/51391683.0"
	I0501 03:40:20.951239   69237 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0501 03:40:20.956883   69237 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0501 03:40:20.964048   69237 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0501 03:40:20.971156   69237 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0501 03:40:20.978243   69237 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0501 03:40:20.985183   69237 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0501 03:40:20.991709   69237 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0501 03:40:20.998390   69237 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-715118 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-715118 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.158 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 03:40:20.998509   69237 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0501 03:40:20.998558   69237 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0501 03:40:21.051469   69237 cri.go:89] found id: ""
	I0501 03:40:21.051575   69237 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0501 03:40:21.063280   69237 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0501 03:40:21.063301   69237 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0501 03:40:21.063307   69237 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0501 03:40:21.063381   69237 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0501 03:40:21.077380   69237 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0501 03:40:21.078445   69237 kubeconfig.go:125] found "default-k8s-diff-port-715118" server: "https://192.168.72.158:8444"
	I0501 03:40:21.080872   69237 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0501 03:40:21.095004   69237 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.158
	I0501 03:40:21.095045   69237 kubeadm.go:1154] stopping kube-system containers ...
	I0501 03:40:21.095059   69237 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0501 03:40:21.095123   69237 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0501 03:40:21.151629   69237 cri.go:89] found id: ""
	I0501 03:40:21.151711   69237 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0501 03:40:21.177077   69237 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0501 03:40:21.192057   69237 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0501 03:40:21.192087   69237 kubeadm.go:156] found existing configuration files:
	
	I0501 03:40:21.192146   69237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0501 03:40:21.206784   69237 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0501 03:40:21.206870   69237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0501 03:40:21.221942   69237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0501 03:40:21.236442   69237 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0501 03:40:21.236516   69237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0501 03:40:21.251285   69237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0501 03:40:21.265997   69237 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0501 03:40:21.266049   69237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0501 03:40:21.281137   69237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0501 03:40:21.297713   69237 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0501 03:40:21.297783   69237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0501 03:40:21.314264   69237 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0501 03:40:21.328605   69237 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:21.478475   69237 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:22.161692   69237 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:22.432136   69237 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:22.514744   69237 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:22.597689   69237 api_server.go:52] waiting for apiserver process to appear ...
	I0501 03:40:22.597770   69237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:23.098146   69237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:23.597831   69237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:23.629375   69237 api_server.go:72] duration metric: took 1.031684055s to wait for apiserver process to appear ...
	I0501 03:40:23.629462   69237 api_server.go:88] waiting for apiserver healthz status ...
	I0501 03:40:23.629500   69237 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8444/healthz ...
	I0501 03:40:23.630045   69237 api_server.go:269] stopped: https://192.168.72.158:8444/healthz: Get "https://192.168.72.158:8444/healthz": dial tcp 192.168.72.158:8444: connect: connection refused
	I0501 03:40:24.129831   69237 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8444/healthz ...
	I0501 03:40:22.740241   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:22.740692   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:40:22.740722   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:40:22.740656   70499 retry.go:31] will retry after 1.899831455s: waiting for machine to come up
	I0501 03:40:24.642609   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:24.643075   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:40:24.643104   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:40:24.643019   70499 retry.go:31] will retry after 3.503333894s: waiting for machine to come up
	I0501 03:40:25.157335   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:27.160083   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:27.091079   69237 api_server.go:279] https://192.168.72.158:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0501 03:40:27.091134   69237 api_server.go:103] status: https://192.168.72.158:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0501 03:40:27.091152   69237 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8444/healthz ...
	I0501 03:40:27.163481   69237 api_server.go:279] https://192.168.72.158:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:40:27.163509   69237 api_server.go:103] status: https://192.168.72.158:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:40:27.163522   69237 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8444/healthz ...
	I0501 03:40:27.175097   69237 api_server.go:279] https://192.168.72.158:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:40:27.175129   69237 api_server.go:103] status: https://192.168.72.158:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:40:27.629613   69237 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8444/healthz ...
	I0501 03:40:27.637166   69237 api_server.go:279] https://192.168.72.158:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:40:27.637202   69237 api_server.go:103] status: https://192.168.72.158:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:40:28.130467   69237 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8444/healthz ...
	I0501 03:40:28.148799   69237 api_server.go:279] https://192.168.72.158:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:40:28.148823   69237 api_server.go:103] status: https://192.168.72.158:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:40:28.630500   69237 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8444/healthz ...
	I0501 03:40:28.642856   69237 api_server.go:279] https://192.168.72.158:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:40:28.642890   69237 api_server.go:103] status: https://192.168.72.158:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:40:29.130453   69237 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8444/healthz ...
	I0501 03:40:29.137783   69237 api_server.go:279] https://192.168.72.158:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:40:29.137819   69237 api_server.go:103] status: https://192.168.72.158:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:40:29.630448   69237 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8444/healthz ...
	I0501 03:40:29.634736   69237 api_server.go:279] https://192.168.72.158:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:40:29.634764   69237 api_server.go:103] status: https://192.168.72.158:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:40:30.130371   69237 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8444/healthz ...
	I0501 03:40:30.134727   69237 api_server.go:279] https://192.168.72.158:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:40:30.134755   69237 api_server.go:103] status: https://192.168.72.158:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:40:30.630555   69237 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8444/healthz ...
	I0501 03:40:30.637025   69237 api_server.go:279] https://192.168.72.158:8444/healthz returned 200:
	ok
	I0501 03:40:30.644179   69237 api_server.go:141] control plane version: v1.30.0
	I0501 03:40:30.644209   69237 api_server.go:131] duration metric: took 7.014727807s to wait for apiserver health ...
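(Editorial sketch of the healthz wait recorded above: keep GETting /healthz, treat 403 before RBAC bootstrap and 500 with failing [-] poststarthooks as "not ready yet", and stop on 200 with body "ok". This is not minikube's implementation; TLS verification is skipped purely for illustration.)

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitForHealthz polls the apiserver /healthz endpoint until it returns
    // 200 or the timeout expires, printing the verbose check output otherwise.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // body is simply "ok"
                }
                // 403 before anonymous access is allowed, 500 while hooks still fail
                fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver never became healthy at %s", url)
    }

    func main() {
        _ = waitForHealthz("https://192.168.72.158:8444/healthz", time.Minute)
    }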
	I0501 03:40:30.644217   69237 cni.go:84] Creating CNI manager for ""
	I0501 03:40:30.644223   69237 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0501 03:40:30.646018   69237 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0501 03:40:30.647222   69237 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
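(Hedged illustration only: after the mkdir above, a bridge CNI configuration is dropped into /etc/cni/net.d. The file name, subnet, and exact fields below are assumptions based on the standard bridge/host-local plugins, not the file minikube actually writes.)

    package main

    import "os"

    // bridgeConf is a generic single-plugin bridge CNI config using host-local IPAM.
    const bridgeConf = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    }`

    func main() {
        // assumes /etc/cni/net.d already exists (the "mkdir -p" run above)
        _ = os.WriteFile("/etc/cni/net.d/bridge.conf", []byte(bridgeConf), 0o644)
    }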
	I0501 03:40:28.148102   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:28.148506   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | unable to find current IP address of domain old-k8s-version-503971 in network mk-old-k8s-version-503971
	I0501 03:40:28.148547   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | I0501 03:40:28.148463   70499 retry.go:31] will retry after 4.150508159s: waiting for machine to come up
	I0501 03:40:33.783990   68640 start.go:364] duration metric: took 56.072338201s to acquireMachinesLock for "no-preload-892672"
	I0501 03:40:33.784047   68640 start.go:96] Skipping create...Using existing machine configuration
	I0501 03:40:33.784056   68640 fix.go:54] fixHost starting: 
	I0501 03:40:33.784468   68640 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:40:33.784504   68640 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:40:33.801460   68640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37745
	I0501 03:40:33.802023   68640 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:40:33.802634   68640 main.go:141] libmachine: Using API Version  1
	I0501 03:40:33.802669   68640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:40:33.803062   68640 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:40:33.803262   68640 main.go:141] libmachine: (no-preload-892672) Calling .DriverName
	I0501 03:40:33.803379   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetState
	I0501 03:40:33.805241   68640 fix.go:112] recreateIfNeeded on no-preload-892672: state=Stopped err=<nil>
	I0501 03:40:33.805266   68640 main.go:141] libmachine: (no-preload-892672) Calling .DriverName
	W0501 03:40:33.805452   68640 fix.go:138] unexpected machine state, will restart: <nil>
	I0501 03:40:33.807020   68640 out.go:177] * Restarting existing kvm2 VM for "no-preload-892672" ...
	I0501 03:40:29.656911   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:32.158119   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:32.303427   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.303804   69580 main.go:141] libmachine: (old-k8s-version-503971) Found IP for machine: 192.168.61.104
	I0501 03:40:32.303837   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has current primary IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.303851   69580 main.go:141] libmachine: (old-k8s-version-503971) Reserving static IP address...
	I0501 03:40:32.304254   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "old-k8s-version-503971", mac: "52:54:00:7d:68:83", ip: "192.168.61.104"} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:32.304286   69580 main.go:141] libmachine: (old-k8s-version-503971) Reserved static IP address: 192.168.61.104
	I0501 03:40:32.304305   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | skip adding static IP to network mk-old-k8s-version-503971 - found existing host DHCP lease matching {name: "old-k8s-version-503971", mac: "52:54:00:7d:68:83", ip: "192.168.61.104"}
	I0501 03:40:32.304323   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | Getting to WaitForSSH function...
	I0501 03:40:32.304337   69580 main.go:141] libmachine: (old-k8s-version-503971) Waiting for SSH to be available...
	I0501 03:40:32.306619   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.306972   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:32.307011   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.307114   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | Using SSH client type: external
	I0501 03:40:32.307138   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | Using SSH private key: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/old-k8s-version-503971/id_rsa (-rw-------)
	I0501 03:40:32.307174   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.104 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18779-13391/.minikube/machines/old-k8s-version-503971/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0501 03:40:32.307188   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | About to run SSH command:
	I0501 03:40:32.307224   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | exit 0
	I0501 03:40:32.438508   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | SSH cmd err, output: <nil>: 
	I0501 03:40:32.438882   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetConfigRaw
	I0501 03:40:32.439452   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetIP
	I0501 03:40:32.441984   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.442342   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:32.442369   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.442668   69580 profile.go:143] Saving config to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/config.json ...
	I0501 03:40:32.442875   69580 machine.go:94] provisionDockerMachine start ...
	I0501 03:40:32.442897   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .DriverName
	I0501 03:40:32.443077   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHHostname
	I0501 03:40:32.445129   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.445442   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:32.445480   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.445628   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHPort
	I0501 03:40:32.445806   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:32.445974   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:32.446122   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHUsername
	I0501 03:40:32.446314   69580 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:32.446548   69580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.104 22 <nil> <nil>}
	I0501 03:40:32.446564   69580 main.go:141] libmachine: About to run SSH command:
	hostname
	I0501 03:40:32.559346   69580 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0501 03:40:32.559379   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetMachineName
	I0501 03:40:32.559630   69580 buildroot.go:166] provisioning hostname "old-k8s-version-503971"
	I0501 03:40:32.559654   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetMachineName
	I0501 03:40:32.559832   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHHostname
	I0501 03:40:32.562176   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.562547   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:32.562582   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.562716   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHPort
	I0501 03:40:32.562892   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:32.563019   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:32.563161   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHUsername
	I0501 03:40:32.563332   69580 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:32.563545   69580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.104 22 <nil> <nil>}
	I0501 03:40:32.563564   69580 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-503971 && echo "old-k8s-version-503971" | sudo tee /etc/hostname
	I0501 03:40:32.699918   69580 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-503971
	
	I0501 03:40:32.699961   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHHostname
	I0501 03:40:32.702721   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.703134   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:32.703158   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.703361   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHPort
	I0501 03:40:32.703547   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:32.703744   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:32.703881   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHUsername
	I0501 03:40:32.704037   69580 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:32.704199   69580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.104 22 <nil> <nil>}
	I0501 03:40:32.704215   69580 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-503971' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-503971/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-503971' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0501 03:40:32.830277   69580 main.go:141] libmachine: SSH cmd err, output: <nil>: 
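(Editorial sketch of the /etc/hosts logic in the SSH command above: if no line already ends with the node name, either rewrite an existing 127.0.1.1 entry or append one. The Go version below operates on a string rather than editing /etc/hosts in place, and is only an illustration of the same idea.)

    package main

    import (
        "fmt"
        "strings"
    )

    // ensureHostsEntry mirrors the grep/sed/tee shell logic: keep the file as-is
    // if the name is already the last field of some line, otherwise rewrite the
    // 127.0.1.1 line or append a new one.
    func ensureHostsEntry(hosts, name string) string {
        lines := strings.Split(hosts, "\n")
        for _, l := range lines {
            f := strings.Fields(l)
            if len(f) > 1 && f[len(f)-1] == name {
                return hosts // already present
            }
        }
        for i, l := range lines {
            if strings.HasPrefix(l, "127.0.1.1") {
                lines[i] = "127.0.1.1 " + name // rewrite the loopback alias line
                return strings.Join(lines, "\n")
            }
        }
        return hosts + "\n127.0.1.1 " + name + "\n" // otherwise append
    }

    func main() {
        fmt.Println(ensureHostsEntry("127.0.0.1 localhost\n127.0.1.1 minikube", "old-k8s-version-503971"))
    }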
	I0501 03:40:32.830307   69580 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18779-13391/.minikube CaCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18779-13391/.minikube}
	I0501 03:40:32.830323   69580 buildroot.go:174] setting up certificates
	I0501 03:40:32.830331   69580 provision.go:84] configureAuth start
	I0501 03:40:32.830340   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetMachineName
	I0501 03:40:32.830629   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetIP
	I0501 03:40:32.833575   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.833887   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:32.833932   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.834070   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHHostname
	I0501 03:40:32.836309   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.836664   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:32.836691   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:32.836824   69580 provision.go:143] copyHostCerts
	I0501 03:40:32.836885   69580 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem, removing ...
	I0501 03:40:32.836895   69580 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem
	I0501 03:40:32.836945   69580 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem (1078 bytes)
	I0501 03:40:32.837046   69580 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem, removing ...
	I0501 03:40:32.837054   69580 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem
	I0501 03:40:32.837072   69580 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem (1123 bytes)
	I0501 03:40:32.837129   69580 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem, removing ...
	I0501 03:40:32.837136   69580 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem
	I0501 03:40:32.837152   69580 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem (1675 bytes)
	I0501 03:40:32.837202   69580 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-503971 san=[127.0.0.1 192.168.61.104 localhost minikube old-k8s-version-503971]
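(Editorial sketch of the server-cert generation logged above: issue a certificate whose SANs cover the loopback address, the machine IP, and the node names. For brevity this example is self-signed, whereas minikube signs with its CA key pair; the key size and validity period are assumptions.)

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, _ := rsa.GenerateKey(rand.Reader, 2048)
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-503971"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs matching the san=[...] list in the log line above
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.104")},
            DNSNames:    []string{"localhost", "minikube", "old-k8s-version-503971"},
        }
        der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }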
	I0501 03:40:33.047948   69580 provision.go:177] copyRemoteCerts
	I0501 03:40:33.048004   69580 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0501 03:40:33.048030   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHHostname
	I0501 03:40:33.050591   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.050975   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:33.051012   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.051142   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHPort
	I0501 03:40:33.051310   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:33.051465   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHUsername
	I0501 03:40:33.051574   69580 sshutil.go:53] new ssh client: &{IP:192.168.61.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/old-k8s-version-503971/id_rsa Username:docker}
	I0501 03:40:33.143991   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0501 03:40:33.175494   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0501 03:40:33.204770   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0501 03:40:33.232728   69580 provision.go:87] duration metric: took 402.386279ms to configureAuth
	I0501 03:40:33.232756   69580 buildroot.go:189] setting minikube options for container-runtime
	I0501 03:40:33.232962   69580 config.go:182] Loaded profile config "old-k8s-version-503971": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0501 03:40:33.233051   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHHostname
	I0501 03:40:33.235656   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.236006   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:33.236038   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.236162   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHPort
	I0501 03:40:33.236339   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:33.236484   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:33.236633   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHUsername
	I0501 03:40:33.236817   69580 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:33.236980   69580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.104 22 <nil> <nil>}
	I0501 03:40:33.236997   69580 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0501 03:40:33.526370   69580 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0501 03:40:33.526419   69580 machine.go:97] duration metric: took 1.083510254s to provisionDockerMachine
	I0501 03:40:33.526432   69580 start.go:293] postStartSetup for "old-k8s-version-503971" (driver="kvm2")
	I0501 03:40:33.526443   69580 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0501 03:40:33.526470   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .DriverName
	I0501 03:40:33.526788   69580 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0501 03:40:33.526831   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHHostname
	I0501 03:40:33.529815   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.530209   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:33.530268   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.530364   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHPort
	I0501 03:40:33.530559   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:33.530741   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHUsername
	I0501 03:40:33.530909   69580 sshutil.go:53] new ssh client: &{IP:192.168.61.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/old-k8s-version-503971/id_rsa Username:docker}
	I0501 03:40:33.620224   69580 ssh_runner.go:195] Run: cat /etc/os-release
	I0501 03:40:33.625417   69580 info.go:137] Remote host: Buildroot 2023.02.9
	I0501 03:40:33.625447   69580 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13391/.minikube/addons for local assets ...
	I0501 03:40:33.625511   69580 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13391/.minikube/files for local assets ...
	I0501 03:40:33.625594   69580 filesync.go:149] local asset: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem -> 207242.pem in /etc/ssl/certs
	I0501 03:40:33.625691   69580 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0501 03:40:33.637311   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem --> /etc/ssl/certs/207242.pem (1708 bytes)
	I0501 03:40:33.666707   69580 start.go:296] duration metric: took 140.263297ms for postStartSetup
	I0501 03:40:33.666740   69580 fix.go:56] duration metric: took 20.150640355s for fixHost
	I0501 03:40:33.666758   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHHostname
	I0501 03:40:33.669394   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.669822   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:33.669852   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.669963   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHPort
	I0501 03:40:33.670213   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:33.670388   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:33.670589   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHUsername
	I0501 03:40:33.670794   69580 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:33.670972   69580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.104 22 <nil> <nil>}
	I0501 03:40:33.670984   69580 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0501 03:40:33.783810   69580 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714534833.728910946
	
	I0501 03:40:33.783839   69580 fix.go:216] guest clock: 1714534833.728910946
	I0501 03:40:33.783850   69580 fix.go:229] Guest: 2024-05-01 03:40:33.728910946 +0000 UTC Remote: 2024-05-01 03:40:33.666743363 +0000 UTC m=+232.246108464 (delta=62.167583ms)
	I0501 03:40:33.783893   69580 fix.go:200] guest clock delta is within tolerance: 62.167583ms
	I0501 03:40:33.783903   69580 start.go:83] releasing machines lock for "old-k8s-version-503971", held for 20.267840723s
	I0501 03:40:33.783933   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .DriverName
	I0501 03:40:33.784203   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetIP
	I0501 03:40:33.786846   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.787202   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:33.787230   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.787385   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .DriverName
	I0501 03:40:33.787837   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .DriverName
	I0501 03:40:33.787997   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .DriverName
	I0501 03:40:33.788085   69580 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0501 03:40:33.788126   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHHostname
	I0501 03:40:33.788252   69580 ssh_runner.go:195] Run: cat /version.json
	I0501 03:40:33.788279   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHHostname
	I0501 03:40:33.790748   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.791086   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.791118   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:33.791142   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.791435   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHPort
	I0501 03:40:33.791491   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:33.791532   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:33.791618   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:33.791740   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHPort
	I0501 03:40:33.791815   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHUsername
	I0501 03:40:33.791937   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHKeyPath
	I0501 03:40:33.792014   69580 sshutil.go:53] new ssh client: &{IP:192.168.61.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/old-k8s-version-503971/id_rsa Username:docker}
	I0501 03:40:33.792069   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetSSHUsername
	I0501 03:40:33.792206   69580 sshutil.go:53] new ssh client: &{IP:192.168.61.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/old-k8s-version-503971/id_rsa Username:docker}
	I0501 03:40:33.876242   69580 ssh_runner.go:195] Run: systemctl --version
	I0501 03:40:33.901692   69580 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0501 03:40:34.056758   69580 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0501 03:40:34.065070   69580 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0501 03:40:34.065156   69580 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0501 03:40:34.085337   69580 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
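The find invocation above appears with its shell quoting stripped in the log; typed directly into a shell, the parentheses and globs need escaping. A guest-side sketch of the same rename-to-.mk_disabled step under that assumption:

    # Disable any bridge/podman CNI configs by renaming them, as in the log line above.
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -printf '%p, ' -exec sh -c 'sudo mv {} {}.mk_disabled' \;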
	I0501 03:40:34.085364   69580 start.go:494] detecting cgroup driver to use...
	I0501 03:40:34.085432   69580 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0501 03:40:34.102723   69580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 03:40:34.118792   69580 docker.go:217] disabling cri-docker service (if available) ...
	I0501 03:40:34.118847   69580 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0501 03:40:34.133978   69580 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0501 03:40:34.153890   69580 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0501 03:40:34.283815   69580 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0501 03:40:34.475851   69580 docker.go:233] disabling docker service ...
	I0501 03:40:34.475926   69580 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0501 03:40:34.500769   69580 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0501 03:40:34.517315   69580 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0501 03:40:34.674322   69580 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0501 03:40:34.833281   69580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0501 03:40:34.852610   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 03:40:34.879434   69580 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0501 03:40:34.879517   69580 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:34.892197   69580 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0501 03:40:34.892269   69580 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:34.904437   69580 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:34.919950   69580 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:34.933772   69580 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0501 03:40:34.947563   69580 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0501 03:40:34.965724   69580 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0501 03:40:34.965795   69580 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0501 03:40:34.984251   69580 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0501 03:40:34.997050   69580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:40:35.155852   69580 ssh_runner.go:195] Run: sudo systemctl restart crio
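Taken together, the CRI-O reconfiguration above reduces to four sed edits of the 02-crio.conf drop-in followed by a restart; condensed here with the same paths and values as in the log:

    # Pause image, cgroup manager and conmon cgroup, then restart CRI-O (guest side).
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
    sudo systemctl daemon-reload && sudo systemctl restart crio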
	I0501 03:40:35.362090   69580 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0501 03:40:35.362164   69580 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0501 03:40:35.368621   69580 start.go:562] Will wait 60s for crictl version
	I0501 03:40:35.368701   69580 ssh_runner.go:195] Run: which crictl
	I0501 03:40:35.373792   69580 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0501 03:40:35.436905   69580 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0501 03:40:35.437018   69580 ssh_runner.go:195] Run: crio --version
	I0501 03:40:35.485130   69580 ssh_runner.go:195] Run: crio --version
	I0501 03:40:35.528700   69580 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0501 03:40:30.661395   69237 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0501 03:40:30.682810   69237 system_pods.go:43] waiting for kube-system pods to appear ...
	I0501 03:40:30.694277   69237 system_pods.go:59] 8 kube-system pods found
	I0501 03:40:30.694326   69237 system_pods.go:61] "coredns-7db6d8ff4d-9r7dt" [75d43a25-d309-427e-befc-7f1851b90d8c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0501 03:40:30.694343   69237 system_pods.go:61] "etcd-default-k8s-diff-port-715118" [21f6a4cd-f662-4865-9208-83959f0a6782] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0501 03:40:30.694354   69237 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-715118" [4dc3e45e-a5d8-480f-a8e8-763ecab0976b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0501 03:40:30.694369   69237 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-715118" [340580a3-040e-48fc-b89c-36a4f6fccfc2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0501 03:40:30.694376   69237 system_pods.go:61] "kube-proxy-vg7ts" [e55f3363-178c-427a-819d-0dc94c3116f3] Running
	I0501 03:40:30.694388   69237 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-715118" [b850fc4a-da6b-4714-98bb-e36e185880dd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0501 03:40:30.694417   69237 system_pods.go:61] "metrics-server-569cc877fc-2btjj" [9b8ff94d-9e59-46d4-ac6d-7accca8b3552] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0501 03:40:30.694427   69237 system_pods.go:61] "storage-provisioner" [d44a3cf1-c8a5-4a20-8dd6-b854680b33b9] Running
	I0501 03:40:30.694435   69237 system_pods.go:74] duration metric: took 11.599113ms to wait for pod list to return data ...
	I0501 03:40:30.694449   69237 node_conditions.go:102] verifying NodePressure condition ...
	I0501 03:40:30.697795   69237 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 03:40:30.697825   69237 node_conditions.go:123] node cpu capacity is 2
	I0501 03:40:30.697838   69237 node_conditions.go:105] duration metric: took 3.383507ms to run NodePressure ...
	I0501 03:40:30.697858   69237 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:30.978827   69237 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0501 03:40:30.984628   69237 kubeadm.go:733] kubelet initialised
	I0501 03:40:30.984650   69237 kubeadm.go:734] duration metric: took 5.799905ms waiting for restarted kubelet to initialise ...
	I0501 03:40:30.984656   69237 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 03:40:30.992354   69237 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-9r7dt" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:30.999663   69237 pod_ready.go:97] node "default-k8s-diff-port-715118" hosting pod "coredns-7db6d8ff4d-9r7dt" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-715118" has status "Ready":"False"
	I0501 03:40:30.999690   69237 pod_ready.go:81] duration metric: took 7.312969ms for pod "coredns-7db6d8ff4d-9r7dt" in "kube-system" namespace to be "Ready" ...
	E0501 03:40:30.999700   69237 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-715118" hosting pod "coredns-7db6d8ff4d-9r7dt" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-715118" has status "Ready":"False"
	I0501 03:40:30.999706   69237 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:31.006163   69237 pod_ready.go:97] node "default-k8s-diff-port-715118" hosting pod "etcd-default-k8s-diff-port-715118" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-715118" has status "Ready":"False"
	I0501 03:40:31.006187   69237 pod_ready.go:81] duration metric: took 6.471262ms for pod "etcd-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	E0501 03:40:31.006199   69237 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-715118" hosting pod "etcd-default-k8s-diff-port-715118" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-715118" has status "Ready":"False"
	I0501 03:40:31.006208   69237 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:31.011772   69237 pod_ready.go:97] node "default-k8s-diff-port-715118" hosting pod "kube-apiserver-default-k8s-diff-port-715118" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-715118" has status "Ready":"False"
	I0501 03:40:31.011793   69237 pod_ready.go:81] duration metric: took 5.576722ms for pod "kube-apiserver-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	E0501 03:40:31.011803   69237 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-715118" hosting pod "kube-apiserver-default-k8s-diff-port-715118" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-715118" has status "Ready":"False"
	I0501 03:40:31.011810   69237 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:31.086163   69237 pod_ready.go:97] node "default-k8s-diff-port-715118" hosting pod "kube-controller-manager-default-k8s-diff-port-715118" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-715118" has status "Ready":"False"
	I0501 03:40:31.086194   69237 pod_ready.go:81] duration metric: took 74.377197ms for pod "kube-controller-manager-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	E0501 03:40:31.086207   69237 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-715118" hosting pod "kube-controller-manager-default-k8s-diff-port-715118" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-715118" has status "Ready":"False"
	I0501 03:40:31.086214   69237 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-vg7ts" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:31.487056   69237 pod_ready.go:92] pod "kube-proxy-vg7ts" in "kube-system" namespace has status "Ready":"True"
	I0501 03:40:31.487078   69237 pod_ready.go:81] duration metric: took 400.857543ms for pod "kube-proxy-vg7ts" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:31.487088   69237 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:33.502448   69237 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-715118" in "kube-system" namespace has status "Ready":"False"
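The pod_ready loop above keeps polling the pod's Ready condition until it turns True or the 4m0s budget is exhausted. A rough kubectl equivalent, purely illustrative (context and pod names taken from this run):

    # Illustrative only: what the readiness wait amounts to in kubectl terms.
    kubectl --context default-k8s-diff-port-715118 -n kube-system wait \
      pod/kube-scheduler-default-k8s-diff-port-715118 \
      --for=condition=Ready --timeout=4m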
	I0501 03:40:35.530015   69580 main.go:141] libmachine: (old-k8s-version-503971) Calling .GetIP
	I0501 03:40:35.533706   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:35.534178   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:68:83", ip: ""} in network mk-old-k8s-version-503971: {Iface:virbr4 ExpiryTime:2024-05-01 04:30:32 +0000 UTC Type:0 Mac:52:54:00:7d:68:83 Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:old-k8s-version-503971 Clientid:01:52:54:00:7d:68:83}
	I0501 03:40:35.534254   69580 main.go:141] libmachine: (old-k8s-version-503971) DBG | domain old-k8s-version-503971 has defined IP address 192.168.61.104 and MAC address 52:54:00:7d:68:83 in network mk-old-k8s-version-503971
	I0501 03:40:35.534515   69580 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0501 03:40:35.541542   69580 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
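The one-liner above refreshes /etc/hosts: drop any stale host.minikube.internal entry, append the current mapping, and copy the temp file back with sudo. Unpacked for readability (same tab-delimited entry):

    # Unpacked version of the /etc/hosts update above (guest side).
    { grep -v $'\thost.minikube.internal$' /etc/hosts
      printf '192.168.61.1\thost.minikube.internal\n'
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts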
	I0501 03:40:35.563291   69580 kubeadm.go:877] updating cluster {Name:old-k8s-version-503971 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-503971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.104 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0501 03:40:35.563434   69580 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0501 03:40:35.563512   69580 ssh_runner.go:195] Run: sudo crictl images --output json
	I0501 03:40:35.646548   69580 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0501 03:40:35.646635   69580 ssh_runner.go:195] Run: which lz4
	I0501 03:40:35.652824   69580 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0501 03:40:35.660056   69580 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0501 03:40:35.660099   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0501 03:40:33.808828   68640 main.go:141] libmachine: (no-preload-892672) Calling .Start
	I0501 03:40:33.809083   68640 main.go:141] libmachine: (no-preload-892672) Ensuring networks are active...
	I0501 03:40:33.809829   68640 main.go:141] libmachine: (no-preload-892672) Ensuring network default is active
	I0501 03:40:33.810166   68640 main.go:141] libmachine: (no-preload-892672) Ensuring network mk-no-preload-892672 is active
	I0501 03:40:33.810632   68640 main.go:141] libmachine: (no-preload-892672) Getting domain xml...
	I0501 03:40:33.811386   68640 main.go:141] libmachine: (no-preload-892672) Creating domain...
	I0501 03:40:35.133886   68640 main.go:141] libmachine: (no-preload-892672) Waiting to get IP...
	I0501 03:40:35.134756   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:35.135216   68640 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:40:35.135280   68640 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:40:35.135178   70664 retry.go:31] will retry after 275.796908ms: waiting for machine to come up
	I0501 03:40:35.412670   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:35.413206   68640 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:40:35.413232   68640 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:40:35.413162   70664 retry.go:31] will retry after 326.173381ms: waiting for machine to come up
	I0501 03:40:35.740734   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:35.741314   68640 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:40:35.741342   68640 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:40:35.741260   70664 retry.go:31] will retry after 476.50915ms: waiting for machine to come up
	I0501 03:40:36.219908   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:36.220440   68640 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:40:36.220473   68640 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:40:36.220399   70664 retry.go:31] will retry after 377.277784ms: waiting for machine to come up
	I0501 03:40:36.598936   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:36.599391   68640 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:40:36.599417   68640 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:40:36.599348   70664 retry.go:31] will retry after 587.166276ms: waiting for machine to come up
	I0501 03:40:37.188757   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:37.189406   68640 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:40:37.189441   68640 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:40:37.189311   70664 retry.go:31] will retry after 801.958256ms: waiting for machine to come up
	I0501 03:40:34.658104   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:36.660517   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:35.998453   69237 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-715118" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:38.495088   69237 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-715118" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:39.004175   69237 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-715118" in "kube-system" namespace has status "Ready":"True"
	I0501 03:40:39.004198   69237 pod_ready.go:81] duration metric: took 7.517103824s for pod "kube-scheduler-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:39.004209   69237 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace to be "Ready" ...
	I0501 03:40:37.870306   69580 crio.go:462] duration metric: took 2.217531377s to copy over tarball
	I0501 03:40:37.870393   69580 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0501 03:40:37.992669   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:37.993052   68640 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:40:37.993080   68640 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:40:37.993016   70664 retry.go:31] will retry after 1.085029482s: waiting for machine to come up
	I0501 03:40:39.079315   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:39.079739   68640 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:40:39.079779   68640 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:40:39.079682   70664 retry.go:31] will retry after 1.140448202s: waiting for machine to come up
	I0501 03:40:40.221645   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:40.222165   68640 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:40:40.222192   68640 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:40:40.222103   70664 retry.go:31] will retry after 1.434247869s: waiting for machine to come up
	I0501 03:40:41.658447   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:41.659034   68640 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:40:41.659072   68640 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:40:41.659003   70664 retry.go:31] will retry after 1.759453732s: waiting for machine to come up
	I0501 03:40:39.157834   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:41.164729   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:43.658248   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:41.014770   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:43.513038   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:45.516821   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:41.534681   69580 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.664236925s)
	I0501 03:40:41.599216   69580 crio.go:469] duration metric: took 3.72886857s to extract the tarball
	I0501 03:40:41.599238   69580 ssh_runner.go:146] rm: /preloaded.tar.lz4
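The preload path above copies the lz4-compressed image tarball to the guest and unpacks it under /var so CRI-O's image store is pre-populated. Condensed, assuming /preloaded.tar.lz4 is already in place:

    # Condensed preload extraction from the log (guest side).
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm -f /preloaded.tar.lz4
    sudo crictl images --output json    # confirm the preloaded images are now visible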
	I0501 03:40:41.649221   69580 ssh_runner.go:195] Run: sudo crictl images --output json
	I0501 03:40:41.697169   69580 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0501 03:40:41.697198   69580 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0501 03:40:41.697302   69580 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0501 03:40:41.697346   69580 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0501 03:40:41.697367   69580 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0501 03:40:41.697352   69580 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0501 03:40:41.697375   69580 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0501 03:40:41.697329   69580 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0501 03:40:41.697275   69580 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 03:40:41.697329   69580 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0501 03:40:41.698950   69580 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0501 03:40:41.699010   69580 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0501 03:40:41.699114   69580 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0501 03:40:41.699251   69580 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 03:40:41.699292   69580 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0501 03:40:41.699020   69580 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0501 03:40:41.699550   69580 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0501 03:40:41.699715   69580 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0501 03:40:41.830042   69580 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0501 03:40:41.881770   69580 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0501 03:40:41.881834   69580 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0501 03:40:41.881896   69580 ssh_runner.go:195] Run: which crictl
	I0501 03:40:41.887083   69580 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0501 03:40:41.894597   69580 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0501 03:40:41.935993   69580 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0501 03:40:41.937339   69580 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0501 03:40:41.961728   69580 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0501 03:40:41.961778   69580 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0501 03:40:41.961827   69580 ssh_runner.go:195] Run: which crictl
	I0501 03:40:42.004327   69580 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0501 03:40:42.004395   69580 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0501 03:40:42.004435   69580 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0501 03:40:42.004493   69580 ssh_runner.go:195] Run: which crictl
	I0501 03:40:42.053743   69580 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0501 03:40:42.055914   69580 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0501 03:40:42.056267   69580 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0501 03:40:42.056610   69580 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0501 03:40:42.060229   69580 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0501 03:40:42.070489   69580 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0501 03:40:42.127829   69580 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0501 03:40:42.127880   69580 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0501 03:40:42.127927   69580 ssh_runner.go:195] Run: which crictl
	I0501 03:40:42.201731   69580 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0501 03:40:42.201783   69580 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0501 03:40:42.201814   69580 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0501 03:40:42.201842   69580 ssh_runner.go:195] Run: which crictl
	I0501 03:40:42.211112   69580 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0501 03:40:42.211163   69580 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0501 03:40:42.211227   69580 ssh_runner.go:195] Run: which crictl
	I0501 03:40:42.217794   69580 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0501 03:40:42.217835   69580 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0501 03:40:42.217873   69580 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0501 03:40:42.217917   69580 ssh_runner.go:195] Run: which crictl
	I0501 03:40:42.217873   69580 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0501 03:40:42.220250   69580 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0501 03:40:42.274880   69580 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0501 03:40:42.294354   69580 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0501 03:40:42.294436   69580 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0501 03:40:42.305191   69580 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0501 03:40:42.342502   69580 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0501 03:40:42.560474   69580 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 03:40:42.712970   69580 cache_images.go:92] duration metric: took 1.015752585s to LoadCachedImages
	W0501 03:40:42.713057   69580 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	I0501 03:40:42.713074   69580 kubeadm.go:928] updating node { 192.168.61.104 8443 v1.20.0 crio true true} ...
	I0501 03:40:42.713227   69580 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-503971 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.104
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-503971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0501 03:40:42.713323   69580 ssh_runner.go:195] Run: crio config
	I0501 03:40:42.771354   69580 cni.go:84] Creating CNI manager for ""
	I0501 03:40:42.771384   69580 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0501 03:40:42.771403   69580 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0501 03:40:42.771428   69580 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.104 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-503971 NodeName:old-k8s-version-503971 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.104"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.104 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0501 03:40:42.771644   69580 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.104
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-503971"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.104
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.104"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0501 03:40:42.771722   69580 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0501 03:40:42.784978   69580 binaries.go:44] Found k8s binaries, skipping transfer
	I0501 03:40:42.785057   69580 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0501 03:40:42.800945   69580 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0501 03:40:42.824293   69580 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0501 03:40:42.845949   69580 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0501 03:40:42.867390   69580 ssh_runner.go:195] Run: grep 192.168.61.104	control-plane.minikube.internal$ /etc/hosts
	I0501 03:40:42.872038   69580 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.104	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0501 03:40:42.890213   69580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:40:43.041533   69580 ssh_runner.go:195] Run: sudo systemctl start kubelet
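Wiring up the kubelet above is three file copies (the systemd unit, the 10-kubeadm.conf drop-in and the kubeadm YAML) followed by a reload and start. The guest-side tail of that sequence, assuming the files are already in place:

    # Tail of the kubelet bring-up shown above (guest side).
    sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
    sudo systemctl daemon-reload
    sudo systemctl start kubelet
    systemctl is-active kubelet    # extra sanity check, not part of the logged flow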
	I0501 03:40:43.070048   69580 certs.go:68] Setting up /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971 for IP: 192.168.61.104
	I0501 03:40:43.070075   69580 certs.go:194] generating shared ca certs ...
	I0501 03:40:43.070097   69580 certs.go:226] acquiring lock for ca certs: {Name:mk85a75d14c00780d4bf09cd9bdaf0f96f1b8fce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:40:43.070315   69580 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key
	I0501 03:40:43.070388   69580 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key
	I0501 03:40:43.070419   69580 certs.go:256] generating profile certs ...
	I0501 03:40:43.070558   69580 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/client.key
	I0501 03:40:43.070631   69580 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/apiserver.key.760b883a
	I0501 03:40:43.070670   69580 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/proxy-client.key
	I0501 03:40:43.070804   69580 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem (1338 bytes)
	W0501 03:40:43.070852   69580 certs.go:480] ignoring /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724_empty.pem, impossibly tiny 0 bytes
	I0501 03:40:43.070865   69580 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem (1679 bytes)
	I0501 03:40:43.070914   69580 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem (1078 bytes)
	I0501 03:40:43.070955   69580 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem (1123 bytes)
	I0501 03:40:43.070985   69580 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem (1675 bytes)
	I0501 03:40:43.071044   69580 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem (1708 bytes)
	I0501 03:40:43.071869   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0501 03:40:43.110078   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0501 03:40:43.164382   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0501 03:40:43.197775   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0501 03:40:43.230575   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0501 03:40:43.260059   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0501 03:40:43.288704   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0501 03:40:43.315417   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0501 03:40:43.363440   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0501 03:40:43.396043   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem --> /usr/share/ca-certificates/20724.pem (1338 bytes)
	I0501 03:40:43.425997   69580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem --> /usr/share/ca-certificates/207242.pem (1708 bytes)
	I0501 03:40:43.456927   69580 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0501 03:40:43.478177   69580 ssh_runner.go:195] Run: openssl version
	I0501 03:40:43.484513   69580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0501 03:40:43.497230   69580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:40:43.504025   69580 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  1 02:08 /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:40:43.504112   69580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:40:43.513309   69580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0501 03:40:43.528592   69580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20724.pem && ln -fs /usr/share/ca-certificates/20724.pem /etc/ssl/certs/20724.pem"
	I0501 03:40:43.544560   69580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20724.pem
	I0501 03:40:43.550975   69580 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  1 02:20 /usr/share/ca-certificates/20724.pem
	I0501 03:40:43.551047   69580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20724.pem
	I0501 03:40:43.559214   69580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20724.pem /etc/ssl/certs/51391683.0"
	I0501 03:40:43.575362   69580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/207242.pem && ln -fs /usr/share/ca-certificates/207242.pem /etc/ssl/certs/207242.pem"
	I0501 03:40:43.587848   69580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/207242.pem
	I0501 03:40:43.593131   69580 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  1 02:20 /usr/share/ca-certificates/207242.pem
	I0501 03:40:43.593183   69580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/207242.pem
	I0501 03:40:43.600365   69580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/207242.pem /etc/ssl/certs/3ec20f2e.0"
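Each CA above is installed in two steps: the PEM is linked into /etc/ssl/certs under its own name, then again under its OpenSSL subject-hash name, which is where b5213941.0, 51391683.0 and 3ec20f2e.0 come from. The pattern for minikubeCA:

    # How the hashed symlink names above are derived, shown for minikubeCA.pem (guest side).
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"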
	I0501 03:40:43.613912   69580 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0501 03:40:43.619576   69580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0501 03:40:43.628551   69580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0501 03:40:43.637418   69580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0501 03:40:43.645060   69580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0501 03:40:43.654105   69580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0501 03:40:43.663501   69580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
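The run of openssl invocations above checks that none of the control-plane certificates expire within the next 24 hours; -checkend 86400 exits non-zero if one would. The same check for three of the listed certs, as a loop:

    # -checkend 86400 exits non-zero if the certificate expires within the next 24h.
    for crt in apiserver-kubelet-client apiserver-etcd-client front-proxy-client; do
      openssl x509 -noout -in "/var/lib/minikube/certs/${crt}.crt" -checkend 86400 \
        || echo "${crt}.crt expires within 24h"
    done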
	I0501 03:40:43.670855   69580 kubeadm.go:391] StartCluster: {Name:old-k8s-version-503971 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-503971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.104 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 03:40:43.670937   69580 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0501 03:40:43.670982   69580 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0501 03:40:43.720350   69580 cri.go:89] found id: ""
	I0501 03:40:43.720419   69580 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0501 03:40:43.732518   69580 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0501 03:40:43.732544   69580 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0501 03:40:43.732552   69580 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0501 03:40:43.732612   69580 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0501 03:40:43.743804   69580 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0501 03:40:43.745071   69580 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-503971" does not appear in /home/jenkins/minikube-integration/18779-13391/kubeconfig
	I0501 03:40:43.745785   69580 kubeconfig.go:62] /home/jenkins/minikube-integration/18779-13391/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-503971" cluster setting kubeconfig missing "old-k8s-version-503971" context setting]
	I0501 03:40:43.747054   69580 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/kubeconfig: {Name:mk5d1131557f885800877622151c0915337cb23a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:40:43.748989   69580 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0501 03:40:43.760349   69580 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.104
	I0501 03:40:43.760389   69580 kubeadm.go:1154] stopping kube-system containers ...
	I0501 03:40:43.760403   69580 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0501 03:40:43.760473   69580 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0501 03:40:43.804745   69580 cri.go:89] found id: ""
	I0501 03:40:43.804841   69580 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0501 03:40:43.825960   69580 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0501 03:40:43.838038   69580 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0501 03:40:43.838062   69580 kubeadm.go:156] found existing configuration files:
	
	I0501 03:40:43.838115   69580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0501 03:40:43.849075   69580 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0501 03:40:43.849164   69580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0501 03:40:43.860634   69580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0501 03:40:43.871244   69580 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0501 03:40:43.871313   69580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0501 03:40:43.882184   69580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0501 03:40:43.893193   69580 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0501 03:40:43.893254   69580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0501 03:40:43.904257   69580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0501 03:40:43.915414   69580 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0501 03:40:43.915492   69580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0501 03:40:43.927372   69580 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0501 03:40:43.939117   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:44.098502   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:45.150125   69580 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.051581029s)
	I0501 03:40:45.150161   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:45.443307   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:45.563369   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:40:45.678620   69580 api_server.go:52] waiting for apiserver process to appear ...
	I0501 03:40:45.678731   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:46.179675   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:43.419480   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:43.419952   68640 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:40:43.419980   68640 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:40:43.419907   70664 retry.go:31] will retry after 2.329320519s: waiting for machine to come up
	I0501 03:40:45.751405   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:45.751871   68640 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:40:45.751902   68640 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:40:45.751822   70664 retry.go:31] will retry after 3.262804058s: waiting for machine to come up
	I0501 03:40:45.659845   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:48.157145   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:48.013520   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:50.514729   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:46.679449   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:47.179179   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:47.678890   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:48.179190   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:48.679276   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:49.179698   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:49.679121   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:50.179723   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:50.679603   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:51.179094   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:49.016460   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:49.016856   68640 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:40:49.016878   68640 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:40:49.016826   70664 retry.go:31] will retry after 3.440852681s: waiting for machine to come up
	I0501 03:40:52.461349   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:52.461771   68640 main.go:141] libmachine: (no-preload-892672) DBG | unable to find current IP address of domain no-preload-892672 in network mk-no-preload-892672
	I0501 03:40:52.461800   68640 main.go:141] libmachine: (no-preload-892672) DBG | I0501 03:40:52.461722   70664 retry.go:31] will retry after 4.871322728s: waiting for machine to come up
	I0501 03:40:50.157703   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:52.655677   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:53.011851   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:55.510458   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:51.679850   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:52.179568   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:52.679100   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:53.179470   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:53.679115   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:54.178815   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:54.679769   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:55.179576   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:55.678864   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:56.179617   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:57.335855   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.336228   68640 main.go:141] libmachine: (no-preload-892672) Found IP for machine: 192.168.39.144
	I0501 03:40:57.336263   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has current primary IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.336281   68640 main.go:141] libmachine: (no-preload-892672) Reserving static IP address...
	I0501 03:40:57.336629   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "no-preload-892672", mac: "52:54:00:c7:6d:9a", ip: "192.168.39.144"} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:40:57.336649   68640 main.go:141] libmachine: (no-preload-892672) DBG | skip adding static IP to network mk-no-preload-892672 - found existing host DHCP lease matching {name: "no-preload-892672", mac: "52:54:00:c7:6d:9a", ip: "192.168.39.144"}
	I0501 03:40:57.336661   68640 main.go:141] libmachine: (no-preload-892672) Reserved static IP address: 192.168.39.144
	I0501 03:40:57.336671   68640 main.go:141] libmachine: (no-preload-892672) Waiting for SSH to be available...
	I0501 03:40:57.336680   68640 main.go:141] libmachine: (no-preload-892672) DBG | Getting to WaitForSSH function...
	I0501 03:40:57.338862   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.339135   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:40:57.339163   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.339268   68640 main.go:141] libmachine: (no-preload-892672) DBG | Using SSH client type: external
	I0501 03:40:57.339296   68640 main.go:141] libmachine: (no-preload-892672) DBG | Using SSH private key: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/no-preload-892672/id_rsa (-rw-------)
	I0501 03:40:57.339328   68640 main.go:141] libmachine: (no-preload-892672) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.144 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18779-13391/.minikube/machines/no-preload-892672/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0501 03:40:57.339341   68640 main.go:141] libmachine: (no-preload-892672) DBG | About to run SSH command:
	I0501 03:40:57.339370   68640 main.go:141] libmachine: (no-preload-892672) DBG | exit 0
	I0501 03:40:57.466775   68640 main.go:141] libmachine: (no-preload-892672) DBG | SSH cmd err, output: <nil>: 
	I0501 03:40:57.467183   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetConfigRaw
	I0501 03:40:57.467890   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetIP
	I0501 03:40:57.470097   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.470527   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:40:57.470555   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.470767   68640 profile.go:143] Saving config to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/no-preload-892672/config.json ...
	I0501 03:40:57.470929   68640 machine.go:94] provisionDockerMachine start ...
	I0501 03:40:57.470950   68640 main.go:141] libmachine: (no-preload-892672) Calling .DriverName
	I0501 03:40:57.471177   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:40:57.473301   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.473599   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:40:57.473626   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.473724   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHPort
	I0501 03:40:57.473863   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:40:57.474032   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:40:57.474181   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHUsername
	I0501 03:40:57.474337   68640 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:57.474545   68640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I0501 03:40:57.474558   68640 main.go:141] libmachine: About to run SSH command:
	hostname
	I0501 03:40:57.591733   68640 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0501 03:40:57.591766   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetMachineName
	I0501 03:40:57.592016   68640 buildroot.go:166] provisioning hostname "no-preload-892672"
	I0501 03:40:57.592048   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetMachineName
	I0501 03:40:57.592308   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:40:57.595192   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.595593   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:40:57.595618   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.595697   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHPort
	I0501 03:40:57.595891   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:40:57.596041   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:40:57.596192   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHUsername
	I0501 03:40:57.596376   68640 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:57.596544   68640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I0501 03:40:57.596559   68640 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-892672 && echo "no-preload-892672" | sudo tee /etc/hostname
	I0501 03:40:57.727738   68640 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-892672
	
	I0501 03:40:57.727770   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:40:57.730673   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.731033   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:40:57.731066   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.731202   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHPort
	I0501 03:40:57.731383   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:40:57.731577   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:40:57.731744   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHUsername
	I0501 03:40:57.731936   68640 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:57.732155   68640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I0501 03:40:57.732173   68640 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-892672' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-892672/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-892672' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0501 03:40:57.857465   68640 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0501 03:40:57.857492   68640 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18779-13391/.minikube CaCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18779-13391/.minikube}
	I0501 03:40:57.857515   68640 buildroot.go:174] setting up certificates
	I0501 03:40:57.857524   68640 provision.go:84] configureAuth start
	I0501 03:40:57.857532   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetMachineName
	I0501 03:40:57.857791   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetIP
	I0501 03:40:57.860530   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.860881   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:40:57.860911   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.861035   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:40:57.863122   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.863445   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:40:57.863472   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:57.863565   68640 provision.go:143] copyHostCerts
	I0501 03:40:57.863614   68640 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem, removing ...
	I0501 03:40:57.863624   68640 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem
	I0501 03:40:57.863689   68640 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/ca.pem (1078 bytes)
	I0501 03:40:57.863802   68640 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem, removing ...
	I0501 03:40:57.863814   68640 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem
	I0501 03:40:57.863843   68640 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/cert.pem (1123 bytes)
	I0501 03:40:57.863928   68640 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem, removing ...
	I0501 03:40:57.863938   68640 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem
	I0501 03:40:57.863962   68640 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18779-13391/.minikube/key.pem (1675 bytes)
	I0501 03:40:57.864040   68640 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem org=jenkins.no-preload-892672 san=[127.0.0.1 192.168.39.144 localhost minikube no-preload-892672]
	I0501 03:40:54.658003   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:56.658041   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:58.125270   68640 provision.go:177] copyRemoteCerts
	I0501 03:40:58.125321   68640 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0501 03:40:58.125342   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:40:58.127890   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:58.128299   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:40:58.128330   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:58.128469   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHPort
	I0501 03:40:58.128645   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:40:58.128809   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHUsername
	I0501 03:40:58.128941   68640 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/no-preload-892672/id_rsa Username:docker}
	I0501 03:40:58.222112   68640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0501 03:40:58.249760   68640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0501 03:40:58.277574   68640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0501 03:40:58.304971   68640 provision.go:87] duration metric: took 447.420479ms to configureAuth
	I0501 03:40:58.305017   68640 buildroot.go:189] setting minikube options for container-runtime
	I0501 03:40:58.305270   68640 config.go:182] Loaded profile config "no-preload-892672": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 03:40:58.305434   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:40:58.308098   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:58.308487   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:40:58.308528   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:58.308658   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHPort
	I0501 03:40:58.308857   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:40:58.309025   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:40:58.309173   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHUsername
	I0501 03:40:58.309354   68640 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:58.309510   68640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I0501 03:40:58.309526   68640 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0501 03:40:58.609833   68640 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0501 03:40:58.609859   68640 machine.go:97] duration metric: took 1.138916322s to provisionDockerMachine
	I0501 03:40:58.609873   68640 start.go:293] postStartSetup for "no-preload-892672" (driver="kvm2")
	I0501 03:40:58.609885   68640 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0501 03:40:58.609905   68640 main.go:141] libmachine: (no-preload-892672) Calling .DriverName
	I0501 03:40:58.610271   68640 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0501 03:40:58.610307   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:40:58.612954   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:58.613308   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:40:58.613322   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:58.613485   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHPort
	I0501 03:40:58.613683   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:40:58.613871   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHUsername
	I0501 03:40:58.614005   68640 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/no-preload-892672/id_rsa Username:docker}
	I0501 03:40:58.702752   68640 ssh_runner.go:195] Run: cat /etc/os-release
	I0501 03:40:58.707441   68640 info.go:137] Remote host: Buildroot 2023.02.9
	I0501 03:40:58.707468   68640 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13391/.minikube/addons for local assets ...
	I0501 03:40:58.707577   68640 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13391/.minikube/files for local assets ...
	I0501 03:40:58.707646   68640 filesync.go:149] local asset: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem -> 207242.pem in /etc/ssl/certs
	I0501 03:40:58.707728   68640 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0501 03:40:58.718247   68640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem --> /etc/ssl/certs/207242.pem (1708 bytes)
	I0501 03:40:58.745184   68640 start.go:296] duration metric: took 135.29943ms for postStartSetup
	I0501 03:40:58.745218   68640 fix.go:56] duration metric: took 24.96116093s for fixHost
	I0501 03:40:58.745236   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:40:58.747809   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:58.748228   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:40:58.748261   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:58.748380   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHPort
	I0501 03:40:58.748591   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:40:58.748747   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:40:58.748870   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHUsername
	I0501 03:40:58.749049   68640 main.go:141] libmachine: Using SSH client type: native
	I0501 03:40:58.749262   68640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I0501 03:40:58.749275   68640 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0501 03:40:58.867651   68640 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714534858.808639015
	
	I0501 03:40:58.867676   68640 fix.go:216] guest clock: 1714534858.808639015
	I0501 03:40:58.867686   68640 fix.go:229] Guest: 2024-05-01 03:40:58.808639015 +0000 UTC Remote: 2024-05-01 03:40:58.745221709 +0000 UTC m=+370.854832040 (delta=63.417306ms)
	I0501 03:40:58.867735   68640 fix.go:200] guest clock delta is within tolerance: 63.417306ms
	I0501 03:40:58.867746   68640 start.go:83] releasing machines lock for "no-preload-892672", held for 25.083724737s
	I0501 03:40:58.867770   68640 main.go:141] libmachine: (no-preload-892672) Calling .DriverName
	I0501 03:40:58.868053   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetIP
	I0501 03:40:58.871193   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:58.871618   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:40:58.871664   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:58.871815   68640 main.go:141] libmachine: (no-preload-892672) Calling .DriverName
	I0501 03:40:58.872441   68640 main.go:141] libmachine: (no-preload-892672) Calling .DriverName
	I0501 03:40:58.872665   68640 main.go:141] libmachine: (no-preload-892672) Calling .DriverName
	I0501 03:40:58.872750   68640 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0501 03:40:58.872787   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:40:58.872918   68640 ssh_runner.go:195] Run: cat /version.json
	I0501 03:40:58.872946   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:40:58.875797   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:58.875976   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:58.876230   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:40:58.876341   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:58.876377   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHPort
	I0501 03:40:58.876502   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:40:58.876539   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:40:58.876587   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:40:58.876756   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHUsername
	I0501 03:40:58.876894   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHPort
	I0501 03:40:58.876969   68640 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/no-preload-892672/id_rsa Username:docker}
	I0501 03:40:58.877057   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:40:58.877246   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHUsername
	I0501 03:40:58.877424   68640 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/no-preload-892672/id_rsa Username:docker}
	I0501 03:40:58.983384   68640 ssh_runner.go:195] Run: systemctl --version
	I0501 03:40:58.991625   68640 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0501 03:40:59.143916   68640 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0501 03:40:59.151065   68640 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0501 03:40:59.151124   68640 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0501 03:40:59.168741   68640 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0501 03:40:59.168763   68640 start.go:494] detecting cgroup driver to use...
	I0501 03:40:59.168825   68640 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0501 03:40:59.188524   68640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 03:40:59.205602   68640 docker.go:217] disabling cri-docker service (if available) ...
	I0501 03:40:59.205668   68640 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0501 03:40:59.221173   68640 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0501 03:40:59.236546   68640 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0501 03:40:59.364199   68640 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0501 03:40:59.533188   68640 docker.go:233] disabling docker service ...
	I0501 03:40:59.533266   68640 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0501 03:40:59.549488   68640 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0501 03:40:59.562910   68640 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0501 03:40:59.705451   68640 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0501 03:40:59.843226   68640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0501 03:40:59.858878   68640 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 03:40:59.882729   68640 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0501 03:40:59.882808   68640 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:59.895678   68640 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0501 03:40:59.895763   68640 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:59.908439   68640 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:59.921319   68640 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:59.934643   68640 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0501 03:40:59.947416   68640 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:59.959887   68640 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:59.981849   68640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0501 03:40:59.994646   68640 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0501 03:41:00.006059   68640 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0501 03:41:00.006133   68640 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0501 03:41:00.024850   68640 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0501 03:41:00.036834   68640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:41:00.161283   68640 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0501 03:41:00.312304   68640 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0501 03:41:00.312375   68640 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0501 03:41:00.317980   68640 start.go:562] Will wait 60s for crictl version
	I0501 03:41:00.318043   68640 ssh_runner.go:195] Run: which crictl
	I0501 03:41:00.322780   68640 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0501 03:41:00.362830   68640 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0501 03:41:00.362920   68640 ssh_runner.go:195] Run: crio --version
	I0501 03:41:00.399715   68640 ssh_runner.go:195] Run: crio --version
	I0501 03:41:00.432510   68640 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0501 03:40:57.511719   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:00.013693   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:40:56.679034   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:57.179062   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:57.679579   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:58.179221   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:58.679728   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:59.178851   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:40:59.679647   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:00.179397   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:00.678839   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:01.179679   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:00.433777   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetIP
	I0501 03:41:00.436557   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:41:00.436892   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:41:00.436920   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:41:00.437124   68640 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0501 03:41:00.441861   68640 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0501 03:41:00.455315   68640 kubeadm.go:877] updating cluster {Name:no-preload-892672 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:no-preload-892672 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.144 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0501 03:41:00.455417   68640 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0501 03:41:00.455462   68640 ssh_runner.go:195] Run: sudo crictl images --output json
	I0501 03:41:00.496394   68640 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0501 03:41:00.496422   68640 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.0 registry.k8s.io/kube-controller-manager:v1.30.0 registry.k8s.io/kube-scheduler:v1.30.0 registry.k8s.io/kube-proxy:v1.30.0 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0501 03:41:00.496508   68640 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 03:41:00.496532   68640 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0501 03:41:00.496551   68640 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0
	I0501 03:41:00.496581   68640 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0501 03:41:00.496679   68640 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0
	I0501 03:41:00.496701   68640 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0501 03:41:00.496736   68640 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0501 03:41:00.496529   68640 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0
	I0501 03:41:00.498207   68640 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0
	I0501 03:41:00.498227   68640 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0501 03:41:00.498246   68640 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0
	I0501 03:41:00.498250   68640 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0
	I0501 03:41:00.498270   68640 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0501 03:41:00.498254   68640 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0501 03:41:00.498298   68640 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0501 03:41:00.498477   68640 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 03:41:00.617430   68640 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.0
	I0501 03:41:00.621346   68640 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.0
	I0501 03:41:00.622759   68640 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0501 03:41:00.628313   68640 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.0
	I0501 03:41:00.629087   68640 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0501 03:41:00.633625   68640 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0501 03:41:00.652130   68640 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.0
	I0501 03:41:00.722500   68640 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.0" does not exist at hash "c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0" in container runtime
	I0501 03:41:00.722554   68640 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.0
	I0501 03:41:00.722623   68640 ssh_runner.go:195] Run: which crictl
	I0501 03:41:00.796476   68640 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.0" does not exist at hash "259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced" in container runtime
	I0501 03:41:00.796530   68640 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.0
	I0501 03:41:00.796580   68640 ssh_runner.go:195] Run: which crictl
	I0501 03:41:00.944235   68640 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.0" does not exist at hash "c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b" in container runtime
	I0501 03:41:00.944262   68640 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0501 03:41:00.944289   68640 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0501 03:41:00.944297   68640 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0501 03:41:00.944305   68640 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0501 03:41:00.944325   68640 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0501 03:41:00.944344   68640 ssh_runner.go:195] Run: which crictl
	I0501 03:41:00.944357   68640 ssh_runner.go:195] Run: which crictl
	I0501 03:41:00.944398   68640 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.0" needs transfer: "registry.k8s.io/kube-proxy:v1.30.0" does not exist at hash "a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b" in container runtime
	I0501 03:41:00.944348   68640 ssh_runner.go:195] Run: which crictl
	I0501 03:41:00.944434   68640 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.0
	I0501 03:41:00.944422   68640 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0
	I0501 03:41:00.944452   68640 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0
	I0501 03:41:00.944464   68640 ssh_runner.go:195] Run: which crictl
	I0501 03:41:00.998765   68640 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.0
	I0501 03:41:00.998791   68640 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0
	I0501 03:41:00.998846   68640 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0501 03:41:00.998891   68640 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0501 03:41:01.017469   68640 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.0
	I0501 03:41:01.017494   68640 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0
	I0501 03:41:01.017584   68640 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0501 03:41:01.018040   68640 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0501 03:41:01.105445   68640 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0501 03:41:01.105517   68640 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0
	I0501 03:41:01.105560   68640 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.12-0
	I0501 03:41:01.105583   68640 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.0 (exists)
	I0501 03:41:01.105595   68640 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0501 03:41:01.105635   68640 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0501 03:41:01.105645   68640 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0501 03:41:01.105734   68640 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.0 (exists)
	I0501 03:41:01.105814   68640 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0
	I0501 03:41:01.105888   68640 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.30.0
	I0501 03:41:01.120943   68640 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0501 03:41:01.121044   68640 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0501 03:41:01.127975   68640 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0501 03:41:01.359381   68640 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 03:40:59.156924   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:01.659307   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:03.661498   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:02.511652   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:05.011220   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:01.679527   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:02.179435   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:02.679626   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:03.179351   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:03.679618   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:04.179426   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:04.678853   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:05.179143   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:05.679065   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:06.179513   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:04.315680   68640 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.30.0: (3.210016587s)
	I0501 03:41:04.315725   68640 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.0 (exists)
	I0501 03:41:04.315756   68640 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.30.0: (3.209843913s)
	I0501 03:41:04.315784   68640 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1: (3.194721173s)
	I0501 03:41:04.315799   68640 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0: (3.210139611s)
	I0501 03:41:04.315812   68640 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.0 (exists)
	I0501 03:41:04.315813   68640 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0501 03:41:04.315813   68640 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0 from cache
	I0501 03:41:04.315844   68640 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.956432506s)
	I0501 03:41:04.315859   68640 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0501 03:41:04.315902   68640 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0501 03:41:04.315905   68640 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0501 03:41:04.315927   68640 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 03:41:04.315962   68640 ssh_runner.go:195] Run: which crictl
	I0501 03:41:05.691351   68640 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0: (1.375419764s)
	I0501 03:41:05.691394   68640 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0 from cache
	I0501 03:41:05.691418   68640 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0501 03:41:05.691467   68640 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0501 03:41:05.691477   68640 ssh_runner.go:235] Completed: which crictl: (1.375499162s)
	I0501 03:41:05.691529   68640 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 03:41:06.159381   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:08.659756   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:07.012126   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:09.511459   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:06.679246   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:07.179629   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:07.679601   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:08.179634   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:08.679603   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:09.179675   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:09.678837   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:10.178860   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:10.679638   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:11.179802   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:09.757005   68640 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (4.065509843s)
	I0501 03:41:09.757044   68640 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0501 03:41:09.757079   68640 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0501 03:41:09.757093   68640 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (4.065539206s)
	I0501 03:41:09.757137   68640 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0501 03:41:09.757158   68640 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0501 03:41:09.757222   68640 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0501 03:41:12.125691   68640 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0: (2.368504788s)
	I0501 03:41:12.125729   68640 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0 from cache
	I0501 03:41:12.125726   68640 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.368475622s)
	I0501 03:41:12.125755   68640 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0501 03:41:12.125754   68640 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.0
	I0501 03:41:12.125817   68640 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0
	I0501 03:41:11.157019   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:13.157632   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:11.513027   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:14.013463   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:11.679355   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:12.178847   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:12.679660   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:13.179641   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:13.678808   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:14.178955   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:14.679651   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:15.179623   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:15.678862   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:16.179775   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:14.315765   68640 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0: (2.18991878s)
	I0501 03:41:14.315791   68640 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0 from cache
	I0501 03:41:14.315835   68640 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0501 03:41:14.315911   68640 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0501 03:41:16.401221   68640 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.085281928s)
	I0501 03:41:16.401261   68640 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0501 03:41:16.401291   68640 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0501 03:41:16.401335   68640 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0501 03:41:17.152926   68640 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18779-13391/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0501 03:41:17.152969   68640 cache_images.go:123] Successfully loaded all cached images
	I0501 03:41:17.152976   68640 cache_images.go:92] duration metric: took 16.656540612s to LoadCachedImages
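
For reference, the image preload that just completed amounts to: inspect the runtime for each required image, remove any stale tag via crictl, then podman-load the cached tarball from the minikube cache directory. Below is a minimal Go sketch of that pattern; the helper name ensureImage, the error handling, and the main function are illustrative and are not minikube's actual cache_images implementation (which also compares image IDs against expected digests).

package main

import (
	"fmt"
	"os/exec"
)

// ensureImage loads a cached image tarball into the CRI-O image store
// (via podman) if the image is not already present in the runtime.
func ensureImage(image, tarball string) error {
	// "podman image inspect" exits non-zero when the image is missing.
	if err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", image).Run(); err == nil {
		return nil // already present, nothing to transfer
	}
	// Remove any stale reference (ignore errors), then load the tarball.
	_ = exec.Command("sudo", "crictl", "rmi", image).Run()
	out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load %s: %v\n%s", tarball, err, out)
	}
	return nil
}

func main() {
	// Example values taken from the log above.
	if err := ensureImage("registry.k8s.io/kube-apiserver:v1.30.0",
		"/var/lib/minikube/images/kube-apiserver_v1.30.0"); err != nil {
		fmt.Println(err)
	}
}
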
	I0501 03:41:17.152989   68640 kubeadm.go:928] updating node { 192.168.39.144 8443 v1.30.0 crio true true} ...
	I0501 03:41:17.153119   68640 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-892672 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.144
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:no-preload-892672 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0501 03:41:17.153241   68640 ssh_runner.go:195] Run: crio config
	I0501 03:41:17.207153   68640 cni.go:84] Creating CNI manager for ""
	I0501 03:41:17.207181   68640 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0501 03:41:17.207196   68640 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0501 03:41:17.207225   68640 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.144 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-892672 NodeName:no-preload-892672 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.144"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.144 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0501 03:41:17.207407   68640 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.144
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-892672"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.144
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.144"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0501 03:41:17.207488   68640 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0501 03:41:17.221033   68640 binaries.go:44] Found k8s binaries, skipping transfer
	I0501 03:41:17.221099   68640 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0501 03:41:17.232766   68640 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0501 03:41:17.252543   68640 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0501 03:41:17.272030   68640 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0501 03:41:17.291541   68640 ssh_runner.go:195] Run: grep 192.168.39.144	control-plane.minikube.internal$ /etc/hosts
	I0501 03:41:17.295801   68640 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.144	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0501 03:41:17.309880   68640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:41:17.432917   68640 ssh_runner.go:195] Run: sudo systemctl start kubelet
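
The /etc/hosts update a few lines above is an idempotent "drop old entry, append fresh entry" rewrite done through a temp file and sudo cp. A rough Go equivalent is sketched below; the function name ensureHostsEntry is hypothetical, and the sketch assumes it runs with permission to rewrite the hosts file directly instead of going through a temp file.

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry rewrites the hosts file so that exactly one line maps
// hostname to ip, mirroring the grep -v / echo / cp one-liner in the log.
func ensureHostsEntry(path, ip, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any existing entry for this hostname (lines ending "\t<hostname>").
		if strings.HasSuffix(line, "\t"+hostname) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+hostname)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Values taken from the log above.
	if err := ensureHostsEntry("/etc/hosts", "192.168.39.144", "control-plane.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}
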
	I0501 03:41:17.452381   68640 certs.go:68] Setting up /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/no-preload-892672 for IP: 192.168.39.144
	I0501 03:41:17.452406   68640 certs.go:194] generating shared ca certs ...
	I0501 03:41:17.452425   68640 certs.go:226] acquiring lock for ca certs: {Name:mk85a75d14c00780d4bf09cd9bdaf0f96f1b8fce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:41:17.452606   68640 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key
	I0501 03:41:17.452655   68640 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key
	I0501 03:41:17.452669   68640 certs.go:256] generating profile certs ...
	I0501 03:41:17.452746   68640 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/no-preload-892672/client.key
	I0501 03:41:17.452809   68640 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/no-preload-892672/apiserver.key.3644a8af
	I0501 03:41:17.452848   68640 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/no-preload-892672/proxy-client.key
	I0501 03:41:17.452963   68640 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem (1338 bytes)
	W0501 03:41:17.453007   68640 certs.go:480] ignoring /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724_empty.pem, impossibly tiny 0 bytes
	I0501 03:41:17.453021   68640 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca-key.pem (1679 bytes)
	I0501 03:41:17.453050   68640 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/ca.pem (1078 bytes)
	I0501 03:41:17.453083   68640 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/cert.pem (1123 bytes)
	I0501 03:41:17.453116   68640 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/certs/key.pem (1675 bytes)
	I0501 03:41:17.453166   68640 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem (1708 bytes)
	I0501 03:41:17.453767   68640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0501 03:41:17.490616   68640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0501 03:41:17.545217   68640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0501 03:41:17.576908   68640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0501 03:41:17.607371   68640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/no-preload-892672/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0501 03:41:17.657675   68640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/no-preload-892672/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0501 03:41:17.684681   68640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/no-preload-892672/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0501 03:41:17.716319   68640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/no-preload-892672/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0501 03:41:17.745731   68640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0501 03:41:17.770939   68640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/certs/20724.pem --> /usr/share/ca-certificates/20724.pem (1338 bytes)
	I0501 03:41:17.796366   68640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/ssl/certs/207242.pem --> /usr/share/ca-certificates/207242.pem (1708 bytes)
	I0501 03:41:17.823301   68640 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0501 03:41:17.841496   68640 ssh_runner.go:195] Run: openssl version
	I0501 03:41:17.848026   68640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0501 03:41:17.860734   68640 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:41:17.865978   68640 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  1 02:08 /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:41:17.866037   68640 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:41:17.872644   68640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0501 03:41:17.886241   68640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20724.pem && ln -fs /usr/share/ca-certificates/20724.pem /etc/ssl/certs/20724.pem"
	I0501 03:41:17.899619   68640 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20724.pem
	I0501 03:41:17.904664   68640 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  1 02:20 /usr/share/ca-certificates/20724.pem
	I0501 03:41:17.904701   68640 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20724.pem
	I0501 03:41:17.910799   68640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20724.pem /etc/ssl/certs/51391683.0"
	I0501 03:41:17.923007   68640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/207242.pem && ln -fs /usr/share/ca-certificates/207242.pem /etc/ssl/certs/207242.pem"
	I0501 03:41:15.657403   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:18.156777   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:16.511834   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:18.512735   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:20.513144   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:16.679614   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:17.179604   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:17.679100   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:18.179166   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:18.679202   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:19.179631   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:19.679583   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:20.179584   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:20.679493   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:21.178945   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
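
The repeated "sudo pgrep -xnf kube-apiserver.*minikube.*" runs above (roughly every 500ms, judging by the timestamps) are a wait loop: keep polling until a kube-apiserver process matching the pattern exists. A hedged Go sketch of such a loop follows; waitForProcess and the two-minute timeout are illustrative, not minikube's actual api_server.go code.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls "pgrep -xnf pattern" until it succeeds (pgrep exits 0
// when a matching process exists) or the timeout expires.
func waitForProcess(pattern string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command("sudo", "pgrep", "-xnf", pattern).Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("no process matching %q after %s", pattern, timeout)
}

func main() {
	// Pattern taken from the log above.
	if err := waitForProcess("kube-apiserver.*minikube.*", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
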
	I0501 03:41:17.935647   68640 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/207242.pem
	I0501 03:41:17.942147   68640 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  1 02:20 /usr/share/ca-certificates/207242.pem
	I0501 03:41:17.942187   68640 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/207242.pem
	I0501 03:41:17.948468   68640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/207242.pem /etc/ssl/certs/3ec20f2e.0"
	I0501 03:41:17.962737   68640 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0501 03:41:17.968953   68640 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0501 03:41:17.975849   68640 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0501 03:41:17.982324   68640 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0501 03:41:17.988930   68640 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0501 03:41:17.995221   68640 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0501 03:41:18.001868   68640 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
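
The "openssl x509 -noout -checkend 86400" runs above verify that each control-plane certificate remains valid for at least 86400 seconds (24 hours). An equivalent check can be written with Go's standard library, as sketched below; the helper name expiresWithin and the example path are illustrative.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file expires
// within d, the rough Go analogue of "openssl x509 -checkend".
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
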
	I0501 03:41:18.008701   68640 kubeadm.go:391] StartCluster: {Name:no-preload-892672 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:no-preload-892672 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.144 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 03:41:18.008831   68640 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0501 03:41:18.008893   68640 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0501 03:41:18.056939   68640 cri.go:89] found id: ""
	I0501 03:41:18.057005   68640 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0501 03:41:18.070898   68640 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0501 03:41:18.070921   68640 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0501 03:41:18.070926   68640 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0501 03:41:18.070968   68640 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0501 03:41:18.083907   68640 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0501 03:41:18.085116   68640 kubeconfig.go:125] found "no-preload-892672" server: "https://192.168.39.144:8443"
	I0501 03:41:18.088582   68640 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0501 03:41:18.101426   68640 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.144
	I0501 03:41:18.101471   68640 kubeadm.go:1154] stopping kube-system containers ...
	I0501 03:41:18.101493   68640 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0501 03:41:18.101543   68640 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0501 03:41:18.153129   68640 cri.go:89] found id: ""
	I0501 03:41:18.153193   68640 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0501 03:41:18.173100   68640 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0501 03:41:18.188443   68640 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0501 03:41:18.188463   68640 kubeadm.go:156] found existing configuration files:
	
	I0501 03:41:18.188509   68640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0501 03:41:18.202153   68640 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0501 03:41:18.202204   68640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0501 03:41:18.215390   68640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0501 03:41:18.227339   68640 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0501 03:41:18.227404   68640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0501 03:41:18.239160   68640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0501 03:41:18.251992   68640 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0501 03:41:18.252053   68640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0501 03:41:18.265088   68640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0501 03:41:18.277922   68640 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0501 03:41:18.277983   68640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
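
The sequence above is the stale-config cleanup: for each kubeconfig under /etc/kubernetes, grep for the expected control-plane endpoint and remove the file when the endpoint is absent (or, as here, when the file does not exist at all), so that the kubeadm init phases below can regenerate it. A simplified Go sketch of that decision; removeIfStale is a hypothetical helper, not the kubeadm.go code itself.

package main

import (
	"fmt"
	"os"
	"strings"
)

// removeIfStale deletes a kubeconfig that does not reference the expected
// control-plane endpoint, mirroring the grep / rm -f sequence in the log.
func removeIfStale(path, endpoint string) error {
	data, err := os.ReadFile(path)
	if err == nil && strings.Contains(string(data), endpoint) {
		return nil // file exists and already points at the right endpoint
	}
	// Missing file or wrong endpoint: remove it so kubeadm regenerates it.
	if err := os.Remove(path); err != nil && !os.IsNotExist(err) {
		return err
	}
	return nil
}

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if err := removeIfStale(f, endpoint); err != nil {
			fmt.Println(err)
		}
	}
}
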
	I0501 03:41:18.291307   68640 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0501 03:41:18.304879   68640 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:41:18.417921   68640 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:41:19.350848   68640 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:41:19.586348   68640 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:41:19.761056   68640 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:41:19.867315   68640 api_server.go:52] waiting for apiserver process to appear ...
	I0501 03:41:19.867435   68640 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:20.368520   68640 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:20.868444   68640 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:20.913411   68640 api_server.go:72] duration metric: took 1.046095165s to wait for apiserver process to appear ...
	I0501 03:41:20.913444   68640 api_server.go:88] waiting for apiserver healthz status ...
	I0501 03:41:20.913469   68640 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0501 03:41:20.914000   68640 api_server.go:269] stopped: https://192.168.39.144:8443/healthz: Get "https://192.168.39.144:8443/healthz": dial tcp 192.168.39.144:8443: connect: connection refused
	I0501 03:41:21.414544   68640 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0501 03:41:20.658333   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:23.157298   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:23.011395   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:25.012164   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:21.678785   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:22.179435   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:22.679615   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:23.179610   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:23.679473   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:24.179613   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:24.679672   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:25.179400   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:25.679793   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:26.179809   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:24.166756   68640 api_server.go:279] https://192.168.39.144:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0501 03:41:24.166786   68640 api_server.go:103] status: https://192.168.39.144:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0501 03:41:24.166807   68640 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0501 03:41:24.205679   68640 api_server.go:279] https://192.168.39.144:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0501 03:41:24.205713   68640 api_server.go:103] status: https://192.168.39.144:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0501 03:41:24.414055   68640 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0501 03:41:24.420468   68640 api_server.go:279] https://192.168.39.144:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:41:24.420502   68640 api_server.go:103] status: https://192.168.39.144:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:41:24.914021   68640 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0501 03:41:24.919717   68640 api_server.go:279] https://192.168.39.144:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:41:24.919754   68640 api_server.go:103] status: https://192.168.39.144:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:41:25.414015   68640 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0501 03:41:25.422149   68640 api_server.go:279] https://192.168.39.144:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:41:25.422180   68640 api_server.go:103] status: https://192.168.39.144:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:41:25.913751   68640 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0501 03:41:25.917839   68640 api_server.go:279] https://192.168.39.144:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:41:25.917865   68640 api_server.go:103] status: https://192.168.39.144:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:41:26.414458   68640 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0501 03:41:26.419346   68640 api_server.go:279] https://192.168.39.144:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:41:26.419367   68640 api_server.go:103] status: https://192.168.39.144:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:41:26.913912   68640 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0501 03:41:26.918504   68640 api_server.go:279] https://192.168.39.144:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:41:26.918537   68640 api_server.go:103] status: https://192.168.39.144:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:41:27.413693   68640 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0501 03:41:27.421752   68640 api_server.go:279] https://192.168.39.144:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 03:41:27.421776   68640 api_server.go:103] status: https://192.168.39.144:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 03:41:27.913582   68640 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0501 03:41:27.918116   68640 api_server.go:279] https://192.168.39.144:8443/healthz returned 200:
	ok
	I0501 03:41:27.927764   68640 api_server.go:141] control plane version: v1.30.0
	I0501 03:41:27.927790   68640 api_server.go:131] duration metric: took 7.014339409s to wait for apiserver health ...
	I0501 03:41:27.927799   68640 cni.go:84] Creating CNI manager for ""
	I0501 03:41:27.927805   68640 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0501 03:41:27.929889   68640 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0501 03:41:27.931210   68640 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0501 03:41:25.158177   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:27.656879   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:27.511692   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:30.010468   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:26.679430   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:27.179043   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:27.678801   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:28.179629   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:28.679111   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:29.179599   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:29.679624   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:30.179585   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:30.679442   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:31.179530   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:27.945852   68640 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0501 03:41:27.968311   68640 system_pods.go:43] waiting for kube-system pods to appear ...
	I0501 03:41:27.981571   68640 system_pods.go:59] 8 kube-system pods found
	I0501 03:41:27.981609   68640 system_pods.go:61] "coredns-7db6d8ff4d-v8bqq" [bf389521-9f19-4f2b-83a5-6d469c7ce0fe] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0501 03:41:27.981615   68640 system_pods.go:61] "etcd-no-preload-892672" [108fce6d-03f3-4bb9-a410-a58c58e8f186] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0501 03:41:27.981621   68640 system_pods.go:61] "kube-apiserver-no-preload-892672" [a18b7242-1865-4a67-aab6-c6cc19552326] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0501 03:41:27.981629   68640 system_pods.go:61] "kube-controller-manager-no-preload-892672" [318d39e1-5265-42e5-a3d5-4408b7b73542] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0501 03:41:27.981636   68640 system_pods.go:61] "kube-proxy-dwvdl" [f7a97598-aaa1-4df5-8d6a-8f6286568ad6] Running
	I0501 03:41:27.981642   68640 system_pods.go:61] "kube-scheduler-no-preload-892672" [cbf1c183-16df-42c8-b1c8-b9adf3c25a7a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0501 03:41:27.981647   68640 system_pods.go:61] "metrics-server-569cc877fc-k8jnl" [1dd0fb29-4d90-41c8-9de2-d163eeb0247b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0501 03:41:27.981651   68640 system_pods.go:61] "storage-provisioner" [fc703ab1-f14b-4766-8ee2-a43477d3df21] Running
	I0501 03:41:27.981657   68640 system_pods.go:74] duration metric: took 13.322893ms to wait for pod list to return data ...
	I0501 03:41:27.981667   68640 node_conditions.go:102] verifying NodePressure condition ...
	I0501 03:41:27.985896   68640 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 03:41:27.985931   68640 node_conditions.go:123] node cpu capacity is 2
	I0501 03:41:27.985944   68640 node_conditions.go:105] duration metric: took 4.271726ms to run NodePressure ...
	I0501 03:41:27.985966   68640 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 03:41:28.269675   68640 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0501 03:41:28.276487   68640 kubeadm.go:733] kubelet initialised
	I0501 03:41:28.276512   68640 kubeadm.go:734] duration metric: took 6.808875ms waiting for restarted kubelet to initialise ...
	I0501 03:41:28.276522   68640 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 03:41:28.287109   68640 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-v8bqq" in "kube-system" namespace to be "Ready" ...
	I0501 03:41:28.297143   68640 pod_ready.go:97] node "no-preload-892672" hosting pod "coredns-7db6d8ff4d-v8bqq" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-892672" has status "Ready":"False"
	I0501 03:41:28.297185   68640 pod_ready.go:81] duration metric: took 10.040841ms for pod "coredns-7db6d8ff4d-v8bqq" in "kube-system" namespace to be "Ready" ...
	E0501 03:41:28.297198   68640 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-892672" hosting pod "coredns-7db6d8ff4d-v8bqq" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-892672" has status "Ready":"False"
	I0501 03:41:28.297206   68640 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	I0501 03:41:28.307648   68640 pod_ready.go:97] node "no-preload-892672" hosting pod "etcd-no-preload-892672" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-892672" has status "Ready":"False"
	I0501 03:41:28.307682   68640 pod_ready.go:81] duration metric: took 10.464199ms for pod "etcd-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	E0501 03:41:28.307695   68640 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-892672" hosting pod "etcd-no-preload-892672" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-892672" has status "Ready":"False"
	I0501 03:41:28.307707   68640 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	I0501 03:41:30.319652   68640 pod_ready.go:102] pod "kube-apiserver-no-preload-892672" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:32.821375   68640 pod_ready.go:102] pod "kube-apiserver-no-preload-892672" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:29.657167   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:32.157549   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:32.012009   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:34.511543   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:31.679423   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:32.179628   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:32.679456   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:33.179336   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:33.679221   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:34.178900   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:34.679236   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:35.179595   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:35.679520   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:36.179639   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:35.317202   68640 pod_ready.go:102] pod "kube-apiserver-no-preload-892672" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:37.318125   68640 pod_ready.go:92] pod "kube-apiserver-no-preload-892672" in "kube-system" namespace has status "Ready":"True"
	I0501 03:41:37.318157   68640 pod_ready.go:81] duration metric: took 9.010440772s for pod "kube-apiserver-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	I0501 03:41:37.318170   68640 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	I0501 03:41:37.327390   68640 pod_ready.go:92] pod "kube-controller-manager-no-preload-892672" in "kube-system" namespace has status "Ready":"True"
	I0501 03:41:37.327412   68640 pod_ready.go:81] duration metric: took 9.233689ms for pod "kube-controller-manager-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	I0501 03:41:37.327425   68640 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-dwvdl" in "kube-system" namespace to be "Ready" ...
	I0501 03:41:37.333971   68640 pod_ready.go:92] pod "kube-proxy-dwvdl" in "kube-system" namespace has status "Ready":"True"
	I0501 03:41:37.333994   68640 pod_ready.go:81] duration metric: took 6.561014ms for pod "kube-proxy-dwvdl" in "kube-system" namespace to be "Ready" ...
	I0501 03:41:37.334006   68640 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	I0501 03:41:37.338637   68640 pod_ready.go:92] pod "kube-scheduler-no-preload-892672" in "kube-system" namespace has status "Ready":"True"
	I0501 03:41:37.338657   68640 pod_ready.go:81] duration metric: took 4.644395ms for pod "kube-scheduler-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	I0501 03:41:37.338665   68640 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace to be "Ready" ...
	I0501 03:41:34.657958   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:36.658191   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:36.512234   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:39.012636   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:36.678883   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:37.179198   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:37.679101   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:38.179088   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:38.679354   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:39.179163   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:39.678809   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:40.179768   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:40.679046   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:41.179618   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:39.346054   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:41.346434   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:39.157142   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:41.656902   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:41.510939   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:43.511571   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:45.511959   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:41.679751   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:42.178848   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:42.679525   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:43.179706   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:43.679665   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:44.179053   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:44.679615   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:45.178830   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:45.679547   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:41:45.679620   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:41:45.718568   69580 cri.go:89] found id: ""
	I0501 03:41:45.718597   69580 logs.go:276] 0 containers: []
	W0501 03:41:45.718611   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:41:45.718619   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:41:45.718678   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:41:45.755572   69580 cri.go:89] found id: ""
	I0501 03:41:45.755596   69580 logs.go:276] 0 containers: []
	W0501 03:41:45.755604   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:41:45.755609   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:41:45.755654   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:41:45.793411   69580 cri.go:89] found id: ""
	I0501 03:41:45.793440   69580 logs.go:276] 0 containers: []
	W0501 03:41:45.793450   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:41:45.793458   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:41:45.793526   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:41:45.834547   69580 cri.go:89] found id: ""
	I0501 03:41:45.834572   69580 logs.go:276] 0 containers: []
	W0501 03:41:45.834579   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:41:45.834585   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:41:45.834668   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:41:45.873293   69580 cri.go:89] found id: ""
	I0501 03:41:45.873321   69580 logs.go:276] 0 containers: []
	W0501 03:41:45.873332   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:41:45.873348   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:41:45.873411   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:41:45.911703   69580 cri.go:89] found id: ""
	I0501 03:41:45.911734   69580 logs.go:276] 0 containers: []
	W0501 03:41:45.911745   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:41:45.911766   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:41:45.911826   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:41:45.949577   69580 cri.go:89] found id: ""
	I0501 03:41:45.949602   69580 logs.go:276] 0 containers: []
	W0501 03:41:45.949610   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:41:45.949616   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:41:45.949666   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:41:45.986174   69580 cri.go:89] found id: ""
	I0501 03:41:45.986199   69580 logs.go:276] 0 containers: []
	W0501 03:41:45.986207   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:41:45.986216   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:41:45.986228   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:41:46.041028   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:41:46.041064   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:41:46.057097   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:41:46.057126   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:41:46.195021   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:41:46.195042   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:41:46.195055   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:41:46.261153   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:41:46.261197   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:41:43.845096   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:45.845950   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:47.849620   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:44.157041   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:46.158028   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:48.658062   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:48.011975   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:50.512345   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:48.809274   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:48.824295   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:41:48.824369   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:41:48.869945   69580 cri.go:89] found id: ""
	I0501 03:41:48.869975   69580 logs.go:276] 0 containers: []
	W0501 03:41:48.869985   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:41:48.869993   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:41:48.870053   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:41:48.918088   69580 cri.go:89] found id: ""
	I0501 03:41:48.918113   69580 logs.go:276] 0 containers: []
	W0501 03:41:48.918122   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:41:48.918131   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:41:48.918190   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:41:48.958102   69580 cri.go:89] found id: ""
	I0501 03:41:48.958132   69580 logs.go:276] 0 containers: []
	W0501 03:41:48.958143   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:41:48.958149   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:41:48.958207   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:41:48.997163   69580 cri.go:89] found id: ""
	I0501 03:41:48.997194   69580 logs.go:276] 0 containers: []
	W0501 03:41:48.997211   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:41:48.997218   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:41:48.997284   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:41:49.040132   69580 cri.go:89] found id: ""
	I0501 03:41:49.040156   69580 logs.go:276] 0 containers: []
	W0501 03:41:49.040164   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:41:49.040170   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:41:49.040228   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:41:49.079680   69580 cri.go:89] found id: ""
	I0501 03:41:49.079712   69580 logs.go:276] 0 containers: []
	W0501 03:41:49.079724   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:41:49.079732   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:41:49.079790   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:41:49.120577   69580 cri.go:89] found id: ""
	I0501 03:41:49.120610   69580 logs.go:276] 0 containers: []
	W0501 03:41:49.120623   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:41:49.120630   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:41:49.120700   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:41:49.167098   69580 cri.go:89] found id: ""
	I0501 03:41:49.167123   69580 logs.go:276] 0 containers: []
	W0501 03:41:49.167133   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:41:49.167141   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:41:49.167152   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:41:49.242834   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:41:49.242868   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:41:49.264011   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:41:49.264033   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:41:49.367711   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:41:49.367739   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:41:49.367764   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:41:49.441925   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:41:49.441964   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:41:50.346009   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:52.346333   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:51.156287   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:53.657588   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:53.010720   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:55.012329   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:51.986536   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:52.001651   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:41:52.001734   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:41:52.039550   69580 cri.go:89] found id: ""
	I0501 03:41:52.039571   69580 logs.go:276] 0 containers: []
	W0501 03:41:52.039579   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:41:52.039584   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:41:52.039636   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:41:52.082870   69580 cri.go:89] found id: ""
	I0501 03:41:52.082892   69580 logs.go:276] 0 containers: []
	W0501 03:41:52.082900   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:41:52.082905   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:41:52.082949   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:41:52.126970   69580 cri.go:89] found id: ""
	I0501 03:41:52.126996   69580 logs.go:276] 0 containers: []
	W0501 03:41:52.127009   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:41:52.127014   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:41:52.127076   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:41:52.169735   69580 cri.go:89] found id: ""
	I0501 03:41:52.169761   69580 logs.go:276] 0 containers: []
	W0501 03:41:52.169769   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:41:52.169774   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:41:52.169826   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:41:52.207356   69580 cri.go:89] found id: ""
	I0501 03:41:52.207392   69580 logs.go:276] 0 containers: []
	W0501 03:41:52.207404   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:41:52.207412   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:41:52.207472   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:41:52.250074   69580 cri.go:89] found id: ""
	I0501 03:41:52.250102   69580 logs.go:276] 0 containers: []
	W0501 03:41:52.250113   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:41:52.250121   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:41:52.250180   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:41:52.290525   69580 cri.go:89] found id: ""
	I0501 03:41:52.290550   69580 logs.go:276] 0 containers: []
	W0501 03:41:52.290558   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:41:52.290564   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:41:52.290610   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:41:52.336058   69580 cri.go:89] found id: ""
	I0501 03:41:52.336084   69580 logs.go:276] 0 containers: []
	W0501 03:41:52.336092   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:41:52.336103   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:41:52.336118   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:41:52.392738   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:41:52.392773   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:41:52.408475   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:41:52.408503   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:41:52.493567   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:41:52.493594   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:41:52.493608   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:41:52.566550   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:41:52.566583   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:41:55.117129   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:55.134840   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:41:55.134918   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:41:55.193990   69580 cri.go:89] found id: ""
	I0501 03:41:55.194019   69580 logs.go:276] 0 containers: []
	W0501 03:41:55.194029   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:41:55.194038   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:41:55.194100   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:41:55.261710   69580 cri.go:89] found id: ""
	I0501 03:41:55.261743   69580 logs.go:276] 0 containers: []
	W0501 03:41:55.261754   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:41:55.261761   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:41:55.261823   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:41:55.302432   69580 cri.go:89] found id: ""
	I0501 03:41:55.302468   69580 logs.go:276] 0 containers: []
	W0501 03:41:55.302480   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:41:55.302488   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:41:55.302550   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:41:55.346029   69580 cri.go:89] found id: ""
	I0501 03:41:55.346058   69580 logs.go:276] 0 containers: []
	W0501 03:41:55.346067   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:41:55.346073   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:41:55.346117   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:41:55.393206   69580 cri.go:89] found id: ""
	I0501 03:41:55.393229   69580 logs.go:276] 0 containers: []
	W0501 03:41:55.393236   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:41:55.393242   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:41:55.393295   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:41:55.437908   69580 cri.go:89] found id: ""
	I0501 03:41:55.437940   69580 logs.go:276] 0 containers: []
	W0501 03:41:55.437952   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:41:55.437960   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:41:55.438020   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:41:55.480439   69580 cri.go:89] found id: ""
	I0501 03:41:55.480468   69580 logs.go:276] 0 containers: []
	W0501 03:41:55.480480   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:41:55.480488   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:41:55.480589   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:41:55.524782   69580 cri.go:89] found id: ""
	I0501 03:41:55.524811   69580 logs.go:276] 0 containers: []
	W0501 03:41:55.524819   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:41:55.524828   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:41:55.524840   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:41:55.604337   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:41:55.604373   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:41:55.649427   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:41:55.649455   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:41:55.707928   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:41:55.707976   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:41:55.723289   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:41:55.723316   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:41:55.805146   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:41:54.347203   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:56.847806   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:55.658387   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:58.156886   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:57.511280   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:59.511460   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:41:58.306145   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:41:58.322207   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:41:58.322280   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:41:58.370291   69580 cri.go:89] found id: ""
	I0501 03:41:58.370319   69580 logs.go:276] 0 containers: []
	W0501 03:41:58.370331   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:41:58.370338   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:41:58.370417   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:41:58.421230   69580 cri.go:89] found id: ""
	I0501 03:41:58.421256   69580 logs.go:276] 0 containers: []
	W0501 03:41:58.421264   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:41:58.421270   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:41:58.421317   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:41:58.463694   69580 cri.go:89] found id: ""
	I0501 03:41:58.463724   69580 logs.go:276] 0 containers: []
	W0501 03:41:58.463735   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:41:58.463743   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:41:58.463797   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:41:58.507756   69580 cri.go:89] found id: ""
	I0501 03:41:58.507785   69580 logs.go:276] 0 containers: []
	W0501 03:41:58.507791   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:41:58.507797   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:41:58.507870   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:41:58.554852   69580 cri.go:89] found id: ""
	I0501 03:41:58.554884   69580 logs.go:276] 0 containers: []
	W0501 03:41:58.554895   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:41:58.554903   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:41:58.554969   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:41:58.602467   69580 cri.go:89] found id: ""
	I0501 03:41:58.602495   69580 logs.go:276] 0 containers: []
	W0501 03:41:58.602505   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:41:58.602511   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:41:58.602561   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:41:58.652718   69580 cri.go:89] found id: ""
	I0501 03:41:58.652749   69580 logs.go:276] 0 containers: []
	W0501 03:41:58.652759   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:41:58.652766   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:41:58.652837   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:41:58.694351   69580 cri.go:89] found id: ""
	I0501 03:41:58.694377   69580 logs.go:276] 0 containers: []
	W0501 03:41:58.694385   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:41:58.694393   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:41:58.694434   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:41:58.779878   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:41:58.779911   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:41:58.826733   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:41:58.826768   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:41:58.883808   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:41:58.883842   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:41:58.900463   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:41:58.900495   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:41:58.991346   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:41:59.345807   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:01.846099   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:00.157131   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:02.157204   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:01.511711   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:03.512536   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:01.492396   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:01.508620   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:01.508756   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:01.555669   69580 cri.go:89] found id: ""
	I0501 03:42:01.555696   69580 logs.go:276] 0 containers: []
	W0501 03:42:01.555712   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:01.555720   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:01.555782   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:01.597591   69580 cri.go:89] found id: ""
	I0501 03:42:01.597615   69580 logs.go:276] 0 containers: []
	W0501 03:42:01.597626   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:01.597635   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:01.597693   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:01.636259   69580 cri.go:89] found id: ""
	I0501 03:42:01.636286   69580 logs.go:276] 0 containers: []
	W0501 03:42:01.636297   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:01.636305   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:01.636361   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:01.684531   69580 cri.go:89] found id: ""
	I0501 03:42:01.684562   69580 logs.go:276] 0 containers: []
	W0501 03:42:01.684572   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:01.684579   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:01.684647   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:01.725591   69580 cri.go:89] found id: ""
	I0501 03:42:01.725621   69580 logs.go:276] 0 containers: []
	W0501 03:42:01.725628   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:01.725652   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:01.725718   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:01.767868   69580 cri.go:89] found id: ""
	I0501 03:42:01.767901   69580 logs.go:276] 0 containers: []
	W0501 03:42:01.767910   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:01.767917   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:01.767977   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:01.817590   69580 cri.go:89] found id: ""
	I0501 03:42:01.817618   69580 logs.go:276] 0 containers: []
	W0501 03:42:01.817629   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:01.817637   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:01.817697   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:01.863549   69580 cri.go:89] found id: ""
	I0501 03:42:01.863576   69580 logs.go:276] 0 containers: []
	W0501 03:42:01.863586   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:01.863595   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:01.863607   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:01.879134   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:01.879162   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:01.967015   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:01.967043   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:01.967059   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:02.051576   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:02.051614   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:02.095614   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:02.095644   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:04.652974   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:04.671018   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:04.671103   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:04.712392   69580 cri.go:89] found id: ""
	I0501 03:42:04.712425   69580 logs.go:276] 0 containers: []
	W0501 03:42:04.712435   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:04.712442   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:04.712503   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:04.756854   69580 cri.go:89] found id: ""
	I0501 03:42:04.756881   69580 logs.go:276] 0 containers: []
	W0501 03:42:04.756893   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:04.756900   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:04.756962   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:04.797665   69580 cri.go:89] found id: ""
	I0501 03:42:04.797694   69580 logs.go:276] 0 containers: []
	W0501 03:42:04.797703   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:04.797709   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:04.797756   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:04.838441   69580 cri.go:89] found id: ""
	I0501 03:42:04.838472   69580 logs.go:276] 0 containers: []
	W0501 03:42:04.838483   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:04.838491   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:04.838556   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:04.879905   69580 cri.go:89] found id: ""
	I0501 03:42:04.879935   69580 logs.go:276] 0 containers: []
	W0501 03:42:04.879945   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:04.879952   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:04.880012   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:04.924759   69580 cri.go:89] found id: ""
	I0501 03:42:04.924792   69580 logs.go:276] 0 containers: []
	W0501 03:42:04.924804   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:04.924813   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:04.924879   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:04.965638   69580 cri.go:89] found id: ""
	I0501 03:42:04.965663   69580 logs.go:276] 0 containers: []
	W0501 03:42:04.965670   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:04.965676   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:04.965721   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:05.013127   69580 cri.go:89] found id: ""
	I0501 03:42:05.013153   69580 logs.go:276] 0 containers: []
	W0501 03:42:05.013163   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:05.013173   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:05.013185   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:05.108388   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:05.108409   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:05.108422   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:05.198239   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:05.198281   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:05.241042   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:05.241076   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:05.299017   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:05.299069   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:04.345910   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:06.346830   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:04.657438   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:06.657707   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:06.011511   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:08.016548   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:10.510503   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:07.815458   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:07.832047   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:07.832125   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:07.882950   69580 cri.go:89] found id: ""
	I0501 03:42:07.882985   69580 logs.go:276] 0 containers: []
	W0501 03:42:07.882996   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:07.883002   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:07.883051   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:07.928086   69580 cri.go:89] found id: ""
	I0501 03:42:07.928111   69580 logs.go:276] 0 containers: []
	W0501 03:42:07.928119   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:07.928124   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:07.928177   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:07.976216   69580 cri.go:89] found id: ""
	I0501 03:42:07.976250   69580 logs.go:276] 0 containers: []
	W0501 03:42:07.976268   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:07.976274   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:07.976331   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:08.019903   69580 cri.go:89] found id: ""
	I0501 03:42:08.019932   69580 logs.go:276] 0 containers: []
	W0501 03:42:08.019943   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:08.019951   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:08.020009   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:08.075980   69580 cri.go:89] found id: ""
	I0501 03:42:08.076004   69580 logs.go:276] 0 containers: []
	W0501 03:42:08.076012   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:08.076018   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:08.076065   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:08.114849   69580 cri.go:89] found id: ""
	I0501 03:42:08.114881   69580 logs.go:276] 0 containers: []
	W0501 03:42:08.114891   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:08.114897   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:08.114955   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:08.159427   69580 cri.go:89] found id: ""
	I0501 03:42:08.159457   69580 logs.go:276] 0 containers: []
	W0501 03:42:08.159468   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:08.159476   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:08.159543   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:08.200117   69580 cri.go:89] found id: ""
	I0501 03:42:08.200151   69580 logs.go:276] 0 containers: []
	W0501 03:42:08.200163   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:08.200182   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:08.200197   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:08.281926   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:08.281972   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:08.331393   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:08.331429   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:08.386758   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:08.386793   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:08.402551   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:08.402581   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:08.489678   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:10.990653   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:11.007879   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:11.007958   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:11.049842   69580 cri.go:89] found id: ""
	I0501 03:42:11.049867   69580 logs.go:276] 0 containers: []
	W0501 03:42:11.049879   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:11.049885   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:11.049933   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:11.091946   69580 cri.go:89] found id: ""
	I0501 03:42:11.091980   69580 logs.go:276] 0 containers: []
	W0501 03:42:11.091992   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:11.092000   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:11.092079   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:11.140100   69580 cri.go:89] found id: ""
	I0501 03:42:11.140129   69580 logs.go:276] 0 containers: []
	W0501 03:42:11.140138   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:11.140144   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:11.140207   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:11.182796   69580 cri.go:89] found id: ""
	I0501 03:42:11.182821   69580 logs.go:276] 0 containers: []
	W0501 03:42:11.182832   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:11.182838   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:11.182896   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:11.222985   69580 cri.go:89] found id: ""
	I0501 03:42:11.223016   69580 logs.go:276] 0 containers: []
	W0501 03:42:11.223027   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:11.223033   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:11.223114   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:11.265793   69580 cri.go:89] found id: ""
	I0501 03:42:11.265818   69580 logs.go:276] 0 containers: []
	W0501 03:42:11.265830   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:11.265838   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:11.265913   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:11.309886   69580 cri.go:89] found id: ""
	I0501 03:42:11.309912   69580 logs.go:276] 0 containers: []
	W0501 03:42:11.309924   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:11.309931   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:11.309989   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:11.357757   69580 cri.go:89] found id: ""
	I0501 03:42:11.357791   69580 logs.go:276] 0 containers: []
	W0501 03:42:11.357803   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:11.357823   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:11.357839   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:11.412668   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:11.412704   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:11.428380   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:11.428422   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0501 03:42:08.347511   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:10.846691   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:09.156632   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:11.158047   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:13.657603   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:12.512713   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:15.011382   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	W0501 03:42:11.521898   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:11.521924   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:11.521940   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:11.607081   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:11.607116   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:14.153054   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:14.173046   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:14.173150   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:14.219583   69580 cri.go:89] found id: ""
	I0501 03:42:14.219605   69580 logs.go:276] 0 containers: []
	W0501 03:42:14.219613   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:14.219619   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:14.219664   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:14.260316   69580 cri.go:89] found id: ""
	I0501 03:42:14.260349   69580 logs.go:276] 0 containers: []
	W0501 03:42:14.260357   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:14.260366   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:14.260420   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:14.305049   69580 cri.go:89] found id: ""
	I0501 03:42:14.305085   69580 logs.go:276] 0 containers: []
	W0501 03:42:14.305109   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:14.305117   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:14.305198   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:14.359589   69580 cri.go:89] found id: ""
	I0501 03:42:14.359614   69580 logs.go:276] 0 containers: []
	W0501 03:42:14.359622   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:14.359628   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:14.359672   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:14.403867   69580 cri.go:89] found id: ""
	I0501 03:42:14.403895   69580 logs.go:276] 0 containers: []
	W0501 03:42:14.403904   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:14.403910   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:14.403987   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:14.446626   69580 cri.go:89] found id: ""
	I0501 03:42:14.446655   69580 logs.go:276] 0 containers: []
	W0501 03:42:14.446675   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:14.446683   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:14.446754   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:14.490983   69580 cri.go:89] found id: ""
	I0501 03:42:14.491016   69580 logs.go:276] 0 containers: []
	W0501 03:42:14.491028   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:14.491036   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:14.491117   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:14.534180   69580 cri.go:89] found id: ""
	I0501 03:42:14.534205   69580 logs.go:276] 0 containers: []
	W0501 03:42:14.534213   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:14.534221   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:14.534236   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:14.621433   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:14.621491   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:14.680265   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:14.680310   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:14.738943   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:14.738983   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:14.754145   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:14.754176   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:14.839974   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:13.347081   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:15.847072   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:17.847749   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:16.157433   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:18.158120   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:17.017276   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:19.514339   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:17.340948   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:17.360007   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:17.360068   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:17.403201   69580 cri.go:89] found id: ""
	I0501 03:42:17.403231   69580 logs.go:276] 0 containers: []
	W0501 03:42:17.403239   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:17.403245   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:17.403301   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:17.442940   69580 cri.go:89] found id: ""
	I0501 03:42:17.442966   69580 logs.go:276] 0 containers: []
	W0501 03:42:17.442975   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:17.442981   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:17.443038   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:17.487219   69580 cri.go:89] found id: ""
	I0501 03:42:17.487248   69580 logs.go:276] 0 containers: []
	W0501 03:42:17.487259   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:17.487267   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:17.487324   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:17.528551   69580 cri.go:89] found id: ""
	I0501 03:42:17.528583   69580 logs.go:276] 0 containers: []
	W0501 03:42:17.528593   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:17.528601   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:17.528668   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:17.577005   69580 cri.go:89] found id: ""
	I0501 03:42:17.577041   69580 logs.go:276] 0 containers: []
	W0501 03:42:17.577052   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:17.577061   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:17.577132   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:17.618924   69580 cri.go:89] found id: ""
	I0501 03:42:17.618949   69580 logs.go:276] 0 containers: []
	W0501 03:42:17.618957   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:17.618963   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:17.619022   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:17.660487   69580 cri.go:89] found id: ""
	I0501 03:42:17.660514   69580 logs.go:276] 0 containers: []
	W0501 03:42:17.660525   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:17.660532   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:17.660592   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:17.701342   69580 cri.go:89] found id: ""
	I0501 03:42:17.701370   69580 logs.go:276] 0 containers: []
	W0501 03:42:17.701378   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:17.701387   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:17.701400   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:17.757034   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:17.757069   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:17.772955   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:17.772984   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:17.888062   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:17.888088   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:17.888101   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:17.969274   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:17.969312   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:20.521053   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:20.536065   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:20.536141   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:20.577937   69580 cri.go:89] found id: ""
	I0501 03:42:20.577967   69580 logs.go:276] 0 containers: []
	W0501 03:42:20.577977   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:20.577986   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:20.578055   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:20.626690   69580 cri.go:89] found id: ""
	I0501 03:42:20.626714   69580 logs.go:276] 0 containers: []
	W0501 03:42:20.626722   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:20.626728   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:20.626809   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:20.670849   69580 cri.go:89] found id: ""
	I0501 03:42:20.670872   69580 logs.go:276] 0 containers: []
	W0501 03:42:20.670881   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:20.670886   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:20.670946   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:20.711481   69580 cri.go:89] found id: ""
	I0501 03:42:20.711511   69580 logs.go:276] 0 containers: []
	W0501 03:42:20.711522   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:20.711531   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:20.711596   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:20.753413   69580 cri.go:89] found id: ""
	I0501 03:42:20.753443   69580 logs.go:276] 0 containers: []
	W0501 03:42:20.753452   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:20.753459   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:20.753536   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:20.791424   69580 cri.go:89] found id: ""
	I0501 03:42:20.791452   69580 logs.go:276] 0 containers: []
	W0501 03:42:20.791461   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:20.791466   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:20.791526   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:20.833718   69580 cri.go:89] found id: ""
	I0501 03:42:20.833740   69580 logs.go:276] 0 containers: []
	W0501 03:42:20.833748   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:20.833752   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:20.833799   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:20.879788   69580 cri.go:89] found id: ""
	I0501 03:42:20.879818   69580 logs.go:276] 0 containers: []
	W0501 03:42:20.879828   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:20.879839   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:20.879855   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:20.895266   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:20.895304   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:20.976429   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:20.976452   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:20.976465   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:21.063573   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:21.063611   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:21.113510   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:21.113543   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:20.346735   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:22.347096   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:20.658642   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:22.659841   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:22.011045   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:24.012756   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:23.672203   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:23.687849   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:23.687946   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:23.731428   69580 cri.go:89] found id: ""
	I0501 03:42:23.731455   69580 logs.go:276] 0 containers: []
	W0501 03:42:23.731467   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:23.731473   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:23.731534   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:23.772219   69580 cri.go:89] found id: ""
	I0501 03:42:23.772248   69580 logs.go:276] 0 containers: []
	W0501 03:42:23.772259   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:23.772266   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:23.772369   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:23.837203   69580 cri.go:89] found id: ""
	I0501 03:42:23.837235   69580 logs.go:276] 0 containers: []
	W0501 03:42:23.837247   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:23.837255   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:23.837317   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:23.884681   69580 cri.go:89] found id: ""
	I0501 03:42:23.884709   69580 logs.go:276] 0 containers: []
	W0501 03:42:23.884716   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:23.884722   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:23.884783   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:23.927544   69580 cri.go:89] found id: ""
	I0501 03:42:23.927576   69580 logs.go:276] 0 containers: []
	W0501 03:42:23.927584   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:23.927590   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:23.927652   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:23.970428   69580 cri.go:89] found id: ""
	I0501 03:42:23.970457   69580 logs.go:276] 0 containers: []
	W0501 03:42:23.970467   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:23.970476   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:23.970541   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:24.010545   69580 cri.go:89] found id: ""
	I0501 03:42:24.010573   69580 logs.go:276] 0 containers: []
	W0501 03:42:24.010583   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:24.010593   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:24.010653   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:24.053547   69580 cri.go:89] found id: ""
	I0501 03:42:24.053574   69580 logs.go:276] 0 containers: []
	W0501 03:42:24.053582   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:24.053591   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:24.053602   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:24.108416   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:24.108452   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:24.124052   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:24.124083   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:24.209024   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:24.209048   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:24.209063   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:24.291644   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:24.291693   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:24.846439   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:26.846750   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:25.157009   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:27.657022   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:26.510679   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:28.511049   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:30.511542   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:26.840623   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:26.856231   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:26.856320   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:26.897988   69580 cri.go:89] found id: ""
	I0501 03:42:26.898022   69580 logs.go:276] 0 containers: []
	W0501 03:42:26.898033   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:26.898041   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:26.898109   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:26.937608   69580 cri.go:89] found id: ""
	I0501 03:42:26.937638   69580 logs.go:276] 0 containers: []
	W0501 03:42:26.937660   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:26.937668   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:26.937731   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:26.979799   69580 cri.go:89] found id: ""
	I0501 03:42:26.979836   69580 logs.go:276] 0 containers: []
	W0501 03:42:26.979847   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:26.979854   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:26.979922   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:27.018863   69580 cri.go:89] found id: ""
	I0501 03:42:27.018896   69580 logs.go:276] 0 containers: []
	W0501 03:42:27.018903   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:27.018909   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:27.018959   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:27.057864   69580 cri.go:89] found id: ""
	I0501 03:42:27.057893   69580 logs.go:276] 0 containers: []
	W0501 03:42:27.057904   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:27.057912   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:27.057982   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:27.102909   69580 cri.go:89] found id: ""
	I0501 03:42:27.102939   69580 logs.go:276] 0 containers: []
	W0501 03:42:27.102950   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:27.102958   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:27.103019   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:27.148292   69580 cri.go:89] found id: ""
	I0501 03:42:27.148326   69580 logs.go:276] 0 containers: []
	W0501 03:42:27.148336   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:27.148344   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:27.148407   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:27.197557   69580 cri.go:89] found id: ""
	I0501 03:42:27.197581   69580 logs.go:276] 0 containers: []
	W0501 03:42:27.197588   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:27.197596   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:27.197609   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:27.281768   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:27.281793   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:27.281806   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:27.361496   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:27.361528   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:27.407640   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:27.407675   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:27.472533   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:27.472576   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:29.987773   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:30.003511   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:30.003619   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:30.049330   69580 cri.go:89] found id: ""
	I0501 03:42:30.049363   69580 logs.go:276] 0 containers: []
	W0501 03:42:30.049377   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:30.049384   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:30.049439   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:30.088521   69580 cri.go:89] found id: ""
	I0501 03:42:30.088549   69580 logs.go:276] 0 containers: []
	W0501 03:42:30.088560   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:30.088568   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:30.088624   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:30.132731   69580 cri.go:89] found id: ""
	I0501 03:42:30.132765   69580 logs.go:276] 0 containers: []
	W0501 03:42:30.132777   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:30.132784   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:30.132847   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:30.178601   69580 cri.go:89] found id: ""
	I0501 03:42:30.178639   69580 logs.go:276] 0 containers: []
	W0501 03:42:30.178648   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:30.178656   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:30.178714   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:30.230523   69580 cri.go:89] found id: ""
	I0501 03:42:30.230549   69580 logs.go:276] 0 containers: []
	W0501 03:42:30.230561   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:30.230569   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:30.230632   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:30.289234   69580 cri.go:89] found id: ""
	I0501 03:42:30.289262   69580 logs.go:276] 0 containers: []
	W0501 03:42:30.289270   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:30.289277   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:30.289342   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:30.332596   69580 cri.go:89] found id: ""
	I0501 03:42:30.332627   69580 logs.go:276] 0 containers: []
	W0501 03:42:30.332637   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:30.332644   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:30.332710   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:30.383871   69580 cri.go:89] found id: ""
	I0501 03:42:30.383901   69580 logs.go:276] 0 containers: []
	W0501 03:42:30.383908   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:30.383917   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:30.383929   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:30.464382   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:30.464404   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:30.464417   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:30.550604   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:30.550637   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:30.594927   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:30.594959   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:30.648392   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:30.648426   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:28.847271   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:31.345865   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:29.657316   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:31.657435   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:32.511887   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:35.011677   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:33.167591   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:33.183804   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:33.183874   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:33.223501   69580 cri.go:89] found id: ""
	I0501 03:42:33.223525   69580 logs.go:276] 0 containers: []
	W0501 03:42:33.223532   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:33.223539   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:33.223600   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:33.268674   69580 cri.go:89] found id: ""
	I0501 03:42:33.268705   69580 logs.go:276] 0 containers: []
	W0501 03:42:33.268741   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:33.268749   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:33.268807   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:33.310613   69580 cri.go:89] found id: ""
	I0501 03:42:33.310655   69580 logs.go:276] 0 containers: []
	W0501 03:42:33.310666   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:33.310674   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:33.310737   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:33.353156   69580 cri.go:89] found id: ""
	I0501 03:42:33.353177   69580 logs.go:276] 0 containers: []
	W0501 03:42:33.353184   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:33.353189   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:33.353237   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:33.389702   69580 cri.go:89] found id: ""
	I0501 03:42:33.389730   69580 logs.go:276] 0 containers: []
	W0501 03:42:33.389743   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:33.389751   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:33.389817   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:33.431244   69580 cri.go:89] found id: ""
	I0501 03:42:33.431275   69580 logs.go:276] 0 containers: []
	W0501 03:42:33.431290   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:33.431298   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:33.431384   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:33.472382   69580 cri.go:89] found id: ""
	I0501 03:42:33.472412   69580 logs.go:276] 0 containers: []
	W0501 03:42:33.472423   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:33.472431   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:33.472519   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:33.517042   69580 cri.go:89] found id: ""
	I0501 03:42:33.517064   69580 logs.go:276] 0 containers: []
	W0501 03:42:33.517071   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:33.517079   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:33.517091   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:33.573343   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:33.573372   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:33.588932   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:33.588963   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:33.674060   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:33.674090   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:33.674106   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:33.756635   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:33.756684   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:36.300909   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:36.320407   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:36.320474   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:36.367236   69580 cri.go:89] found id: ""
	I0501 03:42:36.367261   69580 logs.go:276] 0 containers: []
	W0501 03:42:36.367269   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:36.367274   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:36.367335   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:36.406440   69580 cri.go:89] found id: ""
	I0501 03:42:36.406471   69580 logs.go:276] 0 containers: []
	W0501 03:42:36.406482   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:36.406489   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:36.406552   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:36.443931   69580 cri.go:89] found id: ""
	I0501 03:42:36.443957   69580 logs.go:276] 0 containers: []
	W0501 03:42:36.443964   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:36.443969   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:36.444024   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:33.844832   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:35.845476   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:37.846291   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:34.156976   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:36.657001   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:38.657056   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:37.510534   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:39.511335   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:36.486169   69580 cri.go:89] found id: ""
	I0501 03:42:36.486200   69580 logs.go:276] 0 containers: []
	W0501 03:42:36.486213   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:36.486220   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:36.486276   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:36.532211   69580 cri.go:89] found id: ""
	I0501 03:42:36.532237   69580 logs.go:276] 0 containers: []
	W0501 03:42:36.532246   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:36.532251   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:36.532311   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:36.571889   69580 cri.go:89] found id: ""
	I0501 03:42:36.571921   69580 logs.go:276] 0 containers: []
	W0501 03:42:36.571933   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:36.571940   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:36.572000   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:36.612126   69580 cri.go:89] found id: ""
	I0501 03:42:36.612159   69580 logs.go:276] 0 containers: []
	W0501 03:42:36.612170   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:36.612177   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:36.612238   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:36.654067   69580 cri.go:89] found id: ""
	I0501 03:42:36.654096   69580 logs.go:276] 0 containers: []
	W0501 03:42:36.654106   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:36.654117   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:36.654129   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:36.740205   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:36.740226   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:36.740237   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:36.821403   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:36.821437   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:36.874829   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:36.874867   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:36.928312   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:36.928342   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:39.444598   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:39.460086   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:39.460151   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:39.500833   69580 cri.go:89] found id: ""
	I0501 03:42:39.500859   69580 logs.go:276] 0 containers: []
	W0501 03:42:39.500870   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:39.500879   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:39.500936   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:39.544212   69580 cri.go:89] found id: ""
	I0501 03:42:39.544238   69580 logs.go:276] 0 containers: []
	W0501 03:42:39.544248   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:39.544260   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:39.544326   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:39.582167   69580 cri.go:89] found id: ""
	I0501 03:42:39.582200   69580 logs.go:276] 0 containers: []
	W0501 03:42:39.582218   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:39.582231   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:39.582296   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:39.624811   69580 cri.go:89] found id: ""
	I0501 03:42:39.624837   69580 logs.go:276] 0 containers: []
	W0501 03:42:39.624848   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:39.624855   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:39.624913   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:39.666001   69580 cri.go:89] found id: ""
	I0501 03:42:39.666030   69580 logs.go:276] 0 containers: []
	W0501 03:42:39.666041   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:39.666048   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:39.666111   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:39.708790   69580 cri.go:89] found id: ""
	I0501 03:42:39.708820   69580 logs.go:276] 0 containers: []
	W0501 03:42:39.708831   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:39.708839   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:39.708896   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:39.750585   69580 cri.go:89] found id: ""
	I0501 03:42:39.750609   69580 logs.go:276] 0 containers: []
	W0501 03:42:39.750617   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:39.750622   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:39.750670   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:39.798576   69580 cri.go:89] found id: ""
	I0501 03:42:39.798612   69580 logs.go:276] 0 containers: []
	W0501 03:42:39.798624   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:39.798636   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:39.798651   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:39.891759   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:39.891782   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:39.891797   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:39.974419   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:39.974462   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:40.020700   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:40.020728   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:40.073946   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:40.073980   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:40.345975   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:42.350579   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:40.657403   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:42.658271   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:41.511780   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:43.512428   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:42.590933   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:42.606044   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:42.606120   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:42.653074   69580 cri.go:89] found id: ""
	I0501 03:42:42.653104   69580 logs.go:276] 0 containers: []
	W0501 03:42:42.653115   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:42.653123   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:42.653195   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:42.693770   69580 cri.go:89] found id: ""
	I0501 03:42:42.693809   69580 logs.go:276] 0 containers: []
	W0501 03:42:42.693821   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:42.693829   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:42.693885   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:42.739087   69580 cri.go:89] found id: ""
	I0501 03:42:42.739115   69580 logs.go:276] 0 containers: []
	W0501 03:42:42.739125   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:42.739133   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:42.739196   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:42.779831   69580 cri.go:89] found id: ""
	I0501 03:42:42.779863   69580 logs.go:276] 0 containers: []
	W0501 03:42:42.779876   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:42.779885   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:42.779950   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:42.826759   69580 cri.go:89] found id: ""
	I0501 03:42:42.826791   69580 logs.go:276] 0 containers: []
	W0501 03:42:42.826799   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:42.826804   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:42.826854   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:42.872602   69580 cri.go:89] found id: ""
	I0501 03:42:42.872629   69580 logs.go:276] 0 containers: []
	W0501 03:42:42.872640   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:42.872648   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:42.872707   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:42.913833   69580 cri.go:89] found id: ""
	I0501 03:42:42.913862   69580 logs.go:276] 0 containers: []
	W0501 03:42:42.913872   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:42.913879   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:42.913936   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:42.953629   69580 cri.go:89] found id: ""
	I0501 03:42:42.953657   69580 logs.go:276] 0 containers: []
	W0501 03:42:42.953667   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:42.953679   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:42.953695   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:42.968420   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:42.968447   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:43.046840   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:43.046874   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:43.046898   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:43.135453   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:43.135492   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:43.184103   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:43.184141   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:45.738246   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:45.753193   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:45.753258   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:45.791191   69580 cri.go:89] found id: ""
	I0501 03:42:45.791216   69580 logs.go:276] 0 containers: []
	W0501 03:42:45.791224   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:45.791236   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:45.791285   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:45.831935   69580 cri.go:89] found id: ""
	I0501 03:42:45.831967   69580 logs.go:276] 0 containers: []
	W0501 03:42:45.831978   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:45.831986   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:45.832041   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:45.869492   69580 cri.go:89] found id: ""
	I0501 03:42:45.869517   69580 logs.go:276] 0 containers: []
	W0501 03:42:45.869529   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:45.869536   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:45.869593   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:45.910642   69580 cri.go:89] found id: ""
	I0501 03:42:45.910672   69580 logs.go:276] 0 containers: []
	W0501 03:42:45.910682   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:45.910691   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:45.910754   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:45.951489   69580 cri.go:89] found id: ""
	I0501 03:42:45.951518   69580 logs.go:276] 0 containers: []
	W0501 03:42:45.951528   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:45.951535   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:45.951582   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:45.991388   69580 cri.go:89] found id: ""
	I0501 03:42:45.991410   69580 logs.go:276] 0 containers: []
	W0501 03:42:45.991418   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:45.991423   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:45.991467   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:46.036524   69580 cri.go:89] found id: ""
	I0501 03:42:46.036546   69580 logs.go:276] 0 containers: []
	W0501 03:42:46.036553   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:46.036560   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:46.036622   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:46.087472   69580 cri.go:89] found id: ""
	I0501 03:42:46.087495   69580 logs.go:276] 0 containers: []
	W0501 03:42:46.087504   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:46.087513   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:46.087526   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:46.101283   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:46.101314   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:46.176459   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:46.176491   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:46.176506   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:46.261921   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:46.261956   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:46.309879   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:46.309910   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:44.846042   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:47.349023   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:44.658318   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:47.155780   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:46.011347   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:48.511156   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:50.512175   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:48.867064   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:48.884082   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:48.884192   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:48.929681   69580 cri.go:89] found id: ""
	I0501 03:42:48.929708   69580 logs.go:276] 0 containers: []
	W0501 03:42:48.929716   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:48.929722   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:48.929789   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:48.977850   69580 cri.go:89] found id: ""
	I0501 03:42:48.977882   69580 logs.go:276] 0 containers: []
	W0501 03:42:48.977894   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:48.977901   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:48.977962   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:49.022590   69580 cri.go:89] found id: ""
	I0501 03:42:49.022619   69580 logs.go:276] 0 containers: []
	W0501 03:42:49.022629   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:49.022637   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:49.022706   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:49.064092   69580 cri.go:89] found id: ""
	I0501 03:42:49.064122   69580 logs.go:276] 0 containers: []
	W0501 03:42:49.064143   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:49.064152   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:49.064220   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:49.103962   69580 cri.go:89] found id: ""
	I0501 03:42:49.103990   69580 logs.go:276] 0 containers: []
	W0501 03:42:49.104002   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:49.104009   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:49.104070   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:49.144566   69580 cri.go:89] found id: ""
	I0501 03:42:49.144596   69580 logs.go:276] 0 containers: []
	W0501 03:42:49.144604   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:49.144610   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:49.144669   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:49.183110   69580 cri.go:89] found id: ""
	I0501 03:42:49.183141   69580 logs.go:276] 0 containers: []
	W0501 03:42:49.183161   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:49.183166   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:49.183239   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:49.225865   69580 cri.go:89] found id: ""
	I0501 03:42:49.225890   69580 logs.go:276] 0 containers: []
	W0501 03:42:49.225902   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:49.225912   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:49.225926   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:49.312967   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:49.313005   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:49.361171   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:49.361206   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:49.418731   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:49.418780   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:49.436976   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:49.437007   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:49.517994   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:49.848517   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:52.346908   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:49.160713   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:51.656444   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:53.659040   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:53.011092   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:55.011811   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:52.018675   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:52.033946   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:52.034022   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:52.081433   69580 cri.go:89] found id: ""
	I0501 03:42:52.081465   69580 logs.go:276] 0 containers: []
	W0501 03:42:52.081477   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:52.081485   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:52.081544   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:52.123914   69580 cri.go:89] found id: ""
	I0501 03:42:52.123947   69580 logs.go:276] 0 containers: []
	W0501 03:42:52.123958   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:52.123966   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:52.124023   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:52.164000   69580 cri.go:89] found id: ""
	I0501 03:42:52.164020   69580 logs.go:276] 0 containers: []
	W0501 03:42:52.164027   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:52.164033   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:52.164086   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:52.205984   69580 cri.go:89] found id: ""
	I0501 03:42:52.206011   69580 logs.go:276] 0 containers: []
	W0501 03:42:52.206023   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:52.206031   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:52.206096   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:52.252743   69580 cri.go:89] found id: ""
	I0501 03:42:52.252766   69580 logs.go:276] 0 containers: []
	W0501 03:42:52.252774   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:52.252779   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:52.252839   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:52.296814   69580 cri.go:89] found id: ""
	I0501 03:42:52.296838   69580 logs.go:276] 0 containers: []
	W0501 03:42:52.296856   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:52.296864   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:52.296928   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:52.335996   69580 cri.go:89] found id: ""
	I0501 03:42:52.336023   69580 logs.go:276] 0 containers: []
	W0501 03:42:52.336034   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:52.336042   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:52.336105   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:52.377470   69580 cri.go:89] found id: ""
	I0501 03:42:52.377498   69580 logs.go:276] 0 containers: []
	W0501 03:42:52.377513   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:52.377524   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:52.377540   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:52.432644   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:52.432680   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:52.447518   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:52.447552   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:52.530967   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:52.530992   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:52.531005   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:52.612280   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:52.612327   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:55.170134   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:55.185252   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:55.185328   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:55.227741   69580 cri.go:89] found id: ""
	I0501 03:42:55.227764   69580 logs.go:276] 0 containers: []
	W0501 03:42:55.227771   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:55.227777   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:55.227820   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:55.270796   69580 cri.go:89] found id: ""
	I0501 03:42:55.270823   69580 logs.go:276] 0 containers: []
	W0501 03:42:55.270834   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:55.270840   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:55.270898   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:55.312146   69580 cri.go:89] found id: ""
	I0501 03:42:55.312171   69580 logs.go:276] 0 containers: []
	W0501 03:42:55.312180   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:55.312190   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:55.312236   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:55.354410   69580 cri.go:89] found id: ""
	I0501 03:42:55.354436   69580 logs.go:276] 0 containers: []
	W0501 03:42:55.354445   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:55.354450   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:55.354509   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:55.393550   69580 cri.go:89] found id: ""
	I0501 03:42:55.393580   69580 logs.go:276] 0 containers: []
	W0501 03:42:55.393589   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:55.393594   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:55.393651   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:55.431468   69580 cri.go:89] found id: ""
	I0501 03:42:55.431497   69580 logs.go:276] 0 containers: []
	W0501 03:42:55.431507   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:55.431514   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:55.431566   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:55.470491   69580 cri.go:89] found id: ""
	I0501 03:42:55.470513   69580 logs.go:276] 0 containers: []
	W0501 03:42:55.470520   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:55.470526   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:55.470571   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:55.509849   69580 cri.go:89] found id: ""
	I0501 03:42:55.509875   69580 logs.go:276] 0 containers: []
	W0501 03:42:55.509885   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:55.509894   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:55.509909   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:55.566680   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:55.566762   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:55.584392   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:55.584423   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:55.663090   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:55.663116   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:55.663131   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:55.741459   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:55.741494   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:42:54.846549   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:56.848989   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:56.156918   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:58.157016   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:57.012980   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:59.513719   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:42:58.294435   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:42:58.310204   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:42:58.310267   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:58.350292   69580 cri.go:89] found id: ""
	I0501 03:42:58.350322   69580 logs.go:276] 0 containers: []
	W0501 03:42:58.350334   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:42:58.350343   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:42:58.350431   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:42:58.395998   69580 cri.go:89] found id: ""
	I0501 03:42:58.396029   69580 logs.go:276] 0 containers: []
	W0501 03:42:58.396041   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:42:58.396049   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:42:58.396131   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:42:58.434371   69580 cri.go:89] found id: ""
	I0501 03:42:58.434414   69580 logs.go:276] 0 containers: []
	W0501 03:42:58.434427   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:42:58.434434   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:42:58.434493   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:42:58.473457   69580 cri.go:89] found id: ""
	I0501 03:42:58.473489   69580 logs.go:276] 0 containers: []
	W0501 03:42:58.473499   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:42:58.473507   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:42:58.473572   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:42:58.515172   69580 cri.go:89] found id: ""
	I0501 03:42:58.515201   69580 logs.go:276] 0 containers: []
	W0501 03:42:58.515212   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:42:58.515221   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:42:58.515291   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:42:58.560305   69580 cri.go:89] found id: ""
	I0501 03:42:58.560333   69580 logs.go:276] 0 containers: []
	W0501 03:42:58.560341   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:42:58.560348   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:42:58.560407   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:42:58.617980   69580 cri.go:89] found id: ""
	I0501 03:42:58.618005   69580 logs.go:276] 0 containers: []
	W0501 03:42:58.618013   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:42:58.618019   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:42:58.618080   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:42:58.659800   69580 cri.go:89] found id: ""
	I0501 03:42:58.659827   69580 logs.go:276] 0 containers: []
	W0501 03:42:58.659838   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:42:58.659848   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:42:58.659862   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:42:58.718134   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:42:58.718169   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:42:58.733972   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:42:58.734001   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:42:58.813055   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:42:58.813082   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:42:58.813099   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:42:58.897293   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:42:58.897331   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:01.442980   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:01.459602   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:01.459687   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:42:58.849599   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:01.346264   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:00.157322   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:02.657002   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:02.012753   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:04.510896   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:01.502817   69580 cri.go:89] found id: ""
	I0501 03:43:01.502848   69580 logs.go:276] 0 containers: []
	W0501 03:43:01.502857   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:01.502863   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:01.502924   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:01.547251   69580 cri.go:89] found id: ""
	I0501 03:43:01.547289   69580 logs.go:276] 0 containers: []
	W0501 03:43:01.547301   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:01.547308   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:01.547376   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:01.590179   69580 cri.go:89] found id: ""
	I0501 03:43:01.590211   69580 logs.go:276] 0 containers: []
	W0501 03:43:01.590221   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:01.590228   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:01.590296   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:01.628772   69580 cri.go:89] found id: ""
	I0501 03:43:01.628814   69580 logs.go:276] 0 containers: []
	W0501 03:43:01.628826   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:01.628834   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:01.628893   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:01.677414   69580 cri.go:89] found id: ""
	I0501 03:43:01.677440   69580 logs.go:276] 0 containers: []
	W0501 03:43:01.677448   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:01.677453   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:01.677500   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:01.723107   69580 cri.go:89] found id: ""
	I0501 03:43:01.723139   69580 logs.go:276] 0 containers: []
	W0501 03:43:01.723152   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:01.723160   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:01.723225   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:01.771846   69580 cri.go:89] found id: ""
	I0501 03:43:01.771873   69580 logs.go:276] 0 containers: []
	W0501 03:43:01.771883   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:01.771890   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:01.771952   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:01.818145   69580 cri.go:89] found id: ""
	I0501 03:43:01.818179   69580 logs.go:276] 0 containers: []
	W0501 03:43:01.818191   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:01.818202   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:01.818218   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:01.881502   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:01.881546   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:01.897580   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:01.897614   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:01.981959   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:01.981980   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:01.981996   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:02.066228   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:02.066269   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:04.609855   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:04.626885   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:04.626962   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:04.668248   69580 cri.go:89] found id: ""
	I0501 03:43:04.668277   69580 logs.go:276] 0 containers: []
	W0501 03:43:04.668290   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:04.668298   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:04.668364   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:04.711032   69580 cri.go:89] found id: ""
	I0501 03:43:04.711057   69580 logs.go:276] 0 containers: []
	W0501 03:43:04.711068   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:04.711076   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:04.711136   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:04.754197   69580 cri.go:89] found id: ""
	I0501 03:43:04.754232   69580 logs.go:276] 0 containers: []
	W0501 03:43:04.754241   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:04.754248   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:04.754317   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:04.801062   69580 cri.go:89] found id: ""
	I0501 03:43:04.801089   69580 logs.go:276] 0 containers: []
	W0501 03:43:04.801097   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:04.801103   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:04.801163   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:04.849425   69580 cri.go:89] found id: ""
	I0501 03:43:04.849454   69580 logs.go:276] 0 containers: []
	W0501 03:43:04.849465   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:04.849473   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:04.849536   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:04.892555   69580 cri.go:89] found id: ""
	I0501 03:43:04.892589   69580 logs.go:276] 0 containers: []
	W0501 03:43:04.892597   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:04.892603   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:04.892661   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:04.934101   69580 cri.go:89] found id: ""
	I0501 03:43:04.934129   69580 logs.go:276] 0 containers: []
	W0501 03:43:04.934137   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:04.934142   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:04.934191   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:04.985720   69580 cri.go:89] found id: ""
	I0501 03:43:04.985747   69580 logs.go:276] 0 containers: []
	W0501 03:43:04.985760   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:04.985773   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:04.985789   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:05.060634   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:05.060692   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:05.082007   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:05.082036   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:05.164613   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:05.164636   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:05.164652   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:05.244064   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:05.244103   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:03.845495   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:06.346757   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:05.157929   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:07.657094   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:06.511168   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:08.511512   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:10.511984   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:07.793867   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:07.811161   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:07.811236   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:07.850738   69580 cri.go:89] found id: ""
	I0501 03:43:07.850765   69580 logs.go:276] 0 containers: []
	W0501 03:43:07.850775   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:07.850782   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:07.850841   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:07.892434   69580 cri.go:89] found id: ""
	I0501 03:43:07.892466   69580 logs.go:276] 0 containers: []
	W0501 03:43:07.892476   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:07.892483   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:07.892543   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:07.934093   69580 cri.go:89] found id: ""
	I0501 03:43:07.934122   69580 logs.go:276] 0 containers: []
	W0501 03:43:07.934133   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:07.934141   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:07.934200   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:07.976165   69580 cri.go:89] found id: ""
	I0501 03:43:07.976196   69580 logs.go:276] 0 containers: []
	W0501 03:43:07.976205   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:07.976216   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:07.976278   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:08.016925   69580 cri.go:89] found id: ""
	I0501 03:43:08.016956   69580 logs.go:276] 0 containers: []
	W0501 03:43:08.016968   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:08.016975   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:08.017038   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:08.063385   69580 cri.go:89] found id: ""
	I0501 03:43:08.063438   69580 logs.go:276] 0 containers: []
	W0501 03:43:08.063454   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:08.063465   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:08.063551   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:08.103586   69580 cri.go:89] found id: ""
	I0501 03:43:08.103610   69580 logs.go:276] 0 containers: []
	W0501 03:43:08.103618   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:08.103628   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:08.103672   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:08.142564   69580 cri.go:89] found id: ""
	I0501 03:43:08.142594   69580 logs.go:276] 0 containers: []
	W0501 03:43:08.142605   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:08.142617   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:08.142635   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:08.231532   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:08.231556   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:08.231571   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:08.311009   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:08.311053   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:08.357841   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:08.357877   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:08.409577   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:08.409610   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:10.924898   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:10.941525   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:10.941591   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:11.009214   69580 cri.go:89] found id: ""
	I0501 03:43:11.009238   69580 logs.go:276] 0 containers: []
	W0501 03:43:11.009247   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:11.009255   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:11.009316   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:11.072233   69580 cri.go:89] found id: ""
	I0501 03:43:11.072259   69580 logs.go:276] 0 containers: []
	W0501 03:43:11.072267   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:11.072273   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:11.072327   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:11.111662   69580 cri.go:89] found id: ""
	I0501 03:43:11.111691   69580 logs.go:276] 0 containers: []
	W0501 03:43:11.111701   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:11.111708   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:11.111765   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:11.151540   69580 cri.go:89] found id: ""
	I0501 03:43:11.151570   69580 logs.go:276] 0 containers: []
	W0501 03:43:11.151580   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:11.151594   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:11.151656   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:11.194030   69580 cri.go:89] found id: ""
	I0501 03:43:11.194064   69580 logs.go:276] 0 containers: []
	W0501 03:43:11.194076   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:11.194083   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:11.194146   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:11.233010   69580 cri.go:89] found id: ""
	I0501 03:43:11.233045   69580 logs.go:276] 0 containers: []
	W0501 03:43:11.233056   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:11.233063   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:11.233117   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:11.270979   69580 cri.go:89] found id: ""
	I0501 03:43:11.271009   69580 logs.go:276] 0 containers: []
	W0501 03:43:11.271019   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:11.271026   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:11.271088   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:11.312338   69580 cri.go:89] found id: ""
	I0501 03:43:11.312369   69580 logs.go:276] 0 containers: []
	W0501 03:43:11.312381   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:11.312393   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:11.312408   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:11.364273   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:11.364307   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:11.418603   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:11.418634   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:11.433409   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:11.433438   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0501 03:43:08.349537   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:10.845566   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:12.846699   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:10.157910   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:12.657859   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:12.512669   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:15.013314   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	W0501 03:43:11.511243   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:11.511265   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:11.511280   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:14.089834   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:14.104337   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:14.104419   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:14.148799   69580 cri.go:89] found id: ""
	I0501 03:43:14.148826   69580 logs.go:276] 0 containers: []
	W0501 03:43:14.148833   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:14.148839   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:14.148904   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:14.191330   69580 cri.go:89] found id: ""
	I0501 03:43:14.191366   69580 logs.go:276] 0 containers: []
	W0501 03:43:14.191378   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:14.191386   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:14.191448   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:14.245978   69580 cri.go:89] found id: ""
	I0501 03:43:14.246010   69580 logs.go:276] 0 containers: []
	W0501 03:43:14.246018   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:14.246024   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:14.246093   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:14.287188   69580 cri.go:89] found id: ""
	I0501 03:43:14.287215   69580 logs.go:276] 0 containers: []
	W0501 03:43:14.287223   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:14.287228   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:14.287276   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:14.328060   69580 cri.go:89] found id: ""
	I0501 03:43:14.328093   69580 logs.go:276] 0 containers: []
	W0501 03:43:14.328104   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:14.328113   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:14.328179   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:14.370734   69580 cri.go:89] found id: ""
	I0501 03:43:14.370765   69580 logs.go:276] 0 containers: []
	W0501 03:43:14.370776   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:14.370783   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:14.370837   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:14.414690   69580 cri.go:89] found id: ""
	I0501 03:43:14.414713   69580 logs.go:276] 0 containers: []
	W0501 03:43:14.414721   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:14.414726   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:14.414790   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:14.459030   69580 cri.go:89] found id: ""
	I0501 03:43:14.459060   69580 logs.go:276] 0 containers: []
	W0501 03:43:14.459072   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:14.459083   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:14.459098   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:14.519728   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:14.519761   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:14.535841   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:14.535871   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:14.615203   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:14.615231   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:14.615249   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:14.707677   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:14.707725   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:15.345927   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:17.846732   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:14.657956   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:17.156935   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:17.512424   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:20.012471   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:17.254918   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:17.270643   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:17.270698   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:17.310692   69580 cri.go:89] found id: ""
	I0501 03:43:17.310724   69580 logs.go:276] 0 containers: []
	W0501 03:43:17.310732   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:17.310739   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:17.310806   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:17.349932   69580 cri.go:89] found id: ""
	I0501 03:43:17.349959   69580 logs.go:276] 0 containers: []
	W0501 03:43:17.349969   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:17.349976   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:17.350040   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:17.393073   69580 cri.go:89] found id: ""
	I0501 03:43:17.393099   69580 logs.go:276] 0 containers: []
	W0501 03:43:17.393109   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:17.393116   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:17.393176   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:17.429736   69580 cri.go:89] found id: ""
	I0501 03:43:17.429763   69580 logs.go:276] 0 containers: []
	W0501 03:43:17.429773   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:17.429787   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:17.429858   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:17.473052   69580 cri.go:89] found id: ""
	I0501 03:43:17.473085   69580 logs.go:276] 0 containers: []
	W0501 03:43:17.473097   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:17.473105   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:17.473168   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:17.514035   69580 cri.go:89] found id: ""
	I0501 03:43:17.514062   69580 logs.go:276] 0 containers: []
	W0501 03:43:17.514071   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:17.514078   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:17.514126   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:17.553197   69580 cri.go:89] found id: ""
	I0501 03:43:17.553225   69580 logs.go:276] 0 containers: []
	W0501 03:43:17.553234   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:17.553240   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:17.553300   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:17.592170   69580 cri.go:89] found id: ""
	I0501 03:43:17.592192   69580 logs.go:276] 0 containers: []
	W0501 03:43:17.592199   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:17.592208   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:17.592220   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:17.647549   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:17.647584   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:17.663084   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:17.663114   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:17.748357   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:17.748385   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:17.748401   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:17.832453   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:17.832491   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:20.375927   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:20.391840   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:20.391918   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:20.434158   69580 cri.go:89] found id: ""
	I0501 03:43:20.434185   69580 logs.go:276] 0 containers: []
	W0501 03:43:20.434193   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:20.434198   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:20.434254   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:20.477209   69580 cri.go:89] found id: ""
	I0501 03:43:20.477237   69580 logs.go:276] 0 containers: []
	W0501 03:43:20.477253   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:20.477259   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:20.477309   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:20.517227   69580 cri.go:89] found id: ""
	I0501 03:43:20.517260   69580 logs.go:276] 0 containers: []
	W0501 03:43:20.517270   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:20.517282   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:20.517340   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:20.555771   69580 cri.go:89] found id: ""
	I0501 03:43:20.555802   69580 logs.go:276] 0 containers: []
	W0501 03:43:20.555812   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:20.555820   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:20.555866   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:20.598177   69580 cri.go:89] found id: ""
	I0501 03:43:20.598200   69580 logs.go:276] 0 containers: []
	W0501 03:43:20.598213   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:20.598218   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:20.598326   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:20.637336   69580 cri.go:89] found id: ""
	I0501 03:43:20.637364   69580 logs.go:276] 0 containers: []
	W0501 03:43:20.637373   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:20.637378   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:20.637435   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:20.687736   69580 cri.go:89] found id: ""
	I0501 03:43:20.687761   69580 logs.go:276] 0 containers: []
	W0501 03:43:20.687768   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:20.687782   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:20.687840   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:20.726102   69580 cri.go:89] found id: ""
	I0501 03:43:20.726135   69580 logs.go:276] 0 containers: []
	W0501 03:43:20.726143   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:20.726154   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:20.726169   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:20.780874   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:20.780905   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:20.795798   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:20.795836   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:20.882337   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:20.882367   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:20.882381   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:20.962138   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:20.962188   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:20.345887   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:22.346061   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:19.157165   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:21.657358   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:22.015676   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:24.511682   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:23.512174   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:23.528344   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:23.528417   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:23.567182   69580 cri.go:89] found id: ""
	I0501 03:43:23.567212   69580 logs.go:276] 0 containers: []
	W0501 03:43:23.567222   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:23.567230   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:23.567291   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:23.607522   69580 cri.go:89] found id: ""
	I0501 03:43:23.607556   69580 logs.go:276] 0 containers: []
	W0501 03:43:23.607567   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:23.607574   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:23.607637   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:23.650932   69580 cri.go:89] found id: ""
	I0501 03:43:23.650959   69580 logs.go:276] 0 containers: []
	W0501 03:43:23.650970   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:23.650976   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:23.651035   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:23.695392   69580 cri.go:89] found id: ""
	I0501 03:43:23.695419   69580 logs.go:276] 0 containers: []
	W0501 03:43:23.695428   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:23.695436   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:23.695514   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:23.736577   69580 cri.go:89] found id: ""
	I0501 03:43:23.736607   69580 logs.go:276] 0 containers: []
	W0501 03:43:23.736619   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:23.736627   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:23.736685   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:23.776047   69580 cri.go:89] found id: ""
	I0501 03:43:23.776070   69580 logs.go:276] 0 containers: []
	W0501 03:43:23.776077   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:23.776082   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:23.776134   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:23.813896   69580 cri.go:89] found id: ""
	I0501 03:43:23.813934   69580 logs.go:276] 0 containers: []
	W0501 03:43:23.813943   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:23.813949   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:23.813997   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:23.858898   69580 cri.go:89] found id: ""
	I0501 03:43:23.858925   69580 logs.go:276] 0 containers: []
	W0501 03:43:23.858936   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:23.858947   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:23.858964   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:23.901796   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:23.901850   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:23.957009   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:23.957040   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:23.972811   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:23.972839   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:24.055535   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:24.055557   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:24.055576   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:24.845310   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:26.847397   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:24.157453   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:26.661073   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:27.012181   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:29.511387   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:26.640114   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:26.657217   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:26.657285   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:26.701191   69580 cri.go:89] found id: ""
	I0501 03:43:26.701218   69580 logs.go:276] 0 containers: []
	W0501 03:43:26.701227   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:26.701232   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:26.701287   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:26.740710   69580 cri.go:89] found id: ""
	I0501 03:43:26.740737   69580 logs.go:276] 0 containers: []
	W0501 03:43:26.740745   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:26.740750   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:26.740808   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:26.778682   69580 cri.go:89] found id: ""
	I0501 03:43:26.778710   69580 logs.go:276] 0 containers: []
	W0501 03:43:26.778724   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:26.778730   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:26.778789   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:26.822143   69580 cri.go:89] found id: ""
	I0501 03:43:26.822190   69580 logs.go:276] 0 containers: []
	W0501 03:43:26.822201   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:26.822209   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:26.822270   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:26.865938   69580 cri.go:89] found id: ""
	I0501 03:43:26.865976   69580 logs.go:276] 0 containers: []
	W0501 03:43:26.865988   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:26.865996   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:26.866058   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:26.914939   69580 cri.go:89] found id: ""
	I0501 03:43:26.914969   69580 logs.go:276] 0 containers: []
	W0501 03:43:26.914979   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:26.914986   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:26.915043   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:26.961822   69580 cri.go:89] found id: ""
	I0501 03:43:26.961850   69580 logs.go:276] 0 containers: []
	W0501 03:43:26.961860   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:26.961867   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:26.961920   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:27.005985   69580 cri.go:89] found id: ""
	I0501 03:43:27.006012   69580 logs.go:276] 0 containers: []
	W0501 03:43:27.006021   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:27.006032   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:27.006046   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:27.058265   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:27.058303   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:27.076270   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:27.076308   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:27.152627   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:27.152706   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:27.152728   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:27.229638   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:27.229678   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:29.775960   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:29.792849   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:29.792925   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:29.832508   69580 cri.go:89] found id: ""
	I0501 03:43:29.832537   69580 logs.go:276] 0 containers: []
	W0501 03:43:29.832551   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:29.832559   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:29.832617   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:29.873160   69580 cri.go:89] found id: ""
	I0501 03:43:29.873188   69580 logs.go:276] 0 containers: []
	W0501 03:43:29.873199   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:29.873207   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:29.873271   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:29.919431   69580 cri.go:89] found id: ""
	I0501 03:43:29.919459   69580 logs.go:276] 0 containers: []
	W0501 03:43:29.919468   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:29.919474   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:29.919533   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:29.967944   69580 cri.go:89] found id: ""
	I0501 03:43:29.967976   69580 logs.go:276] 0 containers: []
	W0501 03:43:29.967987   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:29.967995   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:29.968060   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:30.011626   69580 cri.go:89] found id: ""
	I0501 03:43:30.011657   69580 logs.go:276] 0 containers: []
	W0501 03:43:30.011669   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:30.011678   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:30.011743   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:30.051998   69580 cri.go:89] found id: ""
	I0501 03:43:30.052020   69580 logs.go:276] 0 containers: []
	W0501 03:43:30.052028   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:30.052034   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:30.052095   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:30.094140   69580 cri.go:89] found id: ""
	I0501 03:43:30.094164   69580 logs.go:276] 0 containers: []
	W0501 03:43:30.094172   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:30.094179   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:30.094253   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:30.132363   69580 cri.go:89] found id: ""
	I0501 03:43:30.132391   69580 logs.go:276] 0 containers: []
	W0501 03:43:30.132399   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:30.132411   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:30.132422   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:30.221368   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:30.221410   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:30.271279   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:30.271317   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:30.325549   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:30.325586   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:30.345337   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:30.345376   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:30.427552   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:29.347108   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:31.846435   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:29.156483   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:31.156871   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:33.157355   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:32.015498   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:34.511190   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:32.928667   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:32.945489   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:32.945557   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:32.989604   69580 cri.go:89] found id: ""
	I0501 03:43:32.989628   69580 logs.go:276] 0 containers: []
	W0501 03:43:32.989636   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:32.989642   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:32.989701   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:33.030862   69580 cri.go:89] found id: ""
	I0501 03:43:33.030892   69580 logs.go:276] 0 containers: []
	W0501 03:43:33.030903   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:33.030912   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:33.030977   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:33.079795   69580 cri.go:89] found id: ""
	I0501 03:43:33.079827   69580 logs.go:276] 0 containers: []
	W0501 03:43:33.079835   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:33.079841   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:33.079898   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:33.120612   69580 cri.go:89] found id: ""
	I0501 03:43:33.120636   69580 logs.go:276] 0 containers: []
	W0501 03:43:33.120644   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:33.120649   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:33.120694   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:33.161824   69580 cri.go:89] found id: ""
	I0501 03:43:33.161851   69580 logs.go:276] 0 containers: []
	W0501 03:43:33.161861   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:33.161868   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:33.161924   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:33.200068   69580 cri.go:89] found id: ""
	I0501 03:43:33.200098   69580 logs.go:276] 0 containers: []
	W0501 03:43:33.200107   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:33.200113   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:33.200175   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:33.239314   69580 cri.go:89] found id: ""
	I0501 03:43:33.239341   69580 logs.go:276] 0 containers: []
	W0501 03:43:33.239351   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:33.239359   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:33.239427   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:33.281381   69580 cri.go:89] found id: ""
	I0501 03:43:33.281408   69580 logs.go:276] 0 containers: []
	W0501 03:43:33.281419   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:33.281431   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:33.281447   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:33.297992   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:33.298047   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:33.383273   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:33.383292   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:33.383303   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:33.465256   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:33.465289   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:33.509593   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:33.509621   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:36.065074   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:36.081361   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:36.081429   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:36.130394   69580 cri.go:89] found id: ""
	I0501 03:43:36.130436   69580 logs.go:276] 0 containers: []
	W0501 03:43:36.130448   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:36.130456   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:36.130524   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:36.171013   69580 cri.go:89] found id: ""
	I0501 03:43:36.171038   69580 logs.go:276] 0 containers: []
	W0501 03:43:36.171046   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:36.171052   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:36.171099   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:36.215372   69580 cri.go:89] found id: ""
	I0501 03:43:36.215411   69580 logs.go:276] 0 containers: []
	W0501 03:43:36.215424   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:36.215431   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:36.215493   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:36.257177   69580 cri.go:89] found id: ""
	I0501 03:43:36.257204   69580 logs.go:276] 0 containers: []
	W0501 03:43:36.257216   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:36.257223   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:36.257293   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:36.299035   69580 cri.go:89] found id: ""
	I0501 03:43:36.299066   69580 logs.go:276] 0 containers: []
	W0501 03:43:36.299085   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:36.299094   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:36.299166   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:36.339060   69580 cri.go:89] found id: ""
	I0501 03:43:36.339087   69580 logs.go:276] 0 containers: []
	W0501 03:43:36.339097   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:36.339105   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:36.339163   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:36.379982   69580 cri.go:89] found id: ""
	I0501 03:43:36.380016   69580 logs.go:276] 0 containers: []
	W0501 03:43:36.380028   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:36.380037   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:36.380100   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:36.419702   69580 cri.go:89] found id: ""
	I0501 03:43:36.419734   69580 logs.go:276] 0 containers: []
	W0501 03:43:36.419746   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:36.419758   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:36.419780   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:33.846499   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:35.846579   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:37.852802   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:35.159724   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:37.657040   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:36.516601   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:39.012001   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:36.472553   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:36.472774   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:36.488402   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:36.488439   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:36.566390   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:36.566433   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:36.566446   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:36.643493   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:36.643527   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:39.199060   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:39.216612   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:39.216695   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:39.262557   69580 cri.go:89] found id: ""
	I0501 03:43:39.262581   69580 logs.go:276] 0 containers: []
	W0501 03:43:39.262589   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:39.262595   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:39.262642   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:39.331051   69580 cri.go:89] found id: ""
	I0501 03:43:39.331076   69580 logs.go:276] 0 containers: []
	W0501 03:43:39.331093   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:39.331098   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:39.331162   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:39.382033   69580 cri.go:89] found id: ""
	I0501 03:43:39.382058   69580 logs.go:276] 0 containers: []
	W0501 03:43:39.382066   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:39.382071   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:39.382122   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:39.424019   69580 cri.go:89] found id: ""
	I0501 03:43:39.424049   69580 logs.go:276] 0 containers: []
	W0501 03:43:39.424058   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:39.424064   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:39.424120   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:39.465787   69580 cri.go:89] found id: ""
	I0501 03:43:39.465833   69580 logs.go:276] 0 containers: []
	W0501 03:43:39.465846   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:39.465855   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:39.465916   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:39.507746   69580 cri.go:89] found id: ""
	I0501 03:43:39.507781   69580 logs.go:276] 0 containers: []
	W0501 03:43:39.507791   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:39.507798   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:39.507861   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:39.550737   69580 cri.go:89] found id: ""
	I0501 03:43:39.550768   69580 logs.go:276] 0 containers: []
	W0501 03:43:39.550775   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:39.550781   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:39.550831   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:39.592279   69580 cri.go:89] found id: ""
	I0501 03:43:39.592329   69580 logs.go:276] 0 containers: []
	W0501 03:43:39.592343   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:39.592356   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:39.592373   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:39.648858   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:39.648896   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:39.665316   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:39.665343   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:39.743611   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:39.743632   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:39.743646   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:39.829285   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:39.829322   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:40.347121   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:42.845466   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:39.657888   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:41.657976   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:41.512061   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:44.017693   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:42.374457   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:42.389944   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:42.390002   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:42.431270   69580 cri.go:89] found id: ""
	I0501 03:43:42.431294   69580 logs.go:276] 0 containers: []
	W0501 03:43:42.431302   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:42.431308   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:42.431366   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:42.470515   69580 cri.go:89] found id: ""
	I0501 03:43:42.470546   69580 logs.go:276] 0 containers: []
	W0501 03:43:42.470558   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:42.470566   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:42.470619   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:42.518472   69580 cri.go:89] found id: ""
	I0501 03:43:42.518494   69580 logs.go:276] 0 containers: []
	W0501 03:43:42.518501   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:42.518506   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:42.518555   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:42.562192   69580 cri.go:89] found id: ""
	I0501 03:43:42.562220   69580 logs.go:276] 0 containers: []
	W0501 03:43:42.562231   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:42.562239   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:42.562300   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:42.599372   69580 cri.go:89] found id: ""
	I0501 03:43:42.599403   69580 logs.go:276] 0 containers: []
	W0501 03:43:42.599414   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:42.599422   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:42.599483   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:42.636738   69580 cri.go:89] found id: ""
	I0501 03:43:42.636766   69580 logs.go:276] 0 containers: []
	W0501 03:43:42.636777   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:42.636786   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:42.636845   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:42.682087   69580 cri.go:89] found id: ""
	I0501 03:43:42.682115   69580 logs.go:276] 0 containers: []
	W0501 03:43:42.682125   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:42.682133   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:42.682198   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:42.724280   69580 cri.go:89] found id: ""
	I0501 03:43:42.724316   69580 logs.go:276] 0 containers: []
	W0501 03:43:42.724328   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:42.724340   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:42.724354   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:42.771667   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:42.771702   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:42.827390   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:42.827428   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:42.843452   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:42.843480   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:42.925544   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:42.925563   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:42.925577   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:45.515104   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:45.529545   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:45.529619   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:45.573451   69580 cri.go:89] found id: ""
	I0501 03:43:45.573475   69580 logs.go:276] 0 containers: []
	W0501 03:43:45.573483   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:45.573489   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:45.573536   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:45.613873   69580 cri.go:89] found id: ""
	I0501 03:43:45.613897   69580 logs.go:276] 0 containers: []
	W0501 03:43:45.613905   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:45.613910   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:45.613954   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:45.660195   69580 cri.go:89] found id: ""
	I0501 03:43:45.660215   69580 logs.go:276] 0 containers: []
	W0501 03:43:45.660221   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:45.660226   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:45.660284   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:45.703539   69580 cri.go:89] found id: ""
	I0501 03:43:45.703566   69580 logs.go:276] 0 containers: []
	W0501 03:43:45.703574   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:45.703580   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:45.703637   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:45.754635   69580 cri.go:89] found id: ""
	I0501 03:43:45.754659   69580 logs.go:276] 0 containers: []
	W0501 03:43:45.754668   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:45.754675   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:45.754738   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:45.800836   69580 cri.go:89] found id: ""
	I0501 03:43:45.800866   69580 logs.go:276] 0 containers: []
	W0501 03:43:45.800884   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:45.800892   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:45.800955   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:45.859057   69580 cri.go:89] found id: ""
	I0501 03:43:45.859084   69580 logs.go:276] 0 containers: []
	W0501 03:43:45.859092   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:45.859098   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:45.859145   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:45.913173   69580 cri.go:89] found id: ""
	I0501 03:43:45.913204   69580 logs.go:276] 0 containers: []
	W0501 03:43:45.913216   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:45.913227   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:45.913243   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:45.930050   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:45.930087   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:46.006047   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:46.006081   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:46.006097   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:46.086630   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:46.086666   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:46.134635   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:46.134660   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:45.347071   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:47.845983   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:44.157143   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:46.157880   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:48.656747   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:46.510981   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:48.512854   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:48.690330   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:48.705024   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:48.705093   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:48.750244   69580 cri.go:89] found id: ""
	I0501 03:43:48.750278   69580 logs.go:276] 0 containers: []
	W0501 03:43:48.750299   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:48.750307   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:48.750377   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:48.791231   69580 cri.go:89] found id: ""
	I0501 03:43:48.791264   69580 logs.go:276] 0 containers: []
	W0501 03:43:48.791276   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:48.791283   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:48.791348   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:48.834692   69580 cri.go:89] found id: ""
	I0501 03:43:48.834720   69580 logs.go:276] 0 containers: []
	W0501 03:43:48.834731   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:48.834739   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:48.834809   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:48.877383   69580 cri.go:89] found id: ""
	I0501 03:43:48.877415   69580 logs.go:276] 0 containers: []
	W0501 03:43:48.877424   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:48.877430   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:48.877479   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:48.919728   69580 cri.go:89] found id: ""
	I0501 03:43:48.919756   69580 logs.go:276] 0 containers: []
	W0501 03:43:48.919767   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:48.919775   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:48.919836   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:48.962090   69580 cri.go:89] found id: ""
	I0501 03:43:48.962122   69580 logs.go:276] 0 containers: []
	W0501 03:43:48.962137   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:48.962144   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:48.962205   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:48.998456   69580 cri.go:89] found id: ""
	I0501 03:43:48.998487   69580 logs.go:276] 0 containers: []
	W0501 03:43:48.998498   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:48.998506   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:48.998566   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:49.042591   69580 cri.go:89] found id: ""
	I0501 03:43:49.042623   69580 logs.go:276] 0 containers: []
	W0501 03:43:49.042633   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:49.042645   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:49.042661   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:49.088533   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:49.088571   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:49.145252   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:49.145288   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:49.163093   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:49.163120   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:49.240805   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:49.240831   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:49.240844   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:49.848864   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:52.347128   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:50.656790   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:52.658130   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:51.011713   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:53.510598   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:55.512900   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:51.825530   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:51.839596   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:51.839669   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:51.879493   69580 cri.go:89] found id: ""
	I0501 03:43:51.879516   69580 logs.go:276] 0 containers: []
	W0501 03:43:51.879524   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:51.879530   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:51.879585   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:51.921577   69580 cri.go:89] found id: ""
	I0501 03:43:51.921608   69580 logs.go:276] 0 containers: []
	W0501 03:43:51.921620   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:51.921627   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:51.921693   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:51.961000   69580 cri.go:89] found id: ""
	I0501 03:43:51.961028   69580 logs.go:276] 0 containers: []
	W0501 03:43:51.961037   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:51.961043   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:51.961103   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:52.006087   69580 cri.go:89] found id: ""
	I0501 03:43:52.006118   69580 logs.go:276] 0 containers: []
	W0501 03:43:52.006129   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:52.006137   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:52.006201   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:52.047196   69580 cri.go:89] found id: ""
	I0501 03:43:52.047228   69580 logs.go:276] 0 containers: []
	W0501 03:43:52.047239   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:52.047250   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:52.047319   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:52.086380   69580 cri.go:89] found id: ""
	I0501 03:43:52.086423   69580 logs.go:276] 0 containers: []
	W0501 03:43:52.086434   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:52.086442   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:52.086499   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:52.128824   69580 cri.go:89] found id: ""
	I0501 03:43:52.128851   69580 logs.go:276] 0 containers: []
	W0501 03:43:52.128861   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:52.128868   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:52.128933   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:52.168743   69580 cri.go:89] found id: ""
	I0501 03:43:52.168769   69580 logs.go:276] 0 containers: []
	W0501 03:43:52.168776   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:52.168788   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:52.168802   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:52.184391   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:52.184419   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:52.268330   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:52.268368   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:52.268386   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:52.350556   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:52.350586   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:52.395930   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:52.395967   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:54.952879   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:54.968440   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:54.968517   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:55.008027   69580 cri.go:89] found id: ""
	I0501 03:43:55.008056   69580 logs.go:276] 0 containers: []
	W0501 03:43:55.008067   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:55.008074   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:55.008137   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:55.048848   69580 cri.go:89] found id: ""
	I0501 03:43:55.048869   69580 logs.go:276] 0 containers: []
	W0501 03:43:55.048877   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:55.048882   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:55.048931   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:55.085886   69580 cri.go:89] found id: ""
	I0501 03:43:55.085910   69580 logs.go:276] 0 containers: []
	W0501 03:43:55.085919   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:55.085924   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:55.085971   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:55.119542   69580 cri.go:89] found id: ""
	I0501 03:43:55.119567   69580 logs.go:276] 0 containers: []
	W0501 03:43:55.119574   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:55.119580   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:55.119636   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:55.158327   69580 cri.go:89] found id: ""
	I0501 03:43:55.158357   69580 logs.go:276] 0 containers: []
	W0501 03:43:55.158367   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:55.158374   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:55.158449   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:55.200061   69580 cri.go:89] found id: ""
	I0501 03:43:55.200085   69580 logs.go:276] 0 containers: []
	W0501 03:43:55.200093   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:55.200100   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:55.200146   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:55.239446   69580 cri.go:89] found id: ""
	I0501 03:43:55.239476   69580 logs.go:276] 0 containers: []
	W0501 03:43:55.239487   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:55.239493   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:55.239557   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:55.275593   69580 cri.go:89] found id: ""
	I0501 03:43:55.275623   69580 logs.go:276] 0 containers: []
	W0501 03:43:55.275635   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:55.275646   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:55.275662   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:55.356701   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:55.356724   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:55.356740   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:55.437445   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:55.437483   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:55.489024   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:55.489051   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:43:55.548083   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:55.548114   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:54.845529   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:57.348771   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:55.158591   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:57.657361   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:58.010099   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:00.010511   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:43:58.067063   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:43:58.080485   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:43:58.080539   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:43:58.121459   69580 cri.go:89] found id: ""
	I0501 03:43:58.121488   69580 logs.go:276] 0 containers: []
	W0501 03:43:58.121498   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:43:58.121505   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:43:58.121562   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:43:58.161445   69580 cri.go:89] found id: ""
	I0501 03:43:58.161479   69580 logs.go:276] 0 containers: []
	W0501 03:43:58.161489   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:43:58.161499   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:43:58.161560   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:43:58.203216   69580 cri.go:89] found id: ""
	I0501 03:43:58.203238   69580 logs.go:276] 0 containers: []
	W0501 03:43:58.203246   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:43:58.203251   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:43:58.203297   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:43:58.239496   69580 cri.go:89] found id: ""
	I0501 03:43:58.239526   69580 logs.go:276] 0 containers: []
	W0501 03:43:58.239538   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:43:58.239546   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:43:58.239605   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:43:58.280331   69580 cri.go:89] found id: ""
	I0501 03:43:58.280359   69580 logs.go:276] 0 containers: []
	W0501 03:43:58.280370   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:43:58.280378   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:43:58.280438   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:43:58.318604   69580 cri.go:89] found id: ""
	I0501 03:43:58.318634   69580 logs.go:276] 0 containers: []
	W0501 03:43:58.318646   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:43:58.318653   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:43:58.318712   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:58.359360   69580 cri.go:89] found id: ""
	I0501 03:43:58.359383   69580 logs.go:276] 0 containers: []
	W0501 03:43:58.359392   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:43:58.359398   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:43:58.359446   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:43:58.401172   69580 cri.go:89] found id: ""
	I0501 03:43:58.401202   69580 logs.go:276] 0 containers: []
	W0501 03:43:58.401211   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:43:58.401220   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:43:58.401232   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:43:58.416877   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:43:58.416907   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:43:58.489812   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:43:58.489835   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:43:58.489849   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:43:58.574971   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:43:58.575004   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:43:58.619526   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:43:58.619557   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:01.173759   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:01.187838   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:01.187922   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:01.227322   69580 cri.go:89] found id: ""
	I0501 03:44:01.227355   69580 logs.go:276] 0 containers: []
	W0501 03:44:01.227366   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:01.227372   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:01.227432   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:01.268418   69580 cri.go:89] found id: ""
	I0501 03:44:01.268453   69580 logs.go:276] 0 containers: []
	W0501 03:44:01.268465   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:01.268472   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:01.268530   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:01.314641   69580 cri.go:89] found id: ""
	I0501 03:44:01.314667   69580 logs.go:276] 0 containers: []
	W0501 03:44:01.314675   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:01.314681   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:01.314739   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:01.361237   69580 cri.go:89] found id: ""
	I0501 03:44:01.361272   69580 logs.go:276] 0 containers: []
	W0501 03:44:01.361288   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:01.361294   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:01.361348   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:01.400650   69580 cri.go:89] found id: ""
	I0501 03:44:01.400676   69580 logs.go:276] 0 containers: []
	W0501 03:44:01.400684   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:01.400690   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:01.400739   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:01.447998   69580 cri.go:89] found id: ""
	I0501 03:44:01.448023   69580 logs.go:276] 0 containers: []
	W0501 03:44:01.448032   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:01.448040   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:01.448101   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:43:59.845726   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:02.345826   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:00.155851   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:02.155998   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:02.010828   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:04.014801   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:01.492172   69580 cri.go:89] found id: ""
	I0501 03:44:01.492199   69580 logs.go:276] 0 containers: []
	W0501 03:44:01.492207   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:01.492213   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:01.492265   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:01.538589   69580 cri.go:89] found id: ""
	I0501 03:44:01.538617   69580 logs.go:276] 0 containers: []
	W0501 03:44:01.538628   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:01.538638   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:01.538653   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:01.592914   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:01.592952   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:01.611706   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:01.611754   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:01.693469   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:01.693488   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:01.693501   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:01.774433   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:01.774470   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:04.321593   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:04.335428   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:04.335497   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:04.378479   69580 cri.go:89] found id: ""
	I0501 03:44:04.378505   69580 logs.go:276] 0 containers: []
	W0501 03:44:04.378516   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:04.378525   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:04.378585   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:04.420025   69580 cri.go:89] found id: ""
	I0501 03:44:04.420050   69580 logs.go:276] 0 containers: []
	W0501 03:44:04.420059   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:04.420065   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:04.420113   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:04.464009   69580 cri.go:89] found id: ""
	I0501 03:44:04.464039   69580 logs.go:276] 0 containers: []
	W0501 03:44:04.464047   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:04.464052   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:04.464113   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:04.502039   69580 cri.go:89] found id: ""
	I0501 03:44:04.502069   69580 logs.go:276] 0 containers: []
	W0501 03:44:04.502081   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:04.502088   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:04.502150   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:04.544566   69580 cri.go:89] found id: ""
	I0501 03:44:04.544593   69580 logs.go:276] 0 containers: []
	W0501 03:44:04.544605   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:04.544614   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:04.544672   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:04.584067   69580 cri.go:89] found id: ""
	I0501 03:44:04.584095   69580 logs.go:276] 0 containers: []
	W0501 03:44:04.584104   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:04.584112   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:04.584174   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:04.625165   69580 cri.go:89] found id: ""
	I0501 03:44:04.625197   69580 logs.go:276] 0 containers: []
	W0501 03:44:04.625210   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:04.625219   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:04.625292   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:04.667796   69580 cri.go:89] found id: ""
	I0501 03:44:04.667830   69580 logs.go:276] 0 containers: []
	W0501 03:44:04.667839   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:04.667850   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:04.667868   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:04.722269   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:04.722303   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:04.738232   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:04.738265   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:04.821551   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:04.821578   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:04.821595   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:04.902575   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:04.902618   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:04.346197   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:06.845552   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:04.157333   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:06.157366   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:08.656837   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:06.513484   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:09.012004   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:07.449793   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:07.466348   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:07.466450   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:07.510325   69580 cri.go:89] found id: ""
	I0501 03:44:07.510352   69580 logs.go:276] 0 containers: []
	W0501 03:44:07.510363   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:07.510371   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:07.510450   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:07.550722   69580 cri.go:89] found id: ""
	I0501 03:44:07.550748   69580 logs.go:276] 0 containers: []
	W0501 03:44:07.550756   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:07.550762   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:07.550810   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:07.589592   69580 cri.go:89] found id: ""
	I0501 03:44:07.589617   69580 logs.go:276] 0 containers: []
	W0501 03:44:07.589625   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:07.589630   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:07.589678   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:07.631628   69580 cri.go:89] found id: ""
	I0501 03:44:07.631655   69580 logs.go:276] 0 containers: []
	W0501 03:44:07.631662   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:07.631668   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:07.631726   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:07.674709   69580 cri.go:89] found id: ""
	I0501 03:44:07.674743   69580 logs.go:276] 0 containers: []
	W0501 03:44:07.674753   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:07.674760   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:07.674811   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:07.714700   69580 cri.go:89] found id: ""
	I0501 03:44:07.714767   69580 logs.go:276] 0 containers: []
	W0501 03:44:07.714788   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:07.714797   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:07.714856   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:07.753440   69580 cri.go:89] found id: ""
	I0501 03:44:07.753467   69580 logs.go:276] 0 containers: []
	W0501 03:44:07.753478   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:07.753485   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:07.753549   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:07.791579   69580 cri.go:89] found id: ""
	I0501 03:44:07.791606   69580 logs.go:276] 0 containers: []
	W0501 03:44:07.791617   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:07.791628   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:07.791644   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:07.845568   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:07.845606   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:07.861861   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:07.861885   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:07.941719   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:07.941743   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:07.941757   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:08.022684   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:08.022720   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:10.575417   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:10.593408   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:10.593468   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:10.641322   69580 cri.go:89] found id: ""
	I0501 03:44:10.641357   69580 logs.go:276] 0 containers: []
	W0501 03:44:10.641370   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:10.641378   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:10.641442   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:10.686330   69580 cri.go:89] found id: ""
	I0501 03:44:10.686358   69580 logs.go:276] 0 containers: []
	W0501 03:44:10.686368   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:10.686377   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:10.686458   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:10.734414   69580 cri.go:89] found id: ""
	I0501 03:44:10.734444   69580 logs.go:276] 0 containers: []
	W0501 03:44:10.734456   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:10.734463   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:10.734527   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:10.776063   69580 cri.go:89] found id: ""
	I0501 03:44:10.776095   69580 logs.go:276] 0 containers: []
	W0501 03:44:10.776106   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:10.776113   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:10.776176   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:10.819035   69580 cri.go:89] found id: ""
	I0501 03:44:10.819065   69580 logs.go:276] 0 containers: []
	W0501 03:44:10.819076   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:10.819084   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:10.819150   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:10.868912   69580 cri.go:89] found id: ""
	I0501 03:44:10.868938   69580 logs.go:276] 0 containers: []
	W0501 03:44:10.868946   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:10.868952   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:10.869000   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:10.910517   69580 cri.go:89] found id: ""
	I0501 03:44:10.910549   69580 logs.go:276] 0 containers: []
	W0501 03:44:10.910572   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:10.910581   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:10.910678   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:10.949267   69580 cri.go:89] found id: ""
	I0501 03:44:10.949297   69580 logs.go:276] 0 containers: []
	W0501 03:44:10.949306   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:10.949314   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:10.949327   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:11.004731   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:11.004779   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:11.022146   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:11.022174   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:11.108992   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:11.109020   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:11.109035   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:11.192571   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:11.192605   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:08.846431   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:11.346295   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:10.657938   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:13.156112   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:11.012040   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:13.512166   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:15.512232   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:13.739336   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:13.758622   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:13.758721   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:13.805395   69580 cri.go:89] found id: ""
	I0501 03:44:13.805423   69580 logs.go:276] 0 containers: []
	W0501 03:44:13.805434   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:13.805442   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:13.805523   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:13.847372   69580 cri.go:89] found id: ""
	I0501 03:44:13.847400   69580 logs.go:276] 0 containers: []
	W0501 03:44:13.847409   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:13.847417   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:13.847474   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:13.891842   69580 cri.go:89] found id: ""
	I0501 03:44:13.891867   69580 logs.go:276] 0 containers: []
	W0501 03:44:13.891874   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:13.891880   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:13.891935   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:13.933382   69580 cri.go:89] found id: ""
	I0501 03:44:13.933411   69580 logs.go:276] 0 containers: []
	W0501 03:44:13.933422   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:13.933430   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:13.933490   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:13.973955   69580 cri.go:89] found id: ""
	I0501 03:44:13.973980   69580 logs.go:276] 0 containers: []
	W0501 03:44:13.973991   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:13.974000   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:13.974053   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:14.015202   69580 cri.go:89] found id: ""
	I0501 03:44:14.015226   69580 logs.go:276] 0 containers: []
	W0501 03:44:14.015234   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:14.015240   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:14.015287   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:14.057441   69580 cri.go:89] found id: ""
	I0501 03:44:14.057471   69580 logs.go:276] 0 containers: []
	W0501 03:44:14.057483   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:14.057491   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:14.057551   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:14.099932   69580 cri.go:89] found id: ""
	I0501 03:44:14.099961   69580 logs.go:276] 0 containers: []
	W0501 03:44:14.099972   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:14.099983   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:14.099996   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:14.160386   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:14.160418   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:14.176880   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:14.176908   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:14.272137   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:14.272155   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:14.272168   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:14.366523   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:14.366571   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:13.349770   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:15.351345   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:17.845182   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:15.156569   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:17.157994   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:17.512836   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:20.012034   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:16.914394   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:16.930976   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:16.931038   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:16.977265   69580 cri.go:89] found id: ""
	I0501 03:44:16.977294   69580 logs.go:276] 0 containers: []
	W0501 03:44:16.977303   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:16.977309   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:16.977363   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:17.015656   69580 cri.go:89] found id: ""
	I0501 03:44:17.015686   69580 logs.go:276] 0 containers: []
	W0501 03:44:17.015694   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:17.015700   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:17.015768   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:17.056079   69580 cri.go:89] found id: ""
	I0501 03:44:17.056111   69580 logs.go:276] 0 containers: []
	W0501 03:44:17.056121   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:17.056129   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:17.056188   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:17.099504   69580 cri.go:89] found id: ""
	I0501 03:44:17.099528   69580 logs.go:276] 0 containers: []
	W0501 03:44:17.099536   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:17.099542   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:17.099606   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:17.141371   69580 cri.go:89] found id: ""
	I0501 03:44:17.141401   69580 logs.go:276] 0 containers: []
	W0501 03:44:17.141410   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:17.141417   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:17.141484   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:17.184143   69580 cri.go:89] found id: ""
	I0501 03:44:17.184167   69580 logs.go:276] 0 containers: []
	W0501 03:44:17.184179   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:17.184193   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:17.184246   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:17.224012   69580 cri.go:89] found id: ""
	I0501 03:44:17.224049   69580 logs.go:276] 0 containers: []
	W0501 03:44:17.224061   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:17.224069   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:17.224136   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:17.268185   69580 cri.go:89] found id: ""
	I0501 03:44:17.268216   69580 logs.go:276] 0 containers: []
	W0501 03:44:17.268224   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:17.268233   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:17.268248   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:17.351342   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:17.351392   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:17.398658   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:17.398689   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:17.452476   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:17.452517   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:17.468734   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:17.468771   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:17.558971   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
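Each failed "describe nodes" attempt above ends with a refused connection to localhost:8443, the apiserver's secure port on this node; nothing is listening there, which is consistent with no kube-apiserver container being found. A minimal sketch of the same reachability probe follows; the address and timeout are assumptions for illustration, not part of minikube.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Probe the apiserver's secure port; a "connection refused" here matches
	// the kubectl errors in the log above. Address and timeout are assumed.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8443")
}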
	I0501 03:44:20.059342   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:20.075707   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:20.075791   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:20.114436   69580 cri.go:89] found id: ""
	I0501 03:44:20.114472   69580 logs.go:276] 0 containers: []
	W0501 03:44:20.114486   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:20.114495   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:20.114562   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:20.155607   69580 cri.go:89] found id: ""
	I0501 03:44:20.155638   69580 logs.go:276] 0 containers: []
	W0501 03:44:20.155649   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:20.155657   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:20.155715   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:20.198188   69580 cri.go:89] found id: ""
	I0501 03:44:20.198218   69580 logs.go:276] 0 containers: []
	W0501 03:44:20.198227   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:20.198234   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:20.198291   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:20.237183   69580 cri.go:89] found id: ""
	I0501 03:44:20.237213   69580 logs.go:276] 0 containers: []
	W0501 03:44:20.237223   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:20.237232   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:20.237286   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:20.279289   69580 cri.go:89] found id: ""
	I0501 03:44:20.279320   69580 logs.go:276] 0 containers: []
	W0501 03:44:20.279332   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:20.279341   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:20.279409   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:20.334066   69580 cri.go:89] found id: ""
	I0501 03:44:20.334091   69580 logs.go:276] 0 containers: []
	W0501 03:44:20.334112   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:20.334121   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:20.334181   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:20.385740   69580 cri.go:89] found id: ""
	I0501 03:44:20.385775   69580 logs.go:276] 0 containers: []
	W0501 03:44:20.385785   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:20.385796   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:20.385860   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:20.425151   69580 cri.go:89] found id: ""
	I0501 03:44:20.425176   69580 logs.go:276] 0 containers: []
	W0501 03:44:20.425183   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:20.425193   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:20.425214   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:20.472563   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:20.472605   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:20.526589   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:20.526626   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:20.541978   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:20.542013   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:20.619513   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:20.619540   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:20.619555   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:19.846208   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:22.345166   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:19.658986   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:22.156821   68864 pod_ready.go:102] pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:23.159267   68864 pod_ready.go:81] duration metric: took 4m0.009511824s for pod "metrics-server-569cc877fc-p8j59" in "kube-system" namespace to be "Ready" ...
	E0501 03:44:23.159296   68864 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0501 03:44:23.159308   68864 pod_ready.go:38] duration metric: took 4m7.423794373s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
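The lines above close out a bounded wait: pod_ready polled "metrics-server-569cc877fc-p8j59" for its full 4m0s budget and stopped when the context deadline expired. A minimal sketch of that poll-until-deadline pattern is below, with a stand-in readiness check and a short deadline instead of the 4-minute one used in the test; it is not minikube's pod_ready.go.

package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// waitReady polls check until it returns true or ctx expires, which is the
// shape of the "waitPodCondition: context deadline exceeded" outcome above.
// check stands in for the real pod status lookup.
func waitReady(ctx context.Context, interval time.Duration, check func() bool) error {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		if check() {
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("waitPodCondition: %w", ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	// Short deadline for the example; the test above waited 4 minutes.
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	err := waitReady(ctx, time.Second, func() bool { return false }) // never becomes Ready
	if errors.Is(err, context.DeadlineExceeded) {
		fmt.Println("gave up:", err)
	}
}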
	I0501 03:44:23.159327   68864 api_server.go:52] waiting for apiserver process to appear ...
	I0501 03:44:23.159362   68864 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:23.159422   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:23.225563   68864 cri.go:89] found id: "a96815c49ac45b34c359d7171b30be5c25cee80fae08d947fc004d479693df00"
	I0501 03:44:23.225590   68864 cri.go:89] found id: ""
	I0501 03:44:23.225607   68864 logs.go:276] 1 containers: [a96815c49ac45b34c359d7171b30be5c25cee80fae08d947fc004d479693df00]
	I0501 03:44:23.225663   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:23.231542   68864 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:23.231598   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:23.290847   68864 cri.go:89] found id: "d109948ffbbddd2e38484d83d2260690afce3c51e16abff0fe8713e844bfdcb3"
	I0501 03:44:23.290871   68864 cri.go:89] found id: ""
	I0501 03:44:23.290878   68864 logs.go:276] 1 containers: [d109948ffbbddd2e38484d83d2260690afce3c51e16abff0fe8713e844bfdcb3]
	I0501 03:44:23.290926   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:23.295697   68864 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:23.295755   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:23.348625   68864 cri.go:89] found id: "e3c74de489af3d78a4b0cf620ed16e1fba7c9ce1c7021a15c5b4bd241b7a934a"
	I0501 03:44:23.348652   68864 cri.go:89] found id: ""
	I0501 03:44:23.348661   68864 logs.go:276] 1 containers: [e3c74de489af3d78a4b0cf620ed16e1fba7c9ce1c7021a15c5b4bd241b7a934a]
	I0501 03:44:23.348717   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:23.355801   68864 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:23.355896   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:23.409428   68864 cri.go:89] found id: "1813f35574f4f0398e7d482fe77c5d7f8490726666a277035bda0a5cff80af6e"
	I0501 03:44:23.409461   68864 cri.go:89] found id: ""
	I0501 03:44:23.409471   68864 logs.go:276] 1 containers: [1813f35574f4f0398e7d482fe77c5d7f8490726666a277035bda0a5cff80af6e]
	I0501 03:44:23.409530   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:23.416480   68864 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:23.416560   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:23.466642   68864 cri.go:89] found id: "94afdb03c382269209154b2e73edbf200662e89960c248ba775a0ef2b8fbe6b1"
	I0501 03:44:23.466672   68864 cri.go:89] found id: ""
	I0501 03:44:23.466681   68864 logs.go:276] 1 containers: [94afdb03c382269209154b2e73edbf200662e89960c248ba775a0ef2b8fbe6b1]
	I0501 03:44:23.466739   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:23.472831   68864 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:23.472906   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:23.524815   68864 cri.go:89] found id: "7e7158f7ff3922e97ba5ee97108acc1d0ba0350dff2f9aa56c2f71519108791c"
	I0501 03:44:23.524841   68864 cri.go:89] found id: ""
	I0501 03:44:23.524850   68864 logs.go:276] 1 containers: [7e7158f7ff3922e97ba5ee97108acc1d0ba0350dff2f9aa56c2f71519108791c]
	I0501 03:44:23.524902   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:23.532092   68864 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:23.532161   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:23.577262   68864 cri.go:89] found id: ""
	I0501 03:44:23.577292   68864 logs.go:276] 0 containers: []
	W0501 03:44:23.577305   68864 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:23.577312   68864 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0501 03:44:23.577374   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0501 03:44:23.623597   68864 cri.go:89] found id: "f9a8d2f0f9453969b410fb7f880f6cf4c7e6e2f3c17a687583906efe4181dd17"
	I0501 03:44:23.623626   68864 cri.go:89] found id: "aaae36261c5ce1accf3bfb5993be5a256a037a640d5f6f30de8384900dda06b2"
	I0501 03:44:23.623632   68864 cri.go:89] found id: ""
	I0501 03:44:23.623640   68864 logs.go:276] 2 containers: [f9a8d2f0f9453969b410fb7f880f6cf4c7e6e2f3c17a687583906efe4181dd17 aaae36261c5ce1accf3bfb5993be5a256a037a640d5f6f30de8384900dda06b2]
	I0501 03:44:23.623702   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:23.630189   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:23.635673   68864 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:23.635694   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:22.012084   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:24.511736   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:23.203031   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:23.219964   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:23.220043   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:23.264287   69580 cri.go:89] found id: ""
	I0501 03:44:23.264315   69580 logs.go:276] 0 containers: []
	W0501 03:44:23.264323   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:23.264328   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:23.264395   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:23.310337   69580 cri.go:89] found id: ""
	I0501 03:44:23.310366   69580 logs.go:276] 0 containers: []
	W0501 03:44:23.310375   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:23.310383   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:23.310461   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:23.364550   69580 cri.go:89] found id: ""
	I0501 03:44:23.364577   69580 logs.go:276] 0 containers: []
	W0501 03:44:23.364588   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:23.364596   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:23.364676   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:23.412620   69580 cri.go:89] found id: ""
	I0501 03:44:23.412647   69580 logs.go:276] 0 containers: []
	W0501 03:44:23.412657   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:23.412665   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:23.412726   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:23.461447   69580 cri.go:89] found id: ""
	I0501 03:44:23.461477   69580 logs.go:276] 0 containers: []
	W0501 03:44:23.461488   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:23.461496   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:23.461558   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:23.514868   69580 cri.go:89] found id: ""
	I0501 03:44:23.514896   69580 logs.go:276] 0 containers: []
	W0501 03:44:23.514915   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:23.514924   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:23.514984   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:23.559171   69580 cri.go:89] found id: ""
	I0501 03:44:23.559200   69580 logs.go:276] 0 containers: []
	W0501 03:44:23.559210   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:23.559218   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:23.559284   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:23.601713   69580 cri.go:89] found id: ""
	I0501 03:44:23.601740   69580 logs.go:276] 0 containers: []
	W0501 03:44:23.601749   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:23.601760   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:23.601772   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:23.656147   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:23.656187   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:23.673507   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:23.673545   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:23.771824   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:23.771846   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:23.771861   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:23.861128   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:23.861161   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:26.406507   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:26.421836   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:26.421894   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:26.462758   69580 cri.go:89] found id: ""
	I0501 03:44:26.462785   69580 logs.go:276] 0 containers: []
	W0501 03:44:26.462796   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:26.462804   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:26.462860   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:24.346534   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:26.847370   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:24.220047   68864 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:24.220087   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:24.279596   68864 logs.go:123] Gathering logs for kube-apiserver [a96815c49ac45b34c359d7171b30be5c25cee80fae08d947fc004d479693df00] ...
	I0501 03:44:24.279633   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a96815c49ac45b34c359d7171b30be5c25cee80fae08d947fc004d479693df00"
	I0501 03:44:24.336092   68864 logs.go:123] Gathering logs for etcd [d109948ffbbddd2e38484d83d2260690afce3c51e16abff0fe8713e844bfdcb3] ...
	I0501 03:44:24.336128   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d109948ffbbddd2e38484d83d2260690afce3c51e16abff0fe8713e844bfdcb3"
	I0501 03:44:24.396117   68864 logs.go:123] Gathering logs for storage-provisioner [f9a8d2f0f9453969b410fb7f880f6cf4c7e6e2f3c17a687583906efe4181dd17] ...
	I0501 03:44:24.396145   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9a8d2f0f9453969b410fb7f880f6cf4c7e6e2f3c17a687583906efe4181dd17"
	I0501 03:44:24.443608   68864 logs.go:123] Gathering logs for storage-provisioner [aaae36261c5ce1accf3bfb5993be5a256a037a640d5f6f30de8384900dda06b2] ...
	I0501 03:44:24.443644   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aaae36261c5ce1accf3bfb5993be5a256a037a640d5f6f30de8384900dda06b2"
	I0501 03:44:24.499533   68864 logs.go:123] Gathering logs for kube-controller-manager [7e7158f7ff3922e97ba5ee97108acc1d0ba0350dff2f9aa56c2f71519108791c] ...
	I0501 03:44:24.499560   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e7158f7ff3922e97ba5ee97108acc1d0ba0350dff2f9aa56c2f71519108791c"
	I0501 03:44:24.562990   68864 logs.go:123] Gathering logs for container status ...
	I0501 03:44:24.563028   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:24.622630   68864 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:24.622671   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:24.641106   68864 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:24.641145   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0501 03:44:24.781170   68864 logs.go:123] Gathering logs for coredns [e3c74de489af3d78a4b0cf620ed16e1fba7c9ce1c7021a15c5b4bd241b7a934a] ...
	I0501 03:44:24.781203   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3c74de489af3d78a4b0cf620ed16e1fba7c9ce1c7021a15c5b4bd241b7a934a"
	I0501 03:44:24.824616   68864 logs.go:123] Gathering logs for kube-scheduler [1813f35574f4f0398e7d482fe77c5d7f8490726666a277035bda0a5cff80af6e] ...
	I0501 03:44:24.824643   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1813f35574f4f0398e7d482fe77c5d7f8490726666a277035bda0a5cff80af6e"
	I0501 03:44:24.871956   68864 logs.go:123] Gathering logs for kube-proxy [94afdb03c382269209154b2e73edbf200662e89960c248ba775a0ef2b8fbe6b1] ...
	I0501 03:44:24.871992   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94afdb03c382269209154b2e73edbf200662e89960c248ba775a0ef2b8fbe6b1"
	I0501 03:44:27.424582   68864 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:27.447490   68864 api_server.go:72] duration metric: took 4m19.445111196s to wait for apiserver process to appear ...
	I0501 03:44:27.447522   68864 api_server.go:88] waiting for apiserver healthz status ...
	I0501 03:44:27.447555   68864 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:27.447601   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:27.494412   68864 cri.go:89] found id: "a96815c49ac45b34c359d7171b30be5c25cee80fae08d947fc004d479693df00"
	I0501 03:44:27.494437   68864 cri.go:89] found id: ""
	I0501 03:44:27.494445   68864 logs.go:276] 1 containers: [a96815c49ac45b34c359d7171b30be5c25cee80fae08d947fc004d479693df00]
	I0501 03:44:27.494490   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:27.503782   68864 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:27.503853   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:27.550991   68864 cri.go:89] found id: "d109948ffbbddd2e38484d83d2260690afce3c51e16abff0fe8713e844bfdcb3"
	I0501 03:44:27.551018   68864 cri.go:89] found id: ""
	I0501 03:44:27.551026   68864 logs.go:276] 1 containers: [d109948ffbbddd2e38484d83d2260690afce3c51e16abff0fe8713e844bfdcb3]
	I0501 03:44:27.551073   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:27.556919   68864 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:27.556983   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:27.606005   68864 cri.go:89] found id: "e3c74de489af3d78a4b0cf620ed16e1fba7c9ce1c7021a15c5b4bd241b7a934a"
	I0501 03:44:27.606033   68864 cri.go:89] found id: ""
	I0501 03:44:27.606042   68864 logs.go:276] 1 containers: [e3c74de489af3d78a4b0cf620ed16e1fba7c9ce1c7021a15c5b4bd241b7a934a]
	I0501 03:44:27.606100   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:27.611639   68864 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:27.611706   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:27.661151   68864 cri.go:89] found id: "1813f35574f4f0398e7d482fe77c5d7f8490726666a277035bda0a5cff80af6e"
	I0501 03:44:27.661172   68864 cri.go:89] found id: ""
	I0501 03:44:27.661179   68864 logs.go:276] 1 containers: [1813f35574f4f0398e7d482fe77c5d7f8490726666a277035bda0a5cff80af6e]
	I0501 03:44:27.661278   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:27.666443   68864 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:27.666514   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:27.712387   68864 cri.go:89] found id: "94afdb03c382269209154b2e73edbf200662e89960c248ba775a0ef2b8fbe6b1"
	I0501 03:44:27.712416   68864 cri.go:89] found id: ""
	I0501 03:44:27.712424   68864 logs.go:276] 1 containers: [94afdb03c382269209154b2e73edbf200662e89960c248ba775a0ef2b8fbe6b1]
	I0501 03:44:27.712480   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:27.717280   68864 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:27.717342   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:27.767124   68864 cri.go:89] found id: "7e7158f7ff3922e97ba5ee97108acc1d0ba0350dff2f9aa56c2f71519108791c"
	I0501 03:44:27.767154   68864 cri.go:89] found id: ""
	I0501 03:44:27.767163   68864 logs.go:276] 1 containers: [7e7158f7ff3922e97ba5ee97108acc1d0ba0350dff2f9aa56c2f71519108791c]
	I0501 03:44:27.767215   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:27.773112   68864 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:27.773183   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:27.829966   68864 cri.go:89] found id: ""
	I0501 03:44:27.829991   68864 logs.go:276] 0 containers: []
	W0501 03:44:27.829999   68864 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:27.830005   68864 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0501 03:44:27.830056   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0501 03:44:27.873391   68864 cri.go:89] found id: "f9a8d2f0f9453969b410fb7f880f6cf4c7e6e2f3c17a687583906efe4181dd17"
	I0501 03:44:27.873415   68864 cri.go:89] found id: "aaae36261c5ce1accf3bfb5993be5a256a037a640d5f6f30de8384900dda06b2"
	I0501 03:44:27.873419   68864 cri.go:89] found id: ""
	I0501 03:44:27.873426   68864 logs.go:276] 2 containers: [f9a8d2f0f9453969b410fb7f880f6cf4c7e6e2f3c17a687583906efe4181dd17 aaae36261c5ce1accf3bfb5993be5a256a037a640d5f6f30de8384900dda06b2]
	I0501 03:44:27.873473   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:27.878537   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:27.883518   68864 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:27.883543   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0501 03:44:28.012337   68864 logs.go:123] Gathering logs for kube-apiserver [a96815c49ac45b34c359d7171b30be5c25cee80fae08d947fc004d479693df00] ...
	I0501 03:44:28.012377   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a96815c49ac45b34c359d7171b30be5c25cee80fae08d947fc004d479693df00"
	I0501 03:44:28.063686   68864 logs.go:123] Gathering logs for etcd [d109948ffbbddd2e38484d83d2260690afce3c51e16abff0fe8713e844bfdcb3] ...
	I0501 03:44:28.063715   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d109948ffbbddd2e38484d83d2260690afce3c51e16abff0fe8713e844bfdcb3"
	I0501 03:44:28.116507   68864 logs.go:123] Gathering logs for storage-provisioner [aaae36261c5ce1accf3bfb5993be5a256a037a640d5f6f30de8384900dda06b2] ...
	I0501 03:44:28.116535   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aaae36261c5ce1accf3bfb5993be5a256a037a640d5f6f30de8384900dda06b2"
	I0501 03:44:28.165593   68864 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:28.165636   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:28.595278   68864 logs.go:123] Gathering logs for container status ...
	I0501 03:44:28.595333   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:28.645790   68864 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:28.645836   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:28.662952   68864 logs.go:123] Gathering logs for coredns [e3c74de489af3d78a4b0cf620ed16e1fba7c9ce1c7021a15c5b4bd241b7a934a] ...
	I0501 03:44:28.662984   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3c74de489af3d78a4b0cf620ed16e1fba7c9ce1c7021a15c5b4bd241b7a934a"
	I0501 03:44:28.710273   68864 logs.go:123] Gathering logs for kube-scheduler [1813f35574f4f0398e7d482fe77c5d7f8490726666a277035bda0a5cff80af6e] ...
	I0501 03:44:28.710302   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1813f35574f4f0398e7d482fe77c5d7f8490726666a277035bda0a5cff80af6e"
	I0501 03:44:28.761838   68864 logs.go:123] Gathering logs for kube-proxy [94afdb03c382269209154b2e73edbf200662e89960c248ba775a0ef2b8fbe6b1] ...
	I0501 03:44:28.761872   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94afdb03c382269209154b2e73edbf200662e89960c248ba775a0ef2b8fbe6b1"
	I0501 03:44:28.810775   68864 logs.go:123] Gathering logs for kube-controller-manager [7e7158f7ff3922e97ba5ee97108acc1d0ba0350dff2f9aa56c2f71519108791c] ...
	I0501 03:44:28.810808   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e7158f7ff3922e97ba5ee97108acc1d0ba0350dff2f9aa56c2f71519108791c"
	I0501 03:44:27.012119   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:29.510651   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:26.505067   69580 cri.go:89] found id: ""
	I0501 03:44:26.505098   69580 logs.go:276] 0 containers: []
	W0501 03:44:26.505110   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:26.505121   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:26.505182   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:26.544672   69580 cri.go:89] found id: ""
	I0501 03:44:26.544699   69580 logs.go:276] 0 containers: []
	W0501 03:44:26.544711   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:26.544717   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:26.544764   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:26.590579   69580 cri.go:89] found id: ""
	I0501 03:44:26.590605   69580 logs.go:276] 0 containers: []
	W0501 03:44:26.590614   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:26.590620   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:26.590670   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:26.637887   69580 cri.go:89] found id: ""
	I0501 03:44:26.637920   69580 logs.go:276] 0 containers: []
	W0501 03:44:26.637930   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:26.637939   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:26.637998   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:26.686778   69580 cri.go:89] found id: ""
	I0501 03:44:26.686807   69580 logs.go:276] 0 containers: []
	W0501 03:44:26.686815   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:26.686821   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:26.686882   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:26.729020   69580 cri.go:89] found id: ""
	I0501 03:44:26.729045   69580 logs.go:276] 0 containers: []
	W0501 03:44:26.729054   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:26.729060   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:26.729124   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:26.769022   69580 cri.go:89] found id: ""
	I0501 03:44:26.769043   69580 logs.go:276] 0 containers: []
	W0501 03:44:26.769051   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:26.769059   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:26.769073   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:26.854985   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:26.855011   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:26.855024   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:26.937031   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:26.937063   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:27.006267   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:27.006301   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:27.080503   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:27.080545   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:29.598176   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:29.614465   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:29.614523   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:29.662384   69580 cri.go:89] found id: ""
	I0501 03:44:29.662421   69580 logs.go:276] 0 containers: []
	W0501 03:44:29.662433   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:29.662439   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:29.662483   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:29.705262   69580 cri.go:89] found id: ""
	I0501 03:44:29.705286   69580 logs.go:276] 0 containers: []
	W0501 03:44:29.705295   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:29.705300   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:29.705345   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:29.752308   69580 cri.go:89] found id: ""
	I0501 03:44:29.752335   69580 logs.go:276] 0 containers: []
	W0501 03:44:29.752343   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:29.752349   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:29.752403   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:29.802702   69580 cri.go:89] found id: ""
	I0501 03:44:29.802729   69580 logs.go:276] 0 containers: []
	W0501 03:44:29.802741   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:29.802749   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:29.802814   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:29.854112   69580 cri.go:89] found id: ""
	I0501 03:44:29.854138   69580 logs.go:276] 0 containers: []
	W0501 03:44:29.854149   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:29.854157   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:29.854217   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:29.898447   69580 cri.go:89] found id: ""
	I0501 03:44:29.898470   69580 logs.go:276] 0 containers: []
	W0501 03:44:29.898480   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:29.898486   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:29.898545   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:29.938832   69580 cri.go:89] found id: ""
	I0501 03:44:29.938862   69580 logs.go:276] 0 containers: []
	W0501 03:44:29.938873   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:29.938881   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:29.938948   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:29.987697   69580 cri.go:89] found id: ""
	I0501 03:44:29.987721   69580 logs.go:276] 0 containers: []
	W0501 03:44:29.987730   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:29.987738   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:29.987753   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:30.042446   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:30.042473   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:30.095358   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:30.095389   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:30.110745   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:30.110782   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:30.190923   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:30.190951   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:30.190965   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:29.346013   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:31.347513   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:28.868838   68864 logs.go:123] Gathering logs for storage-provisioner [f9a8d2f0f9453969b410fb7f880f6cf4c7e6e2f3c17a687583906efe4181dd17] ...
	I0501 03:44:28.868876   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9a8d2f0f9453969b410fb7f880f6cf4c7e6e2f3c17a687583906efe4181dd17"
	I0501 03:44:28.912436   68864 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:28.912474   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:31.469456   68864 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I0501 03:44:31.478498   68864 api_server.go:279] https://192.168.50.218:8443/healthz returned 200:
	ok
	I0501 03:44:31.479838   68864 api_server.go:141] control plane version: v1.30.0
	I0501 03:44:31.479861   68864 api_server.go:131] duration metric: took 4.032331979s to wait for apiserver health ...
	I0501 03:44:31.479869   68864 system_pods.go:43] waiting for kube-system pods to appear ...
	I0501 03:44:31.479889   68864 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:31.479930   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:31.531068   68864 cri.go:89] found id: "a96815c49ac45b34c359d7171b30be5c25cee80fae08d947fc004d479693df00"
	I0501 03:44:31.531088   68864 cri.go:89] found id: ""
	I0501 03:44:31.531095   68864 logs.go:276] 1 containers: [a96815c49ac45b34c359d7171b30be5c25cee80fae08d947fc004d479693df00]
	I0501 03:44:31.531137   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:31.536216   68864 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:31.536292   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:31.584155   68864 cri.go:89] found id: "d109948ffbbddd2e38484d83d2260690afce3c51e16abff0fe8713e844bfdcb3"
	I0501 03:44:31.584183   68864 cri.go:89] found id: ""
	I0501 03:44:31.584194   68864 logs.go:276] 1 containers: [d109948ffbbddd2e38484d83d2260690afce3c51e16abff0fe8713e844bfdcb3]
	I0501 03:44:31.584250   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:31.589466   68864 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:31.589528   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:31.639449   68864 cri.go:89] found id: "e3c74de489af3d78a4b0cf620ed16e1fba7c9ce1c7021a15c5b4bd241b7a934a"
	I0501 03:44:31.639476   68864 cri.go:89] found id: ""
	I0501 03:44:31.639484   68864 logs.go:276] 1 containers: [e3c74de489af3d78a4b0cf620ed16e1fba7c9ce1c7021a15c5b4bd241b7a934a]
	I0501 03:44:31.639535   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:31.644684   68864 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:31.644750   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:31.702095   68864 cri.go:89] found id: "1813f35574f4f0398e7d482fe77c5d7f8490726666a277035bda0a5cff80af6e"
	I0501 03:44:31.702119   68864 cri.go:89] found id: ""
	I0501 03:44:31.702125   68864 logs.go:276] 1 containers: [1813f35574f4f0398e7d482fe77c5d7f8490726666a277035bda0a5cff80af6e]
	I0501 03:44:31.702173   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:31.707443   68864 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:31.707508   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:31.758582   68864 cri.go:89] found id: "94afdb03c382269209154b2e73edbf200662e89960c248ba775a0ef2b8fbe6b1"
	I0501 03:44:31.758603   68864 cri.go:89] found id: ""
	I0501 03:44:31.758610   68864 logs.go:276] 1 containers: [94afdb03c382269209154b2e73edbf200662e89960c248ba775a0ef2b8fbe6b1]
	I0501 03:44:31.758656   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:31.764261   68864 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:31.764325   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:31.813385   68864 cri.go:89] found id: "7e7158f7ff3922e97ba5ee97108acc1d0ba0350dff2f9aa56c2f71519108791c"
	I0501 03:44:31.813407   68864 cri.go:89] found id: ""
	I0501 03:44:31.813414   68864 logs.go:276] 1 containers: [7e7158f7ff3922e97ba5ee97108acc1d0ba0350dff2f9aa56c2f71519108791c]
	I0501 03:44:31.813457   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:31.818289   68864 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:31.818348   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:31.862788   68864 cri.go:89] found id: ""
	I0501 03:44:31.862814   68864 logs.go:276] 0 containers: []
	W0501 03:44:31.862824   68864 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:31.862832   68864 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0501 03:44:31.862890   68864 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0501 03:44:31.912261   68864 cri.go:89] found id: "f9a8d2f0f9453969b410fb7f880f6cf4c7e6e2f3c17a687583906efe4181dd17"
	I0501 03:44:31.912284   68864 cri.go:89] found id: "aaae36261c5ce1accf3bfb5993be5a256a037a640d5f6f30de8384900dda06b2"
	I0501 03:44:31.912298   68864 cri.go:89] found id: ""
	I0501 03:44:31.912312   68864 logs.go:276] 2 containers: [f9a8d2f0f9453969b410fb7f880f6cf4c7e6e2f3c17a687583906efe4181dd17 aaae36261c5ce1accf3bfb5993be5a256a037a640d5f6f30de8384900dda06b2]
	I0501 03:44:31.912367   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:31.917696   68864 ssh_runner.go:195] Run: which crictl
	I0501 03:44:31.922432   68864 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:31.922450   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:32.332797   68864 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:32.332836   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:32.396177   68864 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:32.396214   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0501 03:44:32.511915   68864 logs.go:123] Gathering logs for etcd [d109948ffbbddd2e38484d83d2260690afce3c51e16abff0fe8713e844bfdcb3] ...
	I0501 03:44:32.511953   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d109948ffbbddd2e38484d83d2260690afce3c51e16abff0fe8713e844bfdcb3"
	I0501 03:44:32.564447   68864 logs.go:123] Gathering logs for kube-proxy [94afdb03c382269209154b2e73edbf200662e89960c248ba775a0ef2b8fbe6b1] ...
	I0501 03:44:32.564475   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94afdb03c382269209154b2e73edbf200662e89960c248ba775a0ef2b8fbe6b1"
	I0501 03:44:32.610196   68864 logs.go:123] Gathering logs for kube-controller-manager [7e7158f7ff3922e97ba5ee97108acc1d0ba0350dff2f9aa56c2f71519108791c] ...
	I0501 03:44:32.610235   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e7158f7ff3922e97ba5ee97108acc1d0ba0350dff2f9aa56c2f71519108791c"
	I0501 03:44:32.665262   68864 logs.go:123] Gathering logs for storage-provisioner [aaae36261c5ce1accf3bfb5993be5a256a037a640d5f6f30de8384900dda06b2] ...
	I0501 03:44:32.665314   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aaae36261c5ce1accf3bfb5993be5a256a037a640d5f6f30de8384900dda06b2"
	I0501 03:44:32.707346   68864 logs.go:123] Gathering logs for container status ...
	I0501 03:44:32.707377   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:32.757693   68864 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:32.757726   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:32.775720   68864 logs.go:123] Gathering logs for kube-apiserver [a96815c49ac45b34c359d7171b30be5c25cee80fae08d947fc004d479693df00] ...
	I0501 03:44:32.775759   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a96815c49ac45b34c359d7171b30be5c25cee80fae08d947fc004d479693df00"
	I0501 03:44:32.831002   68864 logs.go:123] Gathering logs for coredns [e3c74de489af3d78a4b0cf620ed16e1fba7c9ce1c7021a15c5b4bd241b7a934a] ...
	I0501 03:44:32.831039   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3c74de489af3d78a4b0cf620ed16e1fba7c9ce1c7021a15c5b4bd241b7a934a"
	I0501 03:44:32.878365   68864 logs.go:123] Gathering logs for kube-scheduler [1813f35574f4f0398e7d482fe77c5d7f8490726666a277035bda0a5cff80af6e] ...
	I0501 03:44:32.878416   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1813f35574f4f0398e7d482fe77c5d7f8490726666a277035bda0a5cff80af6e"
	I0501 03:44:32.935752   68864 logs.go:123] Gathering logs for storage-provisioner [f9a8d2f0f9453969b410fb7f880f6cf4c7e6e2f3c17a687583906efe4181dd17] ...
	I0501 03:44:32.935791   68864 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9a8d2f0f9453969b410fb7f880f6cf4c7e6e2f3c17a687583906efe4181dd17"
	I0501 03:44:35.492575   68864 system_pods.go:59] 8 kube-system pods found
	I0501 03:44:35.492603   68864 system_pods.go:61] "coredns-7db6d8ff4d-sjplt" [6701ee8e-0630-4332-b01c-26741ed3a7b7] Running
	I0501 03:44:35.492607   68864 system_pods.go:61] "etcd-embed-certs-277128" [744d7481-dd80-4435-90da-685fc16a76a4] Running
	I0501 03:44:35.492612   68864 system_pods.go:61] "kube-apiserver-embed-certs-277128" [2f1d0f09-b270-4cdc-af3c-e17e3a244bbf] Running
	I0501 03:44:35.492616   68864 system_pods.go:61] "kube-controller-manager-embed-certs-277128" [729aa138-dc0d-4616-97d4-1592453971b8] Running
	I0501 03:44:35.492619   68864 system_pods.go:61] "kube-proxy-phx7x" [56c0381e-c140-4f69-bbe4-09d393db8b23] Running
	I0501 03:44:35.492621   68864 system_pods.go:61] "kube-scheduler-embed-certs-277128" [cc56e9bc-7f21-4f57-a65a-73a10ffd7145] Running
	I0501 03:44:35.492627   68864 system_pods.go:61] "metrics-server-569cc877fc-p8j59" [f8ad6c24-dd5d-4515-9052-c9aca7412b55] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0501 03:44:35.492631   68864 system_pods.go:61] "storage-provisioner" [785be666-58d5-4b9d-92fd-bcacdbdebeb2] Running
	I0501 03:44:35.492638   68864 system_pods.go:74] duration metric: took 4.012764043s to wait for pod list to return data ...
	I0501 03:44:35.492644   68864 default_sa.go:34] waiting for default service account to be created ...
	I0501 03:44:35.494580   68864 default_sa.go:45] found service account: "default"
	I0501 03:44:35.494599   68864 default_sa.go:55] duration metric: took 1.949121ms for default service account to be created ...
	I0501 03:44:35.494606   68864 system_pods.go:116] waiting for k8s-apps to be running ...
	I0501 03:44:35.499484   68864 system_pods.go:86] 8 kube-system pods found
	I0501 03:44:35.499507   68864 system_pods.go:89] "coredns-7db6d8ff4d-sjplt" [6701ee8e-0630-4332-b01c-26741ed3a7b7] Running
	I0501 03:44:35.499514   68864 system_pods.go:89] "etcd-embed-certs-277128" [744d7481-dd80-4435-90da-685fc16a76a4] Running
	I0501 03:44:35.499519   68864 system_pods.go:89] "kube-apiserver-embed-certs-277128" [2f1d0f09-b270-4cdc-af3c-e17e3a244bbf] Running
	I0501 03:44:35.499523   68864 system_pods.go:89] "kube-controller-manager-embed-certs-277128" [729aa138-dc0d-4616-97d4-1592453971b8] Running
	I0501 03:44:35.499526   68864 system_pods.go:89] "kube-proxy-phx7x" [56c0381e-c140-4f69-bbe4-09d393db8b23] Running
	I0501 03:44:35.499531   68864 system_pods.go:89] "kube-scheduler-embed-certs-277128" [cc56e9bc-7f21-4f57-a65a-73a10ffd7145] Running
	I0501 03:44:35.499537   68864 system_pods.go:89] "metrics-server-569cc877fc-p8j59" [f8ad6c24-dd5d-4515-9052-c9aca7412b55] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0501 03:44:35.499544   68864 system_pods.go:89] "storage-provisioner" [785be666-58d5-4b9d-92fd-bcacdbdebeb2] Running
	I0501 03:44:35.499550   68864 system_pods.go:126] duration metric: took 4.939659ms to wait for k8s-apps to be running ...
	I0501 03:44:35.499559   68864 system_svc.go:44] waiting for kubelet service to be running ....
	I0501 03:44:35.499599   68864 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 03:44:35.518471   68864 system_svc.go:56] duration metric: took 18.902776ms WaitForService to wait for kubelet
	I0501 03:44:35.518498   68864 kubeadm.go:576] duration metric: took 4m27.516125606s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0501 03:44:35.518521   68864 node_conditions.go:102] verifying NodePressure condition ...
	I0501 03:44:35.521936   68864 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 03:44:35.521956   68864 node_conditions.go:123] node cpu capacity is 2
	I0501 03:44:35.521966   68864 node_conditions.go:105] duration metric: took 3.439997ms to run NodePressure ...
	I0501 03:44:35.521976   68864 start.go:240] waiting for startup goroutines ...
	I0501 03:44:35.521983   68864 start.go:245] waiting for cluster config update ...
	I0501 03:44:35.521994   68864 start.go:254] writing updated cluster config ...
	I0501 03:44:35.522311   68864 ssh_runner.go:195] Run: rm -f paused
	I0501 03:44:35.572130   68864 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0501 03:44:35.573709   68864 out.go:177] * Done! kubectl is now configured to use "embed-certs-277128" cluster and "default" namespace by default
	I0501 03:44:31.512755   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:34.011892   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:32.772208   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:32.791063   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:32.791145   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:32.856883   69580 cri.go:89] found id: ""
	I0501 03:44:32.856909   69580 logs.go:276] 0 containers: []
	W0501 03:44:32.856920   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:32.856927   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:32.856988   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:32.928590   69580 cri.go:89] found id: ""
	I0501 03:44:32.928625   69580 logs.go:276] 0 containers: []
	W0501 03:44:32.928637   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:32.928644   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:32.928707   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:32.978068   69580 cri.go:89] found id: ""
	I0501 03:44:32.978100   69580 logs.go:276] 0 containers: []
	W0501 03:44:32.978113   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:32.978120   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:32.978184   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:33.018873   69580 cri.go:89] found id: ""
	I0501 03:44:33.018897   69580 logs.go:276] 0 containers: []
	W0501 03:44:33.018905   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:33.018911   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:33.018970   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:33.060633   69580 cri.go:89] found id: ""
	I0501 03:44:33.060661   69580 logs.go:276] 0 containers: []
	W0501 03:44:33.060673   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:33.060681   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:33.060735   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:33.099862   69580 cri.go:89] found id: ""
	I0501 03:44:33.099891   69580 logs.go:276] 0 containers: []
	W0501 03:44:33.099900   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:33.099906   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:33.099953   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:33.139137   69580 cri.go:89] found id: ""
	I0501 03:44:33.139163   69580 logs.go:276] 0 containers: []
	W0501 03:44:33.139171   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:33.139177   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:33.139224   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:33.178800   69580 cri.go:89] found id: ""
	I0501 03:44:33.178826   69580 logs.go:276] 0 containers: []
	W0501 03:44:33.178834   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:33.178842   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:33.178856   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:33.233811   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:33.233842   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:33.248931   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:33.248958   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:33.325530   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:33.325551   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:33.325563   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:33.412071   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:33.412103   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:35.954706   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:35.970256   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:35.970333   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:36.010417   69580 cri.go:89] found id: ""
	I0501 03:44:36.010443   69580 logs.go:276] 0 containers: []
	W0501 03:44:36.010452   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:36.010459   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:36.010524   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:36.051571   69580 cri.go:89] found id: ""
	I0501 03:44:36.051600   69580 logs.go:276] 0 containers: []
	W0501 03:44:36.051611   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:36.051619   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:36.051683   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:36.092148   69580 cri.go:89] found id: ""
	I0501 03:44:36.092176   69580 logs.go:276] 0 containers: []
	W0501 03:44:36.092185   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:36.092190   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:36.092247   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:36.136243   69580 cri.go:89] found id: ""
	I0501 03:44:36.136282   69580 logs.go:276] 0 containers: []
	W0501 03:44:36.136290   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:36.136296   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:36.136342   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:36.178154   69580 cri.go:89] found id: ""
	I0501 03:44:36.178183   69580 logs.go:276] 0 containers: []
	W0501 03:44:36.178193   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:36.178200   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:36.178264   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:36.217050   69580 cri.go:89] found id: ""
	I0501 03:44:36.217077   69580 logs.go:276] 0 containers: []
	W0501 03:44:36.217089   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:36.217096   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:36.217172   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:36.260438   69580 cri.go:89] found id: ""
	I0501 03:44:36.260470   69580 logs.go:276] 0 containers: []
	W0501 03:44:36.260481   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:36.260488   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:36.260546   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:36.303410   69580 cri.go:89] found id: ""
	I0501 03:44:36.303436   69580 logs.go:276] 0 containers: []
	W0501 03:44:36.303448   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:36.303459   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:36.303475   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:36.390427   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:36.390468   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:36.433631   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:36.433663   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:33.845863   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:35.847896   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:36.012448   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:38.510722   69237 pod_ready.go:102] pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:39.005005   69237 pod_ready.go:81] duration metric: took 4m0.000783466s for pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace to be "Ready" ...
	E0501 03:44:39.005036   69237 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-2btjj" in "kube-system" namespace to be "Ready" (will not retry!)
	I0501 03:44:39.005057   69237 pod_ready.go:38] duration metric: took 4m8.020392425s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 03:44:39.005089   69237 kubeadm.go:591] duration metric: took 4m17.941775807s to restartPrimaryControlPlane
	W0501 03:44:39.005175   69237 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0501 03:44:39.005208   69237 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0501 03:44:36.486334   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:36.486365   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:36.502145   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:36.502175   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:36.586733   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:39.087607   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:39.102475   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:39.102552   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:39.141916   69580 cri.go:89] found id: ""
	I0501 03:44:39.141947   69580 logs.go:276] 0 containers: []
	W0501 03:44:39.141958   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:39.141964   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:39.142012   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:39.188472   69580 cri.go:89] found id: ""
	I0501 03:44:39.188501   69580 logs.go:276] 0 containers: []
	W0501 03:44:39.188512   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:39.188520   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:39.188582   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:39.243282   69580 cri.go:89] found id: ""
	I0501 03:44:39.243306   69580 logs.go:276] 0 containers: []
	W0501 03:44:39.243313   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:39.243318   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:39.243377   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:39.288254   69580 cri.go:89] found id: ""
	I0501 03:44:39.288284   69580 logs.go:276] 0 containers: []
	W0501 03:44:39.288296   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:39.288304   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:39.288379   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:39.330846   69580 cri.go:89] found id: ""
	I0501 03:44:39.330879   69580 logs.go:276] 0 containers: []
	W0501 03:44:39.330892   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:39.330901   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:39.330969   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:39.377603   69580 cri.go:89] found id: ""
	I0501 03:44:39.377632   69580 logs.go:276] 0 containers: []
	W0501 03:44:39.377642   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:39.377649   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:39.377710   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:39.421545   69580 cri.go:89] found id: ""
	I0501 03:44:39.421574   69580 logs.go:276] 0 containers: []
	W0501 03:44:39.421585   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:39.421594   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:39.421653   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:39.463394   69580 cri.go:89] found id: ""
	I0501 03:44:39.463424   69580 logs.go:276] 0 containers: []
	W0501 03:44:39.463435   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:39.463447   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:39.463464   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:39.552196   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:39.552218   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:39.552229   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:39.648509   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:39.648549   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:39.702829   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:39.702866   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:39.757712   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:39.757746   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:38.347120   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:40.355310   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:42.847346   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:42.273443   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:42.289788   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:42.289856   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:42.336802   69580 cri.go:89] found id: ""
	I0501 03:44:42.336833   69580 logs.go:276] 0 containers: []
	W0501 03:44:42.336846   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:42.336854   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:42.336919   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:42.387973   69580 cri.go:89] found id: ""
	I0501 03:44:42.388017   69580 logs.go:276] 0 containers: []
	W0501 03:44:42.388028   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:42.388036   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:42.388103   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:42.444866   69580 cri.go:89] found id: ""
	I0501 03:44:42.444895   69580 logs.go:276] 0 containers: []
	W0501 03:44:42.444906   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:42.444914   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:42.444987   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:42.493647   69580 cri.go:89] found id: ""
	I0501 03:44:42.493676   69580 logs.go:276] 0 containers: []
	W0501 03:44:42.493686   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:42.493692   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:42.493748   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:42.535046   69580 cri.go:89] found id: ""
	I0501 03:44:42.535075   69580 logs.go:276] 0 containers: []
	W0501 03:44:42.535086   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:42.535093   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:42.535161   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:42.579453   69580 cri.go:89] found id: ""
	I0501 03:44:42.579486   69580 logs.go:276] 0 containers: []
	W0501 03:44:42.579499   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:42.579507   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:42.579568   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:42.621903   69580 cri.go:89] found id: ""
	I0501 03:44:42.621931   69580 logs.go:276] 0 containers: []
	W0501 03:44:42.621942   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:42.621950   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:42.622009   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:42.666202   69580 cri.go:89] found id: ""
	I0501 03:44:42.666232   69580 logs.go:276] 0 containers: []
	W0501 03:44:42.666243   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:42.666257   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:42.666272   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:42.736032   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:42.736078   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:42.750773   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:42.750799   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:42.836942   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:42.836975   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:42.836997   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:42.930660   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:42.930695   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:45.479619   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:45.495112   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:44:45.495174   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:44:45.536693   69580 cri.go:89] found id: ""
	I0501 03:44:45.536722   69580 logs.go:276] 0 containers: []
	W0501 03:44:45.536730   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:44:45.536737   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:44:45.536785   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:44:45.577838   69580 cri.go:89] found id: ""
	I0501 03:44:45.577866   69580 logs.go:276] 0 containers: []
	W0501 03:44:45.577876   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:44:45.577894   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:44:45.577958   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:44:45.615842   69580 cri.go:89] found id: ""
	I0501 03:44:45.615868   69580 logs.go:276] 0 containers: []
	W0501 03:44:45.615879   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:44:45.615892   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:44:45.615953   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:44:45.654948   69580 cri.go:89] found id: ""
	I0501 03:44:45.654972   69580 logs.go:276] 0 containers: []
	W0501 03:44:45.654980   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:44:45.654986   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:44:45.655042   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:44:45.695104   69580 cri.go:89] found id: ""
	I0501 03:44:45.695129   69580 logs.go:276] 0 containers: []
	W0501 03:44:45.695138   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:44:45.695145   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:44:45.695212   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:44:45.737609   69580 cri.go:89] found id: ""
	I0501 03:44:45.737633   69580 logs.go:276] 0 containers: []
	W0501 03:44:45.737641   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:44:45.737647   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:44:45.737693   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:44:45.778655   69580 cri.go:89] found id: ""
	I0501 03:44:45.778685   69580 logs.go:276] 0 containers: []
	W0501 03:44:45.778696   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:44:45.778702   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:44:45.778781   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:44:45.819430   69580 cri.go:89] found id: ""
	I0501 03:44:45.819452   69580 logs.go:276] 0 containers: []
	W0501 03:44:45.819460   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:44:45.819469   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:44:45.819485   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:44:45.875879   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:44:45.875911   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:44:45.892035   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:44:45.892062   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:44:45.975803   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:44:45.975836   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:44:45.975853   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0501 03:44:46.058183   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:44:46.058222   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:44:45.345465   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:47.346947   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:48.604991   69580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:44:48.621226   69580 kubeadm.go:591] duration metric: took 4m4.888665162s to restartPrimaryControlPlane
	W0501 03:44:48.621351   69580 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0501 03:44:48.621407   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0501 03:44:49.654748   69580 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.033320548s)
	I0501 03:44:49.654838   69580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 03:44:49.671511   69580 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0501 03:44:49.684266   69580 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0501 03:44:49.697079   69580 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0501 03:44:49.697101   69580 kubeadm.go:156] found existing configuration files:
	
	I0501 03:44:49.697159   69580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0501 03:44:49.710609   69580 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0501 03:44:49.710692   69580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0501 03:44:49.723647   69580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0501 03:44:49.736855   69580 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0501 03:44:49.737023   69580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0501 03:44:49.748842   69580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0501 03:44:49.760856   69580 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0501 03:44:49.760923   69580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0501 03:44:49.772685   69580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0501 03:44:49.784035   69580 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0501 03:44:49.784114   69580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0501 03:44:49.795699   69580 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0501 03:44:49.869387   69580 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0501 03:44:49.869481   69580 kubeadm.go:309] [preflight] Running pre-flight checks
	I0501 03:44:50.028858   69580 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0501 03:44:50.028999   69580 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0501 03:44:50.029182   69580 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0501 03:44:50.242773   69580 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0501 03:44:50.244816   69580 out.go:204]   - Generating certificates and keys ...
	I0501 03:44:50.244918   69580 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0501 03:44:50.245008   69580 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0501 03:44:50.245111   69580 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0501 03:44:50.245216   69580 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0501 03:44:50.245331   69580 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0501 03:44:50.245424   69580 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0501 03:44:50.245490   69580 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0501 03:44:50.245556   69580 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0501 03:44:50.245629   69580 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0501 03:44:50.245724   69580 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0501 03:44:50.245784   69580 kubeadm.go:309] [certs] Using the existing "sa" key
	I0501 03:44:50.245877   69580 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0501 03:44:50.501955   69580 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0501 03:44:50.683749   69580 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0501 03:44:50.905745   69580 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0501 03:44:51.005912   69580 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0501 03:44:51.025470   69580 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0501 03:44:51.029411   69580 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0501 03:44:51.029859   69580 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0501 03:44:51.181498   69580 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0501 03:44:51.183222   69580 out.go:204]   - Booting up control plane ...
	I0501 03:44:51.183334   69580 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0501 03:44:51.200394   69580 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0501 03:44:51.201612   69580 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0501 03:44:51.202445   69580 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0501 03:44:51.204681   69580 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0501 03:44:49.847629   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:52.345383   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:54.346479   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:56.348560   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:44:58.846207   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:01.345790   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:03.847746   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:06.346172   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:08.346693   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:10.846797   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:11.778923   69237 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.773690939s)
	I0501 03:45:11.778992   69237 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 03:45:11.796337   69237 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0501 03:45:11.810167   69237 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0501 03:45:11.822425   69237 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0501 03:45:11.822457   69237 kubeadm.go:156] found existing configuration files:
	
	I0501 03:45:11.822514   69237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0501 03:45:11.834539   69237 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0501 03:45:11.834596   69237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0501 03:45:11.848336   69237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0501 03:45:11.860459   69237 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0501 03:45:11.860535   69237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0501 03:45:11.873903   69237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0501 03:45:11.887353   69237 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0501 03:45:11.887427   69237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0501 03:45:11.900805   69237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0501 03:45:11.912512   69237 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0501 03:45:11.912572   69237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
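	[editor's note] The block above is minikube's stale-config check: each kubeconfig under /etc/kubernetes is grepped for the expected API endpoint (https://control-plane.minikube.internal:8444 for this profile) and removed when the check fails, so the following `kubeadm init` regenerates it. A minimal standalone Go sketch of the same pattern, run directly on the node rather than through ssh_runner; the endpoint string and file list are taken from the log, everything else is illustrative:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// Remove any kubeconfig that does not reference the expected API endpoint,
// mirroring the grep/rm sequence in the log above.
func main() {
	const endpoint = "https://control-plane.minikube.internal:8444"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing file or wrong endpoint: treat it as stale and delete it
			// so kubeadm writes a fresh one.
			_ = os.Remove(f)
			fmt.Printf("removed stale %s\n", f)
			continue
		}
		fmt.Printf("kept %s\n", f)
	}
}
```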
	I0501 03:45:11.924870   69237 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0501 03:45:12.149168   69237 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0501 03:45:13.348651   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:15.847148   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:20.882309   69237 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0501 03:45:20.882382   69237 kubeadm.go:309] [preflight] Running pre-flight checks
	I0501 03:45:20.882472   69237 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0501 03:45:20.882602   69237 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0501 03:45:20.882741   69237 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0501 03:45:20.882836   69237 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0501 03:45:20.884733   69237 out.go:204]   - Generating certificates and keys ...
	I0501 03:45:20.884837   69237 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0501 03:45:20.884894   69237 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0501 03:45:20.884996   69237 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0501 03:45:20.885106   69237 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0501 03:45:20.885209   69237 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0501 03:45:20.885316   69237 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0501 03:45:20.885400   69237 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0501 03:45:20.885483   69237 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0501 03:45:20.885590   69237 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0501 03:45:20.885702   69237 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0501 03:45:20.885759   69237 kubeadm.go:309] [certs] Using the existing "sa" key
	I0501 03:45:20.885838   69237 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0501 03:45:20.885915   69237 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0501 03:45:20.885996   69237 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0501 03:45:20.886074   69237 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0501 03:45:20.886164   69237 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0501 03:45:20.886233   69237 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0501 03:45:20.886362   69237 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0501 03:45:20.886492   69237 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0501 03:45:20.888113   69237 out.go:204]   - Booting up control plane ...
	I0501 03:45:20.888194   69237 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0501 03:45:20.888264   69237 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0501 03:45:20.888329   69237 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0501 03:45:20.888458   69237 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0501 03:45:20.888570   69237 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0501 03:45:20.888627   69237 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0501 03:45:20.888777   69237 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0501 03:45:20.888863   69237 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0501 03:45:20.888964   69237 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 502.867448ms
	I0501 03:45:20.889080   69237 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0501 03:45:20.889177   69237 kubeadm.go:309] [api-check] The API server is healthy after 5.503139909s
	I0501 03:45:20.889335   69237 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0501 03:45:20.889506   69237 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0501 03:45:20.889579   69237 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0501 03:45:20.889817   69237 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-715118 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0501 03:45:20.889868   69237 kubeadm.go:309] [bootstrap-token] Using token: 2vhvw6.gdesonhc2twrukzt
	I0501 03:45:20.892253   69237 out.go:204]   - Configuring RBAC rules ...
	I0501 03:45:20.892395   69237 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0501 03:45:20.892475   69237 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0501 03:45:20.892652   69237 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0501 03:45:20.892812   69237 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0501 03:45:20.892931   69237 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0501 03:45:20.893040   69237 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0501 03:45:20.893201   69237 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0501 03:45:20.893264   69237 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0501 03:45:20.893309   69237 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0501 03:45:20.893319   69237 kubeadm.go:309] 
	I0501 03:45:20.893367   69237 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0501 03:45:20.893373   69237 kubeadm.go:309] 
	I0501 03:45:20.893450   69237 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0501 03:45:20.893458   69237 kubeadm.go:309] 
	I0501 03:45:20.893481   69237 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0501 03:45:20.893544   69237 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0501 03:45:20.893591   69237 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0501 03:45:20.893597   69237 kubeadm.go:309] 
	I0501 03:45:20.893643   69237 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0501 03:45:20.893650   69237 kubeadm.go:309] 
	I0501 03:45:20.893685   69237 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0501 03:45:20.893690   69237 kubeadm.go:309] 
	I0501 03:45:20.893741   69237 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0501 03:45:20.893805   69237 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0501 03:45:20.893858   69237 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0501 03:45:20.893863   69237 kubeadm.go:309] 
	I0501 03:45:20.893946   69237 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0501 03:45:20.894035   69237 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0501 03:45:20.894045   69237 kubeadm.go:309] 
	I0501 03:45:20.894139   69237 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token 2vhvw6.gdesonhc2twrukzt \
	I0501 03:45:20.894267   69237 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:bd94cc6acfb95fe36f70b12c6a1f2980601b8235e7c4dea65cebdf35eb514754 \
	I0501 03:45:20.894294   69237 kubeadm.go:309] 	--control-plane 
	I0501 03:45:20.894301   69237 kubeadm.go:309] 
	I0501 03:45:20.894368   69237 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0501 03:45:20.894375   69237 kubeadm.go:309] 
	I0501 03:45:20.894498   69237 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token 2vhvw6.gdesonhc2twrukzt \
	I0501 03:45:20.894605   69237 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:bd94cc6acfb95fe36f70b12c6a1f2980601b8235e7c4dea65cebdf35eb514754 
	I0501 03:45:20.894616   69237 cni.go:84] Creating CNI manager for ""
	I0501 03:45:20.894623   69237 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0501 03:45:20.896151   69237 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0501 03:45:18.346276   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:20.846958   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:20.897443   69237 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0501 03:45:20.911935   69237 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
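	[editor's note] Here minikube drops a 496-byte bridge CNI config at /etc/cni/net.d/1-k8s.conflist; the log does not show its contents. A hypothetical Go sketch that writes a generic bridge + host-local conflist of the same shape (the subnet and plugin options are assumptions, not what minikube actually ships):

```go
package main

import (
	"encoding/json"
	"os"
)

func main() {
	// Generic bridge CNI config; field values are illustrative only.
	conf := map[string]interface{}{
		"cniVersion": "0.3.1",
		"name":       "bridge",
		"plugins": []map[string]interface{}{
			{
				"type":             "bridge",
				"bridge":           "bridge",
				"isDefaultGateway": true,
				"ipMasq":           true,
				"ipam": map[string]interface{}{
					"type":   "host-local",
					"subnet": "10.244.0.0/16", // assumed pod CIDR
				},
			},
			{"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
		},
	}
	data, err := json.MarshalIndent(conf, "", "  ")
	if err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", data, 0o644); err != nil {
		panic(err)
	}
}
```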
	I0501 03:45:20.941109   69237 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0501 03:45:20.941193   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:20.941249   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-715118 minikube.k8s.io/updated_at=2024_05_01T03_45_20_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e minikube.k8s.io/name=default-k8s-diff-port-715118 minikube.k8s.io/primary=true
	I0501 03:45:20.971300   69237 ops.go:34] apiserver oom_adj: -16
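	[editor's note] The oom_adj value of -16 read above means the kernel should avoid OOM-killing the apiserver. A small Go sketch of the same check (pgrep plus a /proc read; /proc/<pid>/oom_adj is the legacy interface the log uses, newer tooling would read oom_score_adj instead):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Find a kube-apiserver PID, as `pgrep kube-apiserver` does in the log.
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "kube-apiserver not running:", err)
		os.Exit(1)
	}
	pid := strings.Fields(string(out))[0]

	// Read its OOM adjustment: a negative value (here -16) tells the kernel
	// to spare the apiserver under memory pressure.
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("apiserver oom_adj: %s\n", strings.TrimSpace(string(adj)))
}
```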
	I0501 03:45:21.143744   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:21.643800   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:22.144096   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:22.643852   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:23.144726   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:23.644174   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:24.143735   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:24.643947   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:25.143871   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:25.644557   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:23.345774   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:25.346189   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:27.348026   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:26.144443   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:26.643761   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:27.144691   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:27.644445   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:28.144006   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:28.643904   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:29.144077   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:29.644690   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:30.144692   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:30.644604   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:31.207553   69580 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0501 03:45:31.208328   69580 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0501 03:45:31.208516   69580 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
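	[editor's note] The kubelet-check lines above are kubeadm repeatedly curling http://localhost:10248/healthz and getting connection refused during the v1.20.0 init attempt. A minimal Go sketch of the same probe loop, with a 4-minute deadline assumed from kubeadm's own "can take up to 4m0s" message:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

// Poll the kubelet healthz endpoint until it answers 200 or the deadline
// expires, roughly what kubeadm's [kubelet-check] phase is doing above.
func main() {
	const url = "http://localhost:10248/healthz"
	deadline := time.Now().Add(4 * time.Minute)
	client := &http.Client{Timeout: 5 * time.Second}

	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("kubelet healthy: %s\n", body)
				return
			}
		}
		fmt.Println("kubelet not healthy yet:", err)
		time.Sleep(5 * time.Second)
	}
	fmt.Println("timed out waiting for a healthy kubelet")
}
```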
	I0501 03:45:29.845785   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:32.348020   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:31.144517   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:31.644673   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:32.143793   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:32.644380   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:33.144729   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:33.644415   69237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:45:33.752056   69237 kubeadm.go:1107] duration metric: took 12.810918189s to wait for elevateKubeSystemPrivileges
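	[editor's note] The repeated `kubectl get sa default` calls above are minikube waiting for the default service account to exist before granting kube-system elevated RBAC (the elevateKubeSystemPrivileges step, ~12.8s here). A sketch of that wait loop using os/exec; the kubectl binary and kubeconfig paths are taken verbatim from the log, the timeout and retry interval are assumptions:

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.30.0/kubectl"
	kubeconfig := "--kubeconfig=/var/lib/minikube/kubeconfig"
	deadline := time.Now().Add(2 * time.Minute) // assumed timeout

	// Retry `kubectl get sa default` until it succeeds, i.e. until the
	// "default" ServiceAccount has been created by the controller manager.
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default", kubeconfig)
		if out, err := cmd.CombinedOutput(); err == nil {
			fmt.Printf("default service account exists:\n%s", out)
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for the default service account")
}
```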
	W0501 03:45:33.752096   69237 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0501 03:45:33.752105   69237 kubeadm.go:393] duration metric: took 5m12.753721662s to StartCluster
	I0501 03:45:33.752124   69237 settings.go:142] acquiring lock: {Name:mkcfe08bcadd45d99c00ed1eaf4e9c226a489fe2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:45:33.752219   69237 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18779-13391/kubeconfig
	I0501 03:45:33.753829   69237 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/kubeconfig: {Name:mk5d1131557f885800877622151c0915337cb23a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:45:33.754094   69237 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.158 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0501 03:45:33.755764   69237 out.go:177] * Verifying Kubernetes components...
	I0501 03:45:33.754178   69237 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0501 03:45:33.754310   69237 config.go:182] Loaded profile config "default-k8s-diff-port-715118": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 03:45:33.757144   69237 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:45:33.757151   69237 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-715118"
	I0501 03:45:33.757172   69237 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-715118"
	I0501 03:45:33.757189   69237 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-715118"
	I0501 03:45:33.757213   69237 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-715118"
	I0501 03:45:33.757221   69237 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-715118"
	W0501 03:45:33.757230   69237 addons.go:243] addon metrics-server should already be in state true
	I0501 03:45:33.757264   69237 host.go:66] Checking if "default-k8s-diff-port-715118" exists ...
	I0501 03:45:33.757180   69237 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-715118"
	W0501 03:45:33.757295   69237 addons.go:243] addon storage-provisioner should already be in state true
	I0501 03:45:33.757355   69237 host.go:66] Checking if "default-k8s-diff-port-715118" exists ...
	I0501 03:45:33.757596   69237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:45:33.757624   69237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:45:33.757630   69237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:45:33.757762   69237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:45:33.757808   69237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:45:33.757662   69237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:45:33.773846   69237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44313
	I0501 03:45:33.774442   69237 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:45:33.775002   69237 main.go:141] libmachine: Using API Version  1
	I0501 03:45:33.775024   69237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:45:33.775438   69237 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:45:33.776086   69237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:45:33.776117   69237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:45:33.777715   69237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37079
	I0501 03:45:33.777835   69237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38097
	I0501 03:45:33.778170   69237 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:45:33.778240   69237 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:45:33.778701   69237 main.go:141] libmachine: Using API Version  1
	I0501 03:45:33.778734   69237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:45:33.778778   69237 main.go:141] libmachine: Using API Version  1
	I0501 03:45:33.778795   69237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:45:33.779142   69237 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:45:33.779150   69237 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:45:33.779427   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetState
	I0501 03:45:33.779721   69237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:45:33.779769   69237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:45:33.783493   69237 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-715118"
	W0501 03:45:33.783519   69237 addons.go:243] addon default-storageclass should already be in state true
	I0501 03:45:33.783551   69237 host.go:66] Checking if "default-k8s-diff-port-715118" exists ...
	I0501 03:45:33.783922   69237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:45:33.783965   69237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:45:33.795373   69237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43491
	I0501 03:45:33.795988   69237 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:45:33.796557   69237 main.go:141] libmachine: Using API Version  1
	I0501 03:45:33.796579   69237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:45:33.796931   69237 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:45:33.797093   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetState
	I0501 03:45:33.797155   69237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46755
	I0501 03:45:33.797806   69237 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:45:33.798383   69237 main.go:141] libmachine: Using API Version  1
	I0501 03:45:33.798442   69237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:45:33.798848   69237 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:45:33.799052   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetState
	I0501 03:45:33.799105   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .DriverName
	I0501 03:45:33.801809   69237 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0501 03:45:33.800600   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .DriverName
	I0501 03:45:33.803752   69237 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0501 03:45:33.803779   69237 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0501 03:45:33.803800   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHHostname
	I0501 03:45:33.805235   69237 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 03:45:33.804172   69237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35709
	I0501 03:45:33.806635   69237 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0501 03:45:33.806651   69237 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0501 03:45:33.806670   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHHostname
	I0501 03:45:33.806889   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:45:33.806967   69237 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:45:33.807292   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:45:33.807426   69237 main.go:141] libmachine: Using API Version  1
	I0501 03:45:33.807428   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHPort
	I0501 03:45:33.807437   69237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:45:33.807449   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:45:33.807578   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:45:33.807680   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHUsername
	I0501 03:45:33.807799   69237 sshutil.go:53] new ssh client: &{IP:192.168.72.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/default-k8s-diff-port-715118/id_rsa Username:docker}
	I0501 03:45:33.808171   69237 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:45:33.808625   69237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:45:33.808660   69237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:45:33.810668   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:45:33.811266   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:45:33.811297   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:45:33.811595   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHPort
	I0501 03:45:33.811794   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:45:33.811983   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHUsername
	I0501 03:45:33.812124   69237 sshutil.go:53] new ssh client: &{IP:192.168.72.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/default-k8s-diff-port-715118/id_rsa Username:docker}
	I0501 03:45:33.825315   69237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35461
	I0501 03:45:33.825891   69237 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:45:33.826334   69237 main.go:141] libmachine: Using API Version  1
	I0501 03:45:33.826351   69237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:45:33.826679   69237 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:45:33.826912   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetState
	I0501 03:45:33.828659   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .DriverName
	I0501 03:45:33.828931   69237 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0501 03:45:33.828946   69237 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0501 03:45:33.828963   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHHostname
	I0501 03:45:33.832151   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:45:33.832632   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:12:31", ip: ""} in network mk-default-k8s-diff-port-715118: {Iface:virbr3 ExpiryTime:2024-05-01 04:40:05 +0000 UTC Type:0 Mac:52:54:00:87:12:31 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:default-k8s-diff-port-715118 Clientid:01:52:54:00:87:12:31}
	I0501 03:45:33.832656   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | domain default-k8s-diff-port-715118 has defined IP address 192.168.72.158 and MAC address 52:54:00:87:12:31 in network mk-default-k8s-diff-port-715118
	I0501 03:45:33.832863   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHPort
	I0501 03:45:33.833010   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHKeyPath
	I0501 03:45:33.833146   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .GetSSHUsername
	I0501 03:45:33.833302   69237 sshutil.go:53] new ssh client: &{IP:192.168.72.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/default-k8s-diff-port-715118/id_rsa Username:docker}
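	[editor's note] Each addon copy above first builds an SSH client for the node from an IP, port, key path and username (the sshutil lines). A standalone sketch of an equivalent client using golang.org/x/crypto/ssh; this is not minikube's sshutil, just the same ingredients the log records:

```go
package main

import (
	"fmt"
	"net"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Connection details as recorded by sshutil in the log.
	ip, port, user := "192.168.72.158", "22", "docker"
	keyPath := "/home/jenkins/minikube-integration/18779-13391/.minikube/machines/default-k8s-diff-port-715118/id_rsa"

	key, err := os.ReadFile(keyPath)
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	}
	client, err := ssh.Dial("tcp", net.JoinHostPort(ip, port), cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// Run one of the commands seen earlier in the log.
	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()
	out, err := session.CombinedOutput("sudo systemctl is-active kubelet")
	fmt.Printf("%s (err=%v)\n", out, err)
}
```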
	I0501 03:45:34.014287   69237 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 03:45:34.047199   69237 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-715118" to be "Ready" ...
	I0501 03:45:34.069000   69237 node_ready.go:49] node "default-k8s-diff-port-715118" has status "Ready":"True"
	I0501 03:45:34.069023   69237 node_ready.go:38] duration metric: took 21.790599ms for node "default-k8s-diff-port-715118" to be "Ready" ...
	I0501 03:45:34.069033   69237 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 03:45:34.077182   69237 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-bg755" in "kube-system" namespace to be "Ready" ...
	I0501 03:45:34.151001   69237 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0501 03:45:34.166362   69237 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0501 03:45:34.166385   69237 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0501 03:45:34.214624   69237 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0501 03:45:34.329110   69237 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0501 03:45:34.329133   69237 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0501 03:45:34.436779   69237 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0501 03:45:34.436804   69237 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0501 03:45:34.611410   69237 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0501 03:45:34.698997   69237 main.go:141] libmachine: Making call to close driver server
	I0501 03:45:34.699026   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .Close
	I0501 03:45:34.699321   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | Closing plugin on server side
	I0501 03:45:34.699389   69237 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:45:34.699408   69237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:45:34.699423   69237 main.go:141] libmachine: Making call to close driver server
	I0501 03:45:34.699437   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .Close
	I0501 03:45:34.699684   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | Closing plugin on server side
	I0501 03:45:34.699726   69237 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:45:34.699734   69237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:45:34.708143   69237 main.go:141] libmachine: Making call to close driver server
	I0501 03:45:34.708171   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .Close
	I0501 03:45:34.708438   69237 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:45:34.708457   69237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:45:34.708463   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | Closing plugin on server side
	I0501 03:45:35.510225   69237 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.295555956s)
	I0501 03:45:35.510274   69237 main.go:141] libmachine: Making call to close driver server
	I0501 03:45:35.510286   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .Close
	I0501 03:45:35.510700   69237 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:45:35.510721   69237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:45:35.510732   69237 main.go:141] libmachine: Making call to close driver server
	I0501 03:45:35.510728   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | Closing plugin on server side
	I0501 03:45:35.510740   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .Close
	I0501 03:45:35.510961   69237 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:45:35.510979   69237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:45:35.510983   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | Closing plugin on server side
	I0501 03:45:35.845633   69237 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.234178466s)
	I0501 03:45:35.845691   69237 main.go:141] libmachine: Making call to close driver server
	I0501 03:45:35.845708   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .Close
	I0501 03:45:35.845997   69237 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:45:35.846017   69237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:45:35.846027   69237 main.go:141] libmachine: Making call to close driver server
	I0501 03:45:35.846026   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | Closing plugin on server side
	I0501 03:45:35.846036   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) Calling .Close
	I0501 03:45:35.847736   69237 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:45:35.847767   69237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:45:35.847781   69237 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-715118"
	I0501 03:45:35.847786   69237 main.go:141] libmachine: (default-k8s-diff-port-715118) DBG | Closing plugin on server side
	I0501 03:45:35.849438   69237 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0501 03:45:36.209029   69580 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0501 03:45:36.209300   69580 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0501 03:45:34.848699   68640 pod_ready.go:102] pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:37.338985   68640 pod_ready.go:81] duration metric: took 4m0.000306796s for pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace to be "Ready" ...
	E0501 03:45:37.339010   68640 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-k8jnl" in "kube-system" namespace to be "Ready" (will not retry!)
	I0501 03:45:37.339029   68640 pod_ready.go:38] duration metric: took 4m9.062496127s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 03:45:37.339089   68640 kubeadm.go:591] duration metric: took 4m19.268153875s to restartPrimaryControlPlane
	W0501 03:45:37.339148   68640 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0501 03:45:37.339176   68640 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0501 03:45:35.851156   69237 addons.go:505] duration metric: took 2.096980743s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0501 03:45:36.085176   69237 pod_ready.go:102] pod "coredns-7db6d8ff4d-bg755" in "kube-system" namespace has status "Ready":"False"
	I0501 03:45:36.585390   69237 pod_ready.go:92] pod "coredns-7db6d8ff4d-bg755" in "kube-system" namespace has status "Ready":"True"
	I0501 03:45:36.585415   69237 pod_ready.go:81] duration metric: took 2.508204204s for pod "coredns-7db6d8ff4d-bg755" in "kube-system" namespace to be "Ready" ...
	I0501 03:45:36.585428   69237 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-mp6f5" in "kube-system" namespace to be "Ready" ...
	I0501 03:45:36.594575   69237 pod_ready.go:92] pod "coredns-7db6d8ff4d-mp6f5" in "kube-system" namespace has status "Ready":"True"
	I0501 03:45:36.594600   69237 pod_ready.go:81] duration metric: took 9.163923ms for pod "coredns-7db6d8ff4d-mp6f5" in "kube-system" namespace to be "Ready" ...
	I0501 03:45:36.594613   69237 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	I0501 03:45:36.606784   69237 pod_ready.go:92] pod "etcd-default-k8s-diff-port-715118" in "kube-system" namespace has status "Ready":"True"
	I0501 03:45:36.606807   69237 pod_ready.go:81] duration metric: took 12.186129ms for pod "etcd-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	I0501 03:45:36.606819   69237 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	I0501 03:45:36.617373   69237 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-715118" in "kube-system" namespace has status "Ready":"True"
	I0501 03:45:36.617394   69237 pod_ready.go:81] duration metric: took 10.566278ms for pod "kube-apiserver-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	I0501 03:45:36.617404   69237 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	I0501 03:45:36.622441   69237 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-715118" in "kube-system" namespace has status "Ready":"True"
	I0501 03:45:36.622460   69237 pod_ready.go:81] duration metric: took 5.049948ms for pod "kube-controller-manager-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	I0501 03:45:36.622469   69237 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2knrp" in "kube-system" namespace to be "Ready" ...
	I0501 03:45:36.981490   69237 pod_ready.go:92] pod "kube-proxy-2knrp" in "kube-system" namespace has status "Ready":"True"
	I0501 03:45:36.981513   69237 pod_ready.go:81] duration metric: took 359.038927ms for pod "kube-proxy-2knrp" in "kube-system" namespace to be "Ready" ...
	I0501 03:45:36.981523   69237 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	I0501 03:45:37.381970   69237 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-715118" in "kube-system" namespace has status "Ready":"True"
	I0501 03:45:37.381999   69237 pod_ready.go:81] duration metric: took 400.468372ms for pod "kube-scheduler-default-k8s-diff-port-715118" in "kube-system" namespace to be "Ready" ...
	I0501 03:45:37.382011   69237 pod_ready.go:38] duration metric: took 3.312967983s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
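	[editor's note] The pod_ready waits above all reduce to one check: does the pod's Ready condition report status True. A minimal client-go sketch of that check for a single pod; the kubeconfig path and pod name come from the log, the client-go usage is standard:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True, the same
// condition the pod_ready.go wait loops in the log are polling for.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(),
		"kube-scheduler-default-k8s-diff-port-715118", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("Ready:", isPodReady(pod))
}
```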
	I0501 03:45:37.382028   69237 api_server.go:52] waiting for apiserver process to appear ...
	I0501 03:45:37.382091   69237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:45:37.401961   69237 api_server.go:72] duration metric: took 3.647829991s to wait for apiserver process to appear ...
	I0501 03:45:37.401992   69237 api_server.go:88] waiting for apiserver healthz status ...
	I0501 03:45:37.402016   69237 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8444/healthz ...
	I0501 03:45:37.407177   69237 api_server.go:279] https://192.168.72.158:8444/healthz returned 200:
	ok
	I0501 03:45:37.408020   69237 api_server.go:141] control plane version: v1.30.0
	I0501 03:45:37.408037   69237 api_server.go:131] duration metric: took 6.036621ms to wait for apiserver health ...
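	[editor's note] The healthz probe above hits https://192.168.72.158:8444/healthz and expects a 200 with body "ok". A standalone Go sketch of that check; skipping TLS verification is an assumption made to keep the example self-contained (minikube itself has the cluster CA on hand):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Probe the apiserver healthz endpoint recorded in the log.
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption: skip verification instead of loading the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.72.158:8444/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d: %s\n", resp.StatusCode, body) // a healthy apiserver returns "200: ok"
}
```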
	I0501 03:45:37.408046   69237 system_pods.go:43] waiting for kube-system pods to appear ...
	I0501 03:45:37.586052   69237 system_pods.go:59] 9 kube-system pods found
	I0501 03:45:37.586081   69237 system_pods.go:61] "coredns-7db6d8ff4d-bg755" [884d489a-bc1e-442c-8e00-4616f983d3e9] Running
	I0501 03:45:37.586085   69237 system_pods.go:61] "coredns-7db6d8ff4d-mp6f5" [4c8550d0-0029-48f1-a892-1800f6639c75] Running
	I0501 03:45:37.586090   69237 system_pods.go:61] "etcd-default-k8s-diff-port-715118" [12be9bec-1d84-49ee-898c-499ff75a8026] Running
	I0501 03:45:37.586094   69237 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-715118" [ae9a476b-03cf-4d4d-9990-5e760db82e60] Running
	I0501 03:45:37.586098   69237 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-715118" [542bbe50-58b6-40fb-b81b-0cc2444a3401] Running
	I0501 03:45:37.586101   69237 system_pods.go:61] "kube-proxy-2knrp" [cf1406ff-8a6e-49bb-b180-1e72f4b54fbf] Running
	I0501 03:45:37.586104   69237 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-715118" [d24f02a2-67a9-4f28-9acc-445e0e74a68d] Running
	I0501 03:45:37.586109   69237 system_pods.go:61] "metrics-server-569cc877fc-xwxx9" [a66f5df4-355c-47f0-8b6e-da29e1c4394e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0501 03:45:37.586113   69237 system_pods.go:61] "storage-provisioner" [debb3a59-143a-46d3-87da-c2403e264861] Running
	I0501 03:45:37.586123   69237 system_pods.go:74] duration metric: took 178.07045ms to wait for pod list to return data ...
	I0501 03:45:37.586132   69237 default_sa.go:34] waiting for default service account to be created ...
	I0501 03:45:37.780696   69237 default_sa.go:45] found service account: "default"
	I0501 03:45:37.780720   69237 default_sa.go:55] duration metric: took 194.582743ms for default service account to be created ...
	I0501 03:45:37.780728   69237 system_pods.go:116] waiting for k8s-apps to be running ...
	I0501 03:45:37.985342   69237 system_pods.go:86] 9 kube-system pods found
	I0501 03:45:37.985368   69237 system_pods.go:89] "coredns-7db6d8ff4d-bg755" [884d489a-bc1e-442c-8e00-4616f983d3e9] Running
	I0501 03:45:37.985374   69237 system_pods.go:89] "coredns-7db6d8ff4d-mp6f5" [4c8550d0-0029-48f1-a892-1800f6639c75] Running
	I0501 03:45:37.985378   69237 system_pods.go:89] "etcd-default-k8s-diff-port-715118" [12be9bec-1d84-49ee-898c-499ff75a8026] Running
	I0501 03:45:37.985383   69237 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-715118" [ae9a476b-03cf-4d4d-9990-5e760db82e60] Running
	I0501 03:45:37.985387   69237 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-715118" [542bbe50-58b6-40fb-b81b-0cc2444a3401] Running
	I0501 03:45:37.985391   69237 system_pods.go:89] "kube-proxy-2knrp" [cf1406ff-8a6e-49bb-b180-1e72f4b54fbf] Running
	I0501 03:45:37.985395   69237 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-715118" [d24f02a2-67a9-4f28-9acc-445e0e74a68d] Running
	I0501 03:45:37.985401   69237 system_pods.go:89] "metrics-server-569cc877fc-xwxx9" [a66f5df4-355c-47f0-8b6e-da29e1c4394e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0501 03:45:37.985405   69237 system_pods.go:89] "storage-provisioner" [debb3a59-143a-46d3-87da-c2403e264861] Running
	I0501 03:45:37.985412   69237 system_pods.go:126] duration metric: took 204.679545ms to wait for k8s-apps to be running ...
	I0501 03:45:37.985418   69237 system_svc.go:44] waiting for kubelet service to be running ....
	I0501 03:45:37.985463   69237 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 03:45:38.002421   69237 system_svc.go:56] duration metric: took 16.992346ms WaitForService to wait for kubelet
	I0501 03:45:38.002458   69237 kubeadm.go:576] duration metric: took 4.248332952s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0501 03:45:38.002477   69237 node_conditions.go:102] verifying NodePressure condition ...
	I0501 03:45:38.181465   69237 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 03:45:38.181496   69237 node_conditions.go:123] node cpu capacity is 2
	I0501 03:45:38.181510   69237 node_conditions.go:105] duration metric: took 179.027834ms to run NodePressure ...
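	[editor's note] The NodePressure verification above reads the node's capacity (17734596Ki ephemeral storage, 2 CPUs here) and its pressure conditions. A client-go sketch that surfaces the same fields; the node name and kubeconfig path come from the log, the API types are standard:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(),
		"default-k8s-diff-port-715118", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// The capacity figures in the log come from node.Status.Capacity.
	eph := node.Status.Capacity[corev1.ResourceEphemeralStorage]
	cpu := node.Status.Capacity[corev1.ResourceCPU]
	fmt.Printf("ephemeral storage: %s, cpu: %s\n", eph.String(), cpu.String())

	// NodePressure check: none of these conditions should be True.
	for _, c := range node.Status.Conditions {
		switch c.Type {
		case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
			fmt.Printf("%s=%s\n", c.Type, c.Status)
		}
	}
}
```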
	I0501 03:45:38.181524   69237 start.go:240] waiting for startup goroutines ...
	I0501 03:45:38.181534   69237 start.go:245] waiting for cluster config update ...
	I0501 03:45:38.181547   69237 start.go:254] writing updated cluster config ...
	I0501 03:45:38.181810   69237 ssh_runner.go:195] Run: rm -f paused
	I0501 03:45:38.244075   69237 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0501 03:45:38.246261   69237 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-715118" cluster and "default" namespace by default
	I0501 03:45:46.209837   69580 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0501 03:45:46.210120   69580 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0501 03:46:06.211471   69580 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0501 03:46:06.211673   69580 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0501 03:46:09.967666   68640 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.628454657s)
	I0501 03:46:09.967737   68640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 03:46:09.985802   68640 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0501 03:46:09.996494   68640 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0501 03:46:10.006956   68640 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0501 03:46:10.006979   68640 kubeadm.go:156] found existing configuration files:
	
	I0501 03:46:10.007025   68640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0501 03:46:10.017112   68640 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0501 03:46:10.017174   68640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0501 03:46:10.027747   68640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0501 03:46:10.037853   68640 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0501 03:46:10.037910   68640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0501 03:46:10.048023   68640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0501 03:46:10.057354   68640 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0501 03:46:10.057408   68640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0501 03:46:10.067352   68640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0501 03:46:10.076696   68640 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0501 03:46:10.076741   68640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
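The sequence above is minikube's stale-config cleanup after 'kubeadm reset': each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint and removed if it does not contain it (here every grep exits with status 2 simply because the files are already gone). The same check-and-remove loop, sketched in shell with the endpoint and file list taken from this log:

    endpoint="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      if ! sudo grep -q "$endpoint" "/etc/kubernetes/$f" 2>/dev/null; then
        sudo rm -f "/etc/kubernetes/$f"   # stale or missing; kubeadm init will regenerate it
      fi
    done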
	I0501 03:46:10.086799   68640 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0501 03:46:10.150816   68640 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0501 03:46:10.150871   68640 kubeadm.go:309] [preflight] Running pre-flight checks
	I0501 03:46:10.325430   68640 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0501 03:46:10.325546   68640 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0501 03:46:10.325669   68640 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0501 03:46:10.581934   68640 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0501 03:46:10.585119   68640 out.go:204]   - Generating certificates and keys ...
	I0501 03:46:10.585221   68640 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0501 03:46:10.585319   68640 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0501 03:46:10.585416   68640 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0501 03:46:10.585522   68640 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0501 03:46:10.585620   68640 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0501 03:46:10.585695   68640 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0501 03:46:10.585781   68640 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0501 03:46:10.585861   68640 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0501 03:46:10.585959   68640 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0501 03:46:10.586064   68640 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0501 03:46:10.586116   68640 kubeadm.go:309] [certs] Using the existing "sa" key
	I0501 03:46:10.586208   68640 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0501 03:46:10.789482   68640 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0501 03:46:10.991219   68640 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0501 03:46:11.194897   68640 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0501 03:46:11.411926   68640 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0501 03:46:11.994791   68640 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0501 03:46:11.995468   68640 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0501 03:46:11.998463   68640 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0501 03:46:12.000394   68640 out.go:204]   - Booting up control plane ...
	I0501 03:46:12.000521   68640 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0501 03:46:12.000632   68640 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0501 03:46:12.000721   68640 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0501 03:46:12.022371   68640 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0501 03:46:12.023628   68640 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0501 03:46:12.023709   68640 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0501 03:46:12.178475   68640 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0501 03:46:12.178615   68640 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0501 03:46:12.680307   68640 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 502.179909ms
	I0501 03:46:12.680409   68640 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0501 03:46:18.182830   68640 kubeadm.go:309] [api-check] The API server is healthy after 5.502227274s
	I0501 03:46:18.197822   68640 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0501 03:46:18.217282   68640 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0501 03:46:18.247591   68640 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0501 03:46:18.247833   68640 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-892672 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0501 03:46:18.259687   68640 kubeadm.go:309] [bootstrap-token] Using token: 8rc6kt.ele1oeavg6hezahw
	I0501 03:46:18.261204   68640 out.go:204]   - Configuring RBAC rules ...
	I0501 03:46:18.261333   68640 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0501 03:46:18.272461   68640 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0501 03:46:18.284615   68640 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0501 03:46:18.288686   68640 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0501 03:46:18.292005   68640 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0501 03:46:18.295772   68640 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0501 03:46:18.591035   68640 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0501 03:46:19.028299   68640 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0501 03:46:19.598192   68640 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0501 03:46:19.598219   68640 kubeadm.go:309] 
	I0501 03:46:19.598323   68640 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0501 03:46:19.598337   68640 kubeadm.go:309] 
	I0501 03:46:19.598490   68640 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0501 03:46:19.598514   68640 kubeadm.go:309] 
	I0501 03:46:19.598542   68640 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0501 03:46:19.598604   68640 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0501 03:46:19.598648   68640 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0501 03:46:19.598673   68640 kubeadm.go:309] 
	I0501 03:46:19.598771   68640 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0501 03:46:19.598784   68640 kubeadm.go:309] 
	I0501 03:46:19.598850   68640 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0501 03:46:19.598860   68640 kubeadm.go:309] 
	I0501 03:46:19.598963   68640 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0501 03:46:19.599069   68640 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0501 03:46:19.599167   68640 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0501 03:46:19.599183   68640 kubeadm.go:309] 
	I0501 03:46:19.599283   68640 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0501 03:46:19.599389   68640 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0501 03:46:19.599400   68640 kubeadm.go:309] 
	I0501 03:46:19.599500   68640 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 8rc6kt.ele1oeavg6hezahw \
	I0501 03:46:19.599626   68640 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:bd94cc6acfb95fe36f70b12c6a1f2980601b8235e7c4dea65cebdf35eb514754 \
	I0501 03:46:19.599666   68640 kubeadm.go:309] 	--control-plane 
	I0501 03:46:19.599676   68640 kubeadm.go:309] 
	I0501 03:46:19.599779   68640 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0501 03:46:19.599807   68640 kubeadm.go:309] 
	I0501 03:46:19.599931   68640 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 8rc6kt.ele1oeavg6hezahw \
	I0501 03:46:19.600079   68640 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:bd94cc6acfb95fe36f70b12c6a1f2980601b8235e7c4dea65cebdf35eb514754 
	I0501 03:46:19.600763   68640 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
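The join command printed above embeds a bootstrap token and the SHA-256 hash of the cluster CA public key. Both can be re-derived on the control-plane node; a sketch assuming the certificate directory and kubeadm binary path shown earlier in this log, and an RSA CA key (the kubeadm default):

    # active bootstrap tokens (the token shown above should be listed)
    sudo /var/lib/minikube/binaries/v1.30.0/kubeadm token list --kubeconfig /etc/kubernetes/admin.conf
    # recompute --discovery-token-ca-cert-hash from the CA certificate
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'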
	I0501 03:46:19.600786   68640 cni.go:84] Creating CNI manager for ""
	I0501 03:46:19.600792   68640 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0501 03:46:19.602473   68640 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0501 03:46:19.603816   68640 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0501 03:46:19.621706   68640 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
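The scp line above writes minikube's bridge CNI configuration to /etc/cni/net.d/1-k8s.conflist on the node. To see exactly what was written, the profile's own ssh wrapper can be used (same minikube binary and profile name as elsewhere in this run):

    out/minikube-linux-amd64 -p no-preload-892672 ssh "sudo cat /etc/cni/net.d/1-k8s.conflist"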
	I0501 03:46:19.649643   68640 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0501 03:46:19.649762   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:19.649787   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-892672 minikube.k8s.io/updated_at=2024_05_01T03_46_19_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e minikube.k8s.io/name=no-preload-892672 minikube.k8s.io/primary=true
	I0501 03:46:19.892482   68640 ops.go:34] apiserver oom_adj: -16
	I0501 03:46:19.892631   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:20.393436   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:20.893412   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:21.393634   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:21.893273   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:22.393031   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:22.893498   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:23.393599   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:23.893024   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:24.393544   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:24.893431   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:25.393290   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:25.892718   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:26.392928   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:26.893101   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:27.393045   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:27.892722   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:28.393102   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:28.892871   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:29.392650   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:29.893034   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:30.393561   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:30.893661   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:31.393235   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:31.892889   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:32.393263   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:32.893427   68640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:46:33.046965   68640 kubeadm.go:1107] duration metric: took 13.397277159s to wait for elevateKubeSystemPrivileges
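The long run of 'kubectl get sa default' calls above is minikube polling, roughly twice per second, until kube-controller-manager has created the "default" ServiceAccount; here that took about 13.4s. The same wait, expressed directly in shell with the kubeconfig and binary path used in this log:

    until sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5   # the ServiceAccount appears once the controller-manager is serving
    done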
	W0501 03:46:33.047010   68640 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0501 03:46:33.047020   68640 kubeadm.go:393] duration metric: took 5m15.038324633s to StartCluster
	I0501 03:46:33.047042   68640 settings.go:142] acquiring lock: {Name:mkcfe08bcadd45d99c00ed1eaf4e9c226a489fe2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:46:33.047126   68640 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18779-13391/kubeconfig
	I0501 03:46:33.048731   68640 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/kubeconfig: {Name:mk5d1131557f885800877622151c0915337cb23a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:46:33.048988   68640 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.144 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0501 03:46:33.050376   68640 out.go:177] * Verifying Kubernetes components...
	I0501 03:46:33.049030   68640 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0501 03:46:33.049253   68640 config.go:182] Loaded profile config "no-preload-892672": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 03:46:33.051595   68640 addons.go:69] Setting storage-provisioner=true in profile "no-preload-892672"
	I0501 03:46:33.051616   68640 addons.go:69] Setting metrics-server=true in profile "no-preload-892672"
	I0501 03:46:33.051639   68640 addons.go:234] Setting addon storage-provisioner=true in "no-preload-892672"
	I0501 03:46:33.051644   68640 addons.go:234] Setting addon metrics-server=true in "no-preload-892672"
	W0501 03:46:33.051649   68640 addons.go:243] addon storage-provisioner should already be in state true
	W0501 03:46:33.051653   68640 addons.go:243] addon metrics-server should already be in state true
	I0501 03:46:33.051675   68640 host.go:66] Checking if "no-preload-892672" exists ...
	I0501 03:46:33.051680   68640 host.go:66] Checking if "no-preload-892672" exists ...
	I0501 03:46:33.051599   68640 addons.go:69] Setting default-storageclass=true in profile "no-preload-892672"
	I0501 03:46:33.051760   68640 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-892672"
	I0501 03:46:33.051600   68640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:46:33.052016   68640 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:46:33.052047   68640 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:46:33.052064   68640 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:46:33.052095   68640 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:46:33.052110   68640 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:46:33.052135   68640 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:46:33.068515   68640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46407
	I0501 03:46:33.069115   68640 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:46:33.069702   68640 main.go:141] libmachine: Using API Version  1
	I0501 03:46:33.069728   68640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:46:33.070085   68640 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:46:33.070731   68640 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:46:33.070763   68640 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:46:33.072166   68640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46609
	I0501 03:46:33.072179   68640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46109
	I0501 03:46:33.072632   68640 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:46:33.072770   68640 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:46:33.073161   68640 main.go:141] libmachine: Using API Version  1
	I0501 03:46:33.073180   68640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:46:33.073318   68640 main.go:141] libmachine: Using API Version  1
	I0501 03:46:33.073333   68640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:46:33.073467   68640 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:46:33.073893   68640 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:46:33.074056   68640 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:46:33.074065   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetState
	I0501 03:46:33.074092   68640 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:46:33.077976   68640 addons.go:234] Setting addon default-storageclass=true in "no-preload-892672"
	W0501 03:46:33.077997   68640 addons.go:243] addon default-storageclass should already be in state true
	I0501 03:46:33.078110   68640 host.go:66] Checking if "no-preload-892672" exists ...
	I0501 03:46:33.078535   68640 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:46:33.078566   68640 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:46:33.092605   68640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39707
	I0501 03:46:33.092996   68640 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:46:33.093578   68640 main.go:141] libmachine: Using API Version  1
	I0501 03:46:33.093597   68640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:46:33.093602   68640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36671
	I0501 03:46:33.093778   68640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34727
	I0501 03:46:33.093893   68640 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:46:33.094117   68640 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:46:33.094169   68640 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:46:33.094250   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetState
	I0501 03:46:33.094577   68640 main.go:141] libmachine: Using API Version  1
	I0501 03:46:33.094602   68640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:46:33.094986   68640 main.go:141] libmachine: Using API Version  1
	I0501 03:46:33.095004   68640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:46:33.095062   68640 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:46:33.095389   68640 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:46:33.096401   68640 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:46:33.096423   68640 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:46:33.096665   68640 main.go:141] libmachine: (no-preload-892672) Calling .DriverName
	I0501 03:46:33.096678   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetState
	I0501 03:46:33.098465   68640 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 03:46:33.099842   68640 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0501 03:46:33.099861   68640 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0501 03:46:33.099879   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:46:33.098734   68640 main.go:141] libmachine: (no-preload-892672) Calling .DriverName
	I0501 03:46:33.101305   68640 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0501 03:46:33.102491   68640 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0501 03:46:33.102512   68640 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0501 03:46:33.102531   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:46:33.103006   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:46:33.103617   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:46:33.103641   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:46:33.103799   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHPort
	I0501 03:46:33.103977   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:46:33.104143   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHUsername
	I0501 03:46:33.104272   68640 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/no-preload-892672/id_rsa Username:docker}
	I0501 03:46:33.105452   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:46:33.105795   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:46:33.105821   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:46:33.106142   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHPort
	I0501 03:46:33.106290   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:46:33.106392   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHUsername
	I0501 03:46:33.106511   68640 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/no-preload-892672/id_rsa Username:docker}
	I0501 03:46:33.113012   68640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42615
	I0501 03:46:33.113365   68640 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:46:33.113813   68640 main.go:141] libmachine: Using API Version  1
	I0501 03:46:33.113834   68640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:46:33.114127   68640 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:46:33.114304   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetState
	I0501 03:46:33.115731   68640 main.go:141] libmachine: (no-preload-892672) Calling .DriverName
	I0501 03:46:33.115997   68640 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0501 03:46:33.116010   68640 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0501 03:46:33.116023   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHHostname
	I0501 03:46:33.119272   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:46:33.119644   68640 main.go:141] libmachine: (no-preload-892672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6d:9a", ip: ""} in network mk-no-preload-892672: {Iface:virbr1 ExpiryTime:2024-05-01 04:40:47 +0000 UTC Type:0 Mac:52:54:00:c7:6d:9a Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:no-preload-892672 Clientid:01:52:54:00:c7:6d:9a}
	I0501 03:46:33.119661   68640 main.go:141] libmachine: (no-preload-892672) DBG | domain no-preload-892672 has defined IP address 192.168.39.144 and MAC address 52:54:00:c7:6d:9a in network mk-no-preload-892672
	I0501 03:46:33.119845   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHPort
	I0501 03:46:33.120223   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHKeyPath
	I0501 03:46:33.120358   68640 main.go:141] libmachine: (no-preload-892672) Calling .GetSSHUsername
	I0501 03:46:33.120449   68640 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/no-preload-892672/id_rsa Username:docker}
	I0501 03:46:33.296711   68640 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 03:46:33.342215   68640 node_ready.go:35] waiting up to 6m0s for node "no-preload-892672" to be "Ready" ...
	I0501 03:46:33.355677   68640 node_ready.go:49] node "no-preload-892672" has status "Ready":"True"
	I0501 03:46:33.355707   68640 node_ready.go:38] duration metric: took 13.392381ms for node "no-preload-892672" to be "Ready" ...
	I0501 03:46:33.355718   68640 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 03:46:33.367706   68640 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-57k52" in "kube-system" namespace to be "Ready" ...
	I0501 03:46:33.413444   68640 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0501 03:46:33.418869   68640 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0501 03:46:33.439284   68640 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0501 03:46:33.439318   68640 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0501 03:46:33.512744   68640 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0501 03:46:33.512768   68640 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0501 03:46:33.594777   68640 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0501 03:46:33.594798   68640 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0501 03:46:33.658506   68640 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0501 03:46:34.013890   68640 main.go:141] libmachine: Making call to close driver server
	I0501 03:46:34.013919   68640 main.go:141] libmachine: (no-preload-892672) Calling .Close
	I0501 03:46:34.014023   68640 main.go:141] libmachine: Making call to close driver server
	I0501 03:46:34.014056   68640 main.go:141] libmachine: (no-preload-892672) Calling .Close
	I0501 03:46:34.014250   68640 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:46:34.014284   68640 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:46:34.014297   68640 main.go:141] libmachine: Making call to close driver server
	I0501 03:46:34.014306   68640 main.go:141] libmachine: (no-preload-892672) Calling .Close
	I0501 03:46:34.014353   68640 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:46:34.014370   68640 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:46:34.014383   68640 main.go:141] libmachine: Making call to close driver server
	I0501 03:46:34.014393   68640 main.go:141] libmachine: (no-preload-892672) Calling .Close
	I0501 03:46:34.014642   68640 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:46:34.014664   68640 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:46:34.016263   68640 main.go:141] libmachine: (no-preload-892672) DBG | Closing plugin on server side
	I0501 03:46:34.016263   68640 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:46:34.016288   68640 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:46:34.031961   68640 main.go:141] libmachine: Making call to close driver server
	I0501 03:46:34.031996   68640 main.go:141] libmachine: (no-preload-892672) Calling .Close
	I0501 03:46:34.032303   68640 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:46:34.032324   68640 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:46:34.260154   68640 main.go:141] libmachine: Making call to close driver server
	I0501 03:46:34.260180   68640 main.go:141] libmachine: (no-preload-892672) Calling .Close
	I0501 03:46:34.260600   68640 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:46:34.260629   68640 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:46:34.260641   68640 main.go:141] libmachine: Making call to close driver server
	I0501 03:46:34.260650   68640 main.go:141] libmachine: (no-preload-892672) Calling .Close
	I0501 03:46:34.260876   68640 main.go:141] libmachine: Successfully made call to close driver server
	I0501 03:46:34.260888   68640 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 03:46:34.260899   68640 addons.go:470] Verifying addon metrics-server=true in "no-preload-892672"
	I0501 03:46:34.262520   68640 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0501 03:46:34.264176   68640 addons.go:505] duration metric: took 1.215147486s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
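With storage-provisioner, default-storageclass and metrics-server enabled, the resources the applied manifests created can be checked directly; the metrics API may stay unavailable until the metrics-server pod (still Pending below) becomes Ready. A sketch using standard kubectl against this cluster:

    kubectl -n kube-system get deploy metrics-server
    kubectl get apiservice v1beta1.metrics.k8s.io        # Available=True once metrics-server is serving
    kubectl get storageclass                             # the addon-provided class should be marked default
    kubectl -n kube-system get pod storage-provisioner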
	I0501 03:46:35.384910   68640 pod_ready.go:102] pod "coredns-7db6d8ff4d-57k52" in "kube-system" namespace has status "Ready":"False"
	I0501 03:46:36.377298   68640 pod_ready.go:92] pod "coredns-7db6d8ff4d-57k52" in "kube-system" namespace has status "Ready":"True"
	I0501 03:46:36.377321   68640 pod_ready.go:81] duration metric: took 3.009581117s for pod "coredns-7db6d8ff4d-57k52" in "kube-system" namespace to be "Ready" ...
	I0501 03:46:36.377331   68640 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-c6lnj" in "kube-system" namespace to be "Ready" ...
	I0501 03:46:36.383022   68640 pod_ready.go:92] pod "coredns-7db6d8ff4d-c6lnj" in "kube-system" namespace has status "Ready":"True"
	I0501 03:46:36.383042   68640 pod_ready.go:81] duration metric: took 5.704691ms for pod "coredns-7db6d8ff4d-c6lnj" in "kube-system" namespace to be "Ready" ...
	I0501 03:46:36.383051   68640 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	I0501 03:46:36.387456   68640 pod_ready.go:92] pod "etcd-no-preload-892672" in "kube-system" namespace has status "Ready":"True"
	I0501 03:46:36.387476   68640 pod_ready.go:81] duration metric: took 4.418883ms for pod "etcd-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	I0501 03:46:36.387485   68640 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	I0501 03:46:36.392348   68640 pod_ready.go:92] pod "kube-apiserver-no-preload-892672" in "kube-system" namespace has status "Ready":"True"
	I0501 03:46:36.392366   68640 pod_ready.go:81] duration metric: took 4.874928ms for pod "kube-apiserver-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	I0501 03:46:36.392375   68640 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	I0501 03:46:36.397155   68640 pod_ready.go:92] pod "kube-controller-manager-no-preload-892672" in "kube-system" namespace has status "Ready":"True"
	I0501 03:46:36.397175   68640 pod_ready.go:81] duration metric: took 4.794583ms for pod "kube-controller-manager-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	I0501 03:46:36.397185   68640 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-czsqz" in "kube-system" namespace to be "Ready" ...
	I0501 03:46:36.774003   68640 pod_ready.go:92] pod "kube-proxy-czsqz" in "kube-system" namespace has status "Ready":"True"
	I0501 03:46:36.774025   68640 pod_ready.go:81] duration metric: took 376.83321ms for pod "kube-proxy-czsqz" in "kube-system" namespace to be "Ready" ...
	I0501 03:46:36.774036   68640 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	I0501 03:46:37.171504   68640 pod_ready.go:92] pod "kube-scheduler-no-preload-892672" in "kube-system" namespace has status "Ready":"True"
	I0501 03:46:37.171526   68640 pod_ready.go:81] duration metric: took 397.484706ms for pod "kube-scheduler-no-preload-892672" in "kube-system" namespace to be "Ready" ...
	I0501 03:46:37.171535   68640 pod_ready.go:38] duration metric: took 3.815806043s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
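The pod_ready block above waits, label by label, for the system-critical pods. An equivalent gate can be expressed with 'kubectl wait' for the same labels and the 6m budget from the log (a sketch, shown for three of the listed labels):

    kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m
    kubectl -n kube-system wait --for=condition=Ready pod -l component=kube-apiserver --timeout=6m
    kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-proxy --timeout=6m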
	I0501 03:46:37.171549   68640 api_server.go:52] waiting for apiserver process to appear ...
	I0501 03:46:37.171609   68640 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:46:37.189446   68640 api_server.go:72] duration metric: took 4.140414812s to wait for apiserver process to appear ...
	I0501 03:46:37.189473   68640 api_server.go:88] waiting for apiserver healthz status ...
	I0501 03:46:37.189494   68640 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0501 03:46:37.195052   68640 api_server.go:279] https://192.168.39.144:8443/healthz returned 200:
	ok
	I0501 03:46:37.196163   68640 api_server.go:141] control plane version: v1.30.0
	I0501 03:46:37.196183   68640 api_server.go:131] duration metric: took 6.703804ms to wait for apiserver health ...
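The healthz check above hits the apiserver endpoint directly. The same check can be reproduced either through kubectl or with curl against the URL from the log (certificate verification skipped only for brevity; /healthz is typically readable without client credentials on a default RBAC setup):

    kubectl get --raw /healthz                      # prints 'ok' when the apiserver is healthy
    curl -sk https://192.168.39.144:8443/healthz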
	I0501 03:46:37.196191   68640 system_pods.go:43] waiting for kube-system pods to appear ...
	I0501 03:46:37.375742   68640 system_pods.go:59] 9 kube-system pods found
	I0501 03:46:37.375775   68640 system_pods.go:61] "coredns-7db6d8ff4d-57k52" [f98cb358-71ba-49c5-8213-0f3160c6e38b] Running
	I0501 03:46:37.375784   68640 system_pods.go:61] "coredns-7db6d8ff4d-c6lnj" [f8b8c1f1-7696-43f2-98be-339f99963e7c] Running
	I0501 03:46:37.375789   68640 system_pods.go:61] "etcd-no-preload-892672" [5f92eb1b-6611-4663-95f0-8c071a3a37c9] Running
	I0501 03:46:37.375796   68640 system_pods.go:61] "kube-apiserver-no-preload-892672" [90bcaa82-61b0-49d5-b50c-76288b099683] Running
	I0501 03:46:37.375804   68640 system_pods.go:61] "kube-controller-manager-no-preload-892672" [f80af654-aa81-4cd2-b5ce-4f31f6e49e9f] Running
	I0501 03:46:37.375809   68640 system_pods.go:61] "kube-proxy-czsqz" [4254b019-b6c8-4ff9-a361-c96eaf20dc65] Running
	I0501 03:46:37.375813   68640 system_pods.go:61] "kube-scheduler-no-preload-892672" [6753a5df-86d1-47bf-9514-6b8352acf969] Running
	I0501 03:46:37.375824   68640 system_pods.go:61] "metrics-server-569cc877fc-5m5qf" [a1ec3e6c-fe90-4168-b0ec-54f82f17b46d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0501 03:46:37.375830   68640 system_pods.go:61] "storage-provisioner" [b55b7e8b-4de0-40f8-96ff-bf0b550699d1] Running
	I0501 03:46:37.375841   68640 system_pods.go:74] duration metric: took 179.642731ms to wait for pod list to return data ...
	I0501 03:46:37.375857   68640 default_sa.go:34] waiting for default service account to be created ...
	I0501 03:46:37.572501   68640 default_sa.go:45] found service account: "default"
	I0501 03:46:37.572530   68640 default_sa.go:55] duration metric: took 196.664812ms for default service account to be created ...
	I0501 03:46:37.572542   68640 system_pods.go:116] waiting for k8s-apps to be running ...
	I0501 03:46:37.778012   68640 system_pods.go:86] 9 kube-system pods found
	I0501 03:46:37.778053   68640 system_pods.go:89] "coredns-7db6d8ff4d-57k52" [f98cb358-71ba-49c5-8213-0f3160c6e38b] Running
	I0501 03:46:37.778062   68640 system_pods.go:89] "coredns-7db6d8ff4d-c6lnj" [f8b8c1f1-7696-43f2-98be-339f99963e7c] Running
	I0501 03:46:37.778068   68640 system_pods.go:89] "etcd-no-preload-892672" [5f92eb1b-6611-4663-95f0-8c071a3a37c9] Running
	I0501 03:46:37.778075   68640 system_pods.go:89] "kube-apiserver-no-preload-892672" [90bcaa82-61b0-49d5-b50c-76288b099683] Running
	I0501 03:46:37.778082   68640 system_pods.go:89] "kube-controller-manager-no-preload-892672" [f80af654-aa81-4cd2-b5ce-4f31f6e49e9f] Running
	I0501 03:46:37.778088   68640 system_pods.go:89] "kube-proxy-czsqz" [4254b019-b6c8-4ff9-a361-c96eaf20dc65] Running
	I0501 03:46:37.778094   68640 system_pods.go:89] "kube-scheduler-no-preload-892672" [6753a5df-86d1-47bf-9514-6b8352acf969] Running
	I0501 03:46:37.778104   68640 system_pods.go:89] "metrics-server-569cc877fc-5m5qf" [a1ec3e6c-fe90-4168-b0ec-54f82f17b46d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0501 03:46:37.778112   68640 system_pods.go:89] "storage-provisioner" [b55b7e8b-4de0-40f8-96ff-bf0b550699d1] Running
	I0501 03:46:37.778127   68640 system_pods.go:126] duration metric: took 205.578312ms to wait for k8s-apps to be running ...
	I0501 03:46:37.778148   68640 system_svc.go:44] waiting for kubelet service to be running ....
	I0501 03:46:37.778215   68640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 03:46:37.794660   68640 system_svc.go:56] duration metric: took 16.509214ms WaitForService to wait for kubelet
	I0501 03:46:37.794694   68640 kubeadm.go:576] duration metric: took 4.745668881s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0501 03:46:37.794721   68640 node_conditions.go:102] verifying NodePressure condition ...
	I0501 03:46:37.972621   68640 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 03:46:37.972647   68640 node_conditions.go:123] node cpu capacity is 2
	I0501 03:46:37.972660   68640 node_conditions.go:105] duration metric: took 177.933367ms to run NodePressure ...
	I0501 03:46:37.972676   68640 start.go:240] waiting for startup goroutines ...
	I0501 03:46:37.972684   68640 start.go:245] waiting for cluster config update ...
	I0501 03:46:37.972699   68640 start.go:254] writing updated cluster config ...
	I0501 03:46:37.972951   68640 ssh_runner.go:195] Run: rm -f paused
	I0501 03:46:38.023054   68640 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0501 03:46:38.025098   68640 out.go:177] * Done! kubectl is now configured to use "no-preload-892672" cluster and "default" namespace by default
	I0501 03:46:46.214470   69580 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0501 03:46:46.214695   69580 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0501 03:46:46.214721   69580 kubeadm.go:309] 
	I0501 03:46:46.214770   69580 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0501 03:46:46.214837   69580 kubeadm.go:309] 		timed out waiting for the condition
	I0501 03:46:46.214875   69580 kubeadm.go:309] 
	I0501 03:46:46.214936   69580 kubeadm.go:309] 	This error is likely caused by:
	I0501 03:46:46.214983   69580 kubeadm.go:309] 		- The kubelet is not running
	I0501 03:46:46.215076   69580 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0501 03:46:46.215084   69580 kubeadm.go:309] 
	I0501 03:46:46.215169   69580 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0501 03:46:46.215201   69580 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0501 03:46:46.215233   69580 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0501 03:46:46.215239   69580 kubeadm.go:309] 
	I0501 03:46:46.215380   69580 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0501 03:46:46.215489   69580 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtime's CLI.
	I0501 03:46:46.215505   69580 kubeadm.go:309] 
	I0501 03:46:46.215657   69580 kubeadm.go:309] 	Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0501 03:46:46.215782   69580 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0501 03:46:46.215882   69580 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0501 03:46:46.215972   69580 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0501 03:46:46.215984   69580 kubeadm.go:309] 
	I0501 03:46:46.217243   69580 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0501 03:46:46.217352   69580 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0501 03:46:46.217426   69580 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0501 03:46:46.217550   69580 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
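The failed v1.20.0 bootstrap above (process 69580, the old-k8s-version profile) ends with kubeadm's generic triage hints; minikube then resets and retries below. Assembled into a runnable sequence on the node, with the CRI-O socket from this log (this is only the triage the message suggests, not a diagnosis of this particular failure):

    sudo systemctl status kubelet
    sudo journalctl -u kubelet --no-pager | tail -n 100
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID   # substitute an ID from the line above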
	
	I0501 03:46:46.217611   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0501 03:46:47.375634   69580 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.157990231s)
	I0501 03:46:47.375723   69580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 03:46:47.392333   69580 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0501 03:46:47.404983   69580 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0501 03:46:47.405007   69580 kubeadm.go:156] found existing configuration files:
	
	I0501 03:46:47.405054   69580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0501 03:46:47.417437   69580 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0501 03:46:47.417501   69580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0501 03:46:47.429929   69580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0501 03:46:47.441141   69580 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0501 03:46:47.441215   69580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0501 03:46:47.453012   69580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0501 03:46:47.463702   69580 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0501 03:46:47.463759   69580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0501 03:46:47.474783   69580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0501 03:46:47.485793   69580 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0501 03:46:47.485853   69580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0501 03:46:47.497706   69580 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0501 03:46:47.588221   69580 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0501 03:46:47.588340   69580 kubeadm.go:309] [preflight] Running pre-flight checks
	I0501 03:46:47.759631   69580 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0501 03:46:47.759801   69580 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0501 03:46:47.759949   69580 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0501 03:46:47.978077   69580 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0501 03:46:47.980130   69580 out.go:204]   - Generating certificates and keys ...
	I0501 03:46:47.980240   69580 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0501 03:46:47.980323   69580 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0501 03:46:47.980455   69580 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0501 03:46:47.980579   69580 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0501 03:46:47.980679   69580 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0501 03:46:47.980771   69580 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0501 03:46:47.980864   69580 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0501 03:46:47.981256   69580 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0501 03:46:47.981616   69580 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0501 03:46:47.981858   69580 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0501 03:46:47.981907   69580 kubeadm.go:309] [certs] Using the existing "sa" key
	I0501 03:46:47.981991   69580 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0501 03:46:48.100377   69580 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0501 03:46:48.463892   69580 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0501 03:46:48.521991   69580 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0501 03:46:48.735222   69580 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0501 03:46:48.753098   69580 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0501 03:46:48.756950   69580 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0501 03:46:48.757379   69580 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0501 03:46:48.937039   69580 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0501 03:46:48.939065   69580 out.go:204]   - Booting up control plane ...
	I0501 03:46:48.939183   69580 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0501 03:46:48.961380   69580 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0501 03:46:48.962890   69580 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0501 03:46:48.963978   69580 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0501 03:46:48.971754   69580 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0501 03:47:28.974873   69580 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0501 03:47:28.975296   69580 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0501 03:47:28.975545   69580 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0501 03:47:33.976469   69580 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0501 03:47:33.976699   69580 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0501 03:47:43.977443   69580 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0501 03:47:43.977663   69580 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0501 03:48:03.979113   69580 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0501 03:48:03.979409   69580 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0501 03:48:43.982479   69580 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0501 03:48:43.982781   69580 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0501 03:48:43.983363   69580 kubeadm.go:309] 
	I0501 03:48:43.983427   69580 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0501 03:48:43.983484   69580 kubeadm.go:309] 		timed out waiting for the condition
	I0501 03:48:43.983490   69580 kubeadm.go:309] 
	I0501 03:48:43.983520   69580 kubeadm.go:309] 	This error is likely caused by:
	I0501 03:48:43.983547   69580 kubeadm.go:309] 		- The kubelet is not running
	I0501 03:48:43.983633   69580 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0501 03:48:43.983637   69580 kubeadm.go:309] 
	I0501 03:48:43.983721   69580 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0501 03:48:43.983748   69580 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0501 03:48:43.983774   69580 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0501 03:48:43.983778   69580 kubeadm.go:309] 
	I0501 03:48:43.983861   69580 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0501 03:48:43.983928   69580 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0501 03:48:43.983932   69580 kubeadm.go:309] 
	I0501 03:48:43.984023   69580 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0501 03:48:43.984094   69580 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0501 03:48:43.984155   69580 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0501 03:48:43.984212   69580 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0501 03:48:43.984216   69580 kubeadm.go:309] 
	I0501 03:48:43.985577   69580 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0501 03:48:43.985777   69580 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0501 03:48:43.985875   69580 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0501 03:48:43.985971   69580 kubeadm.go:393] duration metric: took 8m0.315126498s to StartCluster
	I0501 03:48:43.986025   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0501 03:48:43.986092   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0501 03:48:44.038296   69580 cri.go:89] found id: ""
	I0501 03:48:44.038328   69580 logs.go:276] 0 containers: []
	W0501 03:48:44.038339   69580 logs.go:278] No container was found matching "kube-apiserver"
	I0501 03:48:44.038346   69580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0501 03:48:44.038426   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0501 03:48:44.081855   69580 cri.go:89] found id: ""
	I0501 03:48:44.081891   69580 logs.go:276] 0 containers: []
	W0501 03:48:44.081904   69580 logs.go:278] No container was found matching "etcd"
	I0501 03:48:44.081913   69580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0501 03:48:44.081996   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0501 03:48:44.131400   69580 cri.go:89] found id: ""
	I0501 03:48:44.131435   69580 logs.go:276] 0 containers: []
	W0501 03:48:44.131445   69580 logs.go:278] No container was found matching "coredns"
	I0501 03:48:44.131451   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0501 03:48:44.131519   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0501 03:48:44.178274   69580 cri.go:89] found id: ""
	I0501 03:48:44.178302   69580 logs.go:276] 0 containers: []
	W0501 03:48:44.178310   69580 logs.go:278] No container was found matching "kube-scheduler"
	I0501 03:48:44.178316   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0501 03:48:44.178376   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0501 03:48:44.223087   69580 cri.go:89] found id: ""
	I0501 03:48:44.223115   69580 logs.go:276] 0 containers: []
	W0501 03:48:44.223125   69580 logs.go:278] No container was found matching "kube-proxy"
	I0501 03:48:44.223133   69580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0501 03:48:44.223196   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0501 03:48:44.266093   69580 cri.go:89] found id: ""
	I0501 03:48:44.266122   69580 logs.go:276] 0 containers: []
	W0501 03:48:44.266135   69580 logs.go:278] No container was found matching "kube-controller-manager"
	I0501 03:48:44.266143   69580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0501 03:48:44.266204   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0501 03:48:44.307766   69580 cri.go:89] found id: ""
	I0501 03:48:44.307795   69580 logs.go:276] 0 containers: []
	W0501 03:48:44.307806   69580 logs.go:278] No container was found matching "kindnet"
	I0501 03:48:44.307813   69580 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0501 03:48:44.307876   69580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0501 03:48:44.348548   69580 cri.go:89] found id: ""
	I0501 03:48:44.348576   69580 logs.go:276] 0 containers: []
	W0501 03:48:44.348585   69580 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0501 03:48:44.348594   69580 logs.go:123] Gathering logs for container status ...
	I0501 03:48:44.348614   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 03:48:44.394160   69580 logs.go:123] Gathering logs for kubelet ...
	I0501 03:48:44.394209   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 03:48:44.449845   69580 logs.go:123] Gathering logs for dmesg ...
	I0501 03:48:44.449879   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 03:48:44.467663   69580 logs.go:123] Gathering logs for describe nodes ...
	I0501 03:48:44.467694   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0501 03:48:44.556150   69580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0501 03:48:44.556183   69580 logs.go:123] Gathering logs for CRI-O ...
	I0501 03:48:44.556199   69580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0501 03:48:44.661110   69580 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0501 03:48:44.661169   69580 out.go:239] * 
	W0501 03:48:44.661226   69580 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0501 03:48:44.661246   69580 out.go:239] * 
	W0501 03:48:44.662064   69580 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0501 03:48:44.665608   69580 out.go:177] 
	W0501 03:48:44.666799   69580 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0501 03:48:44.666851   69580 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0501 03:48:44.666870   69580 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0501 03:48:44.668487   69580 out.go:177] 
	
	
	==> CRI-O <==
	May 01 04:00:14 old-k8s-version-503971 crio[647]: time="2024-05-01 04:00:14.312328253Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714536014312287779,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=aa88822f-b666-4630-b42f-0bd502290e87 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 04:00:14 old-k8s-version-503971 crio[647]: time="2024-05-01 04:00:14.313026894Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a7bdd24a-5724-4aee-b02d-d65d3cdfcf71 name=/runtime.v1.RuntimeService/ListContainers
	May 01 04:00:14 old-k8s-version-503971 crio[647]: time="2024-05-01 04:00:14.313074835Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a7bdd24a-5724-4aee-b02d-d65d3cdfcf71 name=/runtime.v1.RuntimeService/ListContainers
	May 01 04:00:14 old-k8s-version-503971 crio[647]: time="2024-05-01 04:00:14.313182452Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=a7bdd24a-5724-4aee-b02d-d65d3cdfcf71 name=/runtime.v1.RuntimeService/ListContainers
	May 01 04:00:14 old-k8s-version-503971 crio[647]: time="2024-05-01 04:00:14.357963771Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9d4a41f3-c3d3-4edc-a632-1703cd24ae80 name=/runtime.v1.RuntimeService/Version
	May 01 04:00:14 old-k8s-version-503971 crio[647]: time="2024-05-01 04:00:14.358208203Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9d4a41f3-c3d3-4edc-a632-1703cd24ae80 name=/runtime.v1.RuntimeService/Version
	May 01 04:00:14 old-k8s-version-503971 crio[647]: time="2024-05-01 04:00:14.360201057Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fe57c842-2e38-4e66-b17b-cfdff8ad7872 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 04:00:14 old-k8s-version-503971 crio[647]: time="2024-05-01 04:00:14.360761880Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714536014360720467,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fe57c842-2e38-4e66-b17b-cfdff8ad7872 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 04:00:14 old-k8s-version-503971 crio[647]: time="2024-05-01 04:00:14.361924894Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5aecfe63-0357-4306-8680-97b17920826a name=/runtime.v1.RuntimeService/ListContainers
	May 01 04:00:14 old-k8s-version-503971 crio[647]: time="2024-05-01 04:00:14.362017458Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5aecfe63-0357-4306-8680-97b17920826a name=/runtime.v1.RuntimeService/ListContainers
	May 01 04:00:14 old-k8s-version-503971 crio[647]: time="2024-05-01 04:00:14.362054355Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=5aecfe63-0357-4306-8680-97b17920826a name=/runtime.v1.RuntimeService/ListContainers
	May 01 04:00:14 old-k8s-version-503971 crio[647]: time="2024-05-01 04:00:14.399331762Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6fd5e355-07eb-4248-901a-cdfaaef7e1d6 name=/runtime.v1.RuntimeService/Version
	May 01 04:00:14 old-k8s-version-503971 crio[647]: time="2024-05-01 04:00:14.399456324Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6fd5e355-07eb-4248-901a-cdfaaef7e1d6 name=/runtime.v1.RuntimeService/Version
	May 01 04:00:14 old-k8s-version-503971 crio[647]: time="2024-05-01 04:00:14.401208924Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=de759967-2c49-46f9-8631-ebd4ea774ad1 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 04:00:14 old-k8s-version-503971 crio[647]: time="2024-05-01 04:00:14.401661969Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714536014401624233,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=de759967-2c49-46f9-8631-ebd4ea774ad1 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 04:00:14 old-k8s-version-503971 crio[647]: time="2024-05-01 04:00:14.402409899Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=563854d6-43a1-41bf-bac4-fe5c9b1ffda7 name=/runtime.v1.RuntimeService/ListContainers
	May 01 04:00:14 old-k8s-version-503971 crio[647]: time="2024-05-01 04:00:14.402491011Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=563854d6-43a1-41bf-bac4-fe5c9b1ffda7 name=/runtime.v1.RuntimeService/ListContainers
	May 01 04:00:14 old-k8s-version-503971 crio[647]: time="2024-05-01 04:00:14.402525225Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=563854d6-43a1-41bf-bac4-fe5c9b1ffda7 name=/runtime.v1.RuntimeService/ListContainers
	May 01 04:00:14 old-k8s-version-503971 crio[647]: time="2024-05-01 04:00:14.440647948Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dee03830-f892-4ff0-a551-0c8de2a54f5c name=/runtime.v1.RuntimeService/Version
	May 01 04:00:14 old-k8s-version-503971 crio[647]: time="2024-05-01 04:00:14.440749262Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dee03830-f892-4ff0-a551-0c8de2a54f5c name=/runtime.v1.RuntimeService/Version
	May 01 04:00:14 old-k8s-version-503971 crio[647]: time="2024-05-01 04:00:14.442240687Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f9253c4c-186e-4476-9dc7-572420e19214 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 04:00:14 old-k8s-version-503971 crio[647]: time="2024-05-01 04:00:14.442674747Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714536014442649651,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f9253c4c-186e-4476-9dc7-572420e19214 name=/runtime.v1.ImageService/ImageFsInfo
	May 01 04:00:14 old-k8s-version-503971 crio[647]: time="2024-05-01 04:00:14.443669606Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=872fb479-1cb1-4f54-8054-6f50181e4a04 name=/runtime.v1.RuntimeService/ListContainers
	May 01 04:00:14 old-k8s-version-503971 crio[647]: time="2024-05-01 04:00:14.443777797Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=872fb479-1cb1-4f54-8054-6f50181e4a04 name=/runtime.v1.RuntimeService/ListContainers
	May 01 04:00:14 old-k8s-version-503971 crio[647]: time="2024-05-01 04:00:14.443813729Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=872fb479-1cb1-4f54-8054-6f50181e4a04 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[May 1 03:40] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.055665] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.051850] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.015816] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.551540] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.720618] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.127424] systemd-fstab-generator[566]: Ignoring "noauto" option for root device
	[  +0.059671] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.072683] systemd-fstab-generator[579]: Ignoring "noauto" option for root device
	[  +0.239117] systemd-fstab-generator[592]: Ignoring "noauto" option for root device
	[  +0.162286] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +0.321649] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +7.891142] systemd-fstab-generator[842]: Ignoring "noauto" option for root device
	[  +0.068807] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.309273] systemd-fstab-generator[969]: Ignoring "noauto" option for root device
	[ +12.277413] kauditd_printk_skb: 46 callbacks suppressed
	[May 1 03:44] systemd-fstab-generator[5009]: Ignoring "noauto" option for root device
	[May 1 03:46] systemd-fstab-generator[5290]: Ignoring "noauto" option for root device
	[  +0.082733] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 04:00:14 up 19 min,  0 users,  load average: 0.00, 0.02, 0.05
	Linux old-k8s-version-503971 5.10.207 #1 SMP Tue Apr 30 22:38:43 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	May 01 04:00:11 old-k8s-version-503971 kubelet[6778]:         /usr/local/go/src/net/cgo_unix.go:228 +0xc7
	May 01 04:00:11 old-k8s-version-503971 kubelet[6778]: goroutine 142 [runnable]:
	May 01 04:00:11 old-k8s-version-503971 kubelet[6778]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*http2Client).reader(0xc000bfc000)
	May 01 04:00:11 old-k8s-version-503971 kubelet[6778]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:1242
	May 01 04:00:11 old-k8s-version-503971 kubelet[6778]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	May 01 04:00:11 old-k8s-version-503971 kubelet[6778]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:300 +0xd31
	May 01 04:00:11 old-k8s-version-503971 kubelet[6778]: goroutine 143 [select]:
	May 01 04:00:11 old-k8s-version-503971 kubelet[6778]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*controlBuffer).get(0xc000b410e0, 0xc000bd7101, 0xc000b3cf00, 0xc000b7b060, 0xc000bd87c0, 0xc000bd8780)
	May 01 04:00:11 old-k8s-version-503971 kubelet[6778]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:395 +0x125
	May 01 04:00:11 old-k8s-version-503971 kubelet[6778]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc000bd71a0, 0x0, 0x0)
	May 01 04:00:11 old-k8s-version-503971 kubelet[6778]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:513 +0x1d3
	May 01 04:00:11 old-k8s-version-503971 kubelet[6778]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc000bfc000)
	May 01 04:00:11 old-k8s-version-503971 kubelet[6778]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	May 01 04:00:11 old-k8s-version-503971 kubelet[6778]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	May 01 04:00:11 old-k8s-version-503971 kubelet[6778]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	May 01 04:00:11 old-k8s-version-503971 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	May 01 04:00:11 old-k8s-version-503971 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	May 01 04:00:11 old-k8s-version-503971 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 138.
	May 01 04:00:11 old-k8s-version-503971 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	May 01 04:00:11 old-k8s-version-503971 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	May 01 04:00:12 old-k8s-version-503971 kubelet[6787]: I0501 04:00:12.061855    6787 server.go:416] Version: v1.20.0
	May 01 04:00:12 old-k8s-version-503971 kubelet[6787]: I0501 04:00:12.062182    6787 server.go:837] Client rotation is on, will bootstrap in background
	May 01 04:00:12 old-k8s-version-503971 kubelet[6787]: I0501 04:00:12.064072    6787 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	May 01 04:00:12 old-k8s-version-503971 kubelet[6787]: W0501 04:00:12.065324    6787 manager.go:159] Cannot detect current cgroup on cgroup v2
	May 01 04:00:12 old-k8s-version-503971 kubelet[6787]: I0501 04:00:12.065475    6787 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
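The kubeadm output captured above repeatedly points at the same triage path: check the kubelet service, read its journal, and list control-plane containers through the crio socket. A minimal sketch of those steps, assuming shell access to the node is taken via 'minikube ssh' with the profile name shown in this test (the -p usage and the tail length are illustrative assumptions; the remaining commands are the ones kubeadm itself prints):

	# open a shell on the test VM (profile name from this report)
	minikube ssh -p old-k8s-version-503971

	# inside the VM: kubelet service state and recent journal
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet | tail -n 100

	# kubelet health endpoint that kubeadm polls during wait-control-plane
	curl -sSL http://localhost:10248/healthz

	# control-plane containers known to CRI-O, then logs of a failing one
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID

In this run the container list comes back empty and the healthz call is refused, which is consistent with the kubelet never staying up rather than with a crashed control-plane container.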
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-503971 -n old-k8s-version-503971
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-503971 -n old-k8s-version-503971: exit status 2 (257.844866ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-503971" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (144.19s)
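The kubelet journal above ends in a crash loop (systemd restart counter at 138) alongside a "Cannot detect current cgroup on cgroup v2" warning, and the start log itself suggests a kubelet cgroup-driver override. A hedged retry sketch along those lines; the delete step and the v1.20.0 version pin (taken from the init log) are assumptions, while the --extra-config value is the one minikube suggests:

	# discard the broken profile, then retry with the suggested kubelet override
	minikube delete -p old-k8s-version-503971
	minikube start -p old-k8s-version-503971 --kubernetes-version=v1.20.0 \
	    --extra-config=kubelet.cgroup-driver=systemd

	# if it still fails, collect logs as the error box asks
	minikube logs --file=logs.txt -p old-k8s-version-503971

Whether the override resolves this particular failure is not shown by the report; the issue linked in the log (https://github.com/kubernetes/minikube/issues/4172) tracks the same symptom.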

                                                
                                    

Test pass (243/311)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 35.27
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.57
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.30.0/json-events 12.98
13 TestDownloadOnly/v1.30.0/preload-exists 0
17 TestDownloadOnly/v1.30.0/LogsDuration 0.07
18 TestDownloadOnly/v1.30.0/DeleteAll 0.14
19 TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.57
22 TestOffline 63.45
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 209.15
29 TestAddons/parallel/Registry 19.16
31 TestAddons/parallel/InspektorGadget 10.88
33 TestAddons/parallel/HelmTiller 12.19
35 TestAddons/parallel/CSI 54.92
36 TestAddons/parallel/Headlamp 14.99
37 TestAddons/parallel/CloudSpanner 5.6
39 TestAddons/parallel/NvidiaDevicePlugin 6.54
40 TestAddons/parallel/Yakd 6.01
43 TestAddons/serial/GCPAuth/Namespaces 0.12
45 TestCertOptions 54.62
46 TestCertExpiration 299.22
48 TestForceSystemdFlag 59.17
49 TestForceSystemdEnv 100.84
51 TestKVMDriverInstallOrUpdate 5.16
55 TestErrorSpam/setup 44.23
56 TestErrorSpam/start 0.37
57 TestErrorSpam/status 0.77
58 TestErrorSpam/pause 1.63
59 TestErrorSpam/unpause 1.75
60 TestErrorSpam/stop 5.03
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 99.41
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 374.88
67 TestFunctional/serial/KubeContext 0.04
68 TestFunctional/serial/KubectlGetPods 0.08
71 TestFunctional/serial/CacheCmd/cache/add_remote 3.36
72 TestFunctional/serial/CacheCmd/cache/add_local 2.26
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
74 TestFunctional/serial/CacheCmd/cache/list 0.06
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
76 TestFunctional/serial/CacheCmd/cache/cache_reload 1.66
77 TestFunctional/serial/CacheCmd/cache/delete 0.11
78 TestFunctional/serial/MinikubeKubectlCmd 0.11
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
80 TestFunctional/serial/ExtraConfig 56.58
81 TestFunctional/serial/ComponentHealth 0.07
82 TestFunctional/serial/LogsCmd 1.7
83 TestFunctional/serial/LogsFileCmd 1.7
84 TestFunctional/serial/InvalidService 4.13
86 TestFunctional/parallel/ConfigCmd 0.39
87 TestFunctional/parallel/DashboardCmd 14.54
88 TestFunctional/parallel/DryRun 0.51
89 TestFunctional/parallel/InternationalLanguage 0.16
90 TestFunctional/parallel/StatusCmd 1.07
94 TestFunctional/parallel/ServiceCmdConnect 15.68
95 TestFunctional/parallel/AddonsCmd 0.18
96 TestFunctional/parallel/PersistentVolumeClaim 63.15
98 TestFunctional/parallel/SSHCmd 0.47
99 TestFunctional/parallel/CpCmd 1.46
100 TestFunctional/parallel/MySQL 23.56
101 TestFunctional/parallel/FileSync 0.22
102 TestFunctional/parallel/CertSync 1.45
106 TestFunctional/parallel/NodeLabels 0.07
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.47
110 TestFunctional/parallel/License 0.67
111 TestFunctional/parallel/ImageCommands/ImageListShort 0.29
112 TestFunctional/parallel/ImageCommands/ImageListTable 0.27
113 TestFunctional/parallel/ImageCommands/ImageListJson 0.25
114 TestFunctional/parallel/ImageCommands/ImageListYaml 0.27
115 TestFunctional/parallel/ImageCommands/ImageBuild 6.29
116 TestFunctional/parallel/ImageCommands/Setup 2.17
117 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
118 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
119 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.11
120 TestFunctional/parallel/ServiceCmd/DeployApp 27.19
121 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.41
131 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 9.74
132 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 7.53
133 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.97
135 TestFunctional/parallel/ServiceCmd/List 0.43
136 TestFunctional/parallel/ServiceCmd/JSONOutput 0.52
137 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.9
138 TestFunctional/parallel/ServiceCmd/HTTPS 0.43
139 TestFunctional/parallel/ServiceCmd/Format 0.5
140 TestFunctional/parallel/ServiceCmd/URL 0.37
141 TestFunctional/parallel/ProfileCmd/profile_not_create 0.32
142 TestFunctional/parallel/ProfileCmd/profile_list 0.3
143 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.34
144 TestFunctional/parallel/ProfileCmd/profile_json_output 0.3
145 TestFunctional/parallel/MountCmd/any-port 9.9
146 TestFunctional/parallel/Version/short 0.07
147 TestFunctional/parallel/Version/components 0.64
148 TestFunctional/parallel/MountCmd/specific-port 1.99
149 TestFunctional/parallel/MountCmd/VerifyCleanup 1.4
150 TestFunctional/delete_addon-resizer_images 0.07
151 TestFunctional/delete_my-image_image 0.01
152 TestFunctional/delete_minikube_cached_images 0.02
156 TestMultiControlPlane/serial/StartCluster 223.45
157 TestMultiControlPlane/serial/DeployApp 8.17
158 TestMultiControlPlane/serial/PingHostFromPods 1.36
159 TestMultiControlPlane/serial/AddWorkerNode 47.82
160 TestMultiControlPlane/serial/NodeLabels 0.07
161 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.56
162 TestMultiControlPlane/serial/CopyFile 13.8
164 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.51
166 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.41
168 TestMultiControlPlane/serial/DeleteSecondaryNode 17.68
169 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.4
171 TestMultiControlPlane/serial/RestartCluster 380.19
172 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.4
173 TestMultiControlPlane/serial/AddSecondaryNode 76.59
174 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.57
178 TestJSONOutput/start/Command 100.1
179 TestJSONOutput/start/Audit 0
181 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
182 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
184 TestJSONOutput/pause/Command 0.77
185 TestJSONOutput/pause/Audit 0
187 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/unpause/Command 0.68
191 TestJSONOutput/unpause/Audit 0
193 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/stop/Command 7.4
197 TestJSONOutput/stop/Audit 0
199 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
201 TestErrorJSONOutput 0.21
206 TestMainNoArgs 0.06
207 TestMinikubeProfile 95.72
210 TestMountStart/serial/StartWithMountFirst 28.57
211 TestMountStart/serial/VerifyMountFirst 0.39
212 TestMountStart/serial/StartWithMountSecond 30.29
213 TestMountStart/serial/VerifyMountSecond 0.39
214 TestMountStart/serial/DeleteFirst 0.7
215 TestMountStart/serial/VerifyMountPostDelete 0.41
216 TestMountStart/serial/Stop 1.43
217 TestMountStart/serial/RestartStopped 23.44
218 TestMountStart/serial/VerifyMountPostStop 0.41
221 TestMultiNode/serial/FreshStart2Nodes 131.68
222 TestMultiNode/serial/DeployApp2Nodes 5.01
223 TestMultiNode/serial/PingHostFrom2Pods 0.85
224 TestMultiNode/serial/AddNode 41.55
225 TestMultiNode/serial/MultiNodeLabels 0.06
226 TestMultiNode/serial/ProfileList 0.23
227 TestMultiNode/serial/CopyFile 7.41
228 TestMultiNode/serial/StopNode 3.17
229 TestMultiNode/serial/StartAfterStop 31.12
231 TestMultiNode/serial/DeleteNode 2.46
233 TestMultiNode/serial/RestartMultiNode 178.9
234 TestMultiNode/serial/ValidateNameConflict 48.03
241 TestScheduledStopUnix 118.42
245 TestRunningBinaryUpgrade 156.78
250 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
251 TestNoKubernetes/serial/StartWithK8s 127.92
252 TestStoppedBinaryUpgrade/Setup 2.61
253 TestStoppedBinaryUpgrade/Upgrade 143.39
254 TestNoKubernetes/serial/StartWithStopK8s 45.22
255 TestNoKubernetes/serial/Start 31.29
256 TestNoKubernetes/serial/VerifyK8sNotRunning 0.23
257 TestNoKubernetes/serial/ProfileList 6.7
258 TestNoKubernetes/serial/Stop 1.47
259 TestNoKubernetes/serial/StartNoArgs 34.87
260 TestStoppedBinaryUpgrade/MinikubeLogs 0.99
261 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.21
270 TestPause/serial/Start 111
278 TestNetworkPlugins/group/false 3.68
286 TestStartStop/group/no-preload/serial/FirstStart 107.49
288 TestStartStop/group/embed-certs/serial/FirstStart 111.74
289 TestStartStop/group/no-preload/serial/DeployApp 10.35
291 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 59.35
292 TestStartStop/group/embed-certs/serial/DeployApp 11.34
293 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.19
295 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.11
297 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.3
298 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.98
302 TestStartStop/group/no-preload/serial/SecondStart 710.42
305 TestStartStop/group/embed-certs/serial/SecondStart 582.09
307 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 587.98
308 TestStartStop/group/old-k8s-version/serial/Stop 4.6
309 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
320 TestStartStop/group/newest-cni/serial/FirstStart 63.81
321 TestNetworkPlugins/group/auto/Start 89.45
322 TestNetworkPlugins/group/kindnet/Start 118.93
323 TestStartStop/group/newest-cni/serial/DeployApp 0
324 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.19
325 TestStartStop/group/newest-cni/serial/Stop 7.41
326 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.26
327 TestStartStop/group/newest-cni/serial/SecondStart 55.6
328 TestNetworkPlugins/group/auto/KubeletFlags 0.25
329 TestNetworkPlugins/group/auto/NetCatPod 12.28
330 TestNetworkPlugins/group/auto/DNS 0.19
331 TestNetworkPlugins/group/auto/Localhost 0.15
332 TestNetworkPlugins/group/auto/HairPin 0.16
333 TestNetworkPlugins/group/calico/Start 94.33
334 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
335 TestNetworkPlugins/group/custom-flannel/Start 105.59
336 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
337 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
338 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.27
339 TestStartStop/group/newest-cni/serial/Pause 2.98
340 TestNetworkPlugins/group/kindnet/KubeletFlags 0.27
341 TestNetworkPlugins/group/kindnet/NetCatPod 10.3
342 TestNetworkPlugins/group/enable-default-cni/Start 149.68
343 TestNetworkPlugins/group/kindnet/DNS 0.16
344 TestNetworkPlugins/group/kindnet/Localhost 0.13
345 TestNetworkPlugins/group/kindnet/HairPin 0.13
346 TestNetworkPlugins/group/flannel/Start 132.84
347 TestNetworkPlugins/group/calico/ControllerPod 6.01
348 TestNetworkPlugins/group/calico/KubeletFlags 0.26
349 TestNetworkPlugins/group/calico/NetCatPod 12.26
350 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.25
351 TestNetworkPlugins/group/custom-flannel/NetCatPod 13.29
352 TestNetworkPlugins/group/calico/DNS 0.25
353 TestNetworkPlugins/group/calico/Localhost 0.17
354 TestNetworkPlugins/group/calico/HairPin 0.18
355 TestNetworkPlugins/group/custom-flannel/DNS 0.22
356 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
357 TestNetworkPlugins/group/custom-flannel/HairPin 0.19
358 TestNetworkPlugins/group/bridge/Start 100.32
359 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.38
360 TestNetworkPlugins/group/enable-default-cni/NetCatPod 13.01
361 TestNetworkPlugins/group/flannel/ControllerPod 6.01
362 TestNetworkPlugins/group/enable-default-cni/DNS 0.16
363 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
364 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
365 TestNetworkPlugins/group/flannel/KubeletFlags 0.22
366 TestNetworkPlugins/group/flannel/NetCatPod 10.24
367 TestNetworkPlugins/group/flannel/DNS 0.18
368 TestNetworkPlugins/group/flannel/Localhost 0.17
369 TestNetworkPlugins/group/flannel/HairPin 0.15
370 TestNetworkPlugins/group/bridge/KubeletFlags 0.22
371 TestNetworkPlugins/group/bridge/NetCatPod 11.25
372 TestNetworkPlugins/group/bridge/DNS 0.17
373 TestNetworkPlugins/group/bridge/Localhost 0.13
374 TestNetworkPlugins/group/bridge/HairPin 0.12
x
+
TestDownloadOnly/v1.20.0/json-events (35.27s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-099811 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-099811 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (35.271389054s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (35.27s)
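
Note: the json-events subtests drive "minikube start -o=json --download-only", which reports its progress as JSON events on stdout. As a rough illustration only (not part of the test suite), the Go sketch below shells out to the same command and decodes each stdout line as a JSON object; the binary path, profile name, and the one-object-per-line assumption are illustrative, not taken from this report.

// Illustrative sketch: consume the JSON event stream from
// `minikube start -o=json --download-only`. Paths and flags are assumptions.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "start",
		"-o=json", "--download-only", "-p", "download-only-demo",
		"--kubernetes-version=v1.20.0", "--container-runtime=crio", "--driver=kvm2")

	stdout, err := cmd.StdoutPipe()
	if err != nil {
		log.Fatal(err)
	}
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}

	sc := bufio.NewScanner(stdout)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // allow long event lines
	for sc.Scan() {
		var event map[string]interface{}
		if err := json.Unmarshal(sc.Bytes(), &event); err != nil {
			continue // skip any non-JSON output
		}
		fmt.Printf("event: %v\n", event["type"])
	}
	if err := cmd.Wait(); err != nil {
		log.Fatal(err)
	}
}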

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.57s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-099811
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-099811: exit status 85 (572.292249ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-099811 | jenkins | v1.33.0 | 01 May 24 02:07 UTC |          |
	|         | -p download-only-099811        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/01 02:07:04
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0501 02:07:04.988432   20737 out.go:291] Setting OutFile to fd 1 ...
	I0501 02:07:04.988534   20737 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:07:04.988546   20737 out.go:304] Setting ErrFile to fd 2...
	I0501 02:07:04.988551   20737 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:07:04.988765   20737 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18779-13391/.minikube/bin
	W0501 02:07:04.988919   20737 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18779-13391/.minikube/config/config.json: open /home/jenkins/minikube-integration/18779-13391/.minikube/config/config.json: no such file or directory
	I0501 02:07:04.989536   20737 out.go:298] Setting JSON to true
	I0501 02:07:04.990419   20737 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2968,"bootTime":1714526257,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0501 02:07:04.990476   20737 start.go:139] virtualization: kvm guest
	I0501 02:07:04.992998   20737 out.go:97] [download-only-099811] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0501 02:07:04.994498   20737 out.go:169] MINIKUBE_LOCATION=18779
	W0501 02:07:04.993096   20737 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/18779-13391/.minikube/cache/preloaded-tarball: no such file or directory
	I0501 02:07:04.993122   20737 notify.go:220] Checking for updates...
	I0501 02:07:04.997111   20737 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0501 02:07:04.998528   20737 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18779-13391/kubeconfig
	I0501 02:07:04.999839   20737 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18779-13391/.minikube
	I0501 02:07:05.001059   20737 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0501 02:07:05.003443   20737 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0501 02:07:05.003648   20737 driver.go:392] Setting default libvirt URI to qemu:///system
	I0501 02:07:05.099796   20737 out.go:97] Using the kvm2 driver based on user configuration
	I0501 02:07:05.099821   20737 start.go:297] selected driver: kvm2
	I0501 02:07:05.099826   20737 start.go:901] validating driver "kvm2" against <nil>
	I0501 02:07:05.100136   20737 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0501 02:07:05.100242   20737 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18779-13391/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0501 02:07:05.114554   20737 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0501 02:07:05.114606   20737 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0501 02:07:05.115108   20737 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0501 02:07:05.115282   20737 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0501 02:07:05.115355   20737 cni.go:84] Creating CNI manager for ""
	I0501 02:07:05.115371   20737 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0501 02:07:05.115379   20737 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0501 02:07:05.115453   20737 start.go:340] cluster config:
	{Name:download-only-099811 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-099811 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 02:07:05.115656   20737 iso.go:125] acquiring lock: {Name:mkcd2496aadb29931e179193d707c731bfda98b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0501 02:07:05.117430   20737 out.go:97] Downloading VM boot image ...
	I0501 02:07:05.117467   20737 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso.sha256 -> /home/jenkins/minikube-integration/18779-13391/.minikube/cache/iso/amd64/minikube-v1.33.0-1714498396-18779-amd64.iso
	I0501 02:07:15.190623   20737 out.go:97] Starting "download-only-099811" primary control-plane node in "download-only-099811" cluster
	I0501 02:07:15.190652   20737 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0501 02:07:15.297567   20737 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0501 02:07:15.297603   20737 cache.go:56] Caching tarball of preloaded images
	I0501 02:07:15.297799   20737 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0501 02:07:15.299680   20737 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0501 02:07:15.299701   20737 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0501 02:07:15.497208   20737 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/18779-13391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0501 02:07:33.852225   20737 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0501 02:07:33.852342   20737 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18779-13391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0501 02:07:34.754377   20737 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0501 02:07:34.754748   20737 profile.go:143] Saving config to /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/download-only-099811/config.json ...
	I0501 02:07:34.754781   20737 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/download-only-099811/config.json: {Name:mkfac7659c018384bd1028735fff36fe5b54065d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:07:34.754957   20737 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0501 02:07:34.755167   20737 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/18779-13391/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-099811 host does not exist
	  To start a cluster, run: "minikube start -p download-only-099811"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.57s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-099811
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0/json-events (12.98s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-686563 --force --alsologtostderr --kubernetes-version=v1.30.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-686563 --force --alsologtostderr --kubernetes-version=v1.30.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (12.982766203s)
--- PASS: TestDownloadOnly/v1.30.0/json-events (12.98s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/preload-exists
--- PASS: TestDownloadOnly/v1.30.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-686563
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-686563: exit status 85 (68.783526ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-099811 | jenkins | v1.33.0 | 01 May 24 02:07 UTC |                     |
	|         | -p download-only-099811        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.0 | 01 May 24 02:07 UTC | 01 May 24 02:07 UTC |
	| delete  | -p download-only-099811        | download-only-099811 | jenkins | v1.33.0 | 01 May 24 02:07 UTC | 01 May 24 02:07 UTC |
	| start   | -o=json --download-only        | download-only-686563 | jenkins | v1.33.0 | 01 May 24 02:07 UTC |                     |
	|         | -p download-only-686563        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/01 02:07:41
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0501 02:07:41.097100   21012 out.go:291] Setting OutFile to fd 1 ...
	I0501 02:07:41.097333   21012 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:07:41.097342   21012 out.go:304] Setting ErrFile to fd 2...
	I0501 02:07:41.097346   21012 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:07:41.097511   21012 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18779-13391/.minikube/bin
	I0501 02:07:41.098052   21012 out.go:298] Setting JSON to true
	I0501 02:07:41.098913   21012 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3004,"bootTime":1714526257,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0501 02:07:41.098974   21012 start.go:139] virtualization: kvm guest
	I0501 02:07:41.101128   21012 out.go:97] [download-only-686563] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0501 02:07:41.102912   21012 out.go:169] MINIKUBE_LOCATION=18779
	I0501 02:07:41.101286   21012 notify.go:220] Checking for updates...
	I0501 02:07:41.106215   21012 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0501 02:07:41.107581   21012 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18779-13391/kubeconfig
	I0501 02:07:41.108906   21012 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18779-13391/.minikube
	I0501 02:07:41.110159   21012 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0501 02:07:41.112671   21012 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0501 02:07:41.112909   21012 driver.go:392] Setting default libvirt URI to qemu:///system
	I0501 02:07:41.143576   21012 out.go:97] Using the kvm2 driver based on user configuration
	I0501 02:07:41.143608   21012 start.go:297] selected driver: kvm2
	I0501 02:07:41.143614   21012 start.go:901] validating driver "kvm2" against <nil>
	I0501 02:07:41.143923   21012 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0501 02:07:41.143991   21012 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18779-13391/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0501 02:07:41.157908   21012 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0501 02:07:41.157953   21012 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0501 02:07:41.158437   21012 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0501 02:07:41.158570   21012 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0501 02:07:41.158620   21012 cni.go:84] Creating CNI manager for ""
	I0501 02:07:41.158634   21012 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0501 02:07:41.158643   21012 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0501 02:07:41.158687   21012 start.go:340] cluster config:
	{Name:download-only-686563 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:download-only-686563 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 02:07:41.158766   21012 iso.go:125] acquiring lock: {Name:mkcd2496aadb29931e179193d707c731bfda98b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0501 02:07:41.160531   21012 out.go:97] Starting "download-only-686563" primary control-plane node in "download-only-686563" cluster
	I0501 02:07:41.160548   21012 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0501 02:07:41.274241   21012 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0501 02:07:41.274269   21012 cache.go:56] Caching tarball of preloaded images
	I0501 02:07:41.274483   21012 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0501 02:07:41.276421   21012 out.go:97] Downloading Kubernetes v1.30.0 preload ...
	I0501 02:07:41.276442   21012 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 ...
	I0501 02:07:41.384174   21012 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:5927bd9d05f26d08fc05540d1d92e5d8 -> /home/jenkins/minikube-integration/18779-13391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-686563 host does not exist
	  To start a cluster, run: "minikube start -p download-only-686563"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.0/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.0/DeleteAll (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-686563
--- PASS: TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestBinaryMirror (0.57s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-940490 --alsologtostderr --binary-mirror http://127.0.0.1:33553 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-940490" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-940490
--- PASS: TestBinaryMirror (0.57s)
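
Note: TestBinaryMirror points minikube at a local HTTP server via --binary-mirror (http://127.0.0.1:33553 above) so that the Kubernetes binaries are fetched from that mirror instead of the default location. The Go sketch below is a hedged illustration of standing up such a mirror; the ./mirror directory and its assumed dl.k8s.io-like layout are not taken from this report.

// Sketch: serve a local directory as a --binary-mirror target.
// Assumption (not from the report): the mirror must be pre-populated with a
// dl.k8s.io-style layout such as /release/<version>/bin/linux/amd64/kubectl.
package main

import (
	"log"
	"net/http"
)

func main() {
	const addr = "127.0.0.1:33553" // port reused from the test log; any free port works
	log.Printf("serving ./mirror on http://%s", addr)
	// Then, for example:
	//   out/minikube-linux-amd64 start --download-only \
	//     --binary-mirror http://127.0.0.1:33553 --driver=kvm2 --container-runtime=crio
	log.Fatal(http.ListenAndServe(addr, http.FileServer(http.Dir("./mirror"))))
}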

                                                
                                    
x
+
TestOffline (63.45s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-590287 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-590287 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m2.422999714s)
helpers_test.go:175: Cleaning up "offline-crio-590287" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-590287
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-590287: (1.027168706s)
--- PASS: TestOffline (63.45s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-286595
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-286595: exit status 85 (62.473539ms)

                                                
                                                
-- stdout --
	* Profile "addons-286595" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-286595"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)
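
Note: both PreSetup subtests expect minikube to fail because the addons-286595 profile does not exist yet, so the harness asserts on the non-zero exit (status 85) rather than treating it as an error. If you script a similar check yourself, the exit code can be recovered from exec.ExitError as in the hedged sketch below; the command line and expected code come from the log above, the helper itself is illustrative.

// Sketch: run a minikube command that is expected to fail and inspect its exit code.
package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "addons", "enable", "dashboard", "-p", "addons-286595")
	out, err := cmd.CombinedOutput()

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		fmt.Printf("exit code %d, output:\n%s", exitErr.ExitCode(), out) // expect 85 when the profile is missing
		return
	}
	if err != nil {
		log.Fatal(err) // e.g. binary not found
	}
	fmt.Println("command unexpectedly succeeded")
}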

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-286595
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-286595: exit status 85 (60.369132ms)

                                                
                                                
-- stdout --
	* Profile "addons-286595" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-286595"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
x
+
TestAddons/Setup (209.15s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-286595 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-286595 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m29.1503651s)
--- PASS: TestAddons/Setup (209.15s)

                                                
                                    
x
+
TestAddons/parallel/Registry (19.16s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 23.915451ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-f6tfr" [cf6f5911-c14d-4b26-9767-c66913822a34] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.00573516s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-6hksn" [f6f624f2-3e51-4453-b84e-7d908b7736fa] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.0058422s
addons_test.go:340: (dbg) Run:  kubectl --context addons-286595 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-286595 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-286595 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (7.195392784s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-amd64 -p addons-286595 ip
addons_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p addons-286595 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (19.16s)

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (10.88s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-2kks5" [75ff423d-9cdc-42fa-b494-8c88c4d10371] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.005860534s
addons_test.go:841: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-286595
addons_test.go:841: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-286595: (5.874370066s)
--- PASS: TestAddons/parallel/InspektorGadget (10.88s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (12.19s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 4.319585ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-btpph" [f3632fb8-1c95-4630-b3ce-f08c09d4a4ff] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.005444557s
addons_test.go:473: (dbg) Run:  kubectl --context addons-286595 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-286595 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (6.495034089s)
addons_test.go:490: (dbg) Run:  out/minikube-linux-amd64 -p addons-286595 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (12.19s)

                                                
                                    
x
+
TestAddons/parallel/CSI (54.92s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 32.888109ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-286595 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-286595 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-286595 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-286595 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-286595 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [50ab988d-cce5-49e8-bfd9-96ce3f89c9a1] Pending
helpers_test.go:344: "task-pv-pod" [50ab988d-cce5-49e8-bfd9-96ce3f89c9a1] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [50ab988d-cce5-49e8-bfd9-96ce3f89c9a1] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 16.00454254s
addons_test.go:584: (dbg) Run:  kubectl --context addons-286595 create -f testdata/csi-hostpath-driver/snapshot.yaml
2024/05/01 02:11:42 [DEBUG] GET http://192.168.39.173:5000
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-286595 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-286595 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-286595 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-286595 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-286595 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-286595 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-286595 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-286595 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-286595 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-286595 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-286595 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-286595 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-286595 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-286595 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-286595 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-286595 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-286595 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-286595 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-286595 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-286595 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-286595 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-286595 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-286595 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-286595 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [0a0d57b3-6719-46d9-bc50-16105c0b3e28] Pending
helpers_test.go:344: "task-pv-pod-restore" [0a0d57b3-6719-46d9-bc50-16105c0b3e28] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [0a0d57b3-6719-46d9-bc50-16105c0b3e28] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004498805s
addons_test.go:626: (dbg) Run:  kubectl --context addons-286595 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-286595 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-286595 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-amd64 -p addons-286595 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-amd64 -p addons-286595 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.939028856s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-amd64 -p addons-286595 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:642: (dbg) Done: out/minikube-linux-amd64 -p addons-286595 addons disable volumesnapshots --alsologtostderr -v=1: (1.097191488s)
--- PASS: TestAddons/parallel/CSI (54.92s)
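
Note: the CSI test walks the csi-hostpath-driver through its full loop: provision a PVC (hpvc), attach it to a pod, snapshot the volume (new-snapshot-demo), then restore a new PVC (hpvc-restore) and pod from that snapshot. The repository's testdata manifests are not reproduced in the log; the sketch below is a generic, hedged reconstruction of the snapshot-and-restore step only, with the snapshot class, storage class, and size being assumptions, wrapped in a small Go driver that pipes the manifest to kubectl.

// Sketch of the snapshot-restore step: a VolumeSnapshot of the original PVC and
// a new PVC using it as dataSource. Class names and size are assumptions; only
// the object names and overall flow mirror the test log.
package main

import (
	"log"
	"os/exec"
	"strings"
)

const restoreManifest = `
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: new-snapshot-demo
spec:
  volumeSnapshotClassName: csi-hostpath-snapclass
  source:
    persistentVolumeClaimName: hpvc
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hpvc-restore
spec:
  storageClassName: csi-hostpath-sc
  dataSource:
    name: new-snapshot-demo
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
`

func main() {
	cmd := exec.Command("kubectl", "--context", "addons-286595", "apply", "-f", "-")
	cmd.Stdin = strings.NewReader(restoreManifest)
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("kubectl apply failed: %v\n%s", err, out)
	}
	log.Print("snapshot and restore PVC applied")
}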

                                                
                                    
x
+
TestAddons/parallel/Headlamp (14.99s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-286595 --alsologtostderr -v=1
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7559bf459f-844d4" [d7626782-57eb-48f0-907d-f0d1e86e250c] Pending
helpers_test.go:344: "headlamp-7559bf459f-844d4" [d7626782-57eb-48f0-907d-f0d1e86e250c] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7559bf459f-844d4" [d7626782-57eb-48f0-907d-f0d1e86e250c] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7559bf459f-844d4" [d7626782-57eb-48f0-907d-f0d1e86e250c] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 14.004665568s
--- PASS: TestAddons/parallel/Headlamp (14.99s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.6s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6dc8d859f6-cn66h" [ca892efb-3063-4fa7-afb6-2b0b667f0a94] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003791021s
addons_test.go:860: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-286595
--- PASS: TestAddons/parallel/CloudSpanner (5.60s)

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.54s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-rkmjq" [ed0cb4b4-ad39-4ba6-8e70-771dffc9b32e] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.00592469s
addons_test.go:955: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-286595
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.54s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (6.01s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-5ddbf7d777-q2wzp" [85549b73-ebbe-4fa9-9fe0-72d18004bc71] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004886289s
--- PASS: TestAddons/parallel/Yakd (6.01s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-286595 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-286595 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                    
x
+
TestCertOptions (54.62s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-582976 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-582976 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (51.96527555s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-582976 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-582976 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-582976 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-582976" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-582976
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-582976: (2.079983963s)
--- PASS: TestCertOptions (54.62s)
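
Note: TestCertOptions verifies that the extra --apiserver-ips/--apiserver-names values end up as SANs in /var/lib/minikube/certs/apiserver.crt, checked above with "openssl x509 -text -noout" over minikube ssh. The hedged Go sketch below performs the same SAN check on a locally saved copy of that certificate; how the file is copied out of the node is left aside, and the local path is an assumption.

// Sketch: print the SANs of an apiserver certificate saved locally as apiserver.crt
// (path is an assumption; the report reads the cert in-node via openssl instead).
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("apiserver.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil || block.Type != "CERTIFICATE" {
		log.Fatal("no CERTIFICATE block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("DNS SANs:", cert.DNSNames)    // expect localhost, www.google.com, ...
	fmt.Println("IP SANs: ", cert.IPAddresses) // expect 127.0.0.1, 192.168.15.15, ...
}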

                                                
                                    
x
+
TestCertExpiration (299.22s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-640426 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-640426 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m14.499843213s)
E0501 03:24:39.249264   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/functional-960026/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-640426 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-640426 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (43.388926967s)
helpers_test.go:175: Cleaning up "cert-expiration-640426" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-640426
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-640426: (1.329169793s)
--- PASS: TestCertExpiration (299.22s)

                                                
                                    
TestForceSystemdFlag (59.17s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-616131 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-616131 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (57.880373983s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-616131 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-616131" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-616131
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-616131: (1.055172919s)
--- PASS: TestForceSystemdFlag (59.17s)
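--force-systemd asks the node's container runtime to use the systemd cgroup manager, and the test confirms it by reading CRI-O's drop-in config. A sketch, assuming a hypothetical profile name and that the drop-in carries a cgroup_manager setting as in the run above:

  minikube start -p systemd-flag-demo --memory=2048 --force-systemd \
    --driver=kvm2 --container-runtime=crio

  # Expect cgroup_manager = "systemd" in the CRI-O drop-in:
  minikube -p systemd-flag-demo ssh "cat /etc/crio/crio.conf.d/02-crio.conf" \
    | grep cgroup_manager

  minikube delete -p systemd-flag-demo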

                                                
                                    
TestForceSystemdEnv (100.84s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-604747 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-604747 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m40.018060205s)
helpers_test.go:175: Cleaning up "force-systemd-env-604747" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-604747
--- PASS: TestForceSystemdEnv (100.84s)
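The env-var variant drives the same behaviour through MINIKUBE_FORCE_SYSTEMD (listed in the start banner output above) instead of a flag; setting it to true before start is an assumption here:

  MINIKUBE_FORCE_SYSTEMD=true minikube start -p systemd-env-demo \
    --memory=2048 --driver=kvm2 --container-runtime=crio
  minikube delete -p systemd-env-demo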

                                                
                                    
TestKVMDriverInstallOrUpdate (5.16s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (5.16s)

                                                
                                    
TestErrorSpam/setup (44.23s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-960562 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-960562 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-960562 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-960562 --driver=kvm2  --container-runtime=crio: (44.226761934s)
--- PASS: TestErrorSpam/setup (44.23s)

                                                
                                    
TestErrorSpam/start (0.37s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-960562 --log_dir /tmp/nospam-960562 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-960562 --log_dir /tmp/nospam-960562 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-960562 --log_dir /tmp/nospam-960562 start --dry-run
--- PASS: TestErrorSpam/start (0.37s)

                                                
                                    
TestErrorSpam/status (0.77s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-960562 --log_dir /tmp/nospam-960562 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-960562 --log_dir /tmp/nospam-960562 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-960562 --log_dir /tmp/nospam-960562 status
--- PASS: TestErrorSpam/status (0.77s)

                                                
                                    
TestErrorSpam/pause (1.63s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-960562 --log_dir /tmp/nospam-960562 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-960562 --log_dir /tmp/nospam-960562 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-960562 --log_dir /tmp/nospam-960562 pause
--- PASS: TestErrorSpam/pause (1.63s)

                                                
                                    
TestErrorSpam/unpause (1.75s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-960562 --log_dir /tmp/nospam-960562 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-960562 --log_dir /tmp/nospam-960562 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-960562 --log_dir /tmp/nospam-960562 unpause
--- PASS: TestErrorSpam/unpause (1.75s)

                                                
                                    
TestErrorSpam/stop (5.03s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-960562 --log_dir /tmp/nospam-960562 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-960562 --log_dir /tmp/nospam-960562 stop: (2.294508263s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-960562 --log_dir /tmp/nospam-960562 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-960562 --log_dir /tmp/nospam-960562 stop: (1.537309409s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-960562 --log_dir /tmp/nospam-960562 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-960562 --log_dir /tmp/nospam-960562 stop: (1.201650851s)
--- PASS: TestErrorSpam/stop (5.03s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/18779-13391/.minikube/files/etc/test/nested/copy/20724/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (99.41s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-960026 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0501 02:21:24.422306   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/client.crt: no such file or directory
E0501 02:21:24.428120   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/client.crt: no such file or directory
E0501 02:21:24.438368   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/client.crt: no such file or directory
E0501 02:21:24.458697   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/client.crt: no such file or directory
E0501 02:21:24.499025   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/client.crt: no such file or directory
E0501 02:21:24.579351   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/client.crt: no such file or directory
E0501 02:21:24.739749   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/client.crt: no such file or directory
E0501 02:21:25.060426   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/client.crt: no such file or directory
E0501 02:21:25.701490   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/client.crt: no such file or directory
E0501 02:21:26.981882   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/client.crt: no such file or directory
E0501 02:21:29.542535   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/client.crt: no such file or directory
E0501 02:21:34.662682   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/client.crt: no such file or directory
E0501 02:21:44.903049   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/client.crt: no such file or directory
E0501 02:22:05.383211   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-960026 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m39.406166607s)
--- PASS: TestFunctional/serial/StartWithProxy (99.41s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (374.88s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-960026 --alsologtostderr -v=8
E0501 02:22:46.343945   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/client.crt: no such file or directory
E0501 02:24:08.265033   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/client.crt: no such file or directory
E0501 02:26:24.419274   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/client.crt: no such file or directory
E0501 02:26:52.106383   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-960026 --alsologtostderr -v=8: (6m14.879322756s)
functional_test.go:659: soft start took 6m14.879923845s for "functional-960026" cluster.
--- PASS: TestFunctional/serial/SoftStart (374.88s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-960026 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.36s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-960026 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-960026 cache add registry.k8s.io/pause:3.1: (1.060427684s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-960026 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-960026 cache add registry.k8s.io/pause:3.3: (1.213661363s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-960026 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-960026 cache add registry.k8s.io/pause:latest: (1.090470694s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.36s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.26s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-960026 /tmp/TestFunctionalserialCacheCmdcacheadd_local1019979828/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-960026 cache add minikube-local-cache-test:functional-960026
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-960026 cache add minikube-local-cache-test:functional-960026: (1.905311872s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-960026 cache delete minikube-local-cache-test:functional-960026
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-960026
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.26s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-960026 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.66s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-960026 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-960026 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-960026 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (221.786245ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-960026 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-960026 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.66s)
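The cache subcommands keep a host-side image cache and push it into the node; cache reload restores anything removed from the node's image store, which is exactly the sequence logged above. Condensed, against a hypothetical profile func-demo:

  minikube -p func-demo cache add registry.k8s.io/pause:3.3
  minikube cache list

  # Simulate image loss inside the node, then restore it from the cache:
  minikube -p func-demo ssh sudo crictl rmi registry.k8s.io/pause:3.3
  minikube -p func-demo cache reload
  minikube -p func-demo ssh sudo crictl inspecti registry.k8s.io/pause:3.3

  minikube cache delete registry.k8s.io/pause:3.3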

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-960026 kubectl -- --context functional-960026 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-960026 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
TestFunctional/serial/ExtraConfig (56.58s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-960026 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-960026 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (56.576804044s)
functional_test.go:757: restart took 56.576921147s for "functional-960026" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (56.58s)
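--extra-config forwards component flags through to kubeadm; here it turns on an extra admission plugin and the restart waits for every component to come back. A minimal sketch on an existing profile (name hypothetical; the grep is one way to confirm the flag landed in the static pod manifest):

  minikube start -p func-demo \
    --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
    --wait=all

  minikube -p func-demo ssh \
    "sudo grep enable-admission-plugins /etc/kubernetes/manifests/kube-apiserver.yaml"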

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-960026 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.7s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-960026 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-960026 logs: (1.694935545s)
--- PASS: TestFunctional/serial/LogsCmd (1.70s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.7s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-960026 logs --file /tmp/TestFunctionalserialLogsFileCmd625351319/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-960026 logs --file /tmp/TestFunctionalserialLogsFileCmd625351319/001/logs.txt: (1.697049704s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.70s)

                                                
                                    
TestFunctional/serial/InvalidService (4.13s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-960026 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-960026
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-960026: exit status 115 (312.816251ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.179:32403 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-960026 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.13s)
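minikube service refuses to open a Service that has no running endpoints and exits non-zero (status 115 above) with an SVC_UNREACHABLE message. The failure path looks roughly like this, assuming a manifest such as the repo's testdata/invalidsvc.yaml whose selector matches no pods:

  kubectl --context func-demo apply -f testdata/invalidsvc.yaml
  minikube service invalid-svc -p func-demo
  echo "exit status: $?"    # non-zero; 115 in the run above
  kubectl --context func-demo delete -f testdata/invalidsvc.yaml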

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-960026 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-960026 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-960026 config get cpus: exit status 14 (68.115566ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-960026 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-960026 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-960026 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-960026 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-960026 config get cpus: exit status 14 (59.074404ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.39s)
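minikube config stores per-user defaults; config get on a key that is not set exits with status 14, which the test asserts twice above (before setting cpus and after unsetting it). Roughly:

  minikube config set cpus 2
  minikube config get cpus     # prints 2
  minikube config unset cpus
  minikube config get cpus     # exits 14: key not found in config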

                                                
                                    
TestFunctional/parallel/DashboardCmd (14.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-960026 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-960026 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 31831: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (14.54s)

                                                
                                    
TestFunctional/parallel/DryRun (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-960026 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-960026 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (141.46655ms)

                                                
                                                
-- stdout --
	* [functional-960026] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18779
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18779-13391/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18779-13391/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0501 02:30:29.149337   31600 out.go:291] Setting OutFile to fd 1 ...
	I0501 02:30:29.150705   31600 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:30:29.150729   31600 out.go:304] Setting ErrFile to fd 2...
	I0501 02:30:29.150737   31600 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:30:29.151195   31600 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18779-13391/.minikube/bin
	I0501 02:30:29.151794   31600 out.go:298] Setting JSON to false
	I0501 02:30:29.152643   31600 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4372,"bootTime":1714526257,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0501 02:30:29.152700   31600 start.go:139] virtualization: kvm guest
	I0501 02:30:29.154752   31600 out.go:177] * [functional-960026] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0501 02:30:29.156591   31600 out.go:177]   - MINIKUBE_LOCATION=18779
	I0501 02:30:29.156598   31600 notify.go:220] Checking for updates...
	I0501 02:30:29.158044   31600 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0501 02:30:29.159186   31600 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18779-13391/kubeconfig
	I0501 02:30:29.160413   31600 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18779-13391/.minikube
	I0501 02:30:29.161719   31600 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0501 02:30:29.163039   31600 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0501 02:30:29.164682   31600 config.go:182] Loaded profile config "functional-960026": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 02:30:29.165225   31600 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:30:29.165268   31600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:30:29.181215   31600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37761
	I0501 02:30:29.181595   31600 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:30:29.182136   31600 main.go:141] libmachine: Using API Version  1
	I0501 02:30:29.182159   31600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:30:29.182464   31600 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:30:29.182603   31600 main.go:141] libmachine: (functional-960026) Calling .DriverName
	I0501 02:30:29.182851   31600 driver.go:392] Setting default libvirt URI to qemu:///system
	I0501 02:30:29.183159   31600 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:30:29.183185   31600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:30:29.197326   31600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41837
	I0501 02:30:29.197665   31600 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:30:29.198136   31600 main.go:141] libmachine: Using API Version  1
	I0501 02:30:29.198160   31600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:30:29.198501   31600 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:30:29.198720   31600 main.go:141] libmachine: (functional-960026) Calling .DriverName
	I0501 02:30:29.229386   31600 out.go:177] * Using the kvm2 driver based on existing profile
	I0501 02:30:29.230862   31600 start.go:297] selected driver: kvm2
	I0501 02:30:29.230875   31600 start.go:901] validating driver "kvm2" against &{Name:functional-960026 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-960026 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.179 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 02:30:29.230971   31600 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0501 02:30:29.232913   31600 out.go:177] 
	W0501 02:30:29.234067   31600 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0501 02:30:29.235358   31600 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-960026 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.51s)
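--dry-run validates the requested configuration without touching the VM, so an undersized --memory request fails immediately with exit code 23 (RSRC_INSUFFICIENT_REQ_MEMORY) while a sane request validates cleanly. Against an existing profile (name hypothetical):

  # Fails validation: 250MB is below the 1800MB usable minimum.
  minikube start -p func-demo --dry-run --memory 250MB --alsologtostderr \
    --driver=kvm2 --container-runtime=crio
  echo $?    # 23 in the run above

  # Without the memory override the same dry run succeeds:
  minikube start -p func-demo --dry-run --alsologtostderr -v=1 \
    --driver=kvm2 --container-runtime=crio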

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-960026 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-960026 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (156.91502ms)

                                                
                                                
-- stdout --
	* [functional-960026] minikube v1.33.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18779
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18779-13391/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18779-13391/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0501 02:30:29.674959   31715 out.go:291] Setting OutFile to fd 1 ...
	I0501 02:30:29.675211   31715 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:30:29.675222   31715 out.go:304] Setting ErrFile to fd 2...
	I0501 02:30:29.675229   31715 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:30:29.675520   31715 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18779-13391/.minikube/bin
	I0501 02:30:29.676037   31715 out.go:298] Setting JSON to false
	I0501 02:30:29.676964   31715 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4373,"bootTime":1714526257,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0501 02:30:29.677026   31715 start.go:139] virtualization: kvm guest
	I0501 02:30:29.679041   31715 out.go:177] * [functional-960026] minikube v1.33.0 sur Ubuntu 20.04 (kvm/amd64)
	I0501 02:30:29.680324   31715 out.go:177]   - MINIKUBE_LOCATION=18779
	I0501 02:30:29.681582   31715 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0501 02:30:29.680399   31715 notify.go:220] Checking for updates...
	I0501 02:30:29.683971   31715 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18779-13391/kubeconfig
	I0501 02:30:29.685299   31715 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18779-13391/.minikube
	I0501 02:30:29.686628   31715 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0501 02:30:29.687979   31715 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0501 02:30:29.689611   31715 config.go:182] Loaded profile config "functional-960026": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 02:30:29.689977   31715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:30:29.690015   31715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:30:29.704148   31715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38851
	I0501 02:30:29.704610   31715 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:30:29.705188   31715 main.go:141] libmachine: Using API Version  1
	I0501 02:30:29.705213   31715 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:30:29.705561   31715 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:30:29.705731   31715 main.go:141] libmachine: (functional-960026) Calling .DriverName
	I0501 02:30:29.706003   31715 driver.go:392] Setting default libvirt URI to qemu:///system
	I0501 02:30:29.706375   31715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 02:30:29.706435   31715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:30:29.723182   31715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34591
	I0501 02:30:29.723582   31715 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:30:29.724151   31715 main.go:141] libmachine: Using API Version  1
	I0501 02:30:29.724171   31715 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:30:29.724643   31715 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:30:29.724841   31715 main.go:141] libmachine: (functional-960026) Calling .DriverName
	I0501 02:30:29.759058   31715 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0501 02:30:29.760313   31715 start.go:297] selected driver: kvm2
	I0501 02:30:29.760332   31715 start.go:901] validating driver "kvm2" against &{Name:functional-960026 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-960026 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.179 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 02:30:29.760426   31715 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0501 02:30:29.762496   31715 out.go:177] 
	W0501 02:30:29.763791   31715 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0501 02:30:29.765021   31715 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-960026 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-960026 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-960026 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.07s)
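minikube status accepts a Go-template format string and JSON output, both exercised above. For example:

  minikube -p func-demo status
  minikube -p func-demo status \
    -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
  minikube -p func-demo status -o json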

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (15.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-960026 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-960026 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-m27lx" [b7b71124-6529-4b69-9d48-6a2c2d5865c0] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-m27lx" [b7b71124-6529-4b69-9d48-6a2c2d5865c0] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 15.004443595s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-960026 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.179:30837
functional_test.go:1671: http://192.168.39.179:30837: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-57b4589c47-m27lx

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.179:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.179:30837
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (15.68s)
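The connectivity check is a plain NodePort round-trip: deploy an echo server, expose it, resolve its URL through minikube, and curl it. A sketch mirroring the logged commands (profile name hypothetical):

  kubectl --context func-demo create deployment hello-node-connect \
    --image=registry.k8s.io/echoserver:1.8
  kubectl --context func-demo expose deployment hello-node-connect \
    --type=NodePort --port=8080
  kubectl --context func-demo rollout status deployment/hello-node-connect

  URL=$(minikube -p func-demo service hello-node-connect --url)
  curl -s "$URL"    # echoserver reports the pod hostname and request headers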

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-960026 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-960026 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.18s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (63.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [5a0f4dcd-4766-47fb-8509-5e02d8949853] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.005962042s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-960026 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-960026 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-960026 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-960026 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-960026 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [ca3c4384-59f3-4630-9a84-353a737c5d80] Pending
helpers_test.go:344: "sp-pod" [ca3c4384-59f3-4630-9a84-353a737c5d80] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [ca3c4384-59f3-4630-9a84-353a737c5d80] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 47.003729827s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-960026 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-960026 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-960026 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [be7420e0-6d05-4b9e-9068-7963fb96fb94] Pending
helpers_test.go:344: "sp-pod" [be7420e0-6d05-4b9e-9068-7963fb96fb94] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [be7420e0-6d05-4b9e-9068-7963fb96fb94] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.005174649s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-960026 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (63.15s)
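The PVC flow proves data outlives a pod: the default storage-provisioner binds the claim, one pod writes a file into the mount, and a recreated pod still sees it. Mirroring the testdata manifests used above (profile name hypothetical):

  kubectl --context func-demo apply -f testdata/storage-provisioner/pvc.yaml
  kubectl --context func-demo apply -f testdata/storage-provisioner/pod.yaml
  kubectl --context func-demo wait --for=condition=Ready pod/sp-pod --timeout=180s

  kubectl --context func-demo exec sp-pod -- touch /tmp/mount/foo

  # Delete only the pod and recreate it; the claim, and the file, survive:
  kubectl --context func-demo delete -f testdata/storage-provisioner/pod.yaml
  kubectl --context func-demo apply -f testdata/storage-provisioner/pod.yaml
  kubectl --context func-demo wait --for=condition=Ready pod/sp-pod --timeout=180s
  kubectl --context func-demo exec sp-pod -- ls /tmp/mount    # foo is still there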

                                                
                                    
TestFunctional/parallel/SSHCmd (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-960026 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-960026 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.47s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-960026 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-960026 ssh -n functional-960026 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-960026 cp functional-960026:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3785130396/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-960026 ssh -n functional-960026 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-960026 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-960026 ssh -n functional-960026 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.46s)
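minikube cp copies files between the host and a node, with the node side written as <node>:<path>; destinations that do not exist yet are created, as the last pair of commands above shows. Mirroring those calls with a hypothetical profile:

  minikube -p func-demo cp testdata/cp-test.txt /home/docker/cp-test.txt
  minikube -p func-demo ssh -n func-demo "sudo cat /home/docker/cp-test.txt"

  # Copy back out of the node:
  minikube -p func-demo cp func-demo:/home/docker/cp-test.txt /tmp/cp-test.txt

  # Copy into a directory that does not exist yet inside the node:
  minikube -p func-demo cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt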

                                                
                                    
TestFunctional/parallel/MySQL (23.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-960026 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-dq7mk" [30e6638d-8624-4040-b7d3-a5cb1507238f] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-dq7mk" [30e6638d-8624-4040-b7d3-a5cb1507238f] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 23.004924794s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-960026 exec mysql-64454c8b5c-dq7mk -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (23.56s)
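The query at the end is an ordinary kubectl exec into the generated mysql pod; run by hand, the only variable part is the pod name, whose middle segment is the deployment's pod-template hash:

  $ kubectl --context functional-960026 replace --force -f testdata/mysql.yaml
  $ kubectl --context functional-960026 get pods -l app=mysql    # find the generated pod name
  $ kubectl --context functional-960026 exec mysql-64454c8b5c-dq7mk -- mysql -ppassword -e "show databases;"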

                                                
                                    
TestFunctional/parallel/FileSync (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/20724/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-960026 ssh "sudo cat /etc/test/nested/copy/20724/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.22s)
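The numeric path component (20724 here) is per-run; it appears to be derived from the test process ID, so the exact path changes between runs. The check itself is just a cat inside the guest:

  $ out/minikube-linux-amd64 -p functional-960026 ssh "sudo cat /etc/test/nested/copy/20724/hosts"
  Test file for checking file sync process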

                                                
                                    
TestFunctional/parallel/CertSync (1.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/20724.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-960026 ssh "sudo cat /etc/ssl/certs/20724.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/20724.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-960026 ssh "sudo cat /usr/share/ca-certificates/20724.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-960026 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/207242.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-960026 ssh "sudo cat /etc/ssl/certs/207242.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/207242.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-960026 ssh "sudo cat /usr/share/ca-certificates/207242.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-960026 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.45s)
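Cert sync pushes the same per-run certificates (20724.pem and 207242.pem) into both /etc/ssl/certs and /usr/share/ca-certificates, and the 51391683.0 / 3ec20f2e.0 entries are the OpenSSL subject-hash names the trust store uses to look them up, so all six cats should return PEM data. A two-line spot check for the first certificate:

  $ out/minikube-linux-amd64 -p functional-960026 ssh "sudo cat /etc/ssl/certs/20724.pem"
  $ out/minikube-linux-amd64 -p functional-960026 ssh "sudo cat /etc/ssl/certs/51391683.0"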

                                                
                                    
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-960026 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-960026 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-960026 ssh "sudo systemctl is-active docker": exit status 1 (232.105ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-960026 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-960026 ssh "sudo systemctl is-active containerd": exit status 1 (235.165077ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.47s)
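The non-zero exits above are the expected outcome: with cri-o as the configured runtime, docker and containerd must both report inactive, and systemctl is-active exits with status 3 for an inactive unit, which ssh then propagates. A quick way to probe all three runtimes at once (a sketch; the trailing true keeps the overall exit status zero):

  $ out/minikube-linux-amd64 -p functional-960026 ssh "sudo systemctl is-active crio docker containerd; true"
  active
  inactive
  inactive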

                                                
                                    
TestFunctional/parallel/License (0.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.67s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-960026 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-960026 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.0
registry.k8s.io/kube-proxy:v1.30.0
registry.k8s.io/kube-controller-manager:v1.30.0
registry.k8s.io/kube-apiserver:v1.30.0
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-960026
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-960026
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20240202-8f1494ea
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-960026 image ls --format short --alsologtostderr:
I0501 02:30:36.429332   31978 out.go:291] Setting OutFile to fd 1 ...
I0501 02:30:36.429575   31978 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0501 02:30:36.429584   31978 out.go:304] Setting ErrFile to fd 2...
I0501 02:30:36.429588   31978 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0501 02:30:36.429783   31978 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18779-13391/.minikube/bin
I0501 02:30:36.430437   31978 config.go:182] Loaded profile config "functional-960026": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0501 02:30:36.430577   31978 config.go:182] Loaded profile config "functional-960026": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0501 02:30:36.430921   31978 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0501 02:30:36.430958   31978 main.go:141] libmachine: Launching plugin server for driver kvm2
I0501 02:30:36.445971   31978 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45451
I0501 02:30:36.446446   31978 main.go:141] libmachine: () Calling .GetVersion
I0501 02:30:36.446947   31978 main.go:141] libmachine: Using API Version  1
I0501 02:30:36.446968   31978 main.go:141] libmachine: () Calling .SetConfigRaw
I0501 02:30:36.447312   31978 main.go:141] libmachine: () Calling .GetMachineName
I0501 02:30:36.447496   31978 main.go:141] libmachine: (functional-960026) Calling .GetState
I0501 02:30:36.449380   31978 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0501 02:30:36.449435   31978 main.go:141] libmachine: Launching plugin server for driver kvm2
I0501 02:30:36.463718   31978 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32939
I0501 02:30:36.464146   31978 main.go:141] libmachine: () Calling .GetVersion
I0501 02:30:36.464684   31978 main.go:141] libmachine: Using API Version  1
I0501 02:30:36.464706   31978 main.go:141] libmachine: () Calling .SetConfigRaw
I0501 02:30:36.465042   31978 main.go:141] libmachine: () Calling .GetMachineName
I0501 02:30:36.465262   31978 main.go:141] libmachine: (functional-960026) Calling .DriverName
I0501 02:30:36.465495   31978 ssh_runner.go:195] Run: systemctl --version
I0501 02:30:36.465516   31978 main.go:141] libmachine: (functional-960026) Calling .GetSSHHostname
I0501 02:30:36.468124   31978 main.go:141] libmachine: (functional-960026) DBG | domain functional-960026 has defined MAC address 52:54:00:6c:71:bc in network mk-functional-960026
I0501 02:30:36.468602   31978 main.go:141] libmachine: (functional-960026) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:71:bc", ip: ""} in network mk-functional-960026: {Iface:virbr1 ExpiryTime:2024-05-01 03:21:04 +0000 UTC Type:0 Mac:52:54:00:6c:71:bc Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:functional-960026 Clientid:01:52:54:00:6c:71:bc}
I0501 02:30:36.468634   31978 main.go:141] libmachine: (functional-960026) DBG | domain functional-960026 has defined IP address 192.168.39.179 and MAC address 52:54:00:6c:71:bc in network mk-functional-960026
I0501 02:30:36.468738   31978 main.go:141] libmachine: (functional-960026) Calling .GetSSHPort
I0501 02:30:36.468922   31978 main.go:141] libmachine: (functional-960026) Calling .GetSSHKeyPath
I0501 02:30:36.469071   31978 main.go:141] libmachine: (functional-960026) Calling .GetSSHUsername
I0501 02:30:36.469214   31978 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/functional-960026/id_rsa Username:docker}
I0501 02:30:36.576118   31978 ssh_runner.go:195] Run: sudo crictl images --output json
I0501 02:30:36.663156   31978 main.go:141] libmachine: Making call to close driver server
I0501 02:30:36.663171   31978 main.go:141] libmachine: (functional-960026) Calling .Close
I0501 02:30:36.663477   31978 main.go:141] libmachine: Successfully made call to close driver server
I0501 02:30:36.663492   31978 main.go:141] libmachine: Making call to close connection to plugin binary
I0501 02:30:36.663502   31978 main.go:141] libmachine: Making call to close driver server
I0501 02:30:36.663510   31978 main.go:141] libmachine: (functional-960026) Calling .Close
I0501 02:30:36.663742   31978 main.go:141] libmachine: Successfully made call to close driver server
I0501 02:30:36.663760   31978 main.go:141] libmachine: Making call to close connection to plugin binary
I0501 02:30:36.663767   31978 main.go:141] libmachine: (functional-960026) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)
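As the ssh_runner line in the stderr shows, image ls on a cri-o cluster is answered by crictl images inside the guest; the short format then prints only the repo:tag names. Two equivalent views, one from the host and one from inside the VM:

  $ out/minikube-linux-amd64 -p functional-960026 image ls --format short
  $ out/minikube-linux-amd64 -p functional-960026 ssh "sudo crictl images"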

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-960026 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-960026 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| docker.io/kindest/kindnetd              | v20240202-8f1494ea | 4950bb10b3f87 | 65.3MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/kube-scheduler          | v1.30.0            | 259c8277fcbbc | 63MB   |
| registry.k8s.io/kube-controller-manager | v1.30.0            | c7aad43836fa5 | 112MB  |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/etcd                    | 3.5.12-0           | 3861cfcd7c04c | 151MB  |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| registry.k8s.io/kube-apiserver          | v1.30.0            | c42f13656d0b2 | 118MB  |
| registry.k8s.io/kube-proxy              | v1.30.0            | a0bf559e280cf | 85.9MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| docker.io/library/nginx                 | latest             | 7383c266ef252 | 192MB  |
| gcr.io/google-containers/addon-resizer  | functional-960026  | ffd4cfbbe753e | 34.1MB |
| localhost/minikube-local-cache-test     | functional-960026  | 34efbaae515ab | 3.33kB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-960026 image ls --format table --alsologtostderr:
I0501 02:30:41.860112   32551 out.go:291] Setting OutFile to fd 1 ...
I0501 02:30:41.860285   32551 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0501 02:30:41.860298   32551 out.go:304] Setting ErrFile to fd 2...
I0501 02:30:41.860304   32551 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0501 02:30:41.860608   32551 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18779-13391/.minikube/bin
I0501 02:30:41.861383   32551 config.go:182] Loaded profile config "functional-960026": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0501 02:30:41.861489   32551 config.go:182] Loaded profile config "functional-960026": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0501 02:30:41.861944   32551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0501 02:30:41.861993   32551 main.go:141] libmachine: Launching plugin server for driver kvm2
I0501 02:30:41.876716   32551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45161
I0501 02:30:41.877232   32551 main.go:141] libmachine: () Calling .GetVersion
I0501 02:30:41.877833   32551 main.go:141] libmachine: Using API Version  1
I0501 02:30:41.877859   32551 main.go:141] libmachine: () Calling .SetConfigRaw
I0501 02:30:41.878223   32551 main.go:141] libmachine: () Calling .GetMachineName
I0501 02:30:41.878428   32551 main.go:141] libmachine: (functional-960026) Calling .GetState
I0501 02:30:41.880275   32551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0501 02:30:41.880324   32551 main.go:141] libmachine: Launching plugin server for driver kvm2
I0501 02:30:41.894795   32551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44911
I0501 02:30:41.895243   32551 main.go:141] libmachine: () Calling .GetVersion
I0501 02:30:41.895688   32551 main.go:141] libmachine: Using API Version  1
I0501 02:30:41.895708   32551 main.go:141] libmachine: () Calling .SetConfigRaw
I0501 02:30:41.896061   32551 main.go:141] libmachine: () Calling .GetMachineName
I0501 02:30:41.896258   32551 main.go:141] libmachine: (functional-960026) Calling .DriverName
I0501 02:30:41.896448   32551 ssh_runner.go:195] Run: systemctl --version
I0501 02:30:41.896473   32551 main.go:141] libmachine: (functional-960026) Calling .GetSSHHostname
I0501 02:30:41.899412   32551 main.go:141] libmachine: (functional-960026) DBG | domain functional-960026 has defined MAC address 52:54:00:6c:71:bc in network mk-functional-960026
I0501 02:30:41.899898   32551 main.go:141] libmachine: (functional-960026) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:71:bc", ip: ""} in network mk-functional-960026: {Iface:virbr1 ExpiryTime:2024-05-01 03:21:04 +0000 UTC Type:0 Mac:52:54:00:6c:71:bc Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:functional-960026 Clientid:01:52:54:00:6c:71:bc}
I0501 02:30:41.899923   32551 main.go:141] libmachine: (functional-960026) DBG | domain functional-960026 has defined IP address 192.168.39.179 and MAC address 52:54:00:6c:71:bc in network mk-functional-960026
I0501 02:30:41.900051   32551 main.go:141] libmachine: (functional-960026) Calling .GetSSHPort
I0501 02:30:41.900221   32551 main.go:141] libmachine: (functional-960026) Calling .GetSSHKeyPath
I0501 02:30:41.900378   32551 main.go:141] libmachine: (functional-960026) Calling .GetSSHUsername
I0501 02:30:41.900536   32551 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/functional-960026/id_rsa Username:docker}
I0501 02:30:41.987557   32551 ssh_runner.go:195] Run: sudo crictl images --output json
I0501 02:30:42.057636   32551 main.go:141] libmachine: Making call to close driver server
I0501 02:30:42.057657   32551 main.go:141] libmachine: (functional-960026) Calling .Close
I0501 02:30:42.057961   32551 main.go:141] libmachine: (functional-960026) DBG | Closing plugin on server side
I0501 02:30:42.057998   32551 main.go:141] libmachine: Successfully made call to close driver server
I0501 02:30:42.058011   32551 main.go:141] libmachine: Making call to close connection to plugin binary
I0501 02:30:42.058020   32551 main.go:141] libmachine: Making call to close driver server
I0501 02:30:42.058031   32551 main.go:141] libmachine: (functional-960026) Calling .Close
I0501 02:30:42.058243   32551 main.go:141] libmachine: Successfully made call to close driver server
I0501 02:30:42.058260   32551 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-960026 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-960026 image ls --format json --alsologtostderr:
[{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-960026"],"size":"34114467"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":["registry.k8s.io/etcd@sh
a256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"150779692"},{"id":"a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b","repoDigests":["registry.k8s.io/kube-proxy@sha256:880f26b53295d384d2f1fed06aa4d58567e3038157f70a1151a7dd8ef8afaa68","registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210"],"repoTags":["registry.k8s.io/kube-proxy:v1.30.0"],"size":"85932953"},{"id":"259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced","repoDigests":["registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67","registry.k8s.io/kube-scheduler@sha256:d2c2a1d9de7a42d91bfedba5ed4f58126f9cff702d35419d78ce4e7cb07f3b7a"],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.0"],"size":"63026502"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b
0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5","repoDigests":["docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988","docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"],"repoTags":["docker.io/kindest/kindnetd:v20240202-8f1494ea"],"size":"65291810"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8
443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0","repoDigests":["registry.k8s.io/kube-apiserver@sha256:31282cf15b67192cd35f847715a9571f5dd4ac0e130290a408a866bd040bcd81","registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3"],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.0"],"size":"117609952"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry
.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe","registry.k8s.io/kube-controller-manager@sha256:b7622a0826b7690a307eea994e2abc918f35a27a08e30c37b58c9e3f8336a450"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.0"],"size":"112170310"},{"id":"7383c266ef252ad70806f3072ee8e63d2a16d1e6bafa6146a2da867fc7c41759","repoDigests":["docker.io/library/nginx@sha256:4d5a113fd08c4dd57aae6870942f8ab4a7d5fd1594b9749c4ae1b505cfd1e7d8","docker.io/library/nginx@sha256:ed6d2c43c8fbcd3eaa44c9dab6d94cb346234476230dc1681227aa72d07181ee"],"repoTags":["docker.io/library/nginx:latest"],"size":"191760844"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisi
oner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"34efbaae515abdf4af1dfc3bc081497566f566d6f292d1a5504edd44e28a677b","repoDigests":["localhost/minikube-local-cache-test@sha256:6f2122daf41540e768d2014a4a1f3b09628857966a9d8d75c9be141cfffdb10d"],"repoTags":["localhost/minikube-local-cache-test:functional-960026"],"size":"3330"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registr
y.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-960026 image ls --format json --alsologtostderr:
I0501 02:30:41.603428   32528 out.go:291] Setting OutFile to fd 1 ...
I0501 02:30:41.603536   32528 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0501 02:30:41.603547   32528 out.go:304] Setting ErrFile to fd 2...
I0501 02:30:41.603551   32528 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0501 02:30:41.603738   32528 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18779-13391/.minikube/bin
I0501 02:30:41.604265   32528 config.go:182] Loaded profile config "functional-960026": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0501 02:30:41.604366   32528 config.go:182] Loaded profile config "functional-960026": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0501 02:30:41.604717   32528 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0501 02:30:41.604757   32528 main.go:141] libmachine: Launching plugin server for driver kvm2
I0501 02:30:41.619843   32528 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45273
I0501 02:30:41.620233   32528 main.go:141] libmachine: () Calling .GetVersion
I0501 02:30:41.620848   32528 main.go:141] libmachine: Using API Version  1
I0501 02:30:41.620893   32528 main.go:141] libmachine: () Calling .SetConfigRaw
I0501 02:30:41.621260   32528 main.go:141] libmachine: () Calling .GetMachineName
I0501 02:30:41.621477   32528 main.go:141] libmachine: (functional-960026) Calling .GetState
I0501 02:30:41.623277   32528 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0501 02:30:41.623318   32528 main.go:141] libmachine: Launching plugin server for driver kvm2
I0501 02:30:41.637748   32528 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42557
I0501 02:30:41.638105   32528 main.go:141] libmachine: () Calling .GetVersion
I0501 02:30:41.638597   32528 main.go:141] libmachine: Using API Version  1
I0501 02:30:41.638621   32528 main.go:141] libmachine: () Calling .SetConfigRaw
I0501 02:30:41.638967   32528 main.go:141] libmachine: () Calling .GetMachineName
I0501 02:30:41.639164   32528 main.go:141] libmachine: (functional-960026) Calling .DriverName
I0501 02:30:41.639358   32528 ssh_runner.go:195] Run: systemctl --version
I0501 02:30:41.639380   32528 main.go:141] libmachine: (functional-960026) Calling .GetSSHHostname
I0501 02:30:41.642214   32528 main.go:141] libmachine: (functional-960026) DBG | domain functional-960026 has defined MAC address 52:54:00:6c:71:bc in network mk-functional-960026
I0501 02:30:41.642671   32528 main.go:141] libmachine: (functional-960026) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:71:bc", ip: ""} in network mk-functional-960026: {Iface:virbr1 ExpiryTime:2024-05-01 03:21:04 +0000 UTC Type:0 Mac:52:54:00:6c:71:bc Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:functional-960026 Clientid:01:52:54:00:6c:71:bc}
I0501 02:30:41.642702   32528 main.go:141] libmachine: (functional-960026) DBG | domain functional-960026 has defined IP address 192.168.39.179 and MAC address 52:54:00:6c:71:bc in network mk-functional-960026
I0501 02:30:41.642810   32528 main.go:141] libmachine: (functional-960026) Calling .GetSSHPort
I0501 02:30:41.642989   32528 main.go:141] libmachine: (functional-960026) Calling .GetSSHKeyPath
I0501 02:30:41.643120   32528 main.go:141] libmachine: (functional-960026) Calling .GetSSHUsername
I0501 02:30:41.643234   32528 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/functional-960026/id_rsa Username:docker}
I0501 02:30:41.725765   32528 ssh_runner.go:195] Run: sudo crictl images --output json
I0501 02:30:41.789725   32528 main.go:141] libmachine: Making call to close driver server
I0501 02:30:41.789742   32528 main.go:141] libmachine: (functional-960026) Calling .Close
I0501 02:30:41.790010   32528 main.go:141] libmachine: Successfully made call to close driver server
I0501 02:30:41.790032   32528 main.go:141] libmachine: Making call to close connection to plugin binary
I0501 02:30:41.790048   32528 main.go:141] libmachine: Making call to close driver server
I0501 02:30:41.790057   32528 main.go:141] libmachine: (functional-960026) Calling .Close
I0501 02:30:41.790480   32528 main.go:141] libmachine: (functional-960026) DBG | Closing plugin on server side
I0501 02:30:41.790500   32528 main.go:141] libmachine: Successfully made call to close driver server
I0501 02:30:41.790514   32528 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)
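The JSON output is a flat array of {id, repoDigests, repoTags, size} objects (size is a string of bytes), which makes it the convenient format for scripting. Two example pipelines, assuming jq is available on the host:

  $ out/minikube-linux-amd64 -p functional-960026 image ls --format json | jq -r '.[].repoTags[]'
  $ out/minikube-linux-amd64 -p functional-960026 image ls --format json | jq '[.[].size | tonumber] | add'   # total bytes across listed images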

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-960026 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-960026 image ls --format yaml --alsologtostderr:
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-960026
size: "34114467"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b
repoDigests:
- registry.k8s.io/kube-proxy@sha256:880f26b53295d384d2f1fed06aa4d58567e3038157f70a1151a7dd8ef8afaa68
- registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210
repoTags:
- registry.k8s.io/kube-proxy:v1.30.0
size: "85932953"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 34efbaae515abdf4af1dfc3bc081497566f566d6f292d1a5504edd44e28a677b
repoDigests:
- localhost/minikube-local-cache-test@sha256:6f2122daf41540e768d2014a4a1f3b09628857966a9d8d75c9be141cfffdb10d
repoTags:
- localhost/minikube-local-cache-test:functional-960026
size: "3330"
- id: c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:31282cf15b67192cd35f847715a9571f5dd4ac0e130290a408a866bd040bcd81
- registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.0
size: "117609952"
- id: c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe
- registry.k8s.io/kube-controller-manager@sha256:b7622a0826b7690a307eea994e2abc918f35a27a08e30c37b58c9e3f8336a450
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.0
size: "112170310"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests:
- registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "150779692"
- id: 259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67
- registry.k8s.io/kube-scheduler@sha256:d2c2a1d9de7a42d91bfedba5ed4f58126f9cff702d35419d78ce4e7cb07f3b7a
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.0
size: "63026502"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5
repoDigests:
- docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988
- docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac
repoTags:
- docker.io/kindest/kindnetd:v20240202-8f1494ea
size: "65291810"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 7383c266ef252ad70806f3072ee8e63d2a16d1e6bafa6146a2da867fc7c41759
repoDigests:
- docker.io/library/nginx@sha256:4d5a113fd08c4dd57aae6870942f8ab4a7d5fd1594b9749c4ae1b505cfd1e7d8
- docker.io/library/nginx@sha256:ed6d2c43c8fbcd3eaa44c9dab6d94cb346234476230dc1681227aa72d07181ee
repoTags:
- docker.io/library/nginx:latest
size: "191760844"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-960026 image ls --format yaml --alsologtostderr:
I0501 02:30:36.722090   32002 out.go:291] Setting OutFile to fd 1 ...
I0501 02:30:36.722197   32002 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0501 02:30:36.722206   32002 out.go:304] Setting ErrFile to fd 2...
I0501 02:30:36.722210   32002 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0501 02:30:36.722453   32002 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18779-13391/.minikube/bin
I0501 02:30:36.723003   32002 config.go:182] Loaded profile config "functional-960026": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0501 02:30:36.723097   32002 config.go:182] Loaded profile config "functional-960026": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0501 02:30:36.723441   32002 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0501 02:30:36.723489   32002 main.go:141] libmachine: Launching plugin server for driver kvm2
I0501 02:30:36.739148   32002 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35767
I0501 02:30:36.739593   32002 main.go:141] libmachine: () Calling .GetVersion
I0501 02:30:36.740155   32002 main.go:141] libmachine: Using API Version  1
I0501 02:30:36.740175   32002 main.go:141] libmachine: () Calling .SetConfigRaw
I0501 02:30:36.740569   32002 main.go:141] libmachine: () Calling .GetMachineName
I0501 02:30:36.740823   32002 main.go:141] libmachine: (functional-960026) Calling .GetState
I0501 02:30:36.742836   32002 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0501 02:30:36.742886   32002 main.go:141] libmachine: Launching plugin server for driver kvm2
I0501 02:30:36.758630   32002 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42523
I0501 02:30:36.759054   32002 main.go:141] libmachine: () Calling .GetVersion
I0501 02:30:36.759638   32002 main.go:141] libmachine: Using API Version  1
I0501 02:30:36.759674   32002 main.go:141] libmachine: () Calling .SetConfigRaw
I0501 02:30:36.760008   32002 main.go:141] libmachine: () Calling .GetMachineName
I0501 02:30:36.760198   32002 main.go:141] libmachine: (functional-960026) Calling .DriverName
I0501 02:30:36.760388   32002 ssh_runner.go:195] Run: systemctl --version
I0501 02:30:36.760413   32002 main.go:141] libmachine: (functional-960026) Calling .GetSSHHostname
I0501 02:30:36.763083   32002 main.go:141] libmachine: (functional-960026) DBG | domain functional-960026 has defined MAC address 52:54:00:6c:71:bc in network mk-functional-960026
I0501 02:30:36.763491   32002 main.go:141] libmachine: (functional-960026) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:71:bc", ip: ""} in network mk-functional-960026: {Iface:virbr1 ExpiryTime:2024-05-01 03:21:04 +0000 UTC Type:0 Mac:52:54:00:6c:71:bc Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:functional-960026 Clientid:01:52:54:00:6c:71:bc}
I0501 02:30:36.763515   32002 main.go:141] libmachine: (functional-960026) DBG | domain functional-960026 has defined IP address 192.168.39.179 and MAC address 52:54:00:6c:71:bc in network mk-functional-960026
I0501 02:30:36.763723   32002 main.go:141] libmachine: (functional-960026) Calling .GetSSHPort
I0501 02:30:36.763875   32002 main.go:141] libmachine: (functional-960026) Calling .GetSSHKeyPath
I0501 02:30:36.764032   32002 main.go:141] libmachine: (functional-960026) Calling .GetSSHUsername
I0501 02:30:36.764194   32002 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/functional-960026/id_rsa Username:docker}
I0501 02:30:36.857635   32002 ssh_runner.go:195] Run: sudo crictl images --output json
I0501 02:30:36.934821   32002 main.go:141] libmachine: Making call to close driver server
I0501 02:30:36.934837   32002 main.go:141] libmachine: (functional-960026) Calling .Close
I0501 02:30:36.935118   32002 main.go:141] libmachine: Successfully made call to close driver server
I0501 02:30:36.935135   32002 main.go:141] libmachine: Making call to close connection to plugin binary
I0501 02:30:36.935146   32002 main.go:141] libmachine: (functional-960026) DBG | Closing plugin on server side
I0501 02:30:36.935157   32002 main.go:141] libmachine: Making call to close driver server
I0501 02:30:36.935167   32002 main.go:141] libmachine: (functional-960026) Calling .Close
I0501 02:30:36.935386   32002 main.go:141] libmachine: Successfully made call to close driver server
I0501 02:30:36.935401   32002 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (6.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-960026 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-960026 ssh pgrep buildkitd: exit status 1 (271.304592ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-960026 image build -t localhost/my-image:functional-960026 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-960026 image build -t localhost/my-image:functional-960026 testdata/build --alsologtostderr: (5.787373654s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-960026 image build -t localhost/my-image:functional-960026 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 502856f2371
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-960026
--> ecf78b27b8d
Successfully tagged localhost/my-image:functional-960026
ecf78b27b8d6514ba748daa0be0f370e4f54b580ba5c58507f45e650eb5a662f
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-960026 image build -t localhost/my-image:functional-960026 testdata/build --alsologtostderr:
I0501 02:30:37.275031   32094 out.go:291] Setting OutFile to fd 1 ...
I0501 02:30:37.275231   32094 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0501 02:30:37.275244   32094 out.go:304] Setting ErrFile to fd 2...
I0501 02:30:37.275251   32094 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0501 02:30:37.275551   32094 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18779-13391/.minikube/bin
I0501 02:30:37.276341   32094 config.go:182] Loaded profile config "functional-960026": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0501 02:30:37.276903   32094 config.go:182] Loaded profile config "functional-960026": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0501 02:30:37.277258   32094 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0501 02:30:37.277290   32094 main.go:141] libmachine: Launching plugin server for driver kvm2
I0501 02:30:37.292397   32094 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34849
I0501 02:30:37.292853   32094 main.go:141] libmachine: () Calling .GetVersion
I0501 02:30:37.293425   32094 main.go:141] libmachine: Using API Version  1
I0501 02:30:37.293452   32094 main.go:141] libmachine: () Calling .SetConfigRaw
I0501 02:30:37.293817   32094 main.go:141] libmachine: () Calling .GetMachineName
I0501 02:30:37.294007   32094 main.go:141] libmachine: (functional-960026) Calling .GetState
I0501 02:30:37.295856   32094 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0501 02:30:37.295900   32094 main.go:141] libmachine: Launching plugin server for driver kvm2
I0501 02:30:37.310527   32094 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42627
I0501 02:30:37.310971   32094 main.go:141] libmachine: () Calling .GetVersion
I0501 02:30:37.311579   32094 main.go:141] libmachine: Using API Version  1
I0501 02:30:37.311615   32094 main.go:141] libmachine: () Calling .SetConfigRaw
I0501 02:30:37.311970   32094 main.go:141] libmachine: () Calling .GetMachineName
I0501 02:30:37.312142   32094 main.go:141] libmachine: (functional-960026) Calling .DriverName
I0501 02:30:37.312356   32094 ssh_runner.go:195] Run: systemctl --version
I0501 02:30:37.312378   32094 main.go:141] libmachine: (functional-960026) Calling .GetSSHHostname
I0501 02:30:37.315213   32094 main.go:141] libmachine: (functional-960026) DBG | domain functional-960026 has defined MAC address 52:54:00:6c:71:bc in network mk-functional-960026
I0501 02:30:37.315586   32094 main.go:141] libmachine: (functional-960026) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:71:bc", ip: ""} in network mk-functional-960026: {Iface:virbr1 ExpiryTime:2024-05-01 03:21:04 +0000 UTC Type:0 Mac:52:54:00:6c:71:bc Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:functional-960026 Clientid:01:52:54:00:6c:71:bc}
I0501 02:30:37.315619   32094 main.go:141] libmachine: (functional-960026) DBG | domain functional-960026 has defined IP address 192.168.39.179 and MAC address 52:54:00:6c:71:bc in network mk-functional-960026
I0501 02:30:37.315807   32094 main.go:141] libmachine: (functional-960026) Calling .GetSSHPort
I0501 02:30:37.315975   32094 main.go:141] libmachine: (functional-960026) Calling .GetSSHKeyPath
I0501 02:30:37.316133   32094 main.go:141] libmachine: (functional-960026) Calling .GetSSHUsername
I0501 02:30:37.316288   32094 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/functional-960026/id_rsa Username:docker}
I0501 02:30:37.415796   32094 build_images.go:161] Building image from path: /tmp/build.2879251121.tar
I0501 02:30:37.415865   32094 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0501 02:30:37.429667   32094 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2879251121.tar
I0501 02:30:37.445177   32094 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2879251121.tar: stat -c "%s %y" /var/lib/minikube/build/build.2879251121.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2879251121.tar': No such file or directory
I0501 02:30:37.445206   32094 ssh_runner.go:362] scp /tmp/build.2879251121.tar --> /var/lib/minikube/build/build.2879251121.tar (3072 bytes)
I0501 02:30:37.518747   32094 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2879251121
I0501 02:30:37.550587   32094 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2879251121 -xf /var/lib/minikube/build/build.2879251121.tar
I0501 02:30:37.590677   32094 crio.go:315] Building image: /var/lib/minikube/build/build.2879251121
I0501 02:30:37.590748   32094 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-960026 /var/lib/minikube/build/build.2879251121 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0501 02:30:42.964601   32094 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-960026 /var/lib/minikube/build/build.2879251121 --cgroup-manager=cgroupfs: (5.373814672s)
I0501 02:30:42.964684   32094 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2879251121
I0501 02:30:42.978886   32094 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2879251121.tar
I0501 02:30:42.994207   32094 build_images.go:217] Built localhost/my-image:functional-960026 from /tmp/build.2879251121.tar
I0501 02:30:42.994243   32094 build_images.go:133] succeeded building to: functional-960026
I0501 02:30:42.994248   32094 build_images.go:134] failed building to: 
I0501 02:30:42.994268   32094 main.go:141] libmachine: Making call to close driver server
I0501 02:30:42.994275   32094 main.go:141] libmachine: (functional-960026) Calling .Close
I0501 02:30:42.994630   32094 main.go:141] libmachine: Successfully made call to close driver server
I0501 02:30:42.994652   32094 main.go:141] libmachine: Making call to close connection to plugin binary
I0501 02:30:42.994661   32094 main.go:141] libmachine: Making call to close driver server
I0501 02:30:42.994669   32094 main.go:141] libmachine: (functional-960026) Calling .Close
I0501 02:30:42.994677   32094 main.go:141] libmachine: (functional-960026) DBG | Closing plugin on server side
I0501 02:30:42.994974   32094 main.go:141] libmachine: (functional-960026) DBG | Closing plugin on server side
I0501 02:30:42.994982   32094 main.go:141] libmachine: Successfully made call to close driver server
I0501 02:30:42.995006   32094 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-960026 image ls
2024/05/01 02:30:43 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (6.29s)
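The STEP lines come from the build context shipped in testdata/build. Reconstructed from that output, the context is essentially a three-instruction Dockerfile plus a content.txt payload; the file names and the content.txt text below are assumptions inferred from the log, not copied from the repo:

  $ mkdir -p /tmp/minikube-build-demo && cd /tmp/minikube-build-demo
  $ printf 'hello from the build test\n' > content.txt
  $ printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > Dockerfile
  $ out/minikube-linux-amd64 -p functional-960026 image build -t localhost/my-image:functional-960026 .
  $ out/minikube-linux-amd64 -p functional-960026 image ls | grep my-image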

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (2.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.149915123s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-960026
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.17s)
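Setup only pre-stages an image for the image load/save/remove subtests that follow: it pulls addon-resizer:1.8.8 into the host's docker daemon and retags it with the profile name so those subtests have a known local image to push into the cluster. The same steps by hand:

  $ docker pull gcr.io/google-containers/addon-resizer:1.8.8
  $ docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-960026
  $ docker image ls gcr.io/google-containers/addon-resizer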

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-960026 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-960026 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-960026 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (27.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-960026 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-960026 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-jfbjm" [8fdacd2e-fd01-4f10-b10a-a8462897e59d] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-jfbjm" [8fdacd2e-fd01-4f10-b10a-a8462897e59d] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 27.004822575s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (27.19s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-960026 image load --daemon gcr.io/google-containers/addon-resizer:functional-960026 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-960026 image load --daemon gcr.io/google-containers/addon-resizer:functional-960026 --alsologtostderr: (5.051303104s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-960026 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.41s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (9.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-960026 image load --daemon gcr.io/google-containers/addon-resizer:functional-960026 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-960026 image load --daemon gcr.io/google-containers/addon-resizer:functional-960026 --alsologtostderr: (9.409353675s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-960026 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (9.74s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.446336997s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-960026
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-960026 image load --daemon gcr.io/google-containers/addon-resizer:functional-960026 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-960026 image load --daemon gcr.io/google-containers/addon-resizer:functional-960026 --alsologtostderr: (4.699104236s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-960026 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.53s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-960026 image save gcr.io/google-containers/addon-resizer:functional-960026 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-960026 image save gcr.io/google-containers/addon-resizer:functional-960026 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.972569589s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.97s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-960026 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.43s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-960026 service list -o json
functional_test.go:1490: Took "519.76997ms" to run "out/minikube-linux-amd64 -p functional-960026 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.52s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-960026 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-960026 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (2.643460359s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-960026 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.90s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-960026 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.179:31312
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.43s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-960026 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.50s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-960026 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.179:31312
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.37s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.32s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "239.227831ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "59.074107ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.30s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-960026
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-960026 image save --daemon gcr.io/google-containers/addon-resizer:functional-960026 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-960026 image save --daemon gcr.io/google-containers/addon-resizer:functional-960026 --alsologtostderr: (1.304096986s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-960026
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.34s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "238.446342ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "66.065654ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.30s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (9.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-960026 /tmp/TestFunctionalparallelMountCmdany-port3947861150/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1714530628264655155" to /tmp/TestFunctionalparallelMountCmdany-port3947861150/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1714530628264655155" to /tmp/TestFunctionalparallelMountCmdany-port3947861150/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1714530628264655155" to /tmp/TestFunctionalparallelMountCmdany-port3947861150/001/test-1714530628264655155
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-960026 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-960026 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (297.708292ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-960026 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-960026 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 May  1 02:30 created-by-test
-rw-r--r-- 1 docker docker 24 May  1 02:30 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 May  1 02:30 test-1714530628264655155
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-960026 ssh cat /mount-9p/test-1714530628264655155
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-960026 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [8517bc38-60de-4662-a0b8-bfcddff04fed] Pending
helpers_test.go:344: "busybox-mount" [8517bc38-60de-4662-a0b8-bfcddff04fed] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [8517bc38-60de-4662-a0b8-bfcddff04fed] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [8517bc38-60de-4662-a0b8-bfcddff04fed] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 7.004687359s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-960026 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-960026 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-960026 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-960026 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-960026 /tmp/TestFunctionalparallelMountCmdany-port3947861150/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.90s)

                                                
                                    
TestFunctional/parallel/Version/short (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-960026 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

                                                
                                    
TestFunctional/parallel/Version/components (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-960026 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.64s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-960026 /tmp/TestFunctionalparallelMountCmdspecific-port1546009034/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-960026 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-960026 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (243.117467ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-960026 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-960026 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-960026 /tmp/TestFunctionalparallelMountCmdspecific-port1546009034/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-960026 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-960026 ssh "sudo umount -f /mount-9p": exit status 1 (236.231247ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-960026 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-960026 /tmp/TestFunctionalparallelMountCmdspecific-port1546009034/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.99s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-960026 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2594446214/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-960026 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2594446214/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-960026 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2594446214/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-960026 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-960026 ssh "findmnt -T" /mount1: exit status 1 (319.775262ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-960026 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-960026 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-960026 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-960026 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-960026 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2594446214/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-960026 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2594446214/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-960026 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2594446214/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.40s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-960026
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                    
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-960026
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-960026
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (223.45s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-329926 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0501 02:31:24.419278   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-329926 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m42.748001777s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-329926 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (223.45s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (8.17s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-329926 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-329926 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-329926 -- rollout status deployment/busybox: (5.78093021s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-329926 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-329926 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-329926 -- exec busybox-fc5497c4f-h8dxv -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-329926 -- exec busybox-fc5497c4f-nwj5x -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-329926 -- exec busybox-fc5497c4f-s528n -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-329926 -- exec busybox-fc5497c4f-h8dxv -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-329926 -- exec busybox-fc5497c4f-nwj5x -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-329926 -- exec busybox-fc5497c4f-s528n -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-329926 -- exec busybox-fc5497c4f-h8dxv -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-329926 -- exec busybox-fc5497c4f-nwj5x -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-329926 -- exec busybox-fc5497c4f-s528n -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (8.17s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.36s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-329926 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-329926 -- exec busybox-fc5497c4f-h8dxv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-329926 -- exec busybox-fc5497c4f-h8dxv -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-329926 -- exec busybox-fc5497c4f-nwj5x -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-329926 -- exec busybox-fc5497c4f-nwj5x -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-329926 -- exec busybox-fc5497c4f-s528n -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-329926 -- exec busybox-fc5497c4f-s528n -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.36s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (47.82s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-329926 -v=7 --alsologtostderr
E0501 02:34:56.199190   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/functional-960026/client.crt: no such file or directory
E0501 02:34:56.204538   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/functional-960026/client.crt: no such file or directory
E0501 02:34:56.214882   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/functional-960026/client.crt: no such file or directory
E0501 02:34:56.235174   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/functional-960026/client.crt: no such file or directory
E0501 02:34:56.275486   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/functional-960026/client.crt: no such file or directory
E0501 02:34:56.356010   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/functional-960026/client.crt: no such file or directory
E0501 02:34:56.516397   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/functional-960026/client.crt: no such file or directory
E0501 02:34:56.837001   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/functional-960026/client.crt: no such file or directory
E0501 02:34:57.477734   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/functional-960026/client.crt: no such file or directory
E0501 02:34:58.758046   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/functional-960026/client.crt: no such file or directory
E0501 02:35:01.318767   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/functional-960026/client.crt: no such file or directory
E0501 02:35:06.439959   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/functional-960026/client.crt: no such file or directory
E0501 02:35:16.681167   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/functional-960026/client.crt: no such file or directory
E0501 02:35:37.161955   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/functional-960026/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-329926 -v=7 --alsologtostderr: (46.938599119s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-329926 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (47.82s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-329926 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.56s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.56s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (13.8s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-329926 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-329926 cp testdata/cp-test.txt ha-329926:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-329926 ssh -n ha-329926 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-329926 cp ha-329926:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile895580191/001/cp-test_ha-329926.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-329926 ssh -n ha-329926 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-329926 cp ha-329926:/home/docker/cp-test.txt ha-329926-m02:/home/docker/cp-test_ha-329926_ha-329926-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-329926 ssh -n ha-329926 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-329926 ssh -n ha-329926-m02 "sudo cat /home/docker/cp-test_ha-329926_ha-329926-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-329926 cp ha-329926:/home/docker/cp-test.txt ha-329926-m03:/home/docker/cp-test_ha-329926_ha-329926-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-329926 ssh -n ha-329926 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-329926 ssh -n ha-329926-m03 "sudo cat /home/docker/cp-test_ha-329926_ha-329926-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-329926 cp ha-329926:/home/docker/cp-test.txt ha-329926-m04:/home/docker/cp-test_ha-329926_ha-329926-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-329926 ssh -n ha-329926 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-329926 ssh -n ha-329926-m04 "sudo cat /home/docker/cp-test_ha-329926_ha-329926-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-329926 cp testdata/cp-test.txt ha-329926-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-329926 ssh -n ha-329926-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-329926 cp ha-329926-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile895580191/001/cp-test_ha-329926-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-329926 ssh -n ha-329926-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-329926 cp ha-329926-m02:/home/docker/cp-test.txt ha-329926:/home/docker/cp-test_ha-329926-m02_ha-329926.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-329926 ssh -n ha-329926-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-329926 ssh -n ha-329926 "sudo cat /home/docker/cp-test_ha-329926-m02_ha-329926.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-329926 cp ha-329926-m02:/home/docker/cp-test.txt ha-329926-m03:/home/docker/cp-test_ha-329926-m02_ha-329926-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-329926 ssh -n ha-329926-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-329926 ssh -n ha-329926-m03 "sudo cat /home/docker/cp-test_ha-329926-m02_ha-329926-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-329926 cp ha-329926-m02:/home/docker/cp-test.txt ha-329926-m04:/home/docker/cp-test_ha-329926-m02_ha-329926-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-329926 ssh -n ha-329926-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-329926 ssh -n ha-329926-m04 "sudo cat /home/docker/cp-test_ha-329926-m02_ha-329926-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-329926 cp testdata/cp-test.txt ha-329926-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-329926 ssh -n ha-329926-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-329926 cp ha-329926-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile895580191/001/cp-test_ha-329926-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-329926 ssh -n ha-329926-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-329926 cp ha-329926-m03:/home/docker/cp-test.txt ha-329926:/home/docker/cp-test_ha-329926-m03_ha-329926.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-329926 ssh -n ha-329926-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-329926 ssh -n ha-329926 "sudo cat /home/docker/cp-test_ha-329926-m03_ha-329926.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-329926 cp ha-329926-m03:/home/docker/cp-test.txt ha-329926-m02:/home/docker/cp-test_ha-329926-m03_ha-329926-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-329926 ssh -n ha-329926-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-329926 ssh -n ha-329926-m02 "sudo cat /home/docker/cp-test_ha-329926-m03_ha-329926-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-329926 cp ha-329926-m03:/home/docker/cp-test.txt ha-329926-m04:/home/docker/cp-test_ha-329926-m03_ha-329926-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-329926 ssh -n ha-329926-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-329926 ssh -n ha-329926-m04 "sudo cat /home/docker/cp-test_ha-329926-m03_ha-329926-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-329926 cp testdata/cp-test.txt ha-329926-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-329926 ssh -n ha-329926-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-329926 cp ha-329926-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile895580191/001/cp-test_ha-329926-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-329926 ssh -n ha-329926-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-329926 cp ha-329926-m04:/home/docker/cp-test.txt ha-329926:/home/docker/cp-test_ha-329926-m04_ha-329926.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-329926 ssh -n ha-329926-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-329926 ssh -n ha-329926 "sudo cat /home/docker/cp-test_ha-329926-m04_ha-329926.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-329926 cp ha-329926-m04:/home/docker/cp-test.txt ha-329926-m02:/home/docker/cp-test_ha-329926-m04_ha-329926-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-329926 ssh -n ha-329926-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-329926 ssh -n ha-329926-m02 "sudo cat /home/docker/cp-test_ha-329926-m04_ha-329926-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-329926 cp ha-329926-m04:/home/docker/cp-test.txt ha-329926-m03:/home/docker/cp-test_ha-329926-m04_ha-329926-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-329926 ssh -n ha-329926-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-329926 ssh -n ha-329926-m03 "sudo cat /home/docker/cp-test_ha-329926-m04_ha-329926-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.80s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.50513451s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.51s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.41s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.41s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (17.68s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-329926 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-329926 node delete m03 -v=7 --alsologtostderr: (16.915304167s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-329926 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (17.68s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.4s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.40s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (380.19s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-329926 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0501 02:49:56.198551   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/functional-960026/client.crt: no such file or directory
E0501 02:51:19.245921   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/functional-960026/client.crt: no such file or directory
E0501 02:51:24.422258   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/client.crt: no such file or directory
E0501 02:54:27.468376   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/client.crt: no such file or directory
E0501 02:54:56.198337   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/functional-960026/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-329926 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (6m19.420535246s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-329926 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (380.19s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.4s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.40s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (76.59s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-329926 --control-plane -v=7 --alsologtostderr
E0501 02:56:24.419987   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/client.crt: no such file or directory
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-329926 --control-plane -v=7 --alsologtostderr: (1m15.715997821s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-329926 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (76.59s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.57s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.57s)

                                                
                                    
TestJSONOutput/start/Command (100.1s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-347249 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-347249 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m40.099459481s)
--- PASS: TestJSONOutput/start/Command (100.10s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.77s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-347249 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.77s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.68s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-347249 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.68s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.4s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-347249 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-347249 --output=json --user=testUser: (7.402734839s)
--- PASS: TestJSONOutput/stop/Command (7.40s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.21s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-399698 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-399698 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (74.459233ms)
-- stdout --
	{"specversion":"1.0","id":"d6c1d501-d526-4061-a07d-7ab1776ed4cb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-399698] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b0129d83-f3bb-467c-a4e1-baac44801b12","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18779"}}
	{"specversion":"1.0","id":"902980f5-39ad-4dec-9e39-c7c77ff7d80d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"87c26f0b-60c6-4442-b7fc-359c5b2df7ee","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18779-13391/kubeconfig"}}
	{"specversion":"1.0","id":"efcbe3c6-5560-4f26-b391-797e3fbced62","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18779-13391/.minikube"}}
	{"specversion":"1.0","id":"00b1ff8d-e134-48f2-b527-4056411e1a22","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"a498c3be-8f42-48ac-9865-fdab13478f12","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"e5fec343-e94d-4382-8236-53929e50fe79","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-399698" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-399698
--- PASS: TestErrorJSONOutput (0.21s)
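
Note (editor's illustration, not part of the captured log): each line that `minikube start --output=json` prints above is a self-contained CloudEvents-style JSON object. Below is a minimal Go sketch for decoding such lines; the struct fields mirror only the keys visible in the stdout capture, and feeding events via stdin (and the file name parse_events.go) are assumptions for the example.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// minikubeEvent mirrors the keys visible in the JSON lines captured above.
type minikubeEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// Hypothetical usage: minikube start -p demo --output=json | go run parse_events.go
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev minikubeEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip any non-JSON lines
		}
		// Error events (type ending in ".error") carry "exitcode" and "message" in data.
		fmt.Printf("%-50s %s\n", ev.Type, ev.Data["message"])
	}
}
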

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (95.72s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-092732 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-092732 --driver=kvm2  --container-runtime=crio: (46.995130004s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-094893 --driver=kvm2  --container-runtime=crio
E0501 02:59:56.198482   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/functional-960026/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-094893 --driver=kvm2  --container-runtime=crio: (45.789470406s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-092732
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-094893
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-094893" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-094893
helpers_test.go:175: Cleaning up "first-092732" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-092732
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-092732: (1.000785049s)
--- PASS: TestMinikubeProfile (95.72s)

TestMountStart/serial/StartWithMountFirst (28.57s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-508014 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-508014 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.574123406s)
--- PASS: TestMountStart/serial/StartWithMountFirst (28.57s)

TestMountStart/serial/VerifyMountFirst (0.39s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-508014 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-508014 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.39s)

TestMountStart/serial/StartWithMountSecond (30.29s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-519696 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-519696 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (29.292123838s)
--- PASS: TestMountStart/serial/StartWithMountSecond (30.29s)

TestMountStart/serial/VerifyMountSecond (0.39s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-519696 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-519696 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.39s)

TestMountStart/serial/DeleteFirst (0.7s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-508014 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.70s)

TestMountStart/serial/VerifyMountPostDelete (0.41s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-519696 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-519696 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.41s)

TestMountStart/serial/Stop (1.43s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-519696
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-519696: (1.425067823s)
--- PASS: TestMountStart/serial/Stop (1.43s)

TestMountStart/serial/RestartStopped (23.44s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-519696
E0501 03:01:24.422289   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-519696: (22.435971059s)
--- PASS: TestMountStart/serial/RestartStopped (23.44s)

TestMountStart/serial/VerifyMountPostStop (0.41s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-519696 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-519696 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.41s)

TestMultiNode/serial/FreshStart2Nodes (131.68s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-282238 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-282238 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m11.273095606s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-282238 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (131.68s)

TestMultiNode/serial/DeployApp2Nodes (5.01s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-282238 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-282238 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-282238 -- rollout status deployment/busybox: (3.401383291s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-282238 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-282238 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-282238 -- exec busybox-fc5497c4f-dpfrf -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-282238 -- exec busybox-fc5497c4f-mpfsk -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-282238 -- exec busybox-fc5497c4f-dpfrf -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-282238 -- exec busybox-fc5497c4f-mpfsk -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-282238 -- exec busybox-fc5497c4f-dpfrf -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-282238 -- exec busybox-fc5497c4f-mpfsk -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.01s)

TestMultiNode/serial/PingHostFrom2Pods (0.85s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-282238 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-282238 -- exec busybox-fc5497c4f-dpfrf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-282238 -- exec busybox-fc5497c4f-dpfrf -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-282238 -- exec busybox-fc5497c4f-mpfsk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-282238 -- exec busybox-fc5497c4f-mpfsk -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.85s)

TestMultiNode/serial/AddNode (41.55s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-282238 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-282238 -v 3 --alsologtostderr: (40.965464864s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-282238 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (41.55s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-282238 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.23s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.23s)

TestMultiNode/serial/CopyFile (7.41s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-282238 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-282238 cp testdata/cp-test.txt multinode-282238:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-282238 ssh -n multinode-282238 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-282238 cp multinode-282238:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2058267319/001/cp-test_multinode-282238.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-282238 ssh -n multinode-282238 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-282238 cp multinode-282238:/home/docker/cp-test.txt multinode-282238-m02:/home/docker/cp-test_multinode-282238_multinode-282238-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-282238 ssh -n multinode-282238 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-282238 ssh -n multinode-282238-m02 "sudo cat /home/docker/cp-test_multinode-282238_multinode-282238-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-282238 cp multinode-282238:/home/docker/cp-test.txt multinode-282238-m03:/home/docker/cp-test_multinode-282238_multinode-282238-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-282238 ssh -n multinode-282238 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-282238 ssh -n multinode-282238-m03 "sudo cat /home/docker/cp-test_multinode-282238_multinode-282238-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-282238 cp testdata/cp-test.txt multinode-282238-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-282238 ssh -n multinode-282238-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-282238 cp multinode-282238-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2058267319/001/cp-test_multinode-282238-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-282238 ssh -n multinode-282238-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-282238 cp multinode-282238-m02:/home/docker/cp-test.txt multinode-282238:/home/docker/cp-test_multinode-282238-m02_multinode-282238.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-282238 ssh -n multinode-282238-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-282238 ssh -n multinode-282238 "sudo cat /home/docker/cp-test_multinode-282238-m02_multinode-282238.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-282238 cp multinode-282238-m02:/home/docker/cp-test.txt multinode-282238-m03:/home/docker/cp-test_multinode-282238-m02_multinode-282238-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-282238 ssh -n multinode-282238-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-282238 ssh -n multinode-282238-m03 "sudo cat /home/docker/cp-test_multinode-282238-m02_multinode-282238-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-282238 cp testdata/cp-test.txt multinode-282238-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-282238 ssh -n multinode-282238-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-282238 cp multinode-282238-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2058267319/001/cp-test_multinode-282238-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-282238 ssh -n multinode-282238-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-282238 cp multinode-282238-m03:/home/docker/cp-test.txt multinode-282238:/home/docker/cp-test_multinode-282238-m03_multinode-282238.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-282238 ssh -n multinode-282238-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-282238 ssh -n multinode-282238 "sudo cat /home/docker/cp-test_multinode-282238-m03_multinode-282238.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-282238 cp multinode-282238-m03:/home/docker/cp-test.txt multinode-282238-m02:/home/docker/cp-test_multinode-282238-m03_multinode-282238-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-282238 ssh -n multinode-282238-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-282238 ssh -n multinode-282238-m02 "sudo cat /home/docker/cp-test_multinode-282238-m03_multinode-282238-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.41s)

TestMultiNode/serial/StopNode (3.17s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-282238 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-282238 node stop m03: (2.297271368s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-282238 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-282238 status: exit status 7 (434.175094ms)
-- stdout --
	multinode-282238
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-282238-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-282238-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-282238 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-282238 status --alsologtostderr: exit status 7 (435.250337ms)
-- stdout --
	multinode-282238
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-282238-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-282238-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0501 03:04:50.718974   50530 out.go:291] Setting OutFile to fd 1 ...
	I0501 03:04:50.719134   50530 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 03:04:50.719146   50530 out.go:304] Setting ErrFile to fd 2...
	I0501 03:04:50.719151   50530 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 03:04:50.719399   50530 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18779-13391/.minikube/bin
	I0501 03:04:50.719564   50530 out.go:298] Setting JSON to false
	I0501 03:04:50.719585   50530 mustload.go:65] Loading cluster: multinode-282238
	I0501 03:04:50.719695   50530 notify.go:220] Checking for updates...
	I0501 03:04:50.720030   50530 config.go:182] Loaded profile config "multinode-282238": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 03:04:50.720050   50530 status.go:255] checking status of multinode-282238 ...
	I0501 03:04:50.720583   50530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:04:50.720621   50530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:04:50.735608   50530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45711
	I0501 03:04:50.736012   50530 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:04:50.736533   50530 main.go:141] libmachine: Using API Version  1
	I0501 03:04:50.736579   50530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:04:50.736959   50530 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:04:50.737203   50530 main.go:141] libmachine: (multinode-282238) Calling .GetState
	I0501 03:04:50.738769   50530 status.go:330] multinode-282238 host status = "Running" (err=<nil>)
	I0501 03:04:50.738793   50530 host.go:66] Checking if "multinode-282238" exists ...
	I0501 03:04:50.739098   50530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:04:50.739151   50530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:04:50.753570   50530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39115
	I0501 03:04:50.753898   50530 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:04:50.754327   50530 main.go:141] libmachine: Using API Version  1
	I0501 03:04:50.754345   50530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:04:50.754691   50530 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:04:50.754862   50530 main.go:141] libmachine: (multinode-282238) Calling .GetIP
	I0501 03:04:50.757173   50530 main.go:141] libmachine: (multinode-282238) DBG | domain multinode-282238 has defined MAC address 52:54:00:3c:33:06 in network mk-multinode-282238
	I0501 03:04:50.757526   50530 main.go:141] libmachine: (multinode-282238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:33:06", ip: ""} in network mk-multinode-282238: {Iface:virbr1 ExpiryTime:2024-05-01 04:01:56 +0000 UTC Type:0 Mac:52:54:00:3c:33:06 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:multinode-282238 Clientid:01:52:54:00:3c:33:06}
	I0501 03:04:50.757557   50530 main.go:141] libmachine: (multinode-282238) DBG | domain multinode-282238 has defined IP address 192.168.39.139 and MAC address 52:54:00:3c:33:06 in network mk-multinode-282238
	I0501 03:04:50.757665   50530 host.go:66] Checking if "multinode-282238" exists ...
	I0501 03:04:50.758073   50530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:04:50.758119   50530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:04:50.773185   50530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45221
	I0501 03:04:50.773585   50530 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:04:50.774036   50530 main.go:141] libmachine: Using API Version  1
	I0501 03:04:50.774056   50530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:04:50.774349   50530 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:04:50.774565   50530 main.go:141] libmachine: (multinode-282238) Calling .DriverName
	I0501 03:04:50.774750   50530 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0501 03:04:50.774770   50530 main.go:141] libmachine: (multinode-282238) Calling .GetSSHHostname
	I0501 03:04:50.777180   50530 main.go:141] libmachine: (multinode-282238) DBG | domain multinode-282238 has defined MAC address 52:54:00:3c:33:06 in network mk-multinode-282238
	I0501 03:04:50.777541   50530 main.go:141] libmachine: (multinode-282238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:33:06", ip: ""} in network mk-multinode-282238: {Iface:virbr1 ExpiryTime:2024-05-01 04:01:56 +0000 UTC Type:0 Mac:52:54:00:3c:33:06 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:multinode-282238 Clientid:01:52:54:00:3c:33:06}
	I0501 03:04:50.777572   50530 main.go:141] libmachine: (multinode-282238) DBG | domain multinode-282238 has defined IP address 192.168.39.139 and MAC address 52:54:00:3c:33:06 in network mk-multinode-282238
	I0501 03:04:50.777691   50530 main.go:141] libmachine: (multinode-282238) Calling .GetSSHPort
	I0501 03:04:50.777918   50530 main.go:141] libmachine: (multinode-282238) Calling .GetSSHKeyPath
	I0501 03:04:50.778054   50530 main.go:141] libmachine: (multinode-282238) Calling .GetSSHUsername
	I0501 03:04:50.778168   50530 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/multinode-282238/id_rsa Username:docker}
	I0501 03:04:50.859326   50530 ssh_runner.go:195] Run: systemctl --version
	I0501 03:04:50.866041   50530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 03:04:50.885664   50530 kubeconfig.go:125] found "multinode-282238" server: "https://192.168.39.139:8443"
	I0501 03:04:50.885691   50530 api_server.go:166] Checking apiserver status ...
	I0501 03:04:50.885719   50530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:04:50.901559   50530 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1163/cgroup
	W0501 03:04:50.913783   50530 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1163/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0501 03:04:50.913840   50530 ssh_runner.go:195] Run: ls
	I0501 03:04:50.919202   50530 api_server.go:253] Checking apiserver healthz at https://192.168.39.139:8443/healthz ...
	I0501 03:04:50.923180   50530 api_server.go:279] https://192.168.39.139:8443/healthz returned 200:
	ok
	I0501 03:04:50.923205   50530 status.go:422] multinode-282238 apiserver status = Running (err=<nil>)
	I0501 03:04:50.923218   50530 status.go:257] multinode-282238 status: &{Name:multinode-282238 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0501 03:04:50.923259   50530 status.go:255] checking status of multinode-282238-m02 ...
	I0501 03:04:50.923560   50530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:04:50.923593   50530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:04:50.938626   50530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33323
	I0501 03:04:50.939005   50530 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:04:50.939382   50530 main.go:141] libmachine: Using API Version  1
	I0501 03:04:50.939402   50530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:04:50.939734   50530 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:04:50.939914   50530 main.go:141] libmachine: (multinode-282238-m02) Calling .GetState
	I0501 03:04:50.941483   50530 status.go:330] multinode-282238-m02 host status = "Running" (err=<nil>)
	I0501 03:04:50.941500   50530 host.go:66] Checking if "multinode-282238-m02" exists ...
	I0501 03:04:50.941788   50530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:04:50.941824   50530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:04:50.957673   50530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39381
	I0501 03:04:50.958144   50530 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:04:50.958637   50530 main.go:141] libmachine: Using API Version  1
	I0501 03:04:50.958656   50530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:04:50.958926   50530 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:04:50.959051   50530 main.go:141] libmachine: (multinode-282238-m02) Calling .GetIP
	I0501 03:04:50.961527   50530 main.go:141] libmachine: (multinode-282238-m02) DBG | domain multinode-282238-m02 has defined MAC address 52:54:00:c2:87:be in network mk-multinode-282238
	I0501 03:04:50.961942   50530 main.go:141] libmachine: (multinode-282238-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:87:be", ip: ""} in network mk-multinode-282238: {Iface:virbr1 ExpiryTime:2024-05-01 04:03:26 +0000 UTC Type:0 Mac:52:54:00:c2:87:be Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:multinode-282238-m02 Clientid:01:52:54:00:c2:87:be}
	I0501 03:04:50.961972   50530 main.go:141] libmachine: (multinode-282238-m02) DBG | domain multinode-282238-m02 has defined IP address 192.168.39.29 and MAC address 52:54:00:c2:87:be in network mk-multinode-282238
	I0501 03:04:50.962116   50530 host.go:66] Checking if "multinode-282238-m02" exists ...
	I0501 03:04:50.962425   50530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:04:50.962462   50530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:04:50.979381   50530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39665
	I0501 03:04:50.979812   50530 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:04:50.980378   50530 main.go:141] libmachine: Using API Version  1
	I0501 03:04:50.980405   50530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:04:50.980702   50530 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:04:50.980893   50530 main.go:141] libmachine: (multinode-282238-m02) Calling .DriverName
	I0501 03:04:50.981046   50530 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0501 03:04:50.981070   50530 main.go:141] libmachine: (multinode-282238-m02) Calling .GetSSHHostname
	I0501 03:04:50.983779   50530 main.go:141] libmachine: (multinode-282238-m02) DBG | domain multinode-282238-m02 has defined MAC address 52:54:00:c2:87:be in network mk-multinode-282238
	I0501 03:04:50.984202   50530 main.go:141] libmachine: (multinode-282238-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:87:be", ip: ""} in network mk-multinode-282238: {Iface:virbr1 ExpiryTime:2024-05-01 04:03:26 +0000 UTC Type:0 Mac:52:54:00:c2:87:be Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:multinode-282238-m02 Clientid:01:52:54:00:c2:87:be}
	I0501 03:04:50.984231   50530 main.go:141] libmachine: (multinode-282238-m02) DBG | domain multinode-282238-m02 has defined IP address 192.168.39.29 and MAC address 52:54:00:c2:87:be in network mk-multinode-282238
	I0501 03:04:50.984388   50530 main.go:141] libmachine: (multinode-282238-m02) Calling .GetSSHPort
	I0501 03:04:50.984559   50530 main.go:141] libmachine: (multinode-282238-m02) Calling .GetSSHKeyPath
	I0501 03:04:50.984705   50530 main.go:141] libmachine: (multinode-282238-m02) Calling .GetSSHUsername
	I0501 03:04:50.984817   50530 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13391/.minikube/machines/multinode-282238-m02/id_rsa Username:docker}
	I0501 03:04:51.062713   50530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 03:04:51.078609   50530 status.go:257] multinode-282238-m02 status: &{Name:multinode-282238-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0501 03:04:51.078645   50530 status.go:255] checking status of multinode-282238-m03 ...
	I0501 03:04:51.078952   50530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0501 03:04:51.078988   50530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:04:51.095062   50530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46599
	I0501 03:04:51.095479   50530 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:04:51.095922   50530 main.go:141] libmachine: Using API Version  1
	I0501 03:04:51.095952   50530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:04:51.096242   50530 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:04:51.096484   50530 main.go:141] libmachine: (multinode-282238-m03) Calling .GetState
	I0501 03:04:51.098132   50530 status.go:330] multinode-282238-m03 host status = "Stopped" (err=<nil>)
	I0501 03:04:51.098148   50530 status.go:343] host is not running, skipping remaining checks
	I0501 03:04:51.098156   50530 status.go:257] multinode-282238-m03 status: &{Name:multinode-282238-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.17s)

TestMultiNode/serial/StartAfterStop (31.12s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-282238 node start m03 -v=7 --alsologtostderr
E0501 03:04:56.198538   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/functional-960026/client.crt: no such file or directory
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-282238 node start m03 -v=7 --alsologtostderr: (30.48676455s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-282238 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (31.12s)

TestMultiNode/serial/DeleteNode (2.46s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-282238 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-282238 node delete m03: (1.908810471s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-282238 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.46s)

TestMultiNode/serial/RestartMultiNode (178.9s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-282238 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0501 03:14:56.198774   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/functional-960026/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-282238 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m58.363901143s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-282238 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (178.90s)

TestMultiNode/serial/ValidateNameConflict (48.03s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-282238
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-282238-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-282238-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (72.099835ms)
-- stdout --
	* [multinode-282238-m02] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18779
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18779-13391/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18779-13391/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-282238-m02' is duplicated with machine name 'multinode-282238-m02' in profile 'multinode-282238'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-282238-m03 --driver=kvm2  --container-runtime=crio
E0501 03:16:24.419682   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/client.crt: no such file or directory
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-282238-m03 --driver=kvm2  --container-runtime=crio: (46.706052024s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-282238
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-282238: exit status 80 (226.613276ms)
-- stdout --
	* Adding node m03 to cluster multinode-282238 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-282238-m03 already exists in multinode-282238-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-282238-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (48.03s)

TestScheduledStopUnix (118.42s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-151224 --memory=2048 --driver=kvm2  --container-runtime=crio
E0501 03:21:24.422200   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-151224 --memory=2048 --driver=kvm2  --container-runtime=crio: (46.671364175s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-151224 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-151224 -n scheduled-stop-151224
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-151224 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-151224 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-151224 -n scheduled-stop-151224
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-151224
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-151224 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-151224
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-151224: exit status 7 (78.453149ms)
-- stdout --
	scheduled-stop-151224
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-151224 -n scheduled-stop-151224
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-151224 -n scheduled-stop-151224: exit status 7 (75.374762ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-151224" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-151224
--- PASS: TestScheduledStopUnix (118.42s)

TestRunningBinaryUpgrade (156.78s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.35628022 start -p running-upgrade-179111 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E0501 03:27:47.469699   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/client.crt: no such file or directory
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.35628022 start -p running-upgrade-179111 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m29.035293117s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-179111 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-179111 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m3.481499502s)
helpers_test.go:175: Cleaning up "running-upgrade-179111" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-179111
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-179111: (1.45838987s)
--- PASS: TestRunningBinaryUpgrade (156.78s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-588224 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-588224 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (100.604811ms)
-- stdout --
	* [NoKubernetes-588224] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18779
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18779-13391/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18779-13391/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/StartWithK8s (127.92s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-588224 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-588224 --driver=kvm2  --container-runtime=crio: (2m7.65975254s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-588224 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (127.92s)

TestStoppedBinaryUpgrade/Setup (2.61s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.61s)

TestStoppedBinaryUpgrade/Upgrade (143.39s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2893043639 start -p stopped-upgrade-535170 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E0501 03:24:56.199192   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/functional-960026/client.crt: no such file or directory
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2893043639 start -p stopped-upgrade-535170 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m29.31225003s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2893043639 -p stopped-upgrade-535170 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2893043639 -p stopped-upgrade-535170 stop: (2.129792378s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-535170 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-535170 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (51.951972973s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (143.39s)

TestNoKubernetes/serial/StartWithStopK8s (45.22s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-588224 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-588224 --no-kubernetes --driver=kvm2  --container-runtime=crio: (43.331947518s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-588224 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-588224 status -o json: exit status 2 (273.300907ms)
-- stdout --
	{"Name":"NoKubernetes-588224","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-588224
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-588224: (1.610322824s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (45.22s)
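
Note (editor's illustration, not part of the captured log): `minikube ... status -o json` above prints a single JSON object for this one-node profile, while exiting non-zero when components are stopped. Below is a minimal Go sketch for reading that shape; the field names come from the captured line, and the profile name plus the decision to keep stdout despite the exit error are assumptions based on this run.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileStatus matches the single-line JSON object captured above.
type profileStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	// `status` exits non-zero (exit status 2 above) when the kubelet/apiserver are
	// stopped, so keep whatever stdout was produced even if runErr is an ExitError.
	out, runErr := exec.Command("minikube", "-p", "NoKubernetes-588224", "status", "-o", "json").Output()
	var st profileStatus
	if err := json.Unmarshal(out, &st); err != nil {
		fmt.Println("could not parse status output:", err, runErr)
		return
	}
	fmt.Printf("%s: host=%s kubelet=%s apiserver=%s kubeconfig=%s\n",
		st.Name, st.Host, st.Kubelet, st.APIServer, st.Kubeconfig)
}
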

TestNoKubernetes/serial/Start (31.29s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-588224 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0501 03:26:24.419720   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/client.crt: no such file or directory
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-588224 --no-kubernetes --driver=kvm2  --container-runtime=crio: (31.286994382s)
--- PASS: TestNoKubernetes/serial/Start (31.29s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.23s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-588224 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-588224 "sudo systemctl is-active --quiet service kubelet": exit status 1 (226.791106ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.23s)
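Here, too, the non-zero exit is the passing outcome: systemctl is-active returns a failure code when kubelet is not running, which is exactly what --no-kubernetes mode should produce. A minimal sketch of the same check (the trailing echo is only illustrative):

    # expected to exit non-zero because kubelet is not active in --no-kubernetes mode
    out/minikube-linux-amd64 ssh -p NoKubernetes-588224 "sudo systemctl is-active --quiet service kubelet" \
      || echo "kubelet not running, as expected"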

                                                
                                    
TestNoKubernetes/serial/ProfileList (6.7s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (3.631273299s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (3.070715946s)
--- PASS: TestNoKubernetes/serial/ProfileList (6.70s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.47s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-588224
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-588224: (1.466806421s)
--- PASS: TestNoKubernetes/serial/Stop (1.47s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (34.87s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-588224 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-588224 --driver=kvm2  --container-runtime=crio: (34.874576495s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (34.87s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.99s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-535170
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.99s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-588224 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-588224 "sudo systemctl is-active --quiet service kubelet": exit status 1 (210.754558ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                    
TestPause/serial/Start (111s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-542495 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-542495 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m51.001930349s)
--- PASS: TestPause/serial/Start (111.00s)

                                                
                                    
TestNetworkPlugins/group/false (3.68s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-731347 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-731347 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (134.785042ms)

                                                
                                                
-- stdout --
	* [false-731347] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18779
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18779-13391/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18779-13391/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0501 03:28:15.874511   62648 out.go:291] Setting OutFile to fd 1 ...
	I0501 03:28:15.874645   62648 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 03:28:15.874656   62648 out.go:304] Setting ErrFile to fd 2...
	I0501 03:28:15.874663   62648 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 03:28:15.874938   62648 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18779-13391/.minikube/bin
	I0501 03:28:15.875737   62648 out.go:298] Setting JSON to false
	I0501 03:28:15.877009   62648 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7839,"bootTime":1714526257,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0501 03:28:15.877088   62648 start.go:139] virtualization: kvm guest
	I0501 03:28:15.879418   62648 out.go:177] * [false-731347] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0501 03:28:15.881099   62648 out.go:177]   - MINIKUBE_LOCATION=18779
	I0501 03:28:15.882599   62648 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0501 03:28:15.881079   62648 notify.go:220] Checking for updates...
	I0501 03:28:15.884080   62648 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18779-13391/kubeconfig
	I0501 03:28:15.885333   62648 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18779-13391/.minikube
	I0501 03:28:15.886486   62648 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0501 03:28:15.887601   62648 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0501 03:28:15.889354   62648 config.go:182] Loaded profile config "kubernetes-upgrade-046243": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0501 03:28:15.889498   62648 config.go:182] Loaded profile config "pause-542495": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0501 03:28:15.889606   62648 config.go:182] Loaded profile config "running-upgrade-179111": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0501 03:28:15.889709   62648 driver.go:392] Setting default libvirt URI to qemu:///system
	I0501 03:28:15.932760   62648 out.go:177] * Using the kvm2 driver based on user configuration
	I0501 03:28:15.934101   62648 start.go:297] selected driver: kvm2
	I0501 03:28:15.934119   62648 start.go:901] validating driver "kvm2" against <nil>
	I0501 03:28:15.934132   62648 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0501 03:28:15.936306   62648 out.go:177] 
	W0501 03:28:15.937464   62648 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0501 03:28:15.938732   62648 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-731347 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-731347

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-731347

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-731347

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-731347

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-731347

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-731347

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-731347

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-731347

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-731347

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-731347

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-731347"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-731347"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-731347"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-731347

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-731347"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-731347"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-731347" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-731347" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-731347" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-731347" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-731347" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-731347" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-731347" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-731347" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-731347"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-731347"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-731347"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-731347"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-731347"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-731347" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-731347" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-731347" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-731347"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-731347"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-731347"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-731347"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-731347"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-731347

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-731347"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-731347"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-731347"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-731347"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-731347"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-731347"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-731347"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-731347"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-731347"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-731347"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-731347"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-731347"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-731347"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-731347"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-731347"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-731347"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-731347"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-731347"

                                                
                                                
----------------------- debugLogs end: false-731347 [took: 3.382055542s] --------------------------------
helpers_test.go:175: Cleaning up "false-731347" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-731347
--- PASS: TestNetworkPlugins/group/false (3.68s)
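The exit status 14 (MK_USAGE) above is the behaviour this test asserts: cri-o has no built-in pod networking, so minikube rejects --cni=false outright when the container runtime is crio. A sketch of the contrast (the --cni=auto variant is illustrative only, since auto is the default CNI selection and is not what this particular test runs):

    # rejected: "The \"crio\" container runtime requires CNI" (exit status 14)
    out/minikube-linux-amd64 start -p false-731347 --memory=2048 --cni=false --driver=kvm2 --container-runtime=crio
    # accepted: let minikube choose a CNI (or name one explicitly, e.g. --cni=bridge)
    out/minikube-linux-amd64 start -p false-731347 --memory=2048 --cni=auto --driver=kvm2 --container-runtime=crio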

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (107.49s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-892672 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-892672 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0: (1m47.485101359s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (107.49s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (111.74s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-277128 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-277128 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0: (1m51.742146523s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (111.74s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (10.35s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-892672 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [7bc3e3a7-9315-4cac-a1ee-2a2cbc1aabf2] Pending
helpers_test.go:344: "busybox" [7bc3e3a7-9315-4cac-a1ee-2a2cbc1aabf2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [7bc3e3a7-9315-4cac-a1ee-2a2cbc1aabf2] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.00452148s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-892672 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.35s)
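The DeployApp step is the same for every StartStop profile: apply the busybox manifest from testdata, wait for the pod to report Ready, then confirm the file-descriptor limit inside the container. Roughly (the kubectl wait line is an illustrative stand-in for the helper's 8-minute readiness poll):

    kubectl --context no-preload-892672 create -f testdata/busybox.yaml
    kubectl --context no-preload-892672 wait --for=condition=ready pod -l integration-test=busybox --timeout=8m0s
    kubectl --context no-preload-892672 exec busybox -- /bin/sh -c "ulimit -n"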

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (59.35s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-715118 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-715118 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0: (59.352886664s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (59.35s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (11.34s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-277128 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ff71ad09-63da-4dd0-99c7-9ffb0f4eeae3] Pending
helpers_test.go:344: "busybox" [ff71ad09-63da-4dd0-99c7-9ffb0f4eeae3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [ff71ad09-63da-4dd0-99c7-9ffb0f4eeae3] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.004705283s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-277128 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.34s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.19s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-892672 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-892672 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.115630418s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-892672 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.19s)
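The addon is enabled with image and registry overrides so the test never pulls a real metrics-server; echoserver from a fake registry stands in for it, and the describe call only verifies that the Deployment was created with those values:

    out/minikube-linux-amd64 addons enable metrics-server -p no-preload-892672 \
        --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
        --registries=MetricsServer=fake.domain
    kubectl --context no-preload-892672 describe deploy/metrics-server -n kube-system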

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-277128 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-277128 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.028227074s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-277128 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.11s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.3s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-715118 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [aa3b27b0-fdc5-4764-9f07-388d0df58006] Pending
helpers_test.go:344: "busybox" [aa3b27b0-fdc5-4764-9f07-388d0df58006] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [aa3b27b0-fdc5-4764-9f07-388d0df58006] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004722821s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-715118 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.30s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.98s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-715118 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-715118 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.98s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (710.42s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-892672 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-892672 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0: (11m50.152716904s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-892672 -n no-preload-892672
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (710.42s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (582.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-277128 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0
E0501 03:34:56.198796   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/functional-960026/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-277128 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0: (9m41.816272146s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-277128 -n embed-certs-277128
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (582.09s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (587.98s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-715118 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0
E0501 03:36:24.421757   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-715118 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0: (9m47.658277106s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-715118 -n default-k8s-diff-port-715118
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (587.98s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (4.6s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-503971 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-503971 --alsologtostderr -v=3: (4.603011219s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (4.60s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-503971 -n old-k8s-version-503971
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-503971 -n old-k8s-version-503971: exit status 7 (74.682814ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-503971 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)
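Exit status 7 from status means the host is stopped, which is the precondition this test wants; the point is that addons can still be enabled against a stopped profile. In outline (commands as in the log, quoting added for the shell):

    out/minikube-linux-amd64 status --format='{{.Host}}' -p old-k8s-version-503971 -n old-k8s-version-503971
    # prints "Stopped" and exits 7 (host stopped); the test treats that as "may be ok"
    out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-503971 --images=MetricsScraper=registry.k8s.io/echoserver:1.4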

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (63.81s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-906018 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-906018 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0: (1m3.81252025s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (63.81s)
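The newest-cni start line packs several options worth unpacking: --network-plugin=cni plus --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 hand kubeadm a pod CIDR without installing a CNI, --feature-gates ServerSideApply=true is passed through to the components, and --wait=apiserver,system_pods,default_sa deliberately skips waiting for workloads that cannot schedule until a CNI is set up (hence the "additional setup" warnings in the later subtests). The same command, reformatted for readability only:

    out/minikube-linux-amd64 start -p newest-cni-906018 --memory=2200 --alsologtostderr \
        --wait=apiserver,system_pods,default_sa \
        --feature-gates ServerSideApply=true \
        --network-plugin=cni \
        --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
        --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.30.0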

                                                
                                    
TestNetworkPlugins/group/auto/Start (89.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-731347 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-731347 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m29.452143489s)
--- PASS: TestNetworkPlugins/group/auto/Start (89.45s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (118.93s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-731347 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
E0501 04:01:07.471192   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-731347 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m58.924965124s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (118.93s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.19s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-906018 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-906018 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.189606529s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.19s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (7.41s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-906018 --alsologtostderr -v=3
E0501 04:01:24.419616   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/addons-286595/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-906018 --alsologtostderr -v=3: (7.414669752s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (7.41s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-906018 -n newest-cni-906018
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-906018 -n newest-cni-906018: exit status 7 (91.824649ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-906018 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.26s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (55.6s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-906018 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-906018 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0: (55.255334139s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-906018 -n newest-cni-906018
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (55.60s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-731347 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.25s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (12.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-731347 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-cgjff" [9a2cf290-7876-472d-b6d3-8f9d42260674] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-cgjff" [9a2cf290-7876-472d-b6d3-8f9d42260674] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.004006132s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.28s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-731347 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-731347 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-731347 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)
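The last three auto-plugin subtests are all exec probes against the same netcat deployment: DNS resolution of the in-cluster service name, a localhost connection, and a hairpin connection back through the pod's own service. Collected in one place, using the commands from the log:

    # DNS: the cluster service name must resolve from inside a pod
    kubectl --context auto-731347 exec deployment/netcat -- nslookup kubernetes.default
    # Localhost: the pod can reach a listener on its own loopback
    kubectl --context auto-731347 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    # HairPin: the pod can reach itself through its service name
    kubectl --context auto-731347 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"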

                                                
                                    
TestNetworkPlugins/group/calico/Start (94.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-731347 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-731347 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m34.33408009s)
--- PASS: TestNetworkPlugins/group/calico/Start (94.33s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-mtnp9" [6edb3969-d3a0-44fa-bbe2-11a7047742c6] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.006968344s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (105.59s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-731347 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-731347 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m45.585953571s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (105.59s)
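Unlike the other network-plugin groups, custom-flannel does not name a built-in CNI: --cni also accepts a path to a manifest, and the test points it at the checked-in kube-flannel.yaml:

    out/minikube-linux-amd64 start -p custom-flannel-731347 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m \
        --cni=testdata/kube-flannel.yaml --driver=kvm2 --container-runtime=crio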

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-906018 image list --format=json
E0501 04:02:25.448475   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/no-preload-892672/client.crt: no such file or directory
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.98s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-906018 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-906018 -n newest-cni-906018
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-906018 -n newest-cni-906018: exit status 2 (277.896942ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-906018 -n newest-cni-906018
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-906018 -n newest-cni-906018: exit status 2 (308.805763ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-906018 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-906018 -n newest-cni-906018
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-906018 -n newest-cni-906018
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.98s)
E0501 04:04:48.809571   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/no-preload-892672/client.crt: no such file or directory
E0501 04:04:56.199189   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/functional-960026/client.crt: no such file or directory

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-731347 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (10.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-731347 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-sqxcb" [91100913-bfd4-4a11-bd8f-59eb31ee5556] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-sqxcb" [91100913-bfd4-4a11-bd8f-59eb31ee5556] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.005551174s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (149.68s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-731347 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-731347 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (2m29.684699229s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (149.68s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-731347 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-731347 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-731347 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (132.84s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-731347 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
E0501 04:03:09.089854   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/default-k8s-diff-port-715118/client.crt: no such file or directory
E0501 04:03:09.095126   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/default-k8s-diff-port-715118/client.crt: no such file or directory
E0501 04:03:09.105401   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/default-k8s-diff-port-715118/client.crt: no such file or directory
E0501 04:03:09.125698   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/default-k8s-diff-port-715118/client.crt: no such file or directory
E0501 04:03:09.165824   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/default-k8s-diff-port-715118/client.crt: no such file or directory
E0501 04:03:09.246228   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/default-k8s-diff-port-715118/client.crt: no such file or directory
E0501 04:03:09.407312   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/default-k8s-diff-port-715118/client.crt: no such file or directory
E0501 04:03:09.728128   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/default-k8s-diff-port-715118/client.crt: no such file or directory
E0501 04:03:10.369284   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/default-k8s-diff-port-715118/client.crt: no such file or directory
E0501 04:03:11.650364   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/default-k8s-diff-port-715118/client.crt: no such file or directory
E0501 04:03:14.210884   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/default-k8s-diff-port-715118/client.crt: no such file or directory
E0501 04:03:19.331056   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/default-k8s-diff-port-715118/client.crt: no such file or directory
E0501 04:03:26.889079   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/no-preload-892672/client.crt: no such file or directory
E0501 04:03:29.571854   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/default-k8s-diff-port-715118/client.crt: no such file or directory
E0501 04:03:48.280830   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/client.crt: no such file or directory
E0501 04:03:48.286144   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/client.crt: no such file or directory
E0501 04:03:48.296458   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/client.crt: no such file or directory
E0501 04:03:48.316830   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/client.crt: no such file or directory
E0501 04:03:48.357163   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/client.crt: no such file or directory
E0501 04:03:48.437553   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/client.crt: no such file or directory
E0501 04:03:48.597994   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/client.crt: no such file or directory
E0501 04:03:48.918139   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/client.crt: no such file or directory
E0501 04:03:49.559032   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/client.crt: no such file or directory
E0501 04:03:50.052209   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/default-k8s-diff-port-715118/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-731347 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (2m12.837496458s)
--- PASS: TestNetworkPlugins/group/flannel/Start (132.84s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-nzxmj" [1ba84a33-e160-4a11-a8ae-b2b8fb8f4ce7] Running
E0501 04:03:50.839441   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/client.crt: no such file or directory
E0501 04:03:53.400172   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.007771362s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-731347 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (12.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-731347 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-gmj2v" [6b8d7e8d-7602-4dc2-ae05-6486208fdf6e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0501 04:03:58.520356   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-gmj2v" [6b8d7e8d-7602-4dc2-ae05-6486208fdf6e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.005131263s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-731347 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (13.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-731347 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-ccmb7" [859785d2-6e2a-4439-b3ee-1035ce78f2a7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-ccmb7" [859785d2-6e2a-4439-b3ee-1035ce78f2a7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 13.005426623s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (13.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-731347 exec deployment/netcat -- nslookup kubernetes.default
E0501 04:04:08.760880   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/calico/DNS (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-731347 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-731347 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-731347 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-731347 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-731347 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (100.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-731347 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
E0501 04:04:29.241410   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/client.crt: no such file or directory
E0501 04:04:31.013050   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/default-k8s-diff-port-715118/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-731347 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m40.317179664s)
--- PASS: TestNetworkPlugins/group/bridge/Start (100.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-731347 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.38s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-731347 replace --force -f testdata/netcat-deployment.yaml
net_test.go:149: (dbg) Done: kubectl --context enable-default-cni-731347 replace --force -f testdata/netcat-deployment.yaml: (1.836589229s)
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-wjfzl" [18d4f4f8-3d4f-45c6-9a79-13cd5e4f55d1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-wjfzl" [18d4f4f8-3d4f-45c6-9a79-13cd5e4f55d1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.004455641s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-msjc2" [96b7141c-9e36-4717-acea-18810e91f650] Running
E0501 04:05:10.202239   20724 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13391/.minikube/profiles/old-k8s-version-503971/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.006300897s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-731347 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-731347 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-731347 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-731347 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (10.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-731347 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-2bzgm" [67b4ea42-b35d-4aee-9be2-d94bc2567586] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-2bzgm" [67b4ea42-b35d-4aee-9be2-d94bc2567586] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.005773666s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-731347 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-731347 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-731347 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-731347 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (11.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-731347 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-8c7x7" [0d964ffb-969b-4cca-89ed-c66f43c45d81] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-8c7x7" [0d964ffb-969b-4cca-89ed-c66f43c45d81] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.003816131s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-731347 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-731347 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-731347 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                    

Test skip (36/311)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.30.0/cached-images 0
15 TestDownloadOnly/v1.30.0/binaries 0
16 TestDownloadOnly/v1.30.0/kubectl 0
20 TestDownloadOnlyKic 0
34 TestAddons/parallel/Olm 0
47 TestDockerFlags 0
50 TestDockerEnvContainerd 0
52 TestHyperKitDriverInstallOrUpdate 0
53 TestHyperkitDriverSkipUpgrade 0
104 TestFunctional/parallel/DockerEnv 0
105 TestFunctional/parallel/PodmanEnv 0
123 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
124 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
125 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
126 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
127 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
128 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
129 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
130 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
153 TestGvisorAddon 0
175 TestImageBuild 0
202 TestKicCustomNetwork 0
203 TestKicExistingNetwork 0
204 TestKicCustomSubnet 0
205 TestKicStaticIP 0
237 TestChangeNoneUser 0
240 TestScheduledStopWindows 0
242 TestSkaffold 0
244 TestInsufficientStorage 0
248 TestMissingContainerUpgrade 0
267 TestStartStop/group/disable-driver-mounts 0.17
273 TestNetworkPlugins/group/kubenet 4.8
281 TestNetworkPlugins/group/cilium 4.04
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-483221" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-483221
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (4.8s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-731347 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-731347

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-731347

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-731347

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-731347

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-731347

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-731347

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-731347

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-731347

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-731347

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-731347

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-731347"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-731347"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-731347"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: kubenet-731347

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-731347"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-731347"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-731347" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-731347" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-731347" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-731347" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-731347" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-731347" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-731347" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-731347" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-731347"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-731347"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-731347"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-731347"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-731347"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-731347" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-731347" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-731347" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-731347"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-731347"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-731347"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-731347"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-731347"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-731347

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-731347"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-731347"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-731347"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-731347"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-731347"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-731347"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-731347"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-731347"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-731347"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-731347"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-731347"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-731347"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-731347"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-731347"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-731347"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-731347"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-731347"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-731347"

                                                
                                                
----------------------- debugLogs end: kubenet-731347 [took: 4.627839337s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-731347" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-731347
--- SKIP: TestNetworkPlugins/group/kubenet (4.80s)
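Note on the debugLogs block above: the kubenet group is skipped before any cluster is created, so the "kubenet-731347" profile never exists and every command the collector runs fails with "context was not found" (kubectl) or "Profile ... not found" (minikube). Below is a minimal, hypothetical sketch of how a collector could short-circuit in that case, assuming only stock kubectl behaviour; it is not the helper used by helpers_test.go.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// contextExists reports whether kubectl's kubeconfig contains the named context.
func contextExists(name string) bool {
	// "kubectl config get-contexts -o name" prints one context name per line.
	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
	if err != nil {
		return false
	}
	for _, ctx := range strings.Fields(string(out)) {
		if ctx == name {
			return true
		}
	}
	return false
}

func main() {
	const profile = "kubenet-731347"
	if !contextExists(profile) {
		// Every kubectl/minikube debug command would fail exactly as in the
		// log above, so there is nothing useful to collect.
		fmt.Printf("skipping debug logs: context %q does not exist\n", profile)
		return
	}
	// ... run the kubectl describe / logs commands here ...
}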

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (4.04s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-731347 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-731347

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-731347

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-731347

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-731347

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-731347

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-731347

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-731347

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-731347

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-731347

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-731347

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731347"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731347"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731347"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-731347

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731347"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731347"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-731347" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-731347" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-731347" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-731347" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-731347" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-731347" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-731347" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-731347" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731347"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731347"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731347"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731347"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731347"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-731347

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-731347

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-731347" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-731347" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-731347

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-731347

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-731347" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-731347" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-731347" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-731347" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-731347" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731347"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731347"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731347"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731347"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731347"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-731347

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731347"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731347"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731347"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731347"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731347"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731347"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731347"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731347"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731347"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731347"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731347"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731347"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731347"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731347"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731347"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731347"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731347"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-731347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731347"

                                                
                                                
----------------------- debugLogs end: cilium-731347 [took: 3.869739715s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-731347" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-731347
--- SKIP: TestNetworkPlugins/group/cilium (4.04s)
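The cilium group is skipped for the reason logged at net_test.go:102 ("interfering with other tests and is outdated"), so its debugLogs block above likewise runs against a profile that was never created. For reference, a hedged sketch of what such a skip guard can look like in a Go test; the actual condition in minikube's net_test.go is not reproduced here.

package net_test // hypothetical standalone example

import "testing"

func TestCiliumPlugin(t *testing.T) {
	// Assumption for illustration only: the cilium variant is flagged as outdated.
	const outdated = true
	if outdated {
		// t.Skip marks the test as skipped; deferred cleanup (such as profile
		// deletion and debug-log collection) still runs afterwards.
		t.Skip("Skipping the test as it's interfering with other tests and is outdated")
	}
	// ... cilium network plugin validation would follow here ...
}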

                                                
                                    